Saturday 16 July 2022

Access Checking Active Directory

Like many Windows-related technologies, Active Directory uses a security descriptor and the access check process to determine what access a user has to parts of the directory. Each object in the directory contains an nTSecurityDescriptor attribute which stores the binary representation of the security descriptor. When a user accesses the object through LDAP the remote user's token is used with the security descriptor to determine if they have the rights to perform the operation they're requesting.

Weak security descriptors are a common misconfiguration that could result in the entire domain being compromised. Therefore it's important for an administrator to be able to find and remediate security weaknesses. Unfortunately Microsoft doesn't provide a means for an administrator to audit the security of AD, at least not in any default tool I know of. There is third-party tooling, such as Bloodhound, which will perform this analysis offline, but from reading their checking implementations they don't tend to use the real access check APIs and so likely miss some misconfigurations.

I wrote my own access checker for AD which is included in my NtObjectManager PowerShell module. I've used it to find a few vulnerabilities, such as CVE-2021-34470 which was an issue with Exchange's changes to AD. This works "online", as in you need to have an active account in the domain to run it, however AFAIK it should provide the most accurate results if what you're interested in is what access a specific user has to AD objects. While the command is available in the module it's perhaps not immediately obvious how to use it and interpret the results, therefore I decided I should write a quick blog post about it.

A Complex Process

The access check process is mostly documented by Microsoft in [MS-ADTS]: Active Directory Technical Specification, specifically in section 5.1.3. However, this leaves many questions unanswered. I'm not going to go through how it works in full either, but let me give a quick overview. I'm going to assume you have a basic knowledge of the structure of AD and its objects.

An AD object contains many resources on which access might need to be granted or denied for a particular user. For example you might want to allow the user to create only certain types of child objects, or only modify certain attributes. There are many ways that Microsoft could have implemented security, but they decided on extending the ACL format to introduce the object ACE. For example the ACCESS_ALLOWED_OBJECT_ACE structure adds two GUIDs to the normal ACCESS_ALLOWED_ACE.

The first GUID, ObjectType, indicates the type of object that the ACE applies to. For example this can be set to the schema ID of an attribute and the ACE will grant access to only that attribute and nothing else. The second GUID, InheritedObjectType, is only used during ACL inheritance. It represents the schema ID of the object class that is allowed to inherit this ACE. For example if it's set to the schema ID of the computer class, then the ACE will only be inherited if an object of that class is created; it will not be if, say, a user object is created instead. We only need to care about the first of these GUIDs when doing an access check.

To perform an access check you need to use an API such as AccessCheckByType which supports checking the object ACEs. When calling the API you pass a list of object type GUIDs you want to check for access on. When processing the DACL if an ACE has an ObjectType GUID which isn't in the passed list it'll be ignored. Otherwise it'll be handled according to the normal access check rules. If the ACE isn't an object ACE then it'll also be processed.

If all you want to do is check if a local user has access to a specific object or attribute then it's pretty simple. Just get the access token for that user, add the object's GUID to the list and call the access check API. The resulting granted access can be one of the following specific access rights, note the names in parentheses are the ones I use in the PowerShell module for simplicity:
  • ACTRL_DS_CREATE_CHILD (CreateChild) - Create a new child object
  • ACTRL_DS_DELETE_CHILD (DeleteChild) - Delete a child object
  • ACTRL_DS_LIST (List) - Enumerate child objects
  • ACTRL_DS_SELF (Self) - Grant a write-validated extended right
  • ACTRL_DS_READ_PROP (ReadProp) - Read an attribute
  • ACTRL_DS_WRITE_PROP (WriteProp) - Write an attribute
  • ACTRL_DS_DELETE_TREE (DeleteTree) - Delete a tree of objects
  • ACTRL_DS_LIST_OBJECT (ListObject) - List a tree of objects
  • ACTRL_DS_CONTROL_ACCESS (ControlAccess) - Grant a control extended right
You can also be granted standard rights such as READ_CONTROL, WRITE_DAC or DELETE which do what you'd expect them to do. However, if you want to see what the maximum granted access on the DC would be it's slightly more difficult. We have the following problems:
  • The list of groups granted to a local user is unlikely to match what they're granted on the DC where the real access check takes place.
  • AccessCheckByType only returns a single granted access value; if we have a lot of object types to test it'd be quite expensive to call it 100s if not 1000s of times for a single security descriptor.
While you could solve the first problem by having sufficient local privileges to manually create an access token, and the second by using an API which returns a list of granted access such as AccessCheckByTypeResultList, there's a "simpler" solution. You can use the Authz APIs, which allow you to manually build a security context with any groups you like without needing to create an access token, and the AuthzAccessCheck API supports returning a list of granted access for each object in the type list. It just so happens that this API is the one used by the AD LDAP server itself.

Therefore to perform a "correct" maximum access check you need to do the following steps.
  1. Enumerate the user's group list for the DC from the AD. Local group assignments are stored in the directory's CN=Builtin container.
  2. Build an Authz security context with the group list.
  3. Read a directory object's security descriptor.
  4. Read the object's schema class and build a list of specific schema objects to check:
    • All attributes from the class and its super, auxiliary and dynamic auxiliary classes.
    • All allowable child object classes.
    • All assignable control, write-validated and property set extended rights.
  5. Convert the gathered schema information into the object type list for the access check.
  6. Run the access check and handle the results.
  7. Repeat from 3 for every object you want to check.
Trust me when I say this process is actually easier said than done. There are many nuances that produce surprising results, which I guess is why most tooling just doesn't bother. Also my code includes a fair amount of knowledge gathered from reverse engineering the real implementation, but I'm sure I could have missed something.

Using Get-AccessibleDsObject and Interpreting the Results

Let's finally get to using the PowerShell command which is the real purpose of this blog post. For a simple check run the following command. This can take a while on the first run to gather information about the domain and the user.

PS> Get-AccessibleDsObject -NamingContext Default
Name   ObjectClass UserName       Modifiable Controllable
----   ----------- --------       ---------- ------------
domain domainDNS   DOMAIN\alice   False      True

This uses the NamingContext parameter to specify what object to check. The parameter allows you to easily specify the three main directories: Default, Configuration and Schema. You can also use the DistinguishedName parameter to specify an explicit DN. The Domain parameter is used to specify the domain for the LDAP server if you don't want to inspect the current user's domain. You can also specify the Recurse parameter to recursively enumerate objects; in this case we just access check the root object.
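For example, to recursively check a specific OU, or to check a different domain's configuration naming context, you could run something like the following (the DN and domain name here are just hypothetical placeholders):

PS> Get-AccessibleDsObject -DistinguishedName 'OU=Workstations,DC=domain,DC=local' -Recurse  # placeholder DN
PS> Get-AccessibleDsObject -NamingContext Configuration -Domain domain.local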

The access check defaults to using the current user's groups, based on what they would be on the DC. This is obviously important, especially if the current user is a local administrator as they wouldn't be guaranteed to have administrator rights on the DC. You can specify different users to check either by SID using the UserSid parameter, or by name using the UserName parameter. These parameters can take multiple values which will run multiple checks against the list of enumerated objects. For example to check using the domain administrator you could do the following:

PS> Get-AccessibleDsObject -NamingContext Default -UserName DOMAIN\Administrator
Name   ObjectClass UserName             Modifiable Controllable
----   ----------- --------             ---------- ------------
domain domainDNS   DOMAIN\Administrator True       True

The basic table format for the access check results shows five columns: the common name of the object, its schema class, the user that was checked and whether the access check resulted in any modifiable or controllable access being granted. Modifiable covers things like being able to write attributes or create/delete child objects. Controllable indicates one or more controllable extended rights were granted to the user, such as allowing the user's password to be changed.

As this is PowerShell the access check result is an object with many properties. The following properties are probably the ones of most interest when determining what access is granted to the user; a short example of pulling them out follows the list.
  • GrantedAccess - The granted access when only specifying the object's schema class during the check. If an access is granted at this level it'd apply to all values of that type, for example if WriteProp is granted then any attribute in the object can be written by the user.
  • WritableAttributes - The list of attributes a user can modify.
  • WritablePropertySets - The list of writable property sets a user can modify. Note that this is more for information purposes; the modifiable attributes will also be in the WritableAttributes property, which is going to be easier to inspect.
  • GrantedControl - The list of control extended rights granted to a user.
  • GrantedWriteValidated - The list of write validated extended rights granted to a user.
  • CreateableClasses - The list of child object classes that can be created.
  • DeletableClasses - The list of child object classes that can be deleted.
  • DistinguishedName - The full DN of the object.
  • SecurityDescriptor - The security descriptor used for the check.
  • TokenInfo - The user's information used in the check, such as the list of groups.
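For example, a minimal sketch for pulling out just the objects with interesting writable or controllable access, using the property names listed above:

PS> $rs = Get-AccessibleDsObject -NamingContext Default -Recurse
PS> $rs | Where-Object WritableAttributes | Select-Object DistinguishedName, WritableAttributes, GrantedControl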
The command should be pretty easy to use. That said it does come with a few caveats. First you can only use the command with direct access to the AD using a domain account. Technically there's no reason you couldn't implement a gatherer like Bloodhound and do the access check offline, but I just haven't. I've also not tested it in weirder setups such as complex domain hierarchies or RODCs.

If you're using a low-privileged user there are likely to be AD objects that you can't enumerate or read the security descriptor from. This means the results are going to depend on the user you use to enumerate with. The best results would come from using a domain/enterprise administrator with full access to everything.

Based on my testing, when I've found an access being granted to a user it seems to be real, however it's possible I'm not always 100% correct or that I'm missing accesses. Also it's worth noting that just having access doesn't mean there's not some extra checking done by the LDAP server. For example there's an explicit block on creating Group Managed Service Accounts in Computer objects, even though that will appear to be a creatable child object.

Sunday 26 June 2022

Finding Running RPC Server Information with NtObjectManager

When doing security research I regularly use my NtObjectManager PowerShell module to discover and call RPC servers on Windows. Typically I'll use the Get-RpcServer command, passing the name of a DLL or EXE file to extract the embedded RPC servers. I can then use the returned server objects to create a client to access the server and call its methods. A good blog post about how some of this works was written recently by blueclearjar.

Using Get-RpcServer only gives you a list of what RPC servers could possibly be running, not whether they are running and if so in what process. This is where RpcView does better, as it parses a process' in-memory RPC structures to find what is registered and where. Unfortunately this is something that I'm yet to implement in NtObjectManager.

However, it turns out there are various ways to get the running RPC server information which are provided by the OS and the RPC runtime, and we can use them to get a more or less complete list of running servers. I've exposed all the ones I know about with some recent updates to the module. Let's go through the various ways you can piece together this information.

NOTE some of the examples of PowerShell code will need a recent build of the NtObjectManager module. For various reasons I've not been updating the version on the PS gallery, so get the source code from github and build it yourself.

RPC Endpoint Mapper

If you're lucky this is the simplest way to find out if a particular RPC server is running. When an RPC server is started the service can register an RPC interface with the RpcEpRegister function, specifying the interface UUID and version along with the binding information, with the RPC endpoint mapper service running in RPCSS. This registers all current RPC endpoints the server is listening on keyed against the RPC interface.

You can query the endpoint table using the RpcMgmtEpEltInqBegin and RpcMgmtEpEltInqNext APIs. I expose this through the Get-RpcEndpoint command. Running Get-RpcEndpoint with no parameters returns all interfaces the local endpoint mapper knows about as shown below.

PS> Get-RpcEndpoint
UUID                                 Version Protocol     Endpoint      Annotation
----                                 ------- --------     --------      ----------
51a227ae-825b-41f2-b4a9-1ac9557a1018 1.0     ncacn_ip_tcp 49669         
0497b57d-2e66-424f-a0c6-157cd5d41700 1.0     ncalrpc      LRPC-5f43...  AppInfo
201ef99a-7fa0-444c-9399-19ba84f12a1a 1.0     ncalrpc      LRPC-5f43...  AppInfo
...

Note that in addition to the interface UUID and version the output shows the binding information for the endpoint, such as the protocol sequence and endpoint. There is also a free form annotation field, but that can be set to anything the server likes when it calls RpcEpRegister.

The APIs also allow you to specify a remote server hosting the endpoint mapper. You can use this to query what RPC servers are running on a remote server, assuming the firewall doesn't block you. To do this you'd need to specify a binding string for the SearchBinding parameter as shown.

PS> Get-RpcEndpoint -SearchBinding 'ncacn_ip_tcp:primarydc'
UUID                                 Version Protocol     Endpoint     Annotation
----                                 ------- --------     --------     ----------
d95afe70-a6d5-4259-822e-2c84da1ddb0d 1.0     ncacn_ip_tcp 49664
5b821720-f63b-11d0-aad2-00c04fc324db 1.0     ncacn_ip_tcp 49688
650a7e26-eab8-5533-ce43-9c1dfce11511 1.0     ncacn_np     \PIPE\ROUTER Vpn APIs
...

The big issue with the RPC endpoint mapper is it only contains RPC interfaces which were explicitly registered against an endpoint. The server could contain many more interfaces which could be accessible, but as they weren't registered they won't be returned from the endpoint mapper. Registration will typically only be used if the server is using an ephemeral name for the endpoint, such as a random TCP port or auto-generated ALPC name.

Pros:

  • Simple command to run to get a good list of running RPC servers.
  • Can be run against remote servers to find out remotely accessible RPC servers.
Cons:
  • Only returns the RPC servers intentionally registered.
  • Doesn't directly give you the hosting process, although the optional annotation might give you a clue.
  • Doesn't give you any information about what the RPC server does, you'll need to find what executable it's hosted in and parse it using Get-RpcServer.

Service Executable

If the RPC servers you extract are in a registered system service executable then the module will try and work out what service that corresponds to by querying the SCM. The default output from the Get-RpcServer command will show this as the Service column shown below.

PS> Get-RpcServer C:\windows\system32\appinfo.dll
Name        UUID                                 Ver Procs EPs Service Running
----        ----                                 --- ----- --- ------- -------
appinfo.dll 0497b57d-2e66-424f-a0c6-157cd5d41700 1.0 7     1   Appinfo True
appinfo.dll 58e604e8-9adb-4d2e-a464-3b0683fb1480 1.0 1     1   Appinfo True
appinfo.dll fd7a0523-dc70-43dd-9b2e-9c5ed48225b1 1.0 1     1   Appinfo True
appinfo.dll 5f54ce7d-5b79-4175-8584-cb65313a0e98 1.0 1     1   Appinfo True
appinfo.dll 201ef99a-7fa0-444c-9399-19ba84f12a1a 1.0 7     1   Appinfo True

The output also shows the appinfo.dll executable is the implementation of the Appinfo service, which is the general name for the UAC service. Note here that it also shows whether the service is running, but that's just for convenience. You can use this information to find what process is likely to be hosting the RPC server by querying for the service PID if it's running.

PS> Get-Win32Service -Name Appinfo
Name    Status  ProcessId
----    ------  ---------
Appinfo Running 6020

The output also shows that each of the interfaces has an endpoint which is registered against the interface UUID and version. This is extracted from the endpoint mapper, again just for convenience. However, if you pick an executable which isn't a service implementation the results are less useful:

PS> Get-RpcServer C:\windows\system32\efslsaext.dll
Name          UUID                   Ver Procs EPs Service Running      
----          ----                   --- ----- --- ------- -------      
efslsaext.dll c681d488-d850-11d0-... 1.0 21    0           False

The efslsaext.dll contains one of the EFS implementations, which are all hosted in LSASS. However, it's not a registered service so the output doesn't show any service name. It's also not registered with the endpoint mapper so it doesn't show any endpoints, but it is running.

Pros:

  • If the executable's a service it gives you a good idea of who's hosting the RPC servers and if they're currently running.
  • You can get the RPC server interface information along with that information.
Cons:
  • If the executable isn't a service it doesn't directly help.
  • It doesn't ensure the RPC servers are running if they're not registered in the endpoint mapper. 
  • Even if the service is running it might not have enabled the RPC servers.

Enumerating Process Modules

Extracting the RPC servers from an arbitrary executable is fine offline, but what if you want to know what RPC servers are running right now? This is similar to RpcView's process list GUI, where you can look at a process and find all the RPC servers running within it.

It turns out there's a really obvious way of getting a list of the potential services running in a process, enumerate the loaded DLLs using an API such as EnumerateLoadedModules, and then run Get-RpcServer on each one to extract the potential services. To use the APIs you'd need to have at least read access to the target process, which means you'd really want to be an administrator, but that's no different to RpcView's limitations.
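As a rough illustration of the manual approach (using PowerShell's own process module list rather than calling EnumerateLoadedModules directly, and assuming you can open the target process), you could do something like the following, reusing the Appinfo PID from earlier:

PS> $proc = Get-Process -Id 6020
PS> $proc.Modules.FileName | Sort-Object -Unique | ForEach-Object { Get-RpcServer $_ -ErrorAction SilentlyContinue }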

The big problem is just because a module is loaded it doesn't mean the RPC server is running. For example the WinHTTP DLL has a built-in RPC server which is only loaded when running the WinHTTP proxy service, but the DLL could be loaded in any process which uses the APIs.

To simplify things I expose this approach through the Get-RpcServer function with the ProcessId parameter. You can also use the ServiceName parameter to lookup a service PID if you're interested in a specific service.

PS> Get-RpcServer -ServiceName Appinfo
Name        UUID                        Ver Procs EPs Service Running
----        ----                        --- ----- --- ------- -------
RPCRT4.dll  afa8bd80-7d8a-11c9-bef4-... 1.0 5     0           False
combase.dll e1ac57d7-2eeb-4553-b980-... 0.0 0     0           False
combase.dll 00000143-0000-0000-c000-... 0.0 0     0           False

Pros:

  • You can determine all RPC servers which could be potentially running for an arbitrary process.
Cons:
  • It doesn't ensure the RPC servers are running if they're not registered in the endpoint mapper. 
  • You can't directly enumerate the module list, except for the main executable, from a protected process (there are various tricks to do so, but they're out of scope here).

Asking an RPC Endpoint Nicely

The final approach is just to ask an RPC endpoint nicely to tell you what RPC servers it supports. We don't need to go digging into the guts of a process to do this, all we need is the binding string for the endpoint we want to query and then to call the RpcMgmtInqIfIds API.

This will only return the UUID and version of the RPC servers that are accessible from the endpoint, not the full RPC server information. But it will give you an exact list of all supported RPC servers, in fact it's so detailed it'll give you all the COM interfaces that the process is listening on as well. To query this list you only need access to the endpoint transport, not the process itself.

How do you get the endpoints though? One approach, if you do have access to the process, is to enumerate its server ALPC ports by getting a list of handles for the process, finding the ports with the \RPC Control\ prefix in their name and then using that to form the binding string. This approach is exposed through Get-RpcEndpoint's ProcessId parameter. Again it also supports a ServiceName parameter to simplify querying services.

PS> Get-RpcEndpoint -ServiceName AppInfo
UUID              Version Protocol Endpoint     
----              ------- -------- --------  
0497b57d-2e66-... 1.0     ncalrpc  \RPC Control\LRPC-0ee3...
201ef99a-7fa0-... 1.0     ncalrpc  \RPC Control\LRPC-0ee3...
...

If you don't have access to the process you can do it in reverse by enumerating potential endpoints and querying each one. For example you could enumerate the \RPC Control object directory and query each one. Since Windows 10 19H1 ALPC clients can now query the server's PID, so you can not only find out the exposed RPC servers but also what process they're running in. To query from the name of an ALPC port use the AlpcPort parameter with Get-RpcEndpoint.

PS> Get-RpcEndpoint -AlpcPort LRPC-0ee3261d56342eb7ac
UUID              Version Protocol Endpoint     
----              ------- -------- --------  
0497b57d-2e66-... 1.0     ncalrpc  \RPC Control\LRPC-0ee3...
201ef99a-7fa0-... 1.0     ncalrpc  \RPC Control\LRPC-0ee3...
...

Pros:

  • You can determine exactly what RPC servers are running in a process.
Cons:
  • You can't directly determine what the RPC server does as the list gives you no information about which module is hosting it.

Combining Approaches

Obviously no one approach is perfect. However, you can get most of the way towards RpcView's process list by combining the module enumeration approach with asking the endpoint nicely. For example, you could first get a list of potential interfaces by enumerating the modules and parsing the RPC servers, then filter that list to only the ones which are running by querying the endpoint directly. This will also get you a list of the ALPC server ports that the RPC server is running on so you can directly connect to it with a manually built client. An example script for doing this is on github.
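If you want to roll your own, the core of the approach looks something like this sketch. Note the property names used for the final comparison are assumptions on my part, so check the returned objects with Get-Member first.

PS> $svc = Get-Win32Service -Name Appinfo
PS> $potential = Get-RpcServer -ProcessId $svc.ProcessId
PS> $running = Get-RpcEndpoint -ProcessId $svc.ProcessId
PS> $potential | Where-Object { $_.InterfaceId -in $running.InterfaceId }  # property names are assumptions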

We are still missing some crucial information that RpcView can access, such as the interface registration flags, from any of these approaches. Still, hopefully that gives you a few ways to approach analyzing the RPC attack surface of the local system and determining what endpoints you can call.

Friday 13 May 2022

Exploiting RBCD Using a Normal User Account*

* Caveats apply.

Resource Based Constrained Delegation (RBCD) privilege escalation, described by Elad Shamir in the "Wagging the Dog" blog post, is a devious way of exploiting Kerberos to elevate privileges on a local Windows machine. All it requires is write access to the local computer's domain account to modify the msDS-AllowedToActOnBehalfOfOtherIdentity LDAP attribute to add another account's SID. You can then use that account with the Services For User (S4U) protocols to get a Kerberos service ticket for the local machine as any user on the domain including local administrators. From there you can create a new service or whatever else you need to do.
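For reference, if you already have write access to the computer account (say in a lab setup) the attribute change itself is trivial with the ActiveDirectory PowerShell module. This isn't the relay technique described below, just an illustration of what ends up being written; the account names are made up:

PS> Set-ADComputer WIN10TEST -PrincipalsAllowedToDelegateToAccount (Get-ADComputer ATTACKER)  # placeholder accounts
PS> Get-ADComputer WIN10TEST -Properties msDS-AllowedToActOnBehalfOfOtherIdentity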

The key is how you write to the LDAP server under the local computer's domain account. There have been various approaches, usually abusing authentication relay. For example, I described one relay vector which abused DCOM. Someone else has since put this together in a turnkey tool, KrbRelayUp.

One additional criterion for this to work is having access to another computer account to perform the attack. Well this isn't strictly true, there's the Shadow Credentials attack which allows you to reuse the same local computer account, but in general you need a computer account you control. Normally this isn't a problem, as the DC allows normal users to create new computer accounts up to a limit set by the domain's ms-DS-MachineAccountQuota attribute value. This attribute defaults to 10, but an administrator could set it to 0 and block the attack, which is probably recommended.
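As an aside, you can check, and zero out, the quota with the ActiveDirectory module; a quick example, assuming you have the rights to change it:

PS> Get-ADObject (Get-ADDomain).DistinguishedName -Properties ms-DS-MachineAccountQuota
PS> Set-ADObject (Get-ADDomain).DistinguishedName -Replace @{'ms-DS-MachineAccountQuota'=0}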

But I wondered why this wouldn't work as a normal user. The msDS-AllowedToActOnBehalfOfOtherIdentity attribute just needs the SID for the account to be allowed to delegate to the computer. Why can't we just add the user's SID and perform the S4U dance? To give us the best chance I'll assume we have knowledge of a user's password; how you get this is entirely up to you. Running the attack through Rubeus shows our problem.

PS C:\> Rubeus.exe s4u /user:charlie /domain:domain.local /dc:primarydc.domain.local /rc4:79bf93c9501b151506adc21ba0397b33 /impersonateuser:Administrator /msdsspn:cifs/WIN10TEST.domain.local

   ______        _
  (_____ \      | |
   _____) )_   _| |__  _____ _   _  ___
  |  __  /| | | |  _ \| ___ | | | |/___)
  | |  \ \| |_| | |_) ) ____| |_| |___ |
  |_|   |_|____/|____/|_____)____/(___/
  v2.0.3
[*] Action: S4U
[*] Using rc4_hmac hash: 79bf93c9501b151506adc21ba0397b33
[*] Building AS-REQ (w/ preauth) for: 'domain.local\charlie'
[*] Using domain controller: 10.0.0.10:88
[+] TGT request successful!
[*] base64(ticket.kirbi):
      doIFc...
[*] Action: S4U
[*] Building S4U2self request for: 'charlie@DOMAIN.LOCAL'
[*] Using domain controller: primarydc.domain.local (10.0.0.10)
[*] Sending S4U2self request to 10.0.0.10:88
[X] KRB-ERROR (7) : KDC_ERR_S_PRINCIPAL_UNKNOWN
[X] S4U2Self failed, unable to perform S4U2Proxy.

We don't even get past the first S4U2Self stage of the attack, it fails with a KDC_ERR_S_PRINCIPAL_UNKNOWN error. This error typically indicates the KDC doesn't know what encryption key to use for the generated ticket. If you add an SPN to the user's account however it all succeeds. This would imply it's not a problem with a user account per se, but instead just a problem of the KDC not being able to select the correct key.

Technically speaking there should be no reason that the KDC couldn't use the user's long term key if you requested a ticket for their UPN, but it doesn't (contrary to an argument I had on /r/netsec the other day with someone who was adamant that SPNs are a convenience, not a fundamental requirement of Kerberos).

So what to do? There is a way of getting a ticket encrypted for a UPN by using the User 2 User (U2U) extension. Would this work here? Looking at the Rubeus code it seems requesting a U2U S4U2Self ticket is supported, but the parameters are not set for the S4U attack. Let's set those parameters to request a U2U ticket and see if it works.

[+] S4U2self success!
[*] Got a TGS for 'Administrator' to 'charlie@DOMAIN.LOCAL'
[*] base64(ticket.kirbi): doIF...bGll

[*] Impersonating user 'Administrator' to target SPN 'cifs/WIN10TEST.domain.local'
[*] Building S4U2proxy request for service: 'cifs/WIN10TEST.domain.local'
[*] Using domain controller: primarydc.domain.local (10.0.0.10)
[*] Sending S4U2proxy request to domain controller 10.0.0.10:88
[X] KRB-ERROR (13) : KDC_ERR_BADOPTION

Okay, we're getting closer. The S4U2Self request was successful, unfortunately the S4U2Proxy request was not, failing with a KDC_ERR_BADOPTION error. After a bit of playing around this is almost certainly because the KDC can't decrypt the ticket sent in the S4U2Proxy request. It'll try the user's long term key, but that will obviously fail. I tried to see if I could send the user's TGT with the request (in addition to the S4U2Self service ticket) but it still failed. Is this not going to be possible?

Thinking about this a bit more, I wondered, could I decrypt the S4U2Self ticket and then re-encrypt it with the long term key I already know for the user? Technically speaking this would create a valid Kerberos ticket, however it wouldn't create a valid PAC. This is because the PAC contains a Server Signature which is a HMAC of the PAC using the key used to encrypt the ticket. The KDC checks this to ensure the PAC hasn't been modified or put into a new ticket, and if it's incorrect it'll fail the request.

As we know the key, we could just update this value. However, the Server Signature is protected by the KDC Signature which is a HMAC keyed with the KDC's own key. We don't know this key and so we can't update this second signature to match the modified Server Signature. Looks like we're stuck.

Still, what would happen if the user's long term key happened to match the TGT session key we used to encrypt the S4U2Self ticket? It's pretty unlikely to happen by chance, but with knowledge of the user's password we could conceivably change the user's password on the DC between the S4U2Self and the S4U2Proxy requests so that when submitting the ticket the KDC can decrypt it and perhaps we can successfully get the delegated ticket.

As we know the TGT's session key, one obvious approach would be to "crack" the hash value back to a valid Unicode password. For AES keys I think this is going to be difficult and even if successful could be time consuming. However, RC4 keys are just an MD4 hash with no additional protection against brute force cracking. Fortunately the code in Rubeus defaults to requesting an RC4 session key for the TGT, and MS have yet to disable RC4 by default in Windows domains. This seems like it might be doable, even if it takes a long time. We would also need the "cracked" password to be valid per the domain's password policy which adds extra complications.

However, I recalled when playing with the SAM RPC APIs that there is a SamrChangePasswordUser method which will change a user's password to an arbitrary NT hash. The only requirement is knowledge of the existing NT hash and we can set any new NT hash we like. This doesn't need to honor the password policy, except for the minimum age setting. We don't even need to deal with how to call the RPC API correctly as the SAM DLL exports the SamiChangePasswordUser API which does all the hard work. 

I took some example C# code written by Vincent Le Toux and plugged that into Rubeus at the correct point, passing the current TGT's session key as the new NT hash. Let's see if it works:

SamConnect OK
SamrOpenDomain OK
rid is 1208
SamOpenUser OK
SamiChangePasswordUser OK

[*] Impersonating user 'Administrator' to target SPN 'cifs/WIN10TEST.domain.local'
[*] Building S4U2proxy request for service: 'cifs/WIN10TEST.domain.local'
[*] Using domain controller: primarydc.domain.local (10.0.0.10)
[*] Sending S4U2proxy request to domain controller 10.0.0.10:88
[+] S4U2proxy success!
[*] base64(ticket.kirbi) for SPN 'cifs/WIN10TEST.domain.local':
      doIG3...

And it does! Now the caveats:

  • This will obviously only work if RC4 is still enabled on the domain. 
  • You will need the user's password or NT hash. I couldn't think of a way of doing this with only a valid TGT.
  • The user is sacrificial, it might be hard to log in using a password afterwards. If you can't immediately reset the password due to the domain's policy the user might be completely broken. 
  • It's not very silent, but that's not my problem.
  • You're probably better off just doing the shadow credentials attack, if PKINIT is enabled.
As I'm feeling lazy I'm not going to provide the changes to Rubeus. Except for the call to SamiChangePasswordUser all the code is already there to perform the attack, it just needs to be wired up. I'm sure they'd welcome the addition.

Sunday 20 March 2022

Bypassing UAC in the most Complex Way Possible!

While it's not something I spend much time on, finding a new way to bypass UAC is always amusing. When reading through some of the features of the Rubeus tool I realised that there was a possible way of abusing Kerberos to bypass UAC, well on domain joined systems at least. It's unclear if this has been documented before; this post seems to discuss something similar but relies on doing the UAC bypass from another system, whereas what I'm going to describe works locally. Even if it has been described as a technique before I'm not sure it's been documented how it works under the hood.

The Background!

Let's start with how the system prevents you bypassing the most pointless security feature ever. By default LSASS will filter any network authentication tokens to remove admin privileges if the user is a local administrator. However there's an important exception: if the user is a domain user and a local administrator then LSASS will allow the network authentication to use the full administrator token. This is a problem if say you're using Kerberos to authenticate locally. Wouldn't this be a trivial UAC bypass? Just authenticate to the local service as a domain user and you'd get the network token which would bypass the filtering?

Well no, Kerberos has specific additions to block this attack vector. If I was being charitable I'd say this behaviour also ensures some level of safety.  If you're not running as the admin token then accessing say the SMB loopback interface shouldn't suddenly grant you administrator privileges through which you might accidentally destroy your system.

Back in January last year I read a post from Steve Syfuhs of Microsoft on how Kerberos prevents this local UAC bypass. The TL;DR is when a user wants to get a Kerberos ticket for a service LSASS will send a TGS-REQ request to the KDC. In the request it'll embed some security information which indicates the user is local. This information will be embedded in the generated ticket.

When that ticket is used to authenticate to the same system Kerberos can extract the information and see if it matches one it knows about. If so it'll take that information and realize that the user is not elevated and filter the token appropriately. Unfortunately much as I enjoy Steve's posts this one was especially light on details. I guessed I'd have to track down how it works myself. Let's dump the contents of a Kerberos ticket and see if we can see what could be the ticket information:

PS> $c = New-LsaCredentialHandle -Package 'Kerberos' -UseFlag Outbound
PS> $x = New-LsaClientContext -CredHandle $c -Target HOST/$env:COMPUTERNAME
PS> $key = Get-KerberosKey -HexKey 'XXX' -KeyType AES256_CTS_HMAC_SHA1_96 -Principal $env:COMPUTERNAME
PS> $u = Unprotect-LsaAuthToken -Token $x.Token -Key $key
PS> Format-LsaAuthToken $u

<KerberosV5 KRB_AP_REQ>
Options         : None
<Ticket>
Ticket Version  : 5
...

<Authorization Data - KERB_AD_RESTRICTION_ENTRY>
Flags           : LimitedToken
Integrity Level : Medium
Machine ID      : 6640665F...

<Authorization Data - KERB_LOCAL>
Security Context: 60CE03337E01000025FC763900000000

I've highlighted the two of interest, the KERB-AD-RESTRICTION-ENTRY and the KERB-LOCAL entry. Of course I didn't guess these names, these are sort of documented in the Microsoft Kerberos Protocol Extensions (MS-KILE) specification. The KERB_AD_RESTRICTION_ENTRY is most obviously of interest, it contains both the words "LimitedToken" and "Medium Integrity Level".

When accepting a Kerberos AP-REQ from a network client via SSPI the Kerberos module in LSASS will call the LSA function LsaISetSupplementalTokenInfo to apply the information from KERB-AD-RESTRICTION-ENTRY to the token if needed. The pertinent code is roughly the following:

NTSTATUS LsaISetSupplementalTokenInfo(PHANDLE phToken, 
                        PLSAP_TOKEN_INFO_INTEGRITY pTokenInfo) {
  // ...
  BOOL bLoopback = FALSE;
  BOOL bFilterToken = FALSE;

  if (!memcmp(&LsapGlobalMachineID, pTokenInfo->MachineID,
       sizeof(LsapGlobalMachineID))) {
    bLoopback = TRUE;
  }

  if (LsapGlobalFilterNetworkAuthenticationTokens) {
    if (pTokenInfo->Flags & LimitedToken) {
      bFilterToken = TRUE;
    }
  }

  PSID user = GetUserSid(*phToken);
  if (!RtlEqualPrefixSid(LsapAccountDomainMemberSid, user)
    || LsapGlobalLocalAccountTokenFilterPolicy 
    || NegProductType == NtProductLanManNt) {
    if ( !bFilterToken && !bLoopback )
      return STATUS_SUCCESS;
  }

  /// Filter token if needed and drop integrity level.
}

I've highlighted the three main checks in this function, the first compares if the MachineID field of the KERB-AD-RESTRICTION-ENTRY matches the one stored in LSASS. If it does then the bLoopback flag is set. Then it checks an AFAIK undocumented LSA flag to filter all network tokens, at which point it'll check for the LimitedToken flag and set the bFilterToken flag accordingly. This filtering mode defaults to off so in general bFilterToken won't be set.

Finally the code queries the user SID from the created token and checks if any of the following are true:
  • The user SID is not a member of the local account domain.
  • The LocalAccountTokenFilterPolicy LSA policy is non-zero, which disables the local account filtering.
  • The product type is NtProductLanManNt, which actually corresponds to a domain controller.
If any are true then, as long as the token information doesn't indicate loopback and filtering isn't being forced, the function will return success and no filtering will take place. Therefore in a default installation whether a domain user's token gets filtered comes down to whether the machine ID matches or not. 

For the integrity level, if filtering is taking place then it will be dropped to the value in the KERB-AD-RESTRICTION-ENTRY authentication data. However it won't increase the integrity level above what the created token has by default, so this can't be abused to get System integrity.

Note Kerberos will call LsaISetSupplementalTokenInfo with the KERB-AD-RESTRICTION-ENTRY authentication data from the ticket in the AP-REQ first. If that doesn't exist then it'll try calling it with the entry from the authenticator. If neither the ticket nor the authenticator has an entry then it will never be called. How can we remove these values?

Well, about that!

Okay how can we abuse this to bypass UAC? Assuming you're authenticated as a domain user the funniest way to abuse it is to get the machine ID check to fail. How would we do that? The LsapGlobalMachineID value is a random value generated when LSASS starts up. We can abuse the fact that if you query the user's local Kerberos ticket cache it will return the session key for service tickets even if you're not an administrator (it won't return TGT session keys by default).

Therefore one approach is to generate a service ticket for the local system, save the resulting KRB-CRED to disk, reboot the system to get LSASS to reinitialize and then when back on the system reload the ticket. This ticket will now have a different machine ID and therefore Kerberos will ignore the restrictions entry. You could do it with the builtin klist and Rubeus with the following commands:

PS> klist get RPC/$env:COMPUTERNAME
PS> Rubeus.exe dump /server:$env:COMPUTERNAME /nowrap
... Copy the base64 ticket to a file.

Reboot then:

PS> Rubeus.exe ptt /ticket:<BASE64 TICKET> 

You can use Kerberos authentication to access the SCM over named pipes or TCP using the RPC/HOSTNAME SPN.  Note the Win32 APIs for the SCM always use Negotiate authentication which throws a spanner in the works, but there are alternative RPC clients ;-) While LSASS will add a valid restrictions entry to the authenticator in the AP-REQ it won't be used as the one in the ticket will be used first which will fail to apply due to the different machine ID.

The other approach is to generate our own ticket, but won't we need credentials for that? There's a trick, I believe discovered by Benjamin Delpy and put into kekeo that allows you to abuse unconstrained delegation to get a local TGT with a session key. With this TGT you can generate your own service tickets, so you can do the following:
  1. Query for the user's TGT using the delegation trick.
  2. Make a request to the KDC for a new service ticket for the local machine using the TGT. Add a KERB-AD-RESTRICTION-ENTRY but fill in a bogus machine ID.
  3. Import the service ticket into the cache.
  4. Access the SCM to bypass UAC.
Ultimately this is a reasonable amount of code for a UAC bypass, at least compared to just changing an environment variable. However, you can probably bodge it together using existing tools such as kekeo and Rubeus, but I'm not going to release a turnkey tool to do this, you're on your own :-)

Didn't you forget KERB-LOCAL?

What is the purpose of KERB-LOCAL? It's a way of reusing the local user's credentials, similar to NTLM loopback where LSASS is able to determine that the call is actually from a locally authenticated user and use their interactive token. The value passed in the ticket and authenticator can be checked against a list of known credentials in the Kerberos package and if there's a match the existing token will be used.

Would this not always eliminate the need for filtering the token based on the KERB-AD-RESTRICTION-ENTRY value? It seems that this behavior is used very infrequently due to how it's designed. First it only works if the accepting server is using the Negotiate package, it doesn't work if using the Kerberos package directly (sort of...). That's usually not an impediment as most local services use Negotiate anyway for convenience. 

The real problem is that as a rule if you use Negotiate to the local machine as a client it'll select NTLM as the default. This will use the loopback already built into NTLM rather than Kerberos so this feature won't be used. Note that even if NTLM is disabled globally on the domain network it will still work for local loopback authentication. I guess KERB-LOCAL was added for feature parity with NTLM.

Going back to the formatted ticket at the start of the blog, what does the KERB-LOCAL value mean? It can be unpacked into two 64-bit values, 0x17E3303CE60 and 0x3976FC25. The first value is the heap address of the KERB_CREDENTIAL structure in LSASS's heap!! The second value is the ticket count when the KERB-LOCAL structure was created.
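If you want to verify that from the dump at the start, a few lines of PowerShell will split the hex string into its two little-endian values:

PS> $hex = '60CE03337E01000025FC763900000000'
PS> $bytes = [byte[]] (0..15 | ForEach-Object { [Convert]::ToByte($hex.Substring($_ * 2, 2), 16) })
PS> '0x{0:X} 0x{1:X}' -f [BitConverter]::ToUInt64($bytes, 0), [BitConverter]::ToUInt64($bytes, 8)
0x17E3303CE60 0x3976FC25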

Fortunately LSASS doesn't just dereference the credentials pointer, it must be in the list of valid credential structures. But the fact that this value isn't blinded, or isn't a reference to some randomly generated value, seems a mistake as heap addresses would be fairly easy to brute force. Of course it's not quite so simple, Kerberos does verify that the SID in the ticket's PAC matches the SID in the credentials so you can't just spoof the SYSTEM session, but well, I'll leave that as a thought to be going on with.

Hopefully this gives some more insight into how this feature works and some fun you can have trying to bypass UAC in a new way.

UPDATE: This simple C++ file can be used to modify the Win32 SCM APIs to use Kerberos for local authentication.

Monday 6 September 2021

LowBox Token Permissive Learning Mode

I was recently asked about this topic and so I thought it'd make sense to put it into a public blog post so that everyone can benefit. Windows 11 (and Windows Server 2022) has a new feature for tokens which allows the kernel to perform the normal LowBox access check, but if it fails, log the error rather than failing with access denied. 

This feature allows you to start an AppContainer sandbox process, run a task, and determine what parts of that would fail if you actually tried to sandbox a process. This makes it much easier to determine what capabilities you might need to grant to prevent your application from crashing if you tried to actually apply the sandbox. It's a very useful diagnostic tool, although whether it'll be documented by Microsoft remains to be seen. Let's go through a quick example of how to use it.

First you need to start an ETW trace for the Microsoft-Windows-Kernel-General provider with the KERNEL_GENERAL_SECURITY_ACCESSCHECK keyword (value 0x20) enabled. In an administrator PowerShell console you can run the following:

PS> $name = 'AccessTrace'
PS> New-NetEventSession -Name $name -LocalFilePath "$env:USERPROFILE\access_trace.etl" | Out-Null
PS> Add-NetEventProvider -SessionName $name -Name "Microsoft-Windows-Kernel-General" -MatchAllKeyword 0x20 | Out-Null
PS> Start-NetEventSession -Name $name

This will start the trace session and log the events to the access_trace.etl file in your home directory. As this is ETW you could probably do a real-time trace or enable stack tracing to find out what code is actually failing, however for this example we'll do the least amount of work possible. This log is also used for things like Adminless which I've blogged about before.

Now you need to generate some log events. You just need to add the permissiveLearningMode capability when creating the lowbox token or process. You can almost certainly add it to your application's manifest as well when developing a sandboxed UWP application, but we'll assume here that we're setting up the sandbox manually.

PS> $cap = Get-NtSid -CapabilityName 'permissiveLearningMode'
PS> $token = Get-NtToken -LowBox -PackageSid ABC -CapabilitySid $cap
PS> Invoke-NtToken $token { "Hello" | Set-Content "$env:USERPROFILE\test.txt" }

The previous code creates a lowbox token with the capability and writes to a file in the user's profile. This would normally fail as the user's profile doesn't grant any AppContainer access to write to it. However, you should find the write succeeded. Now, back in the admin PowerShell console you'll want to stop the trace and cleanup the session.

PS> Stop-NetEventSession -Name $name
PS> Remove-NetEventSession -Name $name

You should find an access_trace.etl file in your user's profile directory which will contain the logged events. There are various ways to read this file, the simplest is to use the Get-WinEvent command. As you need to do a bit of parsing of the contents of the log to get out various values I've put together a simple script to do that. It's available on github here. Just run the script passing the name of the log file to convert the events into PowerShell objects.

PS> parse_access_check_log.ps1 "$env:USERPROFILE\access_trace.etl"
ProcessName        : ...\v1.0\powershell.exe
Mask               : MaximumAllowed
PackageSid         : S-1-15-2-1445519891-4232675966-...
Groups             : INSIDERDEV\user
Capabilities       : NAMED CAPABILITIES\Permissive Learning Mode
SecurityDescriptor : O:BAG:BAD:(A;OICI;KA;;;S-1-5-21-623841239-...

The log events don't seem to contain the name of the resource being opened, but they do contain the security descriptor and type of the object, what access mask was requested and basic information about the access token used. Hopefully this information is useful to someone.

Saturday 21 August 2021

How the Windows Firewall RPC Filter Works

I did promise that I'd put out a blog post on how the Windows RPC filter works. Now that I released my more general blog post on the Windows firewall I thought I'd come back to a shorter post about the RPC filter itself. If you don't know the context, the Windows firewall has the ability to restrict access to RPC interfaces. This is interesting due to the renewed interest in all things RPC, especially the PetitPotam trick. For example you can block any access to the EFSRPC interfaces using the following script which you run with the netsh command.

rpc
filter
add rule layer=um actiontype=block
add condition field=if_uuid matchtype=equal data=c681d488-d850-11d0-8c52-00c04fd90f7e
add filter
add rule layer=um actiontype=block
add condition field=if_uuid matchtype=equal data=df1941c5-fe89-4e79-bf10-463657acf44d
add filter
quit

This script adds two rules which will block any calls on the RPC interfaces with UUIDs of c681d488-d850-11d0-8c52-00c04fd90f7e and df1941c5-fe89-4e79-bf10-463657acf44d. These correspond to the two EFSRPC interfaces.

How does this work within the context of the firewall? Do the kernel components of the Windows Filtering Platform have a built-in RPC protocol parser to block the connection? That'd be far too complex; instead everything is done in user mode by some special layers. If you use NtObjectManager's firewall Get-FwLayer command you can check for layers registered to run in user mode by filtering on the IsUser property.

PS> Get-FwLayer | Where-Object IsUser
KeyName                      Name
-------                      ----
FWPM_LAYER_RPC_PROXY_CONN    RPC Proxy Connect Layer
FWPM_LAYER_IPSEC_KM_DEMUX_V4 IPsec KM Demux v4 Layer
FWPM_LAYER_RPC_EP_ADD        RPC EP ADD Layer
FWPM_LAYER_KM_AUTHORIZATION  Keying Module Authorization Layer
FWPM_LAYER_IKEEXT_V4         IKE v4 Layer
FWPM_LAYER_IPSEC_V6          IPsec v6 Layer
FWPM_LAYER_IPSEC_V4          IPsec v4 Layer
FWPM_LAYER_IKEEXT_V6         IKE v6 Layer
FWPM_LAYER_RPC_UM            RPC UM Layer
FWPM_LAYER_RPC_PROXY_IF      RPC Proxy Interface Layer
FWPM_LAYER_RPC_EPMAP         RPC EPMAP Layer
FWPM_LAYER_IPSEC_KM_DEMUX_V6 IPsec KM Demux v6 Layer

In the output we can see 5 layers with RPC in the name of the layer. 
  • FWPM_LAYER_RPC_EP_ADD - Filter new endpoints created by a process.
  • FWPM_LAYER_RPC_EPMAP - Filter access to endpoint mapper information.
  • FWPM_LAYER_RPC_PROXY_CONN - Filter connections to the RPC proxy.
  • FWPM_LAYER_RPC_PROXY_IF - Filter interface calls through an RPC proxy.
  • FWPM_LAYER_RPC_UM - Filter interface calls to an RPC server
Each of these layers is potentially interesting, and you can add rules through netsh for all of them. But we'll just focus on how the FWPM_LAYER_RPC_UM layer works as that's the one the script introduced at the start works with. If you run the following command after adding the RPC filter rules you can view the newly created rules:

PS> Get-FwFilter -LayerKey FWPM_LAYER_RPC_UM -Sorted | Format-FwFilter
Name       : RPCFilter
Action Type: Block
Key        : d4354417-02fa-11ec-95da-00155d010a06
Id         : 78253
Description: RPC Filter
Layer      : FWPM_LAYER_RPC_UM
Sub Layer  : FWPM_SUBLAYER_UNIVERSAL
Flags      : Persistent
Weight     : 567453553048682496
Conditions :
FieldKeyName               MatchType Value
------------               --------- -----
FWPM_CONDITION_RPC_IF_UUID Equal     df1941c5-fe89-4e79-bf10-463657acf44d


Name       : RPCFilter
Action Type: Block
Key        : d4354416-02fa-11ec-95da-00155d010a06
Id         : 78252
Description: RPC Filter
Layer      : FWPM_LAYER_RPC_UM
Sub Layer  : FWPM_SUBLAYER_UNIVERSAL
Flags      : Persistent
Weight     : 567453553048682496
Conditions :
FieldKeyName               MatchType Value
------------               --------- -----
FWPM_CONDITION_RPC_IF_UUID Equal     c681d488-d850-11d0-8c52-00c04fd90f7e

If you've read my general blog post the output should make some sense. The FWPM_CONDITION_RPC_IF_UUID condition key is used to specify the UUID for the interface to match on. The FWPM_LAYER_RPC_UM layer has many possible fields to filter on, which you can query by inspecting the layer object's Fields property.

PS> (Get-FwLayer -Key FWPM_LAYER_RPC_UM).Fields

KeyName                              Type      DataType
-------                              ----      --------
FWPM_CONDITION_REMOTE_USER_TOKEN     RawData   TokenInformation
FWPM_CONDITION_RPC_IF_UUID           RawData   ByteArray16
FWPM_CONDITION_RPC_IF_VERSION        RawData   UInt16
FWPM_CONDITION_RPC_IF_FLAG           RawData   UInt32
FWPM_CONDITION_DCOM_APP_ID           RawData   ByteArray16
FWPM_CONDITION_IMAGE_NAME            RawData   ByteBlob
FWPM_CONDITION_RPC_PROTOCOL          RawData   UInt8
FWPM_CONDITION_RPC_AUTH_TYPE         RawData   UInt8
FWPM_CONDITION_RPC_AUTH_LEVEL        RawData   UInt8
FWPM_CONDITION_SEC_ENCRYPT_ALGORITHM RawData   UInt32
FWPM_CONDITION_SEC_KEY_SIZE          RawData   UInt32
FWPM_CONDITION_IP_LOCAL_ADDRESS_V4   IPAddress UInt32
FWPM_CONDITION_IP_LOCAL_ADDRESS_V6   IPAddress ByteArray16
FWPM_CONDITION_IP_LOCAL_PORT         RawData   UInt16
FWPM_CONDITION_PIPE                  RawData   ByteBlob
FWPM_CONDITION_IP_REMOTE_ADDRESS_V4  IPAddress UInt32
FWPM_CONDITION_IP_REMOTE_ADDRESS_V6  IPAddress ByteArray16

There are quite a few potential configuration options for the filter. You can filter based on the remote user token that's authenticated to the interface. Or you can filter based on the authentication level and type. This could allow you to protect an RPC interface so that all callers have to use Kerberos at the RPC_C_AUTHN_LEVEL_PKT_PRIVACY level. 

Anyway, configuring it is less important to us, you probably want to know how it works, as the first step to trying to find a way to bypass it is to know where this filter layer is processed (note, I've not found a bypass, but you never know). 

Perhaps unsurprisingly due to the complexity of the RPC protocol the filtering is implemented within the RPC server process through the RpcRtRemote extension DLL. Except for RPCSS this DLL isn't loaded by default. Instead it's only loaded if there exists a value for the WNF_RPCF_FWMAN_RUNNING WNF state. The following shows the state after adding the two RPC filter rules with netsh.

PS> $wnf = Get-NtWnf -Name 'WNF_RPCF_FWMAN_RUNNING'
PS> $wnf.QueryStateData()

Data ChangeStamp
---- -----------
{}             2

The RPC runtime sets up a subscription to load the DLL if the WNF value is ever changed. Once loaded the RPC runtime will register all current interfaces to check the firewall. The filter rules are checked when a call is made to the interface during the normal processing of the security callback. The runtime will invoke the FwFilter function inside RpcRtRemote, passing all the details about the RPC interface call. The filter call is only made for DCE/RPC protocols, so not ALPC. It also will only be called if the caller is remote. This is always the case if the call comes via TCP, but for named pipes it will only be called if the pipe was opened via SMB.

Here's where we can finally determine how the RPC filter is processed. The FwFilter function builds a list of firewall values corresponding to the list of fields for the FWPM_LAYER_RPC_UM layer and passes them to the FwpsClassifyUser0 API along with the numeric ID of the layer. This API will enumerate all filters for the layer and apply the condition checks returning the classification, e.g. block or permit. Based on this classification the RPC runtime can permit or refuse the call. 

In order for a filter to be accessible for classification the RPC server must have FWPM_ACTRL_OPEN access to the engine and FWPM_ACTRL_CLASSIFY access to the filter. By default the Everyone group has these access rights, however AppContainers and potentially other sandboxes do not. However, in general AppContainer processes don't tend to create privileged RPC servers, at least any which a remote attacker would find useful. You can check the access on various firewall objects using the Get-AccessibleFwObject command.

PS> $token = Get-NtToken -Filtered -Flags LuaToken
PS> Get-AccessibleFwObject -Token $token | Where-Object Name -eq RPCFilter

TokenId Access        Name
------- ------        ----
4ECF80  Classify|Open RPCFilter
4ECF80  Classify|Open RPCFilter

I hope this gives enough information for someone to dig into it further to see if there's any obvious bypass I missed. I'm sure there's probably some fun trick you could do to circumvent restrictions if you look hard enough :-)

Saturday 14 August 2021

How to secure a Windows RPC Server, and how not to.

The PetitPotam technique is still fresh in people's minds. While it's not directly an exploit it's a useful step to get unauthenticated NTLM from a privileged account to forward to something like the AD CS Web Enrollment service to compromise a Windows domain. Interestingly after Microsoft initially shrugged about fixing any of this they went and released a fix, although it seems to be insufficient at the time of writing.

While there's plenty of details about how to abuse the EFSRPC interface, there's little on why it's exploitable to begin with. I thought it'd be good to have a quick overview of how Windows RPC interfaces are secured and then by extension why it's possible to use the EFSRPC interface unauthenticated. 

Caveat: No doubt I might be missing other security checks in RPC, these are the main ones I know about :-)

RPC Server Security

The server security of RPC is one which has seemingly built up over time. Therefore there's various ways of doing it, and some ways are better than others. There are basically three approaches, which can be mixed and matched:
  1. Securing the endpoint
  2. Securing the interface
  3. Ad-hoc security
Let's take each one in turn to determine how each one secures the RPC server.

Securing the Endpoint

You register the endpoint that the RPC server will listen on using the RpcServerUseProtseqEp API. This API takes the type of endpoint, such as ncalrpc (ALPC), ncacn_np (named pipe) or ncacn_ip_tcp (TCP socket) and creates the listening endpoint. For example the following would create a named pipe endpoint called DEMO.

RpcServerUseProtseqEp(
    (RPC_WSTR)L"ncacn_np",           // Protocol sequence: named pipe.
    RPC_C_PROTSEQ_MAX_REQS_DEFAULT,  // Default backlog/maximum calls hint.
    (RPC_WSTR)L"\\pipe\\DEMO",       // Endpoint name.
    nullptr);                        // Optional security descriptor.

The final parameter is optional but represents a security descriptor (SD) you assign to the endpoint to limit who has access. This can only be enforced on ALPC and named pipes as something like a TCP socket doesn't (technically) have an access check when it's connected to. If you don't specify an SD then a default is assigned. For a named pipe the default DACL grants the following users write access:
  • Everyone
  • NT AUTHORITY\ANONYMOUS LOGON
  • SELF
Where SELF is the creating user's SID. This is a pretty permissive SD. One interesting thing about RPC endpoints is that they are multiplexed. You don't explicitly associate an endpoint with the RPC interface you want to access. Instead you can connect to any endpoint that the process has created. The end result is that if there's a less secure endpoint in the same process it might be possible to access an interface through that less secure endpoint. In general this makes relying on endpoint security risky, especially in processes which run multiple services, such as LSASS. In any case if you want to use a TCP endpoint you can't rely on the endpoint security as it doesn't exist.
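
If you do want to lock down an endpoint you can build an SD and pass it as that final parameter. Here's a minimal sketch (my own example, not code from any real service) which uses SDDL to only grant SYSTEM and administrators access to the named pipe:

// Minimal sketch: register a named pipe endpoint with an explicit SD so
// that only SYSTEM and BUILTIN\Administrators can open the pipe.
// Requires windows.h, sddl.h and rpc.h.
PSECURITY_DESCRIPTOR sd = nullptr;
if (ConvertStringSecurityDescriptorToSecurityDescriptorW(
        L"D:(A;;GA;;;SY)(A;;GA;;;BA)",   // Full access for SYSTEM and Administrators.
        SDDL_REVISION_1, &sd, nullptr)) {
    RPC_STATUS status = RpcServerUseProtseqEp(
        (RPC_WSTR)L"ncacn_np",
        RPC_C_PROTSEQ_MAX_REQS_DEFAULT,
        (RPC_WSTR)L"\\pipe\\DEMO",
        sd);                             // Endpoint security descriptor.
    LocalFree(sd);
}

Of course, as noted above, this only helps for ALPC and named pipes, and a less restrictive endpoint registered elsewhere in the same process can still undermine it.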

Securing the Interface

The next way of securing the RPC server is to secure the interface itself. You register the interface structure that was generated by MIDL using one of the following APIs:
  • RpcServerRegisterIf
  • RpcServerRegisterIfEx
  • RpcServerRegisterIf2
  • RpcServerRegisterIf3
  • RpcServerInterfaceGroupCreate
Each has a varying number of parameters, some of which determine the security of the interface. The latest APIs are RpcServerRegisterIf3 and RpcServerInterfaceGroupCreate which were introduced in Windows 8. The latter is just a way of registering multiple interfaces in one call so we'll just focus on the former. RpcServerRegisterIf3 has three parameters which affect security: SecurityDescriptor, IfCallback and Flags. 

The SecurityDescriptor parameter is the easiest to explain. It assigns an SD to the interface; when a call is made on that interface the caller's token is checked against the SD and access is only granted if the check passes. If no SD is specified a default is used which grants the following SIDs access (assuming a non-AppContainer process):
  • NT AUTHORITY\ANONYMOUS LOGON
  • Everyone
  • NT AUTHORITY\RESTRICTED
  • BUILTIN\Administrators
  • SELF
The token to use for the access check is based either on the client's authentication (we'll discuss this later) or the authentication for the endpoint. ALPC and named pipes are authenticated transports, whereas TCP is not. When using an unauthenticated transport the access check will be against the anonymous token. This means if the SD does not contain an allow ACE for ANONYMOUS LOGON it will be blocked.
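
As a rough sketch (a hypothetical example; the interface handle name is just the MIDL convention for a made-up DEMO interface), registering an interface with an explicit SD so that unauthenticated callers are excluded might look like this:

// Hypothetical sketch: register an interface with an explicit SD which only
// grants access to authenticated users and administrators, removing the
// default ANONYMOUS LOGON grant.
PSECURITY_DESCRIPTOR sd = nullptr;
ConvertStringSecurityDescriptorToSecurityDescriptorW(
    L"D:(A;;GA;;;AU)(A;;GA;;;BA)",  // Authenticated Users and Administrators.
    SDDL_REVISION_1, &sd, nullptr);

RPC_STATUS status = RpcServerRegisterIf3(
    DEMO_v1_0_s_ifspec,             // MIDL-generated interface handle (made-up name).
    nullptr,                        // Default manager type UUID.
    nullptr,                        // Default entry point vector.
    0,                              // No flags.
    RPC_C_LISTEN_MAX_CALLS_DEFAULT,
    (unsigned int)-1,               // No limit on incoming data size.
    nullptr,                        // No security callback.
    sd);                            // Interface security descriptor.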

Note, due to a quirk of the access check process the RPC runtime grants access if the caller has any access granted, not a specific access right. What this means is that if the caller is considered the owner, which is normally set to the creating user's SID, they might only be granted READ_CONTROL but that's sufficient to bypass the check. This could also be useful if the caller has SeTakeOwnershipPrivilege or similar as it'd be possible to generically bypass the interface SD check (though of course that privilege is dangerous in its own right).

The second parameter, IfCallback, takes an RPC_IF_CALLBACK function pointer. This callback function will be invoked when a call is made to the interface, although it will be called after the SD is checked. If the callback function returns RPC_S_OK then the call will be allowed, anything else will deny the call. The callback gets a pointer to the interface and the binding handle and can do various checks to determine if the caller is allowed to access the interface.

A common check is for the client's authentication level. The client can specify the level to use when connecting to the server using the RpcBindingSetAuthInfo API, however the server can't directly specify the minimum authentication level it accepts. Instead the callback can use the RpcBindingInqAuthClient API to determine what the client used and grant or deny access based on that. The authentication levels we typically care about are as follows:
  • RPC_C_AUTHN_LEVEL_NONE - No authentication
  • RPC_C_AUTHN_LEVEL_CONNECT - Authentication at connect time, but not per-call.
  • RPC_C_AUTHN_LEVEL_PKT_INTEGRITY - Authentication at connect time, each call has integrity protection.
  • RPC_C_AUTHN_LEVEL_PKT_PRIVACY - Authentication at connect time, each call is encrypted and has integrity protection.
The authentication is implemented using a defined authentication service, such as NTLM or Kerberos, though that doesn't really matter for our purposes. Also note that this is only relevant for RPC services available over remote protocols such as named pipes or TCP. If the RPC server listens on ALPC then it's assumed to always be at RPC_C_AUTHN_LEVEL_PKT_PRIVACY. Another check the server could do is on the protocol sequence the client used; this would allow it to reject access via TCP but permit named pipes.
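
For example, a security callback which only permits callers that authenticated at packet privacy or better might look something like this sketch (my own example, assuming the interface is only exposed over remote protocols):

// Sketch of a security callback which rejects callers below packet privacy.
// The Context parameter is the client binding handle for the call.
RPC_STATUS RPC_ENTRY SecurityCallback(RPC_IF_HANDLE Interface, void* Context) {
    UNREFERENCED_PARAMETER(Interface);
    unsigned long authn_level = 0;
    RPC_STATUS status = RpcBindingInqAuthClient(
        Context,     // Client binding handle.
        nullptr,     // Don't need the privileges handle.
        nullptr,     // Don't need the server principal name.
        &authn_level,
        nullptr,     // Don't need the authentication service.
        nullptr);    // Don't need the authorization service.
    if (status != RPC_S_OK || authn_level < RPC_C_AUTHN_LEVEL_PKT_PRIVACY)
        return RPC_S_ACCESS_DENIED;
    return RPC_S_OK;
}

Bear in mind, as discussed below, that unless the RPC_IF_ALLOW_CALLBACKS_WITH_NO_AUTH flag is set an unauthenticated caller is rejected before the callback is even invoked.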

The final parameter is the flags. The flag most obviously related to security is RPC_IF_ALLOW_SECURE_ONLY (0x8). This blocks access to the interface if the current authentication level is RPC_C_AUTHN_LEVEL_NONE. This means the caller must be able to authenticate to the server using one of the permitted authentication services. It's not sufficient to use a NULL session, at least on any modern version of Windows. Of course this doesn't say much about who has authenticated; a server might still want to check the caller's identity.

The other important flag is RPC_IF_ALLOW_CALLBACKS_WITH_NO_AUTH (0x10). If the server specifies a security callback and this flag is not set then any unauthenticated client will be automatically rejected. 

If this wasn't complex enough there's at least one other related setting, which applies system-wide and determines what types of client can access which RPC servers: the Restrict Unauthenticated RPC Clients group policy. By default this is set to None if the RPC server is running on a server SKU of Windows and Authenticated on a client SKU. 

In general what this policy does is limit whether a client can use an unauthenticated transport such as TCP when they haven't also separately authenticated at a valid authentication level. When set to None RPC servers can be accessed via an unauthenticated transport subject to any other restrictions the interface is registered with. If set to Authenticated then calls over unauthenticated transports are rejected, unless the RPC_IF_ALLOW_CALLBACKS_WITH_NO_AUTH flag is set for the interface or the client has authenticated separately. There's a third option, Authenticated without exceptions, which will block the call in all circumstances if the caller isn't using an authenticated transport. 
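
As far as I know this policy is backed by the RestrictRemoteClients DWORD value under the RPC policies key, so you can check what a machine is configured with. A small sketch, assuming that key path (0 = None, 1 = Authenticated, 2 = Authenticated without exceptions):

// Query the Restrict Unauthenticated RPC Clients policy. The key path and
// value meanings are assumptions from memory; if the value is missing the
// SKU default applies (None on server SKUs, Authenticated on client SKUs).
DWORD restrict_clients = 0;
DWORD size = sizeof(restrict_clients);
LSTATUS status = RegGetValueW(
    HKEY_LOCAL_MACHINE,
    L"SOFTWARE\\Policies\\Microsoft\\Windows NT\\RPC",
    L"RestrictRemoteClients",
    RRF_RT_REG_DWORD,
    nullptr,
    &restrict_clients,
    &size);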

Ad-hoc Security

The final types of checks are basically anything else the server does to verify the caller. A common approach would be to perform a check within a specific function on the interface. For example, a server could generally allow unauthenticated clients, except when calling a method to read an important secret value. At that point it could insert an authentication level check to ensure the client has authenticated at RPC_C_AUTHN_LEVEL_PKT_PRIVACY so that the secret will be encrypted when returned to the client. 
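
A sketch of what such an ad-hoc check might look like inside a server method (a hypothetical method, not from any real interface):

// Hypothetical server method which only returns its secret if the caller
// authenticated at packet privacy, so the result is encrypted on the wire.
DWORD DemoGetSecret(handle_t hBinding, DWORD* Secret) {
    unsigned long authn_level = 0;
    RPC_STATUS status = RpcBindingInqAuthClient(
        hBinding, nullptr, nullptr, &authn_level, nullptr, nullptr);
    if (status != RPC_S_OK || authn_level < RPC_C_AUTHN_LEVEL_PKT_PRIVACY)
        return ERROR_ACCESS_DENIED;
    *Secret = 0;  // Placeholder for the real secret value.
    return ERROR_SUCCESS;
}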

Ultimately you'll have to check each function you're interested in to determine what, if any, security checks are in place. As with all ad-hoc checks it's possible that there's a logic bug in there which can be exploited to bypass the security restrictions.

Digging into EFSRPC

Okay, that covers the basics of how an RPC server is secured. Let's look at the specific example of the EFSRPC server abused by PetitPotam. Oddly there are two implementations of the RPC server, one in efslsaext.dll with the interface UUID of c681d488-d850-11d0-8c52-00c04fd90f7e and one in efssvc.dll with the interface UUID of df1941c5-fe89-4e79-bf10-463657acf44d. The one in efslsaext.dll is the one which is accessible unauthenticated, so let's start there. We'll go through the three approaches to securing the server to determine what it's doing.

First, the server does not register any of its own protocol sequences, with or without SDs. What this means is that who can call the RPC server depends on what other endpoints have been registered by the hosting process, which in this case is LSASS.

Second, checking for calls to one of the RPC server interface registration functions, there's a single call to RpcServerRegisterIfEx in InitializeLsaExtension. This API allows the caller to specify a security callback but not an SD, however in this case it doesn't specify any security callback. The InitializeLsaExtension function also does not specify either of the two security flags (it sets RPC_IF_AUTOLISTEN which doesn't have any security impact). This means that in general any authenticated caller is permitted.
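
Based on that, the registration presumably looks something like the following rough reconstruction (for illustration only, not the decompiled code; the interface handle name is made up):

// Rough illustration of an interface registration with no SD, no security
// callback and no security flags: any caller the endpoint lets in is allowed.
RpcServerRegisterIfEx(
    efslsaext_v1_0_s_ifspec,        // Hypothetical MIDL-generated handle.
    nullptr,                        // Default manager type UUID.
    nullptr,                        // Default entry point vector.
    RPC_IF_AUTOLISTEN,              // Auto-listen, no security impact.
    RPC_C_LISTEN_MAX_CALLS_DEFAULT,
    nullptr);                       // No security callback.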

Finally, from an ad-hoc security perspective all the main functions such as EfsRpcOpenFileRaw call the function EfsRpcpValidateClientCall which looks something like the following (error check removed).

void EfsRpcpValidateClientCall(RPC_BINDING_HANDLE Binding, 
                               PBOOL ValidClient) {
  unsigned int ClientLocalFlag;
  // A NULL binding handle refers to the call currently being dispatched.
  I_RpcBindingIsClientLocal(NULL, &ClientLocalFlag);
  if (!ClientLocalFlag) {
    RPC_WSTR StringBinding;
    RPC_WSTR Protseq;
    RpcBindingToStringBindingW(Binding, &StringBinding);
    RpcStringBindingParseW(StringBinding, NULL, &Protseq, 
                           NULL, NULL, NULL);
    if (CompareStringW(LOCALE_INVARIANT, NORM_IGNORECASE, 
        Protseq, -1, L"ncacn_np", -1) == CSTR_EQUAL) {
      *ValidClient = TRUE;
    }
  }
}

The ValidClient parameter will only be set to TRUE if the caller used the named pipe transport and the pipe wasn't opened locally, i.e. the named pipe was opened over SMB. This is basically all the security that's being checked. Therefore the only security that can be enforced is limited by who's allowed to connect to a suitable named pipe endpoint.

At a minimum LSASS registers the \pipe\lsass named pipe endpoint. When it's set up in lsasrv.dll an SD is defined for the named pipe that grants the following users access:
  • Everyone
  • NT AUTHORITY\ANONYMOUS LOGON
  • BUILTIN\Administrators
Therefore in theory the anonymous user has access to the pipe, and there are no other security checks in place in the interface registration to stop them calling it. Now typically anonymous access isn't granted by default to named pipes via a NULL session, however domain controllers have an exception to this policy through the configured Network access: Named Pipes that can be accessed anonymously security option. For DCs this allows the lsarpc, samr and netlogon pipes, which are all aliases for the lsass pipe, to be accessed anonymously.
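
To my knowledge that security option is backed by the NullSessionPipes list in the LanmanServer service parameters, so you can inspect which pipes a server permits anonymously. A small sketch, assuming that key path:

// Read the list of named pipes which can be accessed via a NULL session.
// The key path is an assumption from memory; on DCs the list typically
// includes lsarpc, samr and netlogon.
WCHAR pipes[1024] = {};
DWORD size = sizeof(pipes);
RegGetValueW(
    HKEY_LOCAL_MACHINE,
    L"SYSTEM\\CurrentControlSet\\Services\\LanmanServer\\Parameters",
    L"NullSessionPipes",
    RRF_RT_REG_MULTI_SZ,
    nullptr,
    pipes,
    &size);
// 'pipes' now contains a sequence of NUL-terminated pipe names.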

You can now understand why the EFS RPC server is accessible anonymously on DCs. How does the other EFS RPC server block access? In that case it specifies an interface SD to limit access to only the Everyone group and BUILTIN\Administrators. By default the anonymous user isn't a member of Everyone (although it can be configured as such), therefore this blocks access even if you connected via the lsass pipe.

The Fix is In

What did Microsoft do to fix PetitPotam? One thing they definitely didn't do is change the interface registration or the named pipe endpoint security. Instead they added an additional ad-hoc check to EfsRpcOpenFileRaw. Specifically they added the following code:

DWORD AllowOpenRawDL = 0;
DWORD cbData = sizeof(AllowOpenRawDL);
RegGetValueW(
  HKEY_LOCAL_MACHINE,
  L"SYSTEM\\CurrentControlSet\\Services\\EFS",
  L"AllowOpenRawDL",
  RRF_RT_REG_DWORD | RRF_ZEROONFAILURE,
  NULL,
  &AllowOpenRawDL,
  &cbData);
// EfsRpcpValidateClientCall presumably returns an error code, 0 on success
// (the earlier listing had the error handling removed).
if (AllowOpenRawDL == 1 && 
    !EfsRpcpValidateClientCall(hBinding, &ValidClient) && ValidClient) {
  // Call allowed.
}

Basically unless the AllowOpenRawDL registry value is set to 1 the call is blocked entirely regardless of the authenticating client. This seems to be a perfectly valid fix, except that EfsRpcOpenFileRaw isn't the only function usable to start an NTLM authentication session. As pointed out by Lee Christensen you can also do it via EfsRpcEncryptFileSrv, EfsRpcQueryUsersOnFile or others. Therefore, as no other changes were put in place, these other functions are just as accessible to unauthenticated callers as the original.

It's really unclear how Microsoft missed this, but I guess they might have been blinded by the fact that they were actually fixing something they'd been adamant was a configuration issue that sysadmins had to deal with. 

UPDATE 2021/08/17: It's worth noting that while you can access the other functions unauthenticated it seems any network access is done using the "authenticated" caller, i.e. the ANONYMOUS user, so it's probably not that useful. The point of this blog is not about abusing EFSRPC but why it's abusable :-)

Anyway I hope that explains why PetitPotam works unauthenticated (props to topotam77 for the find) and might give you some insight into how you can determine what RPC servers might be accessible going forward.