Using the VMware Fusion GDB stub for kernel debugging with LLDB

In a previous post I discussed kernel debugging with VMware Fusion and LLDB. In that approach we connected LLDB to the kernel via the Kernel Debugging Protocol (KDP). That method works thanks to a stub implemented in the (target) kernel itself. One drawback we discussed was not being able to halt the kernel execution from the debugger, instead requiring a slightly cumbersome keyboard shortcut to generate an NMI on the target VM.

After publishing the article I received some great feedback, including a tweet from Ryan Govostes:

VMware Fusion has a GDB stub built-in, which lldb can talk to if you load a target definitions file.

To be fair, I didn’t have a clear idea of what this meant exactly when I first read it, but since it sounded pretty interesting I started doing some research.

I found a great post from snare that explains how to use GDB to connect to the remote debug stub in VMware Fusion and debug the target kernel from the host machine.

I will briefly discuss the approach here and then show how we can instead use LLDB to connect to the remote.

GDB stub in VMware Fusion

It turns out that VMware Fusion implements a GDB stub. I don’t think it is a documented feature (all mentions I’ve found about it were from users in the VMware forums) but it can be enabled by setting a preference. Each VM’s .vmwarevm package contains a .vmx config file that can be edited (make sure that the VM is not running while you edit it).

Open it in a text editor and add the following line:

# If you are debugging a 32-bit machine use `guest32`
debugStub.listen.guest64 = "TRUE"

With this in place and after rebooting, the VM will listen for connections on port 8864 (8832 if you’re using guest32) on localhost.

If you wanted to connect from another machine you could use a different option instead and would need to connect to the IP used by the VM:

# If you are debugging a 32-bit machine use `guest32`
debugStub.listen.guest64.remote = "TRUE"

For our use case we will simply connect to localhost so no need for the remote part.
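
Before going further, it can be worth checking from the host that the stub is actually listening once the VM has booted. A quick sanity check with nc, using the port described above, does the trick:

# Should report that the connection to port 8864 on localhost succeeded
damien$ nc -vz localhost 8864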

GDB debugging stub

Before explaining how to connect from GDB, let’s quickly discuss what the GDB stub is.

In order to set up communication between two hosts, we need (among other things) a transmission protocol and an application protocol that both client and server can understand. Then, obviously, both server and client need code that is able to send, receive and interpret the packets that come through.

This whole system is implemented as GDB Remote and consists of four main parts:

  • TCP as the transmission protocol (KDP on the other hand uses UDP).
  • The Remote Serial Protocol as the application protocol. It is a well-documented protocol and one rarely needs to know the details of it.
  • The client side of the connection is GDB and, as expected, knows how to connect to the remote and understands the Remote Serial Protocol to send and receive packets.
  • The server side of the connection is the tricky part since it’s the target (guest) system and rarely has any knowledge of GDB or how to act as a remote out of the box. In order for the debugged program to allow a GDB connection, one would use one of these two solutions:
    1. Using gdbserver, which is a control program for Unix-like systems that allows you to connect your program with a remote GDB. It can be a good option if you have little or no control over the target environment. The docs explain gdbserver in much more detail.
    2. Implementing the GDB debugging stub on the target. By doing so a program can itself implement the target side of the communication protocol. The official docs have a lot more info if you’re interested in the particular implementation.

In the case of VMware Fusion, a full GDB remote stub is implemented by the virtual machine and can be enabled by setting the option described above, allowing a remote GDB session to connect to the VM.
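
To make the Remote Serial Protocol a little less abstract, here is a minimal sketch of how a packet is framed on the wire: the payload goes between a $ and a #, followed by a two-digit hex checksum computed as the modulo-256 sum of the payload bytes. You never need to do this by hand (GDB and LLDB take care of it), and the rsp_frame helper below is just a name I made up for the sketch.

#include <stdio.h>

/* Frame a GDB Remote Serial Protocol packet as $<payload>#<checksum>,
 * where <checksum> is the modulo-256 sum of the payload bytes in hex. */
static void rsp_frame(const char *payload, char *out, size_t out_size)
{
    unsigned char checksum = 0;
    for (const char *p = payload; *p != '\0'; p++) {
        checksum += (unsigned char)*p;
    }
    snprintf(out, out_size, "$%s#%02x", payload, checksum);
}

int main(void)
{
    char packet[128];
    rsp_frame("qSupported", packet, sizeof(packet));
    printf("%s\n", packet); /* prints "$qSupported#37" */
    return 0;
}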

GDB Remote

With the debugStub.listen.guest64 option set and the VM rebooted, we can start a GDB session on the host machine and attempt to connect to the VM.

(gdb) file /Library/Developer/KDKs/KDK_10.10.5_14F27.kdk/System/Library/Kernels/kernel.development
Reading symbols from /Library/Developer/KDKs/KDK_10.10.5_14F27.kdk/System/Library/Kernels/kernel.development...Reading symbols from /Library/Developer/KDKs/KDK_10.10.5_14F27.kdk/System/Library/Kernels/kernel.development.dSYM/Contents/Resources/DWARF/kernel.development...
done.
(gdb) target remote localhost:8864
Remote debugging using localhost:8864
0xffffff800f9f1e52 in ?? ()

And at this point we are connected to the remote through the debug stub and we can do anything in the debugger (forget about the missing symbols here, I haven’t looked too much into it). After continuing, one can stop the kernel execution by doing ^c in the debugger as usual.

However, I had to install GDB on my host just to try this out (GDB hasn’t shipped with OS X since Mavericks) and I’d really like to use LLDB wherever I can since it’s what I’m most familiar with nowadays.

Connecting LLDB to a GDB remote stub

LLDB actually has support for connecting to a GDB remote out of the box with the gdb-remote command. To quote the LLDB docs:

To enable remote debugging, LLDB employs a client-server architecture. The client part runs on the local system and the remote system runs the server. The client and server communicate using the gdb-remote protocol, usually transported over TCP/IP.

In particular, the LLDB-specific extensions are discussed in a fantastic document in the LLDB repo.

LLDB has added new GDB server packets to better support multi-threaded and remote debugging. Why? Normally you need to start the correct GDB and the correct GDB server when debugging. If you have a mismatch, then things go wrong very quickly. LLDB makes extensive use of the GDB remote protocol and we wanted to make sure that the experience was a bit more dynamic where we can discover information about a remote target without having to know anything up front. […] Again with GDB, both sides pre-agree on how the registers will look (how many, their register number, name and offsets). We prefer to be able to dynamically determine what kind of architecture, OS and vendor we are debugging, as well as how things are laid out when it comes to the thread register contexts. Below are the details on the new packets we have added above and beyond the standard GDB remote protocol packets.

So we should be able to just connect to the remote system from LLDB? Let’s find out.

(lldb) file /Library/Developer/KDKs/KDK_10.10.5_14F27.kdk/System/Library/Kernels/kernel.development
Current executable set to '/Library/Developer/KDKs/KDK_10.10.5_14F27.kdk/System/Library/Kernels/kernel.development' (x86_64).
(lldb) gdb-remote 8864
Kernel UUID: C75BDFDD-9F27-3694-BB80-73CF991C13D8
Load Address: 0xffffff800f800000
Kernel slid 0xf600000 in memory.
Loaded kernel file /Library/Developer/KDKs/KDK_10.10.5_14F27.kdk/System/Library/Kernels/kernel.development
Loading 87 kext modules ....................................................................................... done.
Target arch: x86_64
Connected to live debugserver or arm core. Will associate on-core threads to registers reported by server.
Process 1 stopped
* thread #3: tid = 0x0066, name = '0xffffff801c91d9c0', queue = 'cpu-0', stop reason = signal SIGTRAP
    frame #0: 0xffffffffffffffff

Cool! So we were able to connect to the GDB stub on the VM. Let’s try and get a backtrace and see how things look.

(lldb) thread backtrace
* thread #3: tid = 0x0066, name = '0xffffff801c91d9c0', queue = 'cpu-0', stop reason = signal SIGTRAP
  frame #0: 0xffffffffffffffff

Hmm, that’s not a lot of information. Also, the only frame being at address 0xffffffffffffffff doesn’t sound right either.

LLDB target definition

Remember that Ryan’s tweet mentioned a target definitions file? I did some more research and found another tweet, from Shantonu Sen, that pointed me to the right approach.

We can download the x86_64_target_definition.py file and use it as our plugin.process.gdb-remote.target-definition-file in LLDB’s settings.

# You can alternatively add this to the `.lldbinit` so that it's loaded whenever lldb starts
(lldb) settings set plugin.process.gdb-remote.target-definition-file /path/to/x86_64_target_definition.py
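
You can double-check that LLDB picked up the setting (it should echo back the path you set):

(lldb) settings show plugin.process.gdb-remote.target-definition-file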

The file has a great comment explaining what the target definition does and why it is necessary.

This file can be used with the following setting: plugin.process.gdb-remote.target-definition-file

This setting should be used when you are trying to connect to a remote GDB server that doesn’t support any of the register discovery packets that LLDB normally uses.

Why is this necessary? LLDB doesn’t require a new build of LLDB that targets each new architecture you will debug with. Instead, all architectures are supported and LLDB relies on extra GDB server packets to discover the target we are connecting to so that it can show the right registers for each target. This allows the GDB server to change and add new registers without requiring a new LLDB build just so we can see new registers.

This file implements the x86_64 registers for the darwin version of GDB and allows you to connect to servers that use this register set.

Let’s try to use gdb-remote after setting the target definition file.

(lldb) settings set plugin.process.gdb-remote.target-definition-file /path/to/x86_64_target_definition.py
(lldb) file /Library/Developer/KDKs/KDK_10.10.5_14F27.kdk/System/Library/Kernels/kernel.development
Current executable set to '/Library/Developer/KDKs/KDK_10.10.5_14F27.kdk/System/Library/Kernels/kernel.development' (x86_64).
(lldb) gdb-remote 8864
Kernel UUID: C75BDFDD-9F27-3694-BB80-73CF991C13D8
Load Address: 0xffffff800f800000
Kernel slid 0xf600000 in memory.
Loaded kernel file /Library/Developer/KDKs/KDK_10.10.5_14F27.kdk/System/Library/Kernels/kernel.development
Loading 87 kext modules ....................................................................................... done.
Target arch: x86_64
Connected to live debugserver or arm core. Will associate on-core threads to registers reported by server.
Process 1 stopped
* thread #3: tid = 0x0066, 0xffffff800f9f1e52 kernel.development`machine_idle + 370 at pmCPU.c:174, name = '0xffffff801c91d9c0', queue = 'cpu-0', stop reason = signal SIGTRAP
    frame #0: 0xffffff800f9f1e52 kernel.development`machine_idle + 370 at pmCPU.c:174

It already looks better. Let’s now try to get a backtrace:

(lldb) thread backtrace
* thread #3: tid = 0x0066, 0xffffff800f9f1e52 kernel.development`machine_idle + 370 at pmCPU.c:174, name = '0xffffff801c91d9c0', queue = 'cpu-0', stop reason = signal SIGTRAP
  * frame #0: 0xffffff800f9f1e52 kernel.development`machine_idle + 370 at pmCPU.c:174
    frame #1: 0xffffff800f8fddb3 kernel.development`processor_idle(thread=0x0000000000000000, processor=0xffffff80100ef658) + 179 at sched_prim.c:4605
    frame #2: 0xffffff800f8fe300 kernel.development`idle_thread + 32 at sched_prim.c:4729
    frame #3: 0xffffff800f9ea347 kernel.development`call_continuation + 23

Perfect! We have a complete symbolicated trace and the addresses now look correct.

In practice

To make sure that things are working as expected, let’s set a breakpoint on forkproc (this function is used to create a new process structure given a parent process and is called from the fork syscall) and make sure that our breakpoint is hit and that we can inspect the frame arguments.

(lldb) breakpoint set --name forkproc
Breakpoint 1: where = kernel.development`forkproc + 20 at cpu_data.h:330, address = 0xffffff8006da6414
(lldb) continue
Process 1 resuming
Process 1 stopped
* thread #6: tid = 0x0f4c, 0xffffff8006da6414 kernel.development`forkproc(parent_proc=0xffffff8013f37b00) + 20 at cpu_data.h:330, name = '0xffffff8013e4f9c0', queue = 'cpu-1', stop reason = breakpoint 1.1
    frame #0: 0xffffff8006da6414 kernel.development`forkproc(parent_proc=0xffffff8013f37b00) + 20 at cpu_data.h:330
(lldb) thread backtrace
* thread #6: tid = 0x0f4c, 0xffffff8006da6414 kernel.development`forkproc(parent_proc=0xffffff8013f37b00) + 20 at cpu_data.h:330, name = '0xffffff8013e4f9c0', queue = 'cpu-1', stop reason = breakpoint 1.1
  * frame #0: 0xffffff8006da6414 kernel.development`forkproc(parent_proc=0xffffff8013f37b00) + 20 at cpu_data.h:330
    frame #1: 0xffffff8006da6d69 kernel.development`cloneproc(parent_task=0xffffff80135c7718, parent_coalition=0xffffff80135c4400, parent_proc=0xffffff8013f37b00, inherit_memory=0, memstat_internal=0) + 41 at kern_fork.c:977
    frame #2: 0xffffff8006da6038 kernel.development`fork1(parent_proc=0xffffff8013f37b00, child_threadp=0xffffff8014613ac0, kind=<unavailable>, coalition=<unavailable>) + 328 at kern_fork.c:554
    frame #3: 0xffffff8006d9b441 kernel.development`posix_spawn(ap=0xffffff8013f37b00, uap=<unavailable>, retval=0xffffff80135d0040) + 1937 at kern_exec.c:2078
    frame #4: 0xffffff8006e2c0c1 kernel.development`unix_syscall64(state=0xffffff80135db540) + 753 at systemcalls.c:368
    frame #5: 0xffffff8006a0e656 kernel.development`hndl_unix_scall64 + 22
(lldb) p *(struct proc *)$rdi
    (struct proc) $1 = {
      p_list = {
        le_next = 0xffffff80177e6cf0
        le_prev = 0xffffff801610d840
      }
      p_pid = 275
      task = 0xffffff801776cd08
      ...

Everything is working as expected: our breakpoint is hit, we can get a complete backtrace, and we can print the first argument (a pointer to the parent process structure that we want to fork from; I’ve cut the output, the proc struct is huge).
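
As a last touch, breakpoint conditions work over this transport too. For instance, here is a sketch that reuses the cast from above to only stop when the parent process has a particular pid; the expression is an assumption on my part, so adjust it to whatever you are chasing:

# Only stop when the forking process's parent is pid 1 (launchd)
(lldb) breakpoint set --name forkproc --condition '((struct proc *)$rdi)->p_pid == 1'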

Conclusion

We showed an alternative approach to remote debugging with VMware Fusion and LLDB. This method has some advantages over KDP since it lets us interrupt the execution of the program from the debugger at any time and doesn’t require us to generate an NMI from the target VM to give control to the debugger on the host.

I’ve read that this method is also faster but I haven’t noticed a major difference in my testing so far. I’m sure heavy use of both methods will provide much more insight in that regard.

Thanks to Ryan Govostes for the idea, snare for the great post, Shantonu Sen for the target definition solution and VMware for making an awesome product.

Kernel debugging with LLDB and VMware Fusion

Being able to use LLDB to debug anything on my Mac has been the basis of my job for the last few years. Regardless of the particular use case – to debug my own program in Xcode or attach to another process – being able to set up breakpoints and inspect a program’s call stack frames and memory at runtime is just invaluable.

However, there’s one case where one cannot easily attach a debugger at any time: the kernel. Even if you could attach to the kernel on your local machine and halt its execution while you’re having a look around, the debugger wouldn’t be able to do much since it relies on the kernel to do, well anything. An alternative approach would be to use printf or IOLog and call it a day but that’s just not how I roll. If I need to debug something, I want a debugger.

In this article, we will discuss how one can set up a VMware Fusion virtual machine (VM) and use LLDB to do remote kernel debugging. Note that this approach could be used to debug any process remotely – which can be very useful for system critical processes such as launchd – but in this article we will mostly focus on the XNU kernel.

Setup

Setting up your environment for kernel debugging isn’t too hard but it can seem daunting at first since not much documentation is available.

In the old days, one needed two physical machines connected with an ethernet or firewire cable. Nowadays, however, one can easily install OS X on a VMware Fusion virtual machine. Since using multiple OS X systems on a single machine is pretty cool, we will set up the VM and discuss how to connect the debugger from the host.

I will assume that you are using VMware Fusion 7 and OS X 10.10.5. All this should very likely work with previous and later versions of VMware Fusion (or other virtualization products such as Parallels or VirtualBox, though I’m not sure since I’ve never used them) and OS X. It’s important to note, however, that El Capitan introduces System Integrity Protection, which makes some breaking changes with respect to how boot-args are set. I will update this article when El Capitan ships and the updated approach is final.

On the virtual machine

Install OS X 10.10

You will need to install the same version of OS X on the host system and the virtual machine. This is not technically required but it will make things easier. I’m currently using 10.10.5 but any version of Yosemite should work in the same way.

Install the Kernel Debug Kit

You should install the Kernel Debug Kit (KDK) from the Apple Developer Downloads page. Make sure to pick the KDK that matches the version of OS X that you have installed. The kit is a simple installer package that you should run. It will install some components under /Library/Developer/KDKs.

If you take a look through this directory after installing, you should see something like this:

/Library/Developer/KDKs
    KDK_10.10.5_14F27.kdk
        ReadMe.html
        System
            Library
                Extensions
                    AppleUSBAudio.kext
                    AppleUSBAudio.kext.dSYM
                    ...
                Kernels
                    kernel.debug
                    kernel.debug.dSYM
                    ...

You now have a copy of the kernel itself and of each kernel extension that Apple ships with the system, including the symbols for each of these executables. There are also development and debug versions of the kernel; we will use the development one in the VM.

The ReadMe.html file also has a ton of information about the KDK so make sure you at least skim through it.

You will need the KDK to be installed on both the host and the VM:

  • It is needed on the VM to install the development version of the kernel
  • It is needed on the host so that the debugger can use it as the target and resolve symbols

Install the development kernel

On OS X 10.10, the kernel executable is located under /System/Library/Kernels.

The KDK comes with a development and debug version of the kernel. To quote the KDK ReadMe.html docs:

The OS X Yosemite Kernel Debug Kit includes the DEVELOPMENT and DEBUG kernel builds. These both have additional assertions and error checking compared to the RELEASE kernel. The DEVELOPMENT kernel can be used for every-day use and has minimal performance overhead, while the DEBUG kernel has much more error checking.

It’s not technically required to install and boot into the development kernel to do kernel debugging (since the KDK has the symbols for the shipping kernel too and, as far as I can tell, the release kernel ships with the kdp_* symbols required for remote debugging) but we could definitely use the additional assertions and error checking so we will use it.

In order to install a distinct build of the kernel, there are two steps to take. We will discuss the boot-args changes in a following section, so for now we only need to copy the kernel to the right location.

To do so, you will need to copy the kernel.development executable located under the KDK install directory to /System/Library/Kernels:

# `KDK_10.10.5_14F27.kdk` is your current KDK install
damien$ sudo cp /Library/Developer/KDKs/KDK_10.10.5_14F27.kdk/System/Library/Kernels/kernel.development /System/Library/Kernels

With this in place, your system can boot into the development kernel, assuming it’s told how to do so.
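
You can quickly confirm that the copy landed next to the release kernel:

# Both `kernel` and `kernel.development` should now be listed
damien$ ls /System/Library/Kernels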

Update the NVRAM boot-args

The non-volatile random-access memory (NVRAM) is memory that retains its information when power is turned off. In particular, it can contain boot arguments that the system checks when booting. On OS X, its variables can be queried and manipulated by using the nvram command line utility. Note that this is drastically changing with OS X El Capitan but as far as Yosemite is concerned, nvram is the right tool to change boot arguments.

In order to set up our VM for debugging we will need to update the boot arguments by running the following command:

damien$ sudo nvram boot-args="debug=0x141 kext-dev-mode=1 kcsuffix=development pmuflags=1 -v"

where each boot argument does a very specific thing:

  • debug=0x141 enables the debug mode and sets a few debug options such as ensuring that the system on the VM waits for a remote debugger connection when booting. The Kernel Programming Guide has more info about the various options. Here we are using (DB_HALT | DB_ARP | DB_LOG_PI_SCRN).
  • kext-dev-mode=1 allows us to load unsigned kexts. This is not required if you’re only tweaking the kernel itself but if you’re developing a kernel extension and don’t have a kext-enabled Developer ID certificate to sign it with, this will allow you to load your kext while debugging.
  • kcsuffix=development allows us to boot with the development kernel that we previously copied on the system (change to kcsuffix=debug to boot the debug one).
  • pmuflags=1 disables the watchdog timer. The Kernel Programming Guide has much more info about this option.
  • -v enables the kernel verbose mode that will be useful when debugging.
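
Before moving on, it doesn’t hurt to read the variable back and make sure the arguments were stored as expected:

# Should echo back the arguments we just set
damien$ nvram boot-args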

Retrieve the VM network configuration info (Optional)

In order to connect the debugger to the VM, we will need some info about its network configuration. We will need to change a few things on the host system but on the VM we only need to retrieve the network interface parameters and note them somewhere.

damien$ ifconfig
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
  options=b<RXCSUM,TXCSUM,VLAN_HWTAGGING>
  ether 00:0d:29:56:8a:70
  inet6 fe80::20c:29ff:fe56:8a70%en0 prefixlen 64 scopeid 0x4
  inet 192.168.156.140 netmask 0xffffff00 broadcast 192.168.156.255
  nd6 options=1<PERFORMNUD>
  media: autoselect (1000baseT <full-duplex>)
  status: active
...

Make sure to write down the MAC address (ether, here 00:0d:29:56:8a:70) and IP address (inet, here 192.168.156.140) somewhere.

Invalidate the kext cache

We need to invalidate the kext cache to use the new kernel for debugging. This can easily be done by running the following command:

# / means the root of the current volume, you can specify another volume if needed
damien$ sudo kextcache -invalidate /

Reboot

Finally, we need to reboot the VM for all these changes to get picked up. If your system then gets stuck on some console output, feel free to shut it down for now. We’ll look into this once we have the host set up for debugging.

damien$ sudo reboot

On the host machine

With the VM set up, we now need to set things up for debugging on the host system.

Install Xcode

In order to debug you will need Xcode to be installed on your system. You can download it from the Apple Developer page. Make sure that you launch it at least once to accept the license and install the additional tools.

Install the Kernel Debug Kit

As we previously said, the Kernel Debug Kit will also have to be installed on the host system for the debugger to use as the target and to correctly symbolicate. The installation instructions are identical to the ones for the VM above.

Update the address translation table (Optional)

Optionally, we can add an entry to the Internet-to-Ethernet address translation table for the VM network interface. The debugger needs this to pick the right interface when connecting to the VM IP address. The address translation table is basically a lookup table from IP addresses to MAC addresses.

With the IP address and MAC address for the VM that we wrote down earlier, we can run the following command:

# The `-S` option will add a new entry to the table, making sure to remove any existing entry for this IP first
sudo arp -S 192.168.156.140 00:0d:29:56:8a:70

Note that we can skip this by using the DB_ARP debug boot-args option as previously explained. This option is defined in the Kernel Programming Guide as:

Allow debugger to ARP and route (allows debugging across routers and removes the need for a permanent ARP entry, but is a potential security hole)—not available in all kernels.

This essentially means that the debugger itself can take care of this for us. If you’re using 0x141 for the debug boot-args option as shown above, you won’t need to update the table entry. If you’re worried about security, feel free to drop DB_ARP and use 0x101 as the option, adding the entry to the table manually.
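
If you do add the entry manually, you can confirm that it made it into the table (the output should list the VM’s MAC address for that IP):

# Check the table entry for the VM's IP address
damien$ arp -n 192.168.156.140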

That’s it! Everything is set up, let’s try things in practice.

In practice

In order to test our setup, let’s say that we’re debugging the kernel and want to check what is calling the hfs_vnop_setxattr function and what the arguments are (this function writes the extended attribute data for a given vnode to the HFS+ store). We will want the debugger to break whenever this function is hit.

First, we’ll have to launch LLDB on the host machine and set the target to the (local) development kernel binary located in the KDK:

damien$ lldb
(lldb) target create /Library/Developer/KDKs/KDK_10.10.5_14F27.kdk/System/Library/Kernels/kernel.development
Current executable set to '/Library/Developer/KDKs/KDK_10.10.5_14F27.kdk/System/Library/Kernels/kernel.development' (x86_64).

Next, we need to launch the VM and wait for the boot screen to appear. After a few seconds, you should see the following message printed to the screen:

Darwin Bootstrapper Version 2.0.2: Sun Jul 5 21:53:13 PDT 2015; root:libxpc_executables-559.40.1~1/launchd/RELEASE_X86_64
boot-args = debug=0x001 kext-dev-mode=1 kcsuffix=development -v
IOKernelDebugger: registering debugger

ethernet MAC address: 00:0d:29:56:8a:70
ip address: 192.168.156.140

Waiting for remote debugger connection.

We can confirm that the MAC and IP addresses of the VM are the ones that we noted before.

At this point, the VM is basically waiting for a debugger connection. Back in the LLDB session on the host machine, we can connect to the VM kernel by running the following command:

(lldb) kdp-remote 192.168.156.140

The VM will now print Connected to remote debugger and some info will also be printed to the LLDB console output.

(lldb) kdp-remote 192.168.156.140
Version: Darwin Kernel Version 14.5.0: Wed Jul 29 02:26:53 PDT 2015; root:xnu-2782.40.9~1/DEVELOPMENT_X86_64; UUID=C75BDFDD-9F27-3694-BB80-73CF991C13D8; stext=0xffffff8011200000
Kernel UUID: C75BDFDD-9F27-3694-BB80-73CF991C13D8
Load Address: 0xffffff8011200000
Kernel slid 0x11000000 in memory.
Loaded kernel file /Library/Developer/KDKs/KDK_10.10.5_14F27.kdk/System/Library/Kernels/kernel.development
Loading 50 kext modules .................................................. done.
Target arch: x86_64
Instantiating threads completely from saved state in memory.
Process 1 stopped
* thread #2: tid = 0x0123, 0xffffff80112bb4c8 kernel.development`kdp_register_send_receive(send=<unavailable>, receive=<unavailable>) + 392 at kdp_udp.c:463, name = '0xffffff801eae94d0', queue = '0x0', stop reason = signal SIGSTOP
    frame #0: 0xffffff80112bb4c8 kernel.development`kdp_register_send_receive(send=<unavailable>, receive=<unavailable>) + 392 at kdp_udp.c:463

At this point, we are stopped in the debugger and the VM kernel is waiting for us to continue. We will first set up our breakpoint:

(lldb) breakpoint set --name hfs_vnop_setxattr
Breakpoint 1: where = kernel.development`hfs_vnop_setxattr + 34 at hfs_xattr.c:711, address = 0xffffff800b548da2

We can now continue for the kernel to complete booting:

(lldb) continue
Process 1 resuming

A bunch of text will be output on both the debugger and the system console and eventually the VM system will boot. Shortly after booting (or likely during the boot process), our breakpoint should be hit.

Process 1 stopped
* thread #15: tid = 0x0195, 0xffffff8013b48da2 kernel.development`hfs_vnop_setxattr(ap=0xffffff80e230b290) + 34 at hfs_xattr.c:711, name = '0xffffff802177d1b0', queue = '0x0', stop reason = breakpoint 1.1
    frame #0: 0xffffff8013b48da2 kernel.development`hfs_vnop_setxattr(ap=0xffffff80e230b290) + 34 at hfs_xattr.c:711

Back in the LLDB debugger we can print information about the current stack trace and arguments:

(lldb) thread backtrace
* thread #15: tid = 0x0195, 0xffffff8013b48da2 kernel.development`hfs_vnop_setxattr(ap=0xffffff80e230b290) + 34 at hfs_xattr.c:711, name = '0xffffff802177d1b0', queue = '0x0', stop reason = breakpoint 1.1
  * frame #0: 0xffffff8013b48da2 kernel.development`hfs_vnop_setxattr(ap=0xffffff80e230b290) + 34 at hfs_xattr.c:711
    frame #1: 0xffffff801394df64 kernel.development`VNOP_SETXATTR(vp=0xffffff8021dcc000, name=<unavailable>, uio=<unavailable>, options=<unavailable>, ctx=<unavailable>) + 84 at kpi_vfs.c:5162
    frame #2: 0xffffff801394165d kernel.development`vn_setxattr(vp=0xffffff8021dcc000, name=0xffffff7f94247d3d, uio=0xffffff80e230b780, options=8, context=0xffffff8020ff9d90) + 589 at vfs_xattr.c:216
    frame #3: 0xffffff8013d21a3d kernel.development`mac_vnop_setxattr(vp=0xffffff8021dcc000, name=<unavailable>, buf=0xffffff8022813008, len=29) + 189 at mac_vfs_subr.c:173
    frame #4: 0xffffff7f94246777 Quarantine`quarantine_set_ea + 125
    frame #5: 0xffffff7f94245f37 Quarantine`hook_vnode_notify_open + 430
    frame #6: 0xffffff8013d19645 kernel.development`mac_vnode_notify_open(ctx=<unavailable>, vp=0xffffff8021dcc000, acc_flags=35) + 181 at mac_vfs.c:405
    frame #7: 0xffffff801393ea6a kernel.development`vn_open_auth [inlined] vn_open_auth_finish(vp=<unavailable>, fmode=35, ctx=<unavailable>) + 14 at vfs_vnops.c:193
    frame #8: 0xffffff801393ea5c kernel.development`vn_open_auth(ndp=<unavailable>, fmodep=0xffffff80e230b9fc, vap=<unavailable>) + 2220 at vfs_vnops.c:614
    frame #9: 0xffffff801392a095 kernel.development`open1(ctx=<unavailable>, ndp=0xffffff80e230bbf0, uflags=<unavailable>, vap=0xffffff80e230bd88, fp_zalloc=<unavailable>, cra=<unavailable>, retval=<unavailable>) + 549 at vfs_syscalls.c:3344
    frame #10: 0xffffff801392ac70 kernel.development`open [inlined] open1at(fp_zalloc=<unavailable>, cra=<unavailable>) + 32 at vfs_syscalls.c:3541
    frame #11: 0xffffff801392ac50 kernel.development`open [inlined] openat_internal(ctx=0xffffff8020ff9d90, path=140423473571856, flags=546, mode=<unavailable>, fd=-2, segflg=UIO_USERSPACE, retval=0xffffff8020ff9ca0) + 281 at vfs_syscalls.c:3674
    frame #12: 0xffffff801392ab37 kernel.development`open [inlined] open_nocancel + 18 at vfs_syscalls.c:3689
    frame #13: 0xffffff801392ab25 kernel.development`open(p=<unavailable>, uap=<unavailable>, retval=0xffffff8020ff9ca0) + 85 at vfs_syscalls.c:3682
    frame #14: 0xffffff8013c2c0c1 kernel.development`unix_syscall64(state=0xffffff8021009ea0) + 753 at systemcalls.c:368
    frame #15: 0xffffff801380e656 kernel.development`hndl_unix_scall64 + 22

(lldb) p *(struct vnop_setxattr_args *)$rdi
(struct vnop_setxattr_args) $71 = {
  a_desc = 0xffffff8013e61430
  a_vp = 0xffffff8021dcc000
  a_name = 0xffffff7f94247d3d "com.apple.quarantine"
  a_uio = 0xffffff80e230b780
  a_options = 8
  a_context = 0xffffff8020ff9d90
}

Almost magical!

It’s important to note that once the kernel has launched and the debugger continued, the kernel cannot be halted again from the debugger. In fact, if you try you will get an error message:

(lldb) process interrupt
error: Failed to halt process: KDP cannot interrupt a running kernel

For this reason, you should make sure that all your breakpoints are registered in the debugger before running continue for the kernel to complete its boot.

A note about Non-Maskable Interrupts (NMI)

In the section about boot-args, we discussed using (DB_HALT | DB_ARP | DB_LOG_PI_SCRN) as the flag for the debug option. This flag contains DB_HALT, which makes the kernel wait for a debugger to attach when it boots.

As you can imagine, it is not always necessary to attach a debugger at system boot and often one only wants to attach at panic time (or even at execution time). Luckily, there’s an option to drop into the debugger on an NMI: DB_NMI.

In order to do this, we will have to change the flags for the debug boot-args option to 0x144 which is (DB_NMI | DB_ARP | DB_LOG_PI_SCRN).

With this in place, the VM kernel will boot normally without waiting for a debugger to attach. Then, at any time during execution you can generate an NMI which gives an opportunity for the remote debugger to connect.

The way to generate an NMI from a virtual machine is not obvious. Older versions of OS X would use the power button only, as documented in the Technical QA1264. However, as discussed in the WWDC 2013 session 707 this was changed to Left Command + Right Command + Power on OS X 10.9. I don’t think it is possible to simulate the power button being pressed on VMware Fusion so, at first, I thought it was actually impossible to generate an NMI from the virtual machine.

However, after reading through the Kernel Programming Guide in more detail I noticed that the DB_NMI option was documented as:

Drop into debugger on NMI (Command–Power, Command-Option-Control-Shift-Escape, or interrupt switch).

And indeed, using Command-Option-Control-Shift-Escape works beautifully in the VM!

If you hit this key combo in your VM at any time (including at panic time), execution will stop. Back to the host system, you can fire up LLDB and connect to the VM.

(lldb) target create /Library/Developer/KDKs/KDK_10.10.5_14F27.kdk/System/Library/Kernels/kernel.development
Current executable set to '/Library/Developer/KDKs/KDK_10.10.5_14F27.kdk/System/Library/Kernels/kernel.development' (x86_64).

(lldb) kdp-remote 192.168.156.140
Version: Darwin Kernel Version 14.5.0: Wed Jul 29 02:26:53 PDT 2015; root:xnu-2782.40.9~1/DEVELOPMENT_X86_64; UUID=C75BDFDD-9F27-3694-BB80-73CF991C13D8; stext=0xffffff8008800000
Kernel UUID: C75BDFDD-9F27-3694-BB80-73CF991C13D8
Load Address: 0xffffff8008800000
Kernel slid 0x8600000 in memory.
Loaded kernel file /Library/Developer/KDKs/KDK_10.10.5_14F27.kdk/System/Library/Kernels/kernel.development
Loading 93 kext modules ............................................................................................ done.
Target arch: x86_64
Instantiating threads completely from saved state in memory.
Process 1 stopped
* thread #2: tid = 0x00b8, 0xffffff80089f4e77 kernel.development`Debugger(message=<unavailable>) + 759 at model_dep.c:1013, name = '0xffffff8015e169a0', queue = '0x0', stop reason = signal SIGSTOP
    frame #0: 0xffffff80089f4e77 kernel.development`Debugger(message=<unavailable>) + 759 at model_dep.c:1013

And that’s it: as you can see, a SIGSTOP was sent in the VM kernel, and that’s where the host debugger attached. From here we can do anything we would do in a debugger: inspect memory, print a backtrace, set a few breakpoints, etc. Once you’re done, you simply have to continue for the kernel to resume executing.

A note about kernel extensions and kernel patches

In this example, we’ve only set up breakpoints and inspected the memory of the kernel that Apple ships. If you plan to run a modified build of the kernel or, more importantly, if you are building a kernel extension, you will have to copy the .dSYM symbol file built alongside your kext to the host machine. It can be located anywhere that is indexed by Spotlight. When encountering an unknown symbol (for example a function in your kext), LLDB will look for a .dSYM file that matches this symbol’s Mach-O binary UUID. If the .dSYM file is on your host machine and was indexed by Spotlight then LLDB will symbolicate things nicely.
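
When symbols for your kext refuse to resolve, two commands can help narrow things down. The path below is illustrative and the UUID is a placeholder: dwarfdump prints the UUID of your kext binary, and mdfind asks Spotlight whether a dSYM with that UUID has been indexed (com_apple_xcode_dsym_uuids is the attribute recorded for dSYM bundles).

# Print the UUID of your kext executable (path is illustrative)
damien$ dwarfdump --uuid ~/Build/MyKext.kext/Contents/MacOS/MyKext
# Ask Spotlight whether a dSYM with that UUID is indexed (paste the UUID printed above)
damien$ mdfind "com_apple_xcode_dsym_uuids == <UUID>"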

Conclusion

In this article, we presented how to set up an environment for kernel debugging. While it might sound daunting at first, we saw that it is actually pretty trivial to set up and once everything is in place, debugging the kernel is not more difficult than debugging a simple user-space application.

Interprocess communication on iOS with Mach messages

In my last article I mentioned that CFMessagePortCreateLocal() was not usable on iOS, quoting the documentation as evidence. However, I should have known better since I had recently experimented (and succeeded, more about this soon) with registering a Mach port name with the Bootstrap Server on iOS by using XPC. And, as pointed out by Ari Weinstein, it looks like one can indeed use CFMessagePortCreateLocal() on iOS!

There are a few things at play here so let’s first review the API documentation.

Documentation deep dive

The documentation for CFMessagePort on the iOS Developer Library states the following:

Special Considerations

This method is not available on iOS 7 and later—it will return NULL and log a sandbox violation in syslog. See Concurrency Programming Guide for possible replacement technologies.

This sounds like a pretty clear statement and it sure feels like one should stay away from this API on iOS even though its header is fully available. So why do I believe this is actually a red herring?

Well, while this consideration makes sense on iOS 7, things have changed quite a bit since the introduction of iOS 8. Starting with iOS 8, an iOS application looks a lot more like an OS X sandboxed application than it used to. The main reason for this, as previously discussed, is application groups.

There is not a lot of documentation regarding sandboxing rules for iOS but luckily the Mac Developer Library has a bunch. Since the technology behind sandboxing on both platforms is pretty much identical, we can get a good understanding of how things work on iOS by reading the OS X documentation.

To step back a little, let’s discuss what being sandboxed means for an application. In short, it means no access to the file system outside of the sandbox, no sharing of ports with other processes, no incoming or outgoing connections, etc. Obviously, since applications wouldn’t do much if all these rules were enforced, Apple provides some entitlements that can restore targeted capabilities to the sandboxed target.

Among these, the com.apple.security.application-groups entitlement was introduced with 10.8 (well technically with 10.7.4 but things didn’t work too well back then) and iOS 8. As previously discussed, application groups give access to a shared container directory on disk and shared user defaults for applications in the same group.

But the extra capabilities provided by application groups are not limited to the file system. In particular, the section about the Application Group Container Directory in the Mac Sandbox Design Guide also has some very interesting information about IPC:

Note: Applications that are members of an application group also gain the ability to share Mach and POSIX semaphores and to use certain other IPC mechanisms in conjunction with other group members. See IPC and POSIX Semaphores and Shared Memory for more details.

Looking at the IPC and Posix Semaphores and Shared Memory section we learn that:

Normally, sandboxed apps cannot use Mach IPC, POSIX semaphores and shared memory, or UNIX domain sockets (usefully). However, by specifying an entitlement that requests membership in an application group, an app can use these technologies to communicate with other members of that application group.

We’ve already covered the UNIX domain sockets but the Mach IPC statement is definitely intriguing and luckily there’s some good news a few paragraphs below:

Any semaphore or Mach port that you wish to access within a sandboxed app must be named according to a special convention:

  • Mach port names must begin with the application group identifier, followed by a period (.), followed by a name of your choosing.

Boom!

So it sure looks like multiple applications in the same application group can send Mach messages through a Mach port communication channel, assuming the port name is carefully chosen to start with the application group identifier.

Remember that these docs are for OS X but I don’t see any reason why it wouldn’t work on iOS, despite what the CFMessagePort docs have to say.

OK, that was quite a lot of theory. Let’s take a look at the code and see if these assumptions actually hold in practice.

In practice

Let’s try to use CFMessagePortCreateLocal() to create a port with a dynamically created name (i.e. not prefixed by the application group identifier) and confirm that it fails.

CFDataRef message_callback(CFMessagePortRef local, SInt32 msgid, CFDataRef data, void *info) {
    return NULL;
}

void create_port(void) {
    CFStringRef port_name = (__bridge CFStringRef)[[NSUUID UUID] UUIDString];
    CFMessagePortRef port = CFMessagePortCreateLocal(kCFAllocatorDefault, port_name, &message_callback, NULL, NULL);
}

Note that you will have to run this on a device since the iOS Simulator doesn’t seem to respect any sandboxing rule.

As expected, the call to CFMessagePortCreateLocal() returns NULL and the following error is printed to the console.

*** CFMessagePort: bootstrap_register(): failed 1100 (0x44c) 'Permission denied', port = 0x450f, name = 'E030B15B-3EC6-4A21-A415-668AD0CB1B52'
See /usr/include/servers/bootstrap_defs.h for the error codes.

(lldb) po port
<nil>

Looking at the bootstrap_defs.h header (which is really just a symbolic link to /usr/include/bootstrap.h) as suggested by the error message, we can see that the 1100 error code is BOOTSTRAP_NOT_PRIVILEGED.

So that’s confirmed, we definitely cannot register a random dynamic port name in our sandboxed iOS application.

But if we change the port name to be prefixed with the application group identifier (such as com.ddeville.app.group.port) a port is returned and no error is printed to the console, as previously inferred from the docs.

So we’ve confirmed that a local port can indeed be created on iOS when the port name is prefixed by the application group identifier. But what makes that prefixed name so special, and why does CFMessagePortCreateLocal() fail to create a port for names that do not match this requirement?

In order to figure this out we first need to discuss Mach bootstrap registration.

Mach bootstrap registration

Mach ports are used everywhere on OS X. Due to the fact that Apple treats the Mach layer as the very bottom layer in the kernel – with the BSD layer and other subsequent components being implemented on top of it – many operations on OS X eventually translate to Mach ports being involved down the stack.

Mach ports 101

In my opinion, the Kernel Programming Guide has the clearest documentation about Mach ports.

A port is an endpoint of a unidirectional communication channel between a client who requests a service and a server who provides the service. If a reply is to be provided to such a service request, a second port must be used. This is comparable to a (unidirectional) pipe in UNIX parlance.

and

Tasks have permissions to access ports in certain ways (send, receive, send-once); these are called port rights. A port can be accessed only via a right. Ports are often used to grant clients access to objects within Mach. Having the right to send to the object’s IPC port denotes the right to manipulate the object in prescribed ways. As such, port right ownership is the fundamental security mechanism within Mach. Having a right to an object is to have a capability to access or manipulate that object.

and

Traditionally in Mach, the communication channel denoted by a port was always a queue of messages.

and

Ports and port rights do not have systemwide names that allow arbitrary ports or rights to be manipulated directly. Ports can be manipulated by a task only if the task has a port right in its port namespace. A port right is specified by a port name, an integer index into a 32-bit port namespace. Each task has associated with it a single port namespace.

and finally

Tasks acquire port rights when another task explicitly inserts them into its namespace, when they receive rights in messages, by creating objects that return a right to the object, and via Mach calls for certain special ports (mach_thread_self, mach_task_self, and mach_reply_port.)

(Note: in Mach, a task is basically what you would call a process in the BSD world).

Phew! That’s a lot of information. To summarize, a port is a communication channel that lets multiple tasks send messages to each other, assuming that they have the appropriate right to do so and that the port right, specified by the port name, is available in the task namespace.

In user space, this basically translates to mach_port_t, mach_port_name_t and calls to mach_msg(). Since using Mach ports directly can be pretty hard, Core Foundation (CFMachPort and CFMessagePort) and Cocoa (NSMachPort and NSPortMessage) wrappers were created.
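
For a feel of what that raw plumbing looks like, here is a minimal sketch that allocates a port with a receive right in the current task and then adds a send right for it, exactly the kind of boilerplate the CFMessagePort and NSMachPort wrappers hide:

#include <mach/mach.h>
#include <mach/mach_error.h>
#include <stdio.h>

int main(void)
{
    mach_port_t port = MACH_PORT_NULL;

    // Allocate a new port with a receive right in the current task's port namespace
    kern_return_t kr = mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE, &port);
    if (kr != KERN_SUCCESS) {
        fprintf(stderr, "mach_port_allocate failed: %s\n", mach_error_string(kr));
        return 1;
    }

    // Add a send right so that messages can be sent to the port with mach_msg()
    kr = mach_port_insert_right(mach_task_self(), port, port, MACH_MSG_TYPE_MAKE_SEND);
    if (kr != KERN_SUCCESS) {
        fprintf(stderr, "mach_port_insert_right failed: %s\n", mach_error_string(kr));
        return 1;
    }

    printf("port name in this task: 0x%x\n", port);
    return 0;
}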

Bootstrap registration

When it is created, a Mach task is given a set of special ports including one called the bootstrap port whose purpose is to send messages to the bootstrap server.

Let’s discuss what the bootstrap server is (for more information about this section – and really about the whole article – make sure to read chapter 9 of Amit Singh’s Mac OS X Internals: A Systems Approach).

The Bootstrap Server

In short, the bootstrap server allows tasks to publish ports that other tasks on the same machine can send messages to. It achieves this by managing a list of name-port bindings. The bootstrap server’s functionality is provided by the bootstrap task, whose program encapsulation nowadays is the launchd program.

The reason why a bootstrap server is necessary is because Mach port namespaces are local to tasks. The bootstrap server allows service names and associated ports to be registered and looked up, across tasks.

Registration

In the pre-launchd days (before Mac OS X 10.4 Tiger), one would register a port name by means of the bootstrap_register() function:

kern_return_t
bootstrap_register(mach_port_t bootstrap_port,
                   name_t      service_name,
                   mach_port_t service_port);

The server side of the connection would thus register a name for the port it will read from. With this call, the bootstrap server would provide send rights for the bound port to the client.

On the client side, the bootstrap_look_up() function can be used to retrieve send rights for the service port of the service specified by the service name. Obviously, the service must have been previously registered under this name by the server.

kern_return_t
bootstrap_look_up(mach_port_t     bootstrap_port,
                  name_service_t  service_name,
                  mach_port_t    *service_port);
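
For completeness, here is a minimal client-side sketch on OS X (the service name is hypothetical, and bootstrap_port is the global port every task is handed at creation):

#include <mach/mach.h>
#include <servers/bootstrap.h>
#include <stdio.h>

int main(void)
{
    mach_port_t service_port = MACH_PORT_NULL;

    // Ask the bootstrap server for a send right to the port registered under this name
    kern_return_t kr = bootstrap_look_up(bootstrap_port, "com.example.some-service", &service_port);
    if (kr != BOOTSTRAP_SUCCESS) {
        fprintf(stderr, "bootstrap_look_up failed: %s\n", bootstrap_strerror(kr));
        return 1;
    }

    // `service_port` now holds a send right that can be used with mach_msg()
    printf("service port: 0x%x\n", service_port);
    return 0;
}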

The register_service() function in the helper application source for mDNSResponder (Rest In Peace) provides a nice demonstration of this technique.

However, the bootstrap_register() function was deprecated with Mac OS X 10.5 Leopard and Apple now recommends using launchd instead. I won’t go into the details of this decision here (there was a great discussion about it on the darwin-dev mailing list a while ago) but Apple was essentially trying to encourage a launch-on-demand pattern with launchd and this API just didn’t fit with it.

Since using a launchd service or submitting a job via the ServiceManagement framework is not always appropriate (or possible), there are Cocoa and Core Foundation APIs that take care of registering the name with the bootstrap server by means of an SPI: bootstrap_register2(). These are NSMachBootstrapServer and CFMessagePort.

Since Core Foundation is open source, one can check the implementation of CFMessagePortCreateLocal() and double check that the port name is indeed being registered. It’s also easy to disassemble -[NSMachBootstrapServer registerPort:name:] and realize that it’s essentially wrapping bootstrap_register2(). Remember that NSMachBootstrapServer is only available on OS X so it’s not actually useful to this discussion but it’s still worth keeping in mind.

The circle is now complete.

CFMessagePortCreateLocal and the magic name prefix

Now that we understand the process of registering the port name with the bootstrap server we can look into why using the application group identifier as a prefix for the port name magically works.

By calling into CFMessagePortCreateLocal() with a random name that doesn’t meet the sandbox criteria and setting a symbolic breakpoint on the function we can step through the instructions and find out where it fails.

We can quickly pinpoint where it fails (remember that CFMessagePort.c is open source):

kern_return_t ret = bootstrap_register2(bs, (char *)utfname, mp, perPID ? BOOTSTRAP_PER_PID_SERVICE : 0);
if (ret != KERN_SUCCESS) {
    CFLog(kCFLogLevelDebug, CFSTR("*** CFMessagePort: bootstrap_register(): failed %d (0x%x) '%s', port = 0x%x, name = '%s'\nSee /usr/include/servers/bootstrap_defs.h for the error codes."), ret, ret, bootstrap_strerror(ret), mp, utfname);
    CFMachPortInvalidate(native);
    CFRelease(native);
    CFAllocatorDeallocate(kCFAllocatorSystemDefault, utfname);
    CFRelease(memory);
    return NULL;
}

It looks like bootstrap_register2() itself is failing leading CFMessagePortCreateLocal() to print the error and return NULL.

bootstrap_register2() probably ends up being implemented somewhere between launchd and the kernel so we can take a look at the launchd source to try and figure out why it would fail. launchd was not open sourced as part of 10.10 but the 10.9.5 source will do (remember, the source between iOS and OS X will likely be extremely similar if not identical and application groups were introduced on OS X 10.8).

I was not entirely sure where to look, but job_mig_register2 in core.c looks like a good candidate.

These few lines in particular are definitely interesting:

bool per_pid_service = flags & BOOTSTRAP_PER_PID_SERVICE;
#if HAVE_SANDBOX
    if (unlikely(sandbox_check(ldc->pid, "mach-register", per_pid_service ? SANDBOX_FILTER_LOCAL_NAME : SANDBOX_FILTER_GLOBAL_NAME, servicename) > 0)) {
        return BOOTSTRAP_NOT_PRIVILEGED;
    }
#endif

Once again, I had no idea where that sandbox_check() function was implemented so I poked around the included headers to see if anything jumped out at me. sandbox.h definitely seemed promising but the version in /usr/include/sandbox.h doesn’t declare the function. After some more poking around /usr and disassembling a few libraries I found the implementation in /usr/lib/system/libsystem_sandbox.dylib!

sandbox_check() is pretty lame and is basically a proxy into sandbox_check_common(). The latter does the actual work of checking whether the process requesting the mach-register action can use the provided service name. We could spend another article going through the disassembly of the function so let’s just assume that it does a few checks based on the entitlements of the process and returns whether the service name is allowed or not. In our case, the function checks whether the service name is prefixed with the application group identifier retrieved from the process entitlements and denies the registration if it isn’t.

It’s worth noting that it would only take Apple a simple change to the sandbox check to disallow this on iOS. However, given how many APIs rely on the current mach-register behavior (XPC being one of them) I don’t see this happening any time soon.

Using Mach messages on iOS

Now that we’ve uncovered the conditions under which the API works, let’s see how one would use it to do IPC on iOS.

Creating the ports

As with Berkeley sockets, we will have a process acting as the server and another one acting as the client. The server will be in charge of registering the port name by creating a local port while the client will simply connect to it by creating a remote port for the same port name. Ordering is important since the remote port creation will fail if the server hasn’t had a chance to register the name yet.

As shown above, the server would create the port by doing:

CFDataRef callback(CFMessagePortRef local, SInt32 msgid, CFDataRef data, void *info)
{
    return NULL;
}

void create_port(void)
{
    CFStringRef port_name = CFSTR("com.ddeville.myapp.group.port");
    CFMessagePortRef port = CFMessagePortCreateLocal(kCFAllocatorDefault, port_name, &callback, NULL, NULL);
    CFMessagePortSetDispatchQueue(port, dispatch_get_main_queue());
}

We schedule the message callbacks to happen on the main queue so that we don’t need to set up a runloop source for the callbacks and manually run the runloop while waiting for a reply to a message.
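
For reference, the runloop-based alternative that we are avoiding (reusing the callback function from the block above) would look something like this sketch:

void create_port_with_runloop(void)
{
    CFStringRef port_name = CFSTR("com.ddeville.myapp.group.port");
    CFMessagePortRef port = CFMessagePortCreateLocal(kCFAllocatorDefault, port_name, &callback, NULL, NULL);

    // Create a runloop source for the port and schedule it on the current runloop;
    // that runloop then needs to be running for the callback to ever fire
    CFRunLoopSourceRef source = CFMessagePortCreateRunLoopSource(kCFAllocatorDefault, port, 0);
    CFRunLoopAddSource(CFRunLoopGetCurrent(), source, kCFRunLoopDefaultMode);
    CFRelease(source);
}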

Similarly, on the client side we create the remote message port:

void create_port(void)
{
    CFStringRef port_name = CFSTR("com.ddeville.myapp.group.port");
    CFMessagePortRef port = CFMessagePortCreateRemote(kCFAllocatorDefault, port_name);
}

Since the port creation will fail if the server hasn’t registered the local port yet, an appropriate solution would be to retry every few seconds until it succeeds.
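
A sketch of that retry logic (the function and variable names are mine and the two-second delay is arbitrary):

static CFMessagePortRef remote_port = NULL;

static void connect_to_server(void)
{
    CFStringRef port_name = CFSTR("com.ddeville.myapp.group.port");
    remote_port = CFMessagePortCreateRemote(kCFAllocatorDefault, port_name);
    if (remote_port != NULL) {
        return;
    }

    // The server hasn't registered the name yet; try again in a couple of seconds
    dispatch_time_t when = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2 * NSEC_PER_SEC));
    dispatch_after(when, dispatch_get_main_queue(), ^{
        connect_to_server();
    });
}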

Sending messages

It is important to note that the connection is somewhat unidirectional. While the client can send messages to the server, the server can only reply to the messages synchronously when they are received (you have probably noted that the client doesn’t have a way to set up a message callback).

In the simple case (that is where no reply is expected), the client can send a message by doing:

void send_message(void)
{
    SInt32 messageIdentifier = 1;
    CFDataRef messageData = (__bridge CFDataRef)[@"Hello server!" dataUsingEncoding:NSUTF8StringEncoding];
    CFMessagePortSendRequest(port, messageIdentifier, messageData, 1000, 0, NULL, NULL);
}

As you can see, any data can be sent in the message so LLBSDMessaging could be re-implemented on top of Mach messages. The message identifier integer is also a nice API to distinguish between message types.

Upon sending, on the server side, the callback function will be invoked and the message identifier and data passed through. Nice!

Replying to a message

As previously noted, the server can optionally reply to the message by returning some data synchronously in the callback function. For it to work client side, we need to slightly change the way we send the message.

// On the client
void send_message(void)
{
    SInt32 messageIdentifier = 1;
    CFDataRef messageData = (__bridge CFDataRef)[@"Hello server!" dataUsingEncoding:NSUTF8StringEncoding];

    CFDataRef response = NULL;
    SInt32 status = CFMessagePortSendRequest(port, messageIdentifier, messageData, 1000, 1000, kCFRunLoopDefaultMode, &response);
}

// On the server
CFDataRef callback(CFMessagePortRef local, SInt32 msgid, CFDataRef data, void *info)
{
    return (__bridge CFDataRef)[@"Hello client!" dataUsingEncoding:NSUTF8StringEncoding];
}

Upon return, if no error has happened (you can check the returned status integer) the response reference will point to the data that was sent back by the server.

It’s important to note that CFMessagePortSendRequest() will run the runloop in the specified mode (here kCFRunLoopDefaultMode), thus blocking until the response comes through. We can assume that IPC is pretty fast but the server might still take some time to reply. This is where the timeout becomes important: using an appropriate timeout will prevent a thread from being blocked for too long. It’s also probably not a great idea to block the main thread, but should you use a background thread remember that it will need to have a serviced runloop (threads created by a dispatch queue do not have one, for example). Another option could be to provide a custom mode on the main thread but be extremely cautious if you need to do this.

Bidirectional communication

As mentioned above, while the server can reply to messages sent by the client, it cannot initiate a new message.

A way to work around this issue would be to create another pair of ports where the current client acts as the registrar. Upon the initial connection from the server, the client would register an additional local port with a new name and send that name to the server, which would then create a remote port matching it.

This solution is slightly more complicated than the bidirectional-by-nature one provided by Berkeley sockets but it should work as expected. Also, most server-client architectures don’t actually require the server to ever initiate a request since it almost always acts as a response provider.

Conclusion

We were able to register a Mach port with the bootstrap server by creating a service name prefixed by the application group identifier and using CFMessagePortCreateLocal() to take care of the registration. With this set up, it was trivial to send messages between applications.

It wouldn’t be too hard to rewrite LLBSDConnection on top of Mach ports rather than Berkeley sockets. It would also likely be slightly faster. But I guess this post was already long enough so we’ll leave this for another time :-)

Interprocess communication on iOS with Berkeley sockets

With iOS 8, Apple introduced App Extensions. App Extensions are self-contained apps that developers can ship along with their main application. They can be launched on demand by the system to perform tasks such as sharing, editing a photo, displaying a widget in the Notification Center, presenting a custom keyboard or even providing content to an app running on the Apple Watch.

In order to allow an extension and its hosting application to share data and resources, application groups were also introduced (It’s worth noting that they were originally introduced on OS X in 10.7.4 to support sharing data between an application and a Login Item application shipped within its bundle).

An application group is an identifier shared between various applications in the same group (it can be enabled through an entitlement). Applications in the same group can share a group container directory that is located outside of their own sandbox but accessible by all applications in the group. Think of it as another sandbox shared between applications. An application and its extensions can share this group container but so can multiple applications released by the same developer (assuming they have all specified the same identifier for the com.apple.security.application-groups entitlement).

While it might not seem much at first, this means that applications belonging to the same group can now:

  • Share a directory on the file system. Multiple applications can read and write files in this directory (while making sure that access is coordinated, more about this later) and could for example share a Core Data store or a sqlite database.
  • Share preferences. When creating an instance of NSUserDefaults with a suite name equal to the application group identifier, cfprefsd will store the preferences plist in the group container directory and all applications in the group will be able to share its content, as shown in the example after this list. (A couple of years ago, before the application group suite was made available, I built shared user defaults on the Mac; it was fun.)
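
For example, sharing preferences through the group is a one-liner (the group identifier is a placeholder):

// Any application in the group reading the same suite will see this value
NSUserDefaults *sharedDefaults = [[NSUserDefaults alloc] initWithSuiteName:@"group.com.ddeville.myapp"];
[sharedDefaults setObject:@"hello" forKey:@"message"];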

While this is pretty cool and a big step forward (previously, multiple apps by the same developer could only share keychain entries) there are times when you wish you could just send regular messages between applications without having to store them somewhere on the file system. Also, while NSUserDefaults, Core Data and sqlite support concurrent access, none of them actually notifies other processes when one makes changes to the shared data. What this means is that you could be inserting an object in one process but the other process wouldn’t be aware of the change until it attempts to fetch the data again.

If we had an IPC mechanism to communicate between applications in the same group we could make sure that other processes are notified of changes in real-time without the need for polling.

This article will describe a complete and general solution to this problem on iOS. However, before diving into it I’d like to briefly discuss the IPC situation on OS X. If you can’t wait and just want to see the code, you can find the project on GitHub.

Current state of IPC on OS X

Mike Ash did an excellent job discussing the state of IPC on the Mac a few years ago so I will not repeat everything but will rather recommend that you read his post (it’s from 2009 but apart from XPC not yet being a thing it’s still a very good overview).

As you probably already know, both OS X and iOS are built on top of Darwin. Darwin itself is built around XNU, a hybrid kernel that combines the Mach 3 microkernel and various elements of BSD, itself a Unix derivative. The dual nature of XNU means that many features of both Mach and Unix are available on Darwin, including several IPC mechanisms.

Mach ports

Mach ports are the fundamental IPC mechanism on Mach. Using them directly is hard but there are Core Foundation (CFMachPort) and Foundation (NSMachPort) wrappers available that make things slightly easier. You rarely use a Mach port directly but assuming you have a sending and a receiving port available you should be able to construct an NSPortMessage and send it over.

While creating a local port is just a matter of initializing a new NSMachPort instance, retrieving the remote one requires the Mach bootstrap server. On OS X this usually means having the server side of the connection register the port under a given name with the shared NSMachBootstrapServer, such as:

NSMachPort *port = [NSMachPort port];
NSString *name = @"com.ddeville.myapp.myport";
[[NSMachBootstrapServer sharedInstance] registerPort:port name:name];

On the client side of the connection, one could retrieve the remote port by doing:

NSString *name = @"com.ddeville.myapp.myport";
NSMachPort *port = [[NSMachBootstrapServer sharedInstance] portForName:name];

Alternatively, one could use CFMessagePortCreateLocal, which registers the service with the bootstrap server under the covers, and CFMessagePortCreateRemote, which uses the bootstrap server to look up a registered service by name.
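
A quick sketch of that alternative, assuming a callback function with the CFMessagePortCallBack signature:

// On the server: creates the local port and registers the name with the bootstrap server
CFMessagePortRef localPort = CFMessagePortCreateLocal(kCFAllocatorDefault, CFSTR("com.ddeville.myapp.myport"), callback, NULL, NULL);

// On the client: looks up the registered service by name
CFMessagePortRef remotePort = CFMessagePortCreateRemote(kCFAllocatorDefault, CFSTR("com.ddeville.myapp.myport"));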

Note that if the application is sandboxed, the system will not let you register service names that don’t start with the application group identifier. When creating a Login Item application, LaunchServices implicitly registers a Mach service for the login item whose name is the same as the login item’s bundle identifier. See the App Sandbox Design Guide and the iDecide sample project for more info.

Since working directly with Mach messages can be cumbersome, Distributed Objects, Distributed Notifications and Apple Events were built on top of them and simplify things a lot. I won’t discuss these here so go read Mike Ash’s post and the Distributed Objects Architecture if you want more info.

It’s also worth noting that with OS X 10.7 Lion, Apple shipped a revolutionary API built on top of Mach messages and libdispatch: XPC. As per its man page, XPC is “a structured, asynchronous interprocess communication library”. With 10.8, a new NSXPCConnection class was also released. NSXPCConnection brings back the Distributed Objects idea with an XPC flavor by letting you send messages to a remote proxy object through an XPC connection. Since the introduction of XPC, using Mach ports or one of its derivatives is essentially unnecessary. However, it’s important to keep in mind that service name bootstrapping is still required with XPC. In particular, the docs for xpc_connection_create_mach_service clearly state that “the service name must exist in a Mach bootstrap that is accessible to the process and be advertised in a launchd.plist”.

POSIX file descriptors

By being a BSD derivative Darwin has all the Unix IPC goodies, in particular Berkeley sockets a.k.a. Unix Domain Sockets. Since these will be the core part of the article I’ll save their discussion for a later section.

Current state of IPC on iOS

As you’re probably aware, a big part of App Extensions on iOS is built around XPC. An XPC connection is set up between the hosting application and the extension in order to communicate. Most of the methods in NSExtensionRequestHandling and NSExtensionContext end up sending some message to the host application through an XPC connection.

However, XPC is also private on iOS and third-party developers cannot use it directly (yet). Similarly, Distributed Objects, Distributed Notifications and Apple Events are not available on the platform.

Regarding using Mach ports directly, while the Mach port API is indeed available on iOS, NSMachBootstrapServer isn’t, and creating a local port with CFMessagePortCreateLocal will return NULL and log a sandbox violation to the console.

[Update: This was proven incorrect. See this follow-up post for more info.]

<notify.h>

OS X has had support for the <notify.h> Core OS notification mechanism since 10.3. Similarly, iOS has had support for this mechanism since its inception. <notify.h> allows processes to exchange stateless notification events. As explained in the <notify.h> header:

These routines allow processes to exchange stateless notification events. Processes post notifications to a single system-wide notification server, which then distributes notifications to client processes that have registered to receive those notifications, including processes run by other users.

Notifications are associated with names in a namespace shared by all clients of the system. Clients may post notifications for names, and may monitor names for posted notifications. Clients may request notification delivery by a number of different methods.

In a nutshell, processes can post and receive system-wide notifications with the notifyd daemon acting as the server by receiving and broadcasting notifications. Observers can be registered on a dispatch queue, a signal, a Mach port or a file descriptor!

A simple example would be registering for lock notifications on iOS:

#import <notify.h>

const char *notification_name = "com.apple.springboard.lockcomplete";
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
int registration_token;
notify_register_dispatch(notification_name, &registration_token, queue, ^ (int token) {
    // do something now that the device is locked
});

Pretty straightforward.
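
Posting a notification from another process is just as simple (the notification name below is a placeholder):

#import <notify.h>

// Broadcast the event: every process registered for this name gets notified
notify_post("com.ddeville.myapp.something-changed");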

CFNotificationCenterGetDarwinNotifyCenter

Core Foundation offers a notification center based on <notify.h>: CFNotificationCenterGetDarwinNotifyCenter. By retrieving this notification center, one can post and observe system-level notifications that originate in <notify.h>.

static void notificationCallback(CFNotificationCenterRef center, void *observer, CFStringRef name, const void *object, CFDictionaryRef userInfo)
{
    // do something now that the device is locked
}

CFStringRef notificationName = CFSTR("com.apple.springboard.lockcomplete");
CFNotificationCenterRef notificationCenter = CFNotificationCenterGetDarwinNotifyCenter();
CFNotificationCenterAddObserver(notificationCenter, NULL, notificationCallback, notificationName, NULL, CFNotificationSuspensionBehaviorDeliverImmediately);

As you can see, this is actually more code since it requires adding a callback function. So, unless you need the behavior provided by CFNotificationSuspensionBehavior, I’d say stick with <notify.h> if you want to post or receive these notifications.

It’s also worth noting that the userInfo parameter of the notification is not supported by the Darwin notify center. What this means is that even if you specify a userInfo dictionary when sending the notification from one process, it will not be available on the receiving side (the notification center simply ignores it).

This drastically reduces the appeal of this method and essentially restricts its usage to simple stateless system-wide notifications – as it was designed for.

One use case I can think of is to reduce polling. You could for example post a notification whenever one process changes a user default so that other processes can fetch the new value without having to constantly check whether the value has changed. However, this method is racy by nature since it requires the receiver to fetch the value after the notification was received, and by then the value could have changed again or even been reverted. Ideally the new value would be sent with the notification but as we’ve seen the API prevents this from happening.

NSFilePresenter

I previously discussed NSFilePresenter and NSFileCoordinator extensively in this article. In a few words, the NSFileCoordinator class coordinates the reading and writing of files and directories among multiple processes and objects in the same process. Similarly, by adopting the NSFilePresenter protocol, an object can be notified whenever the file is changed on disk by another process.

I have used this method to implement messaging between processes on OS X and people have been attempting the same on iOS.

Unfortunately, with TN2408 Apple made it clear that using file coordination to coordinate reads and writes between an app extension and its containing app is a very bad idea:

Important: When you create a shared container for use by an app extension and its containing app in iOS 8, you are obliged to write to that container in a coordinated manner to avoid data corruption. However, you must not use file coordination APIs directly for this. If you use file coordination APIs directly to access a shared container from an extension in iOS 8.0, there are certain circumstances under which the file coordination machinery deadlocks.

I suspect file coordination is not handling the case where the app extension is killed by the system while the containing app is blocked waiting to read or write. This will likely be fixed soon but for now this means using this API is not an option.

Unix Domain Sockets

As previously stated, Darwin has its foundations in BSD. BSD being a Unix derivative, Darwin inherited many Unix goodies, one of them being Berkeley sockets, also known as Unix domain sockets or BSD sockets.

You might be familiar with the “Everything in Unix is a file” adage. Well, turns out it’s quite true, in Unix pretty much everything is a file descriptor!

As you probably remember, we previously said that an application group provided a shared container directory to multiple applications in the same group. What if we could use a file to send messages between processes without having to deal with file coordination? Well, turns out a socket is a file and sockets are a pretty good channel for sending and receiving messages. As a side note, Apple wrote a nice section in favor of Unix Domain Sockets vs Mach Messages in TN2083.

In the next section we will discuss how to build a general solution for interprocess communication on iOS based on Berkeley sockets.

IPC on iOS based on Berkeley sockets

In order to build a general solution for IPC, we need to narrow down some requirements. I have used XPC as a model so the feature set I’m aiming at will be a subset of the one offered by XPC. Without further ado, here’s the list of requirements for our solution:

  1. A communication channel
  2. A client-server connection architecture
  3. Non-blocking communication (i.e. Asynchronous I/O)
  4. Message framing for data that is sent through the channel
  5. Support for complex data to be sent through the channel
  6. Secure encoding and decoding of data on each end of the channel
  7. Error and invalidation handling

Let’s now discuss each requirement individually and let’s try to figure out a way to solve each of them.

A communication channel

By creating a stream socket of Unix local type at a given path in the application group container directory we can support multiple processes connecting and sending data to each other. By specifying a unique name for the socket we can even support multiple simultaneous connections. It is however important for the client and server to agree on the name so that they can find each other.

Creating such a socket is as simple as:

dispatch_fd_t fd = socket(AF_UNIX, SOCK_STREAM, 0);

AF_UNIX creates a Unix domain socket (vs an Internet socket for example) and SOCK_STREAM makes the socket a stream socket (other options could be datagram or raw).

Note that creating a socket doesn’t even require a path to be specified. The path is usually specified later so we’ll discuss it in the next section. For now, we can just remember that calling socket returns a file descriptor.

A client-server connection architecture

For our hosts to communicate with each other we need a particular architecture where we can make sure that multiple hosts can connect to each other. In order to achieve this, we use a client-server architecture where one host acts as the server by creating the socket on disk and waiting for connections and the other host(s) act as clients by connecting to the server.

In socket parlance, the server binds to the socket and listens for connections while the client connects. For the client to fully connect, the server has to accept the connection. Let’s see how that translates to code.

We can construct the socket path by appending a unique identifier to the path of the application group container directory that all applications in the group have read/write access to. As long as both server and clients agree on the unique identifier, we now have a channel through which they can communicate.
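
As a minimal sketch, the path could be derived from the group container like this (the group identifier and socket name are placeholders):

NSURL *containerURL = [[NSFileManager defaultManager] containerURLForSecurityApplicationGroupIdentifier:@"group.com.ddeville.myapp"];
NSString *socketPath = [containerURL.path stringByAppendingPathComponent:@"com.ddeville.myapp.socket"];
// Note: sun_path in sockaddr_un is limited to 104 bytes on Darwin so the resulting path needs to stay short
const char *socket_path = [socketPath fileSystemRepresentation];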

First, on the server side we would start the connection by doing (error handling removed for brevity):

const char *socket_path = ...

dispatch_fd_t fd = socket(AF_UNIX, SOCK_STREAM, 0);

struct sockaddr_un addr;
memset(&addr, 0, sizeof(addr));
addr.sun_family = AF_UNIX;

unlink(socket_path);
strncpy(addr.sun_path, socket_path, sizeof(addr.sun_path) - 1);

bind(fd, (struct sockaddr *)&addr, sizeof(addr));

listen(fd, kLLBSDServerConnectionsBacklog);

Similarly, on the client side we would connect to the server with:

const char *socket_path = ...

dispatch_fd_t fd = socket(AF_UNIX, SOCK_STREAM, 0);

struct sockaddr_un addr;
memset(&addr, 0, sizeof(addr));
addr.sun_family = AF_UNIX;

strncpy(addr.sun_path, socket_path, sizeof(addr.sun_path) - 1);

connect(fd, (struct sockaddr *)&addr, sizeof(addr));

A couple of notes. On the server, we call unlink() before binding to the socket. This makes sure that a previous socket that was not cleaned up properly is removed before creating a new one. Also, note that connect() will fail if the server is not listening for connections. Usually, one should make sure that the server is running before attempting to connect a client (ideally, the server runs all the time, like a daemon process for example). Given that applications are short-lived on iOS, a typical scenario would involve attempting to connect the client multiple times until it succeeds.
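
A minimal retry sketch (the delay and attempt count are arbitrary and this is not necessarily how LLBSDMessaging handles it):

static void connect_with_retry(const char *socket_path, int remaining_attempts)
{
    dispatch_fd_t fd = socket(AF_UNIX, SOCK_STREAM, 0);

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, socket_path, sizeof(addr.sun_path) - 1);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
        // Connected; hand the file descriptor over to the read/write machinery
        return;
    }

    close(fd);
    if (remaining_attempts <= 0) {
        return;
    }

    // The server might not be listening yet; try again shortly
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(500 * NSEC_PER_MSEC)), dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^ {
        connect_with_retry(socket_path, remaining_attempts - 1);
    });
}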

Note that whereas the client has already called connect() and returned, the server hasn’t yet accepted the connection. Since this could potentially involve a blocking call we will discuss this in the next section.

Since accepting a new client is really a decision that the server application should make, the connection also has a -server:shouldAcceptNewConnection: delegate method that the application can implement in order to decide whether a client should connect. It is also the perfect opportunity for the server to keep track of which clients have connected.

Since there could potentially be multiple clients connected to the server, sending a message from the server comes in two flavors (see the sketch after this list):

  • Broadcasting a message. This will send the message to every connected client.
  • Sending a message to a particular client, assuming that the application knows its identity.
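
Here is an illustrative sketch of how the server could keep track of its clients and implement both flavors. This is not LLBSDMessaging’s actual API, just the general idea, with the channels being stored when a client is accepted in the delegate method above:

// One dispatch IO channel per accepted client, keyed by its file descriptor
static NSMutableDictionary *clientChannels; // @(fd) -> dispatch_io_t

static void broadcast_message(dispatch_data_t messageData)
{
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    [clientChannels enumerateKeysAndObjectsUsingBlock:^ (NSNumber *clientFd, dispatch_io_t channel, BOOL *stop) {
        // Broadcasting: write the framed message to every connected client
        dispatch_io_write(channel, 0, messageData, queue, ^ (bool done, dispatch_data_t data, int error) {});
    }];
}

static void send_message_to_client(dispatch_data_t messageData, dispatch_fd_t clientFd)
{
    // Targeted send: look up the channel for a single client
    dispatch_io_t channel = clientChannels[@(clientFd)];
    if (channel == nil) {
        return;
    }
    dispatch_io_write(channel, 0, messageData, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^ (bool done, dispatch_data_t data, int error) {});
}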

Non-blocking communication (Asynchronous I/O)

In order to accept the client connection, the server needs to call accept(). By default, if no pending connections are present on the queue, accept() blocks the caller until a connection is present. Since one of our requirements is non-blocking communication, this is clearly not acceptable. Luckily, we can use the O_NONBLOCK flag and a dispatch source to solve the problem. Since we can retrieve the pending connection by issuing a read, we can set up a dispatch source for DISPATCH_SOURCE_TYPE_READ:

// Make sure reads and accepts on the listening socket never block
fcntl(fd, F_SETFL, O_NONBLOCK);

dispatch_source_t listeningSource = dispatch_source_create(DISPATCH_SOURCE_TYPE_READ, fd, 0, NULL);
dispatch_source_set_event_handler(listeningSource, ^ {
    struct sockaddr client_addr;
    socklen_t client_addrlen = sizeof(client_addr);
    dispatch_fd_t client_fd = accept(fd, &client_addr, &client_addrlen);
});
dispatch_resume(listeningSource);

Now that we have both server and client correctly connected, we need a way to send and receive messages. Like with any other socket, this can be achieved by means of read() and write(). Remember that these two calls are blocking by default and thus not acceptable for our solution. Luckily, both also support a non-blocking variant and dispatch IO provides a very nice API to deal with such reads and writes asynchronously.

We can first create a dispatch IO channel with the following:

dispatch_io_t channel = dispatch_io_create(DISPATCH_IO_STREAM, fd, NULL, ^ (int error) {});
dispatch_io_set_low_water(channel, 1);
dispatch_io_set_high_water(channel, SIZE_MAX);

To read on the channel asynchronously:

dispatch_io_read(channel, 0, SIZE_MAX, NULL, ^ (bool done, dispatch_data_t data, int error) {
    if (error) {
        return;
    }
    // read data
    if (done) {
        // cleanup
    }
});

And to write on the channel asynchronously:

dispatch_data_t message_data = ...
dispatch_io_write(channel, 0, message_data, NULL, ^ (bool done, dispatch_data_t data, int write_error) {
    // check for errors
});

Message framing for data that is sent through the channel

As seen in the previous section, we can now read and write data asynchronously on the connection. However, this data is raw bytes and we have no idea where it starts and ends (remember, we are using a stream socket so the data arrives in order but it also arrives broken into pieces).

We need some mechanism in which we could wrap our raw data and that would make it easy to know where it starts and ends. This is known as message framing.

We could implement a new format that has some sort of delimiters for the start and end of a message. It would also probably have some kind of a header that lets one specify the content length of the message as a whole, the encoding to expect, etc… Finally, our framing should support binary data and not simply text since we want to be able to send anything through our connection.

As you can imagine, this is a somewhat solved problem. Many message framing “formats” have been created over the years and HTTP is definitely the most well-known and perhaps the most widely used. HTTP also supports binary data (one misconception about HTTP is that it only supports a text body; this is not true: the HTTP headers have to be text but the body itself can be any binary data). Since CFNetwork has very good support for HTTP, we’ll just use it to frame our messages.

Given a data object to send through the connection, we can create an HTTP message for it.

NSData *contentData = ...
CFHTTPMessageRef response = CFHTTPMessageCreateResponse(kCFAllocatorDefault, 200, NULL, kCFHTTPVersion1_1);
CFHTTPMessageSetHeaderFieldValue(response, (__bridge CFStringRef)@"Content-Length", (__bridge CFStringRef)[NSString stringWithFormat:@"%lu", (unsigned long)[contentData length]]);
CFHTTPMessageSetBody(response, (__bridge CFDataRef)contentData);

NSData *messageData = CFBridgingRelease(CFHTTPMessageCopySerializedMessage(response));
CFRelease(response);

As you can see, this is actually pretty simple. Given a raw NSData we can create an HTTP message by setting the raw data as the body and specifying a Content-Length header of the size of the data. We can then serialize the HTTP message as an NSData instance to send through the connection.

On the other end, we can do a similar thing, by creating a new HTTP message and appending bytes as they come in through the connection. Once the number of bytes is equal to the one we’re expecting from the Content-Length header, we can get our original data by looking at the message body.

CFHTTPMessageRef framedMessage = CFHTTPMessageCreateEmpty(kCFAllocatorDefault, false);

while (... get more bytes ...) {
    NSData *data = ...
    CFHTTPMessageAppendBytes(framedMessage, data.bytes, (CFIndex)data.length);

    if (!CFHTTPMessageIsHeaderComplete(framedMessage)) {
        continue;
    }

    NSInteger contentLength = [CFBridgingRelease(CFHTTPMessageCopyHeaderFieldValue(framedMessage, CFSTR("Content-Length"))) integerValue];
    NSInteger bodyLength = (NSInteger)[CFBridgingRelease(CFHTTPMessageCopyBody(framedMessage)) length];
    if (contentLength != bodyLength) {
        continue;
    }

    NSData *rawData = CFBridgingRelease(CFHTTPMessageCopyBody(framedMessage));
}

This is indeed very similar. There’s one little gotcha to be aware of: since we are receiving the HTTP message in chunks, the header might not actually be complete after receiving the first few bytes. Luckily, CFNetwork has a handy CFHTTPMessageIsHeaderComplete function that we can use to check for it. Once the header is complete, for each chunk of data that we receive we can check whether the body length matches the expected Content-Length and retrieve the body data once it’s been fully received.

Support for complex data to be sent through the channel

As discussed previously, one of the major drawbacks of the current solutions is the lack of support for complex data to be sent through the channel. <notify.h> for example only lets us send a message name.

Since our solution supports sending an NSData instance through the channel, we can use an encoding and decoding mechanism to transform objects into data on one side of the connection and back into objects on the other.

Once again, Foundation luckily has very good support for this through NSKeyedArchiver and NSKeyedUnarchiver. By encoding the object graph on one end and decoding it on the other we can give the appearance that real objects are being sent through the connection. As far as the library user is concerned, a message containing Objective-C objects is being sent and Objective-C objects are similarly being received on the other side. Magical!

One requirement for this to work is that the object has to conform to the NSCoding protocol. Most of the Foundation classes already do and it’s pretty easy to implement for one’s custom classes. Obviously, when using custom classes one must make sure that the class is available on both ends of the connection.

Secure encoding and decoding of data on each end of the channel

One concern with encoding and decoding random objects between processes is security. Not only do we have to make sure that the class is available on the client and the server for it to be decoded, but we also have to make sure that the right class is being decoded and not one merely pretending to be it.

Alongside NSXPCConnection in OS X 10.8, Apple introduced NSSecureCoding. This protocol extends NSCoding, and by adopting NSSecureCoding an object indicates that it handles encoding and decoding instances of itself in a manner that is robust against object substitution attacks.

As per the NSSecureCoding header:

NSSecureCoding guarantees only that an archive contains the classes it claims. It makes no guarantees about the suitability for consumption by the receiver of the decoded content of the archive. Archived objects which may trigger code evaluation should be validated independently by the consumer of the objects to verify that no malicious code is executed (i.e. by checking key paths, selectors etc. specified in the archive).

By requiring the messaged objects to conform to NSSecureCoding and by providing a whitelist of classes to the connection, we can make sure that our messages are encoded and decoded in a secure fashion.

// On the sending side: archive the message content with secure coding enabled
id content = ...
NSMutableData *contentData = [NSMutableData data];

NSKeyedArchiver *archiver = [[NSKeyedArchiver alloc] initForWritingWithMutableData:contentData];
archiver.requiresSecureCoding = YES;

@try {
    [archiver encodeObject:content forKey:NSKeyedArchiveRootObjectKey];
}
@catch (NSException *exception) {
    if ([exception.name isEqualToString:NSInvalidArchiveOperationException]) {
        return;
    }
    @throw exception;
}
@finally {
    [archiver finishEncoding];
}

// On the receiving side: unarchive the data against a whitelist of allowed classes
NSSet *allowedClasses = ...
NSData *contentData = ...

NSKeyedUnarchiver *unarchiver = [[NSKeyedUnarchiver alloc] initForReadingWithData:contentData];
unarchiver.requiresSecureCoding = YES;

NSSet *classes = [NSSet setWithObjects:[NSDictionary class], [NSString class], [NSNumber class], [LLBSDProcessInfo class], nil];
classes = [classes setByAddingObjectsFromSet:allowedClasses];

id content = nil;

@try {
    content = [unarchiver decodeObjectOfClasses:classes forKey:NSKeyedArchiveRootObjectKey];
}
@catch (NSException *exception) {
    if ([exception.name isEqualToString:NSInvalidUnarchiveOperationException]) {
        return;
    }
    @throw exception;
}
@finally {
    [unarchiver finishDecoding];
}

Error and invalidation handling

In an environment involving multiple processes that could start and die at any time, we cannot assume that errors and connection invalidation will never happen.

Luckily, as stated in TN2083, since Berkeley sockets are a connection-oriented API, the server automatically learns about the death of a client and it’s easy for the server to asynchronously notify the client.

We can thus easily implement an invalidation handler where the server or client can be notified whenever the connection becomes invalid.
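
As a sketch of what detecting invalidation could look like (not necessarily how LLBSDMessaging implements it), an end-of-file on the read channel, that is a completed read that delivers no data and no error, is a good signal that the peer has gone away. Assuming channel is the dispatch IO channel created earlier:

dispatch_io_read(channel, 0, SIZE_MAX, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^ (bool done, dispatch_data_t data, int error) {
    bool reached_end_of_file = (done && error == 0 && (data == NULL || dispatch_data_get_size(data) == 0));
    if (reached_end_of_file || error != 0) {
        // The other end closed the connection or something went wrong:
        // close the channel and call the invalidation handler
        dispatch_io_close(channel, DISPATCH_IO_STOP);
    }
});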

Similarly, sending a message through the connection can take a completion handler that can pass a possible error that occurred while sending the message.

Conclusion

We were able to build a full-fledged IPC mechanism to communicate between applications and extensions within an application group on iOS. Our solution is based on powerful yet simple technologies such as Berkeley sockets, HTTP message framing and libdispatch.

You can find LLBSDMessaging on GitHub. It is built as a framework so that you can easily integrate it in an existing project.

Announcing Spillo

I started using Pinboard a couple of years ago and have since become a real fan of the website. It does everything I need: storing and organizing links I want to remember in the future, without all the social madness that services seem to be crazy about these days. I also admire Maciej’s business model and Pinboard is one of the few services that I’m confident using for years to come.

While the website is very minimal, it lets you find your stuff in a very efficient manner. The website’s design has barely changed since its inception and that has actually been for the best. Pinboard does one thing and it does it well!

However, having been in love with the Mac for a long time, I was starting to miss a strong Mac client to create and manage my bookmarks with the efficiency that only a native application can provide.

So a couple of months ago, I started to work on a side project that I’m proud to announce today: Spillo for Mac!

Spillo

In a nutshell, Spillo is a powerful, beautiful and amazingly fast Pinboard client for OS X. Spillo lets you browse and organize your bookmarks in a stunning modern interface. Spillo also makes creating a bookmark from anywhere on your Mac as convenient as possible.

Spillo is available today for $9.99 on the Mac App Store or directly from our website.

If you would like to give Spillo a try, a 14-day demo is available on the Bananafish Software website.

I hope you will enjoy using Spillo as much as I do!