Often one needs to perform a series of asynchronous tasks in a specific order. For example, you may need to authenticate a user into your app and, immediately after a successful response, request some specific information about the user (e.g. the user's profile). Another example might be post-processing a video captured by the device and then uploading the result to a server. There are many situations where multiple time-consuming asynchronous tasks need to be sequenced.

Fortunately, iOS provides several methods for us to coordinate the execution of asynchronous tasks. In this article, I'll present some of the techniques and technologies commonly used.

Orchestrating Concurrency

In this article, I'm going to discuss four methods that can be used to coordinate asynchronous tasks: Cascading closures, Operation Queues, Dispatch groups with GCD, and finally functionally reactive methods using Combine.

For these examples, we’re going to pretend that we want to get the weather for our current location. To do so we will need to make two asynchronous requests. The first will return the GPS coordinates of our current location, and the second will return the weather conditions for the given latitude and longitude. To give us something tangible to discuss, let's say we have two classes that perform these requests respectively, which look like this:

class LocationManager {
    /// async request to get the user's current GPS location
    func currentLocation(_ completion: @escaping (CLLocation) -> Void) {
        // ... do async stuff to get theLocation ...
        completion(theLocation)
    }
}

class WeatherServiceClient {
    /// async request to get the weather conditions at a location
    func getCurrentConditions(at location: CLLocation, completion: @escaping (Weather) -> Void) {
        // ... do async stuff to get theWeather ...
        completion(theWeather)
    }
}

Each class's method takes a closure as an argument that supplies the caller with the result of the asynchronous task. If these were real iOS classes, we'd expect the LocationManager class to use a CLLocationManager instance for location services and WeatherServiceClient to perform a network request using a URLSessionTask somewhere in its implementation.

Cascading Closures

A naive method of sequencing tasks would be to simply call each subsequent task from the completion block of the parent task. Using our LocationManager and WeatherServiceClient classes defined above, we could get the weather for our current location like so:

let locationManager = LocationManager()
let weatherClient = WeatherServiceClient()

locationManager.currentLocation { [weatherClient] location in
    weatherClient.getCurrentConditions(at: location) { weather in
        print("It's '\(weather)' where you're at currently.")
    }
}

This solution is pretty straightforward. We request the current location from our LocationManager, and when it computes the GPS location, it calls its completion block with the CLLocation value: location. Within that completion block, we call our WeatherServiceClient's getCurrentConditions method, passing it the freshly received CLLocation. In the completion block for getCurrentConditions, we take the passed-in weather object and display it.

This may be a perfectly acceptable solution when you have two simple tasks, as we do in this case. However, the real world is often not that simple. In a real app, you'll need to handle error conditions. You may also want to cancel the network requests at any time, should the user decide to navigate away from your view. Neither of these is easy to do with our current implementation. Likewise, it's easy to see how messy this gets when there are more than two tasks to sequence:

instanceA.doTaskA { resultA, errorA in
    if let error = errorA {
        // handle error a
        return
    }
    instanceB.doTaskB(resultA) { resultB, errorB in
        if let error = errorB {
            // handle error b
            return
        }
        instanceC.doTaskC(resultB) { resultC, errorC in
            if let error = errorC {
                // handle error c
                return
            }
            instanceD.doTaskD(resultC) { ....
                // make it stop!!
            }
        }
    }
}

As you can see, your error handling is strewn throughout all your completion blocks, making for code that's difficult to read and maintain. Naturally, you start thinking to yourself that there has to be a better way.

And fortunately for us, iOS provides us with better alternatives.

Operation Queues

Another means of handling concurrency that's been around for a long time in iOS is the OperationQueue. An OperationQueue does exactly what its name implies: it manages a queue of operations. Operations are added to an OperationQueue and, based on how the queue has been configured, are scheduled and executed accordingly.

OperationQueues provide developers with much more flexibility than directly chaining completion blocks. For example, say you're running a time-consuming video encoding task and the user executes a search request against your CoreData store. You can suspend the video encoding operation, perform the user's database request, and resume the encoding once the search has finished. Likewise, should they decide they don't want that video you spent all the time encoding, you can cancel the operation altogether.
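As a minimal sketch of that suspend/resume behavior (the queue and variable names here are illustrative, with a trivial stand-in for the encoding work):

```swift
import Foundation

// Operations added while the queue is suspended won't start until it resumes.
let encodingQueue = OperationQueue()
encodingQueue.isSuspended = true

var encoded = false
encodingQueue.addOperation { encoded = true }

// ... the urgent database search would run here while encoding is paused ...

encodingQueue.isSuspended = false   // allow the encoding work to proceed
encodingQueue.waitUntilAllOperationsAreFinished()
```

Suspending a queue only prevents operations from starting; an operation that is already executing runs to completion (or must be cancelled explicitly).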

As I alluded to previously, for an OperationQueue to be of value to us, we need to have operations that actually perform some unit of work. Foundation provides an abstract class named Operation that we can subclass to perform any custom work we need for our application. If the tasks you need to perform are relatively straightforward, Foundation provides two concrete subclasses, NSInvocationOperation and BlockOperation, that may provide all you need without having to create a custom subclass.
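For simple synchronous work, a pair of BlockOperations sequenced with a dependency can be all you need. A minimal sketch (the names here are illustrative, not part of our weather example):

```swift
import Foundation

// Two blocks sequenced via a dependency; no custom Operation subclass needed.
let queue = OperationQueue()
var log: [String] = []

let first = BlockOperation { log.append("first") }
let second = BlockOperation { log.append("second") }
second.addDependency(first)   // second won't start until first finishes

queue.addOperations([first, second], waitUntilFinished: true)
print(log)   // ["first", "second"]
```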

For our weather example, we might create two custom Operation subclasses like the following:

// Both classes assume an AsyncOperation base class (boilerplate not shown)
// that overrides isExecuting/isFinished with settable, KVO-compliant
// properties, as a plain Operation subclass exposes them as read-only.
private class LocationOp: AsyncOperation {

    var currentLocation: CLLocation?
    let locationManager = LocationManager()

    override func main() {
        locationManager.currentLocation { [unowned self] result in
            defer {
                // we're done, inform the operation queue.
                self.isFinished = true
                self.isExecuting = false
            }
            self.currentLocation = result
        }
    }
}

private class WeatherOp: AsyncOperation {

    let weatherClient = WeatherServiceClient()
    var location: CLLocation?
    var currentWeather: Weather?

    override func main() {
        guard let location = location else {
            isExecuting = false
            isFinished = true
            return
        }
        weatherClient.getCurrentConditions(at: location) { [unowned self] in
            defer {
                self.isFinished = true
                self.isExecuting = false
            }
            self.currentWeather = $0
        }
    }
}

To fetch the weather, we first create instances of LocationOp and WeatherOp and add them to an OperationQueue. Before we can add them, we have to ensure that LocationOp executes and completes before WeatherOp begins. To coordinate the efforts of each operation, we add LocationOp as a dependency of WeatherOp. The resulting code might look something like this:

let operationQueue = OperationQueue()
// ...
let locationOp = LocationOp()
let weatherOp = WeatherOp()

weatherOp.addDependency(locationOp)
operationQueue.addOperations([locationOp, weatherOp], waitUntilFinished: false)

You might be asking at this point: how do we pass weatherOp the location retrieved by locationOp? There are a few different ways to accomplish this. One method that keeps our two operations unaware of each other is to use a BlockOperation to hand off the location to the weather operation. We can do the same thing after the weather is fetched to update our UI:

let operationQueue = OperationQueue()
// ...
let locationOp = LocationOp()
let weatherOp = WeatherOp()
let handOffOp = BlockOperation { [unowned locationOp, unowned weatherOp] in
    weatherOp.location = locationOp.currentLocation
}
let displayOp = BlockOperation { [unowned weatherOp] in
    // update UI on main thread with weatherOp.currentWeather
}
handOffOp.addDependency(locationOp)
weatherOp.addDependency(handOffOp)
displayOp.addDependency(weatherOp)
operationQueue.addOperations([locationOp, handOffOp, weatherOp, displayOp], waitUntilFinished: false)

This is all well and good, but it takes quite a bit of code to accomplish. Can we do the same thing with less?

Dispatch Groups Using GCD

Grand Central Dispatch (GCD) is another means by which iOS and Cocoa programmers can support concurrent execution of tasks. In fact, the Core Foundation and Cocoa APIs are built using GCD, including the Operation and OperationQueue classes we discussed previously. GCD is less an object-oriented SDK than a closure-based one, but it's fairly straightforward and easy to understand. It provides far more functionality than we'll go into in this article. For our example, we'll discuss DispatchQueues, DispatchGroups, and DispatchWorkItems.

A DispatchQueue is much like the OperationQueue in our previous example. It manages the execution of tasks based on how it's configured. Tasks added to a DispatchQueue can be executed on the main thread or a background thread with different priorities, and can execute serially or concurrently. A DispatchGroup is a way of grouping a series of tasks that you can treat as a single unit of work. And finally, a DispatchWorkItem is a task that does some work on a DispatchQueue, either on its own or as part of a DispatchGroup.
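A minimal sketch tying the three pieces together (the queue label and variable names are illustrative): a serial DispatchQueue, a DispatchGroup, and an explicit DispatchWorkItem tracked by the group.

```swift
import Foundation

// A serial background queue, a group, and one work item submitted to both.
let queue = DispatchQueue(label: "com.example.demo")
let group = DispatchGroup()
var value = 0

let work = DispatchWorkItem { value = 42 }
queue.async(group: group, execute: work)

group.wait()      // block until every task submitted to the group finishes
print(value)      // 42
```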

A very simple, naive version of our weather example using GCD would look something like this:

func fetchWeather(completion: @escaping (Weather?) -> Void) {

    DispatchQueue.global(qos: .background).async {

        var location: CLLocation?

        let fetchGroup = DispatchGroup()
        fetchGroup.enter()
        LocationManager().currentLocation {
            location = $0
            fetchGroup.leave()
        }

        fetchGroup.wait()
        guard let currentLocation = location else {
            completion(nil)
            return
        }

        var weather: Weather?
        fetchGroup.enter()
        WeatherServiceClient().getCurrentConditions(at: currentLocation) {
            weather = $0
            fetchGroup.leave()
        }
        fetchGroup.wait()

        DispatchQueue.main.async {
            completion(weather)
        }
    }
}

In our simple example above, we dispatch our work asynchronously onto the global background-priority queue, so everything executes on a background thread. Before we execute our location request, we first create a DispatchGroup. This dispatch group will allow us to guarantee that we get a location before we start fetching our weather. Just before we make our location request, we inform the DispatchGroup that we are about to start our location fetch work by calling enter(). When we complete our task, we call leave(), informing the group that we're done. By calling wait() on our fetchGroup, we block the current thread until all the work in the fetchGroup has completed before we continue. This won't block our UI, since we're doing all our work on a background thread.

Once we get our current location, we follow the same pattern when requesting our weather. Finally, we call our completion block with the results of the weather fetch operations, making sure we dispatch that work onto the main thread, assuming we're updating the UI in our app.

It should be noted that each call to enter() needs to be balanced by a call to leave(). If you don't do that, your app will either never complete or crash. I'd also like to point out that our example is very simple; we could have just as easily done this with a serial queue. But it's easy to imagine a scenario involving more tasks. For example, instead of fetching a single location, we might want to download several images. In this case, we could set up a dispatch group and kick off the download requests, making sure each request starts with a call to fetchGroup.enter() and finishes with fetchGroup.leave(). We'd call fetchGroup.wait() to block until all our images have finished downloading and then continue with our app.
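That fan-out pattern can be sketched as follows, with simulated downloads standing in for the network requests (the lock, counter, and queue label are illustrative details):

```swift
import Foundation

// One enter()/leave() pair per simulated download; wait() blocks until
// every pair is balanced, i.e. until all downloads have finished.
let fetchGroup = DispatchGroup()
let queue = DispatchQueue(label: "downloads", attributes: .concurrent)
let lock = NSLock()
var finishedDownloads = 0

for _ in 1...5 {
    fetchGroup.enter()
    queue.async {
        // ... a real image download would happen here ...
        lock.lock()
        finishedDownloads += 1
        lock.unlock()
        fetchGroup.leave()
    }
}

fetchGroup.wait()          // returns once all five tasks have called leave()
print(finishedDownloads)   // 5
```

The NSLock serializes the counter updates because the queue is concurrent; the group only tracks completion, it doesn't synchronize shared state.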

Also, in our example we didn't explicitly create a DispatchWorkItem. It's relatively easy to wrap our asynchronous requests in a DispatchWorkItem, and doing so provides us with a "handle" to the work items in the queue. This allows us to cancel a specific task, as well as giving us finer-grained control at the task level should we require it.

This simple example illustrates how we could cancel our location request:

    // ...snip...
    let locationWork = DispatchWorkItem {
        LocationManager().currentLocation {
            location = $0
        }
    }
    DispatchQueue.global().async(execute: locationWork)
    // ... do some additional work ...
    locationWork.cancel() // cancel our request (only effective if the work item hasn't started executing)

FRP Using Combine

At WWDC 2019, Apple announced Combine, an interesting new framework that allows developers to express concurrent operations in a more natural, declarative way. In our previous examples, we had to describe how each task performed its work and how those tasks were synchronized with each other; it was a more imperative approach to managing concurrency. Combine lets developers describe a sequence of work and takes care of executing it.

In our GCD example, we created our queue, set our location task to work by informing the fetchGroup that work had started with enter(), informed it of completion with leave(), and explicitly called wait() to allow each task to finish. With Combine, all we do is describe the workflow and let Combine take care of the sequencing for us. A possible description of our example workflow might look like this using Combine:

import Combine

// Note: currentLocation and getCurrentConditions are assumed here to have
// been modified to also pass an optional error to their completion closures.
let locationPromise = Future<CLLocation, Error> { promise in
    LocationManager().currentLocation { location, error in
        if let error = error {
            return promise(.failure(error))
        }
        return promise(.success(location))
    }
}.eraseToAnyPublisher()

let weatherPromise = locationPromise.flatMap { location in
    Future<Weather, Error> { promise in
        WeatherServiceClient().getCurrentConditions(at: location) { weather, error in
            if let error = error {
                return promise(.failure(error))
            }
            return promise(.success(weather))
        }
    }
}.eraseToAnyPublisher()

let cancellable = weatherPromise.sink(
    receiveCompletion: { completion in
        switch completion {
        case .finished:
            print("finish event")
        case .failure(let error):
            print("failed with: \(error)")
        }
    },
    receiveValue: { weather in
        print("Weather is: \(weather)")
    }
)

In the first line, we import the Combine framework. In the next few lines we create a locationPromise: a unit of work that "promises" to eventually deliver either a CLLocation or an Error (note that I modified our request classes to return an error to illustrate the point). Next, we create a weatherPromise that uses the result of locationPromise to fetch the current weather conditions. Finally, in the last few lines, we call sink on our weatherPromise to either display the result of our workflow or, should either the location request or the weather request fail, print out the error.

Calling sink sets our work pipeline in motion and permits us to subscribe to the events it emits. In this particular case, we provide two closures. The first closure, receiveCompletion, allows us to subscribe to the two terminal events in our pipeline: .finished, which denotes normal termination, and .failure, which carries an associated error value describing the reason for the failure. The other closure, receiveValue, takes as input the actual computed value of our pipeline.

Notice that with Combine, nowhere did we have to explicitly wait or store intermediate results (the CLLocation in our case) so the next operation could retrieve them. Combine allows us to describe, functionally, what work is to be done, and it takes care of the actual sequencing.

Summary

In this article, I've presented four different methods of dealing with concurrency in your apps: cascading closures, a simple approach you should probably avoid in production code; two long-established Cocoa techniques using OperationQueues and GCD; and finally Apple's Combine framework for functional reactive programming. You might be wondering which approach we use in our iOS apps at Doximity. We use the functional reactive approach, but we do so with a third-party library called ReactiveSwift, not because we feel it's better, but because we started developing our app before Combine existed.

Further Reading

I urge you to read more about the various frameworks and libraries mentioned in this article.

Special thanks to Gustavo Ambrozio, Jessica Emerson, and Bruno Miranda for reading drafts of this blog post and their valuable feedback. Also a big thank you to Hannah Gambino for the illustrations.


Be sure to follow @doximity_tech if you'd like to be notified about new blog posts.