Hero Image

From Java to Kotlin Part 6

Looking back at our journey, we've come a long way already: through the desert of nullability, the swamp of extension functions, the valley of parameters, the mountains of classes and the plains of scope functions. Is there anything left to learn? Well, as I said in the beginning, Kotlin has no single 'killer feature', so there is always something left. In this article, we're going to look at an exciting and quite complicated feature for writing non-blocking code: coroutines.

What is a Coroutine?

Kotlin itself describes a coroutine as 'an instance of a suspendable computation', which still doesn't tell us a whole lot: what is a suspendable computation? I'm pretty sure you have heard of threads, especially with Project Loom lurking about. Threads are operating-system-managed executors of computations. A single system has a limited number of CPUs and cores, and a single core can only perform computations for one thread at a time. When more threads want to do computations than there are cores available, threads have to wait until it's their turn. The thing about threads is that it's relatively expensive to let the OS create and manage them, which is why many applications use thread pools to reuse threads. Coroutines can be thought of as lightweight threads with some notable differences: Kotlin uses its own scheduler to decide which coroutine gets executed on which thread.
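To get a feel for how lightweight coroutines are, here is a small sketch (assuming the kotlinx-coroutines-core dependency is on the classpath) that starts a hundred thousand coroutines at once; creating that many OS threads would exhaust memory on most systems:

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    // 100_000 coroutines are cheap; 100_000 threads would not be.
    repeat(100_000) {
        launch {
            delay(1000L) // suspends without occupying a thread
            print(".")
        }
    }
}
```

All of these coroutines share a handful of actual threads while they are suspended in delay.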

Structured Concurrency

Coroutines always run in a certain scope. Within that scope, they can start other coroutines. If something happens inside this scope, e.g. an error occurs, all of the coroutines belonging to this scope are neatly cleaned up so nothing is lost or leaked. This is called 'structured concurrency': you can enter the coroutine world through a scope, but eventually you also have to exit it. The fun part about coroutines is that they can be started on one thread and finished on another.
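A minimal sketch of this cleanup behaviour, assuming the kotlinx-coroutines-core dependency: when one child of a scope fails, its siblings are cancelled automatically and the scope rethrows the error.

```kotlin
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    try {
        coroutineScope {
            launch {
                delay(100L)
                error("boom")      // this child fails after 100 ms
            }
            launch {
                delay(5000L)       // sibling: cancelled long before 5 s pass
                println("never printed")
            }
        }
    } catch (e: IllegalStateException) {
        println("scope failed: ${e.message}") // prints 'scope failed: boom'
    }
}
```

Nothing keeps running in the background after the catch block: the scope only completes once every coroutine inside it is finished or cancelled.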

Entering the Coroutine world

Kotlin offers some useful functions to enter the coroutine world; one of them is runBlocking:

import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    hello()
    world()
    // prints 'Hello', then 'World!'
}

suspend fun hello() = println("Hello")
suspend fun world() = println("World!")

runBlocking is a bridge from the non-coroutine world to the coroutine world: it creates a coroutine scope and blocks the thread it's called on until all of the coroutines inside it have completed. Because of this blocking nature, you will rarely use it in your own code; after all, the whole point was to write non-blocking code. The code within the coroutine block is executed sequentially by default, but we will see later on how we can run code in parallel.

You might also notice the suspend keyword in front of the hello function: it means that the Kotlin coroutine scheduler can park the execution of this function and resume it at a later time, possibly on a different thread. It is perfectly OK to call non-suspending functions from a coroutine scope, but not the other way around: a suspend function can only be called from a coroutine or from another suspend function.
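A small sketch of that rule (fetchGreeting is a hypothetical function made up for this example):

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.runBlocking

suspend fun fetchGreeting(): String {
    delay(100L)              // suspends without blocking the thread
    return "Hello"
}

fun plainFunction() {
    // fetchGreeting()       // compile error: suspend functions may only be
                             // called from a coroutine or another suspend function
    println("plain code can't call suspend functions directly")
}

fun main() = runBlocking {
    plainFunction()          // calling non-suspending code is fine
    println(fetchGreeting()) // prints 'Hello'
}
```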

Parallel Coroutines

If you are inside a coroutine scope, you can launch multiple coroutines side by side with the launch function:

val millis = measureTimeMillis {
    runBlocking {
        launch { delay(5000L) }
        launch { delay(5000L) }
    }
}
println(millis)

If you were to run this code, you would see that it takes approximately 5 seconds (plus some overhead) to finish instead of 10. Both delay calls, which simply wait for the given amount of time without blocking the thread, run simultaneously.

Awaiting results

Java has CompletableFuture, which you can use for the same kind of purpose: it offers an API to compose futures, use their results for further processing, et cetera. Coroutines offer the same functionality with less verbosity. Here is an example:

val result = runBlocking {
    val job1 = async { 1 } // a simple lambda block which returns the number 1
    val job2 = async { 1 }
    println(job1.await() + job2.await()) // prints '2'
}

With the async function, we can launch a coroutine from which we expect a result. We can then wait for this result by using the await function.

Which Thread am I running on?

Because the Kotlin scheduler handles the parking and resuming of coroutines, you might wonder exactly which thread your code is running on. You can be more specific about this with dispatchers. Dispatchers are responsible for selecting an appropriate thread for your code to run on. If you don't specify one, the default dispatcher is used, which picks a thread from a shared background thread pool. But what if we want a more specialized dispatcher? Take a look at this example:

// Assume this code is inside the body of a suspending function
val result = withContext(Dispatchers.IO) {
  client.getResponse() // this could be a network call
}

The withContext function switches to a different context, which includes the dispatcher. Here we select the IO dispatcher, which is optimized for code that performs blocking I/O, like file system or network access. It can still happen that a thread from the default background pool is selected, because creating new threads is expensive; the IO dispatcher is simply more liberal about how many threads may exist in total and has sophisticated algorithms to enlarge or shrink this extra pool. If the code is already running on the IO dispatcher, withContext won't switch threads at all.

Beware the ThreadLocals

Because coroutines are executed on whatever thread the scheduler picks for them, properties that are stored in a ThreadLocal, such as logging context data (MDC) or servlet properties, are often lost inside a coroutine. To copy the needed values into the scope of the coroutine, you can use Thread Context Elements.
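Here is a sketch of how that looks with the asContextElement extension from kotlinx-coroutines-core (requestId is a hypothetical ThreadLocal made up for this example):

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.asContextElement
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.withContext

val requestId = ThreadLocal<String?>()

fun main() = runBlocking {
    requestId.set("req-42")
    // asContextElement copies the current value into the coroutine context,
    // so it survives the thread switches made by the dispatcher.
    withContext(Dispatchers.IO + requestId.asContextElement()) {
        println(requestId.get()) // prints 'req-42' even on an IO thread
    }
}
```

Without the context element, requestId.get() inside the withContext block could return null, because the IO dispatcher may run the block on a different thread with its own (empty) ThreadLocal slot.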

What about Project Loom?

A question often asked is: do we really still need coroutines with Project Loom? If you are already using Kotlin, then coroutines still make sense; the language support is there, and while using Java APIs from Kotlin is of course possible, it does not offer the best experience. When Project Loom is finally complete, we will likely get some improvements out of the box: servlet containers such as Tomcat could replace their thread-based request handling with virtual-thread-based request handling, which works nicely with coroutines. Kotlin itself might decide to create a dispatcher which schedules coroutines onto virtual threads instead of platform threads, letting the JVM manage the actual threads, but that remains to be seen.

Summary

We have taken a brief glance at coroutines, but much like reactive code, once you enter that world, you can start feeling trapped: one suspending function will often lead to many other suspending functions all over your code. At least you won't have to deal with special result types like Flux and Mono, so inside those functions everything looks pretty normal. Still, a chain is only as strong as its weakest link: if you want to write a completely non-blocking application, you have to make sure that everything is non-blocking, from the initial inbound request to the eventual database query or downstream network call, and that can be quite the challenge.