A Look at Loom

If you ask any Java developer what the most exciting feature of the upcoming Java releases is, they will likely mention Loom. I talked briefly about Loom in the previous article on coroutines, but it's time to take a closer look. What kind of problems will it solve? Why is it so exciting?

The Problem

Applications run on (virtual) machines with a certain number of cores. Each core can, independently of other cores, process some instructions to do some work for our application. A problem arises when we have more parallel work to do than there are cores, for instance if we have an HTTP server with 10 incoming requests on a machine with 5 cores. To be able to process all of these requests, the application and operating system use threads. A thread is basically a carrier for CPU instructions which the OS can assign to a core for a certain amount of time.
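As a small sketch of the situation above (the numbers are made up for illustration), the snippet below spawns ten threads to handle ten "requests"; however many cores the machine has, the OS schedules the threads onto them in turn:

```java
import java.util.concurrent.CountDownLatch;

public class MoreWorkThanCores {
    public static void main(String[] args) throws InterruptedException {
        // How many cores can actually execute instructions in parallel:
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("Cores: " + cores);

        // Simulate 10 incoming requests, regardless of the core count:
        int requests = 10;
        CountDownLatch done = new CountDownLatch(requests);
        for (int i = 0; i < requests; i++) {
            // Each request gets its own OS thread; the OS decides
            // which core runs which thread, and when.
            new Thread(() -> {
                // ... handle the request ...
                done.countDown();
            }).start();
        }
        done.await();
        System.out.println("Handled " + requests + " requests");
    }
}
```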

The OS makes sure that threads get executed by cores, and if more threads want to do work than there are cores, the OS makes sure they all get a turn. This means that sometimes threads have to wait. Having the OS create and schedule threads is relatively expensive, which is why many applications use a fixed pool of threads that are all created during startup. Any work to be done is run on one of the previously created threads from the pool. The problem with a thread pool is that it's hard to decide up front how many threads you need: choose too few and incoming requests pile up waiting for a free thread; choose too many and you pay a lot of overhead in memory and scheduling.
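A minimal sketch of this pattern with the standard library: a pool of five threads, created once, works through ten submitted tasks (the sizes are arbitrary, chosen to mirror the example above):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class FixedPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // 5 threads created up front; extra work waits in the pool's queue.
        ExecutorService pool = Executors.newFixedThreadPool(5);
        AtomicInteger handled = new AtomicInteger();

        for (int i = 0; i < 10; i++) {
            pool.submit(handled::incrementAndGet);
        }

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("Handled: " + handled.get()); // prints "Handled: 10"
    }
}
```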

What's worse is that there are operations in the JVM, such as reading from an input stream, which block the running thread: while it waits for I/O data, that thread can do no other work, even though it still occupies a slot in the pool.

Current Solutions

There are a few ways to deal with these threading problems. One of them is, as mentioned, to work with a thread pool, but that is far from ideal and doesn't solve the problem of threads being blocked while they perform I/O. A way to write non-blocking code is to go 'reactive'. Reactive programming inverts the data flow. In regular code, a request is handled by a controller, which waits to obtain a result from a service, which waits to obtain it from a repository, which waits and blocks while it obtains it from the database. Reactive programming turns this around by turning the result type into a sort of promise or future, often called a mono (for a single value) or a flow or flux (for a collection of values). You set up a chain and wait, without blocking. Once data becomes available, it is passed back up the chain.
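Reactive libraries such as Reactor (Mono/Flux) provide rich operators for this, but the standard library's CompletableFuture shows the same inversion in miniature. The repository method below is hypothetical; the point is that every layer returns a promise and registers a continuation instead of blocking for the value:

```java
import java.util.concurrent.CompletableFuture;

public class ReactiveStyleDemo {
    // Hypothetical repository: returns a promise instead of blocking.
    static CompletableFuture<String> findUserInDatabase(int id) {
        return CompletableFuture.supplyAsync(() -> "user-" + id);
    }

    public static void main(String[] args) {
        // Repository -> "service" -> "controller", chained without blocking:
        CompletableFuture<String> response = findUserInDatabase(42)
                .thenApply(String::toUpperCase)          // "service" step
                .thenApply(body -> "HTTP 200: " + body); // "controller" step

        // Only at the outermost edge do we wait for the result:
        System.out.println(response.join()); // prints "HTTP 200: USER-42"
    }
}
```

Note how the reactive type leaks into every return signature along the chain, which is exactly the intrusiveness discussed next.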

The problem many people have with reactive programming is that it's very intrusive. Instead of simple return values, your code gets infested with special reactive types, which makes testing, debugging, and reasoning about your code harder and less intuitive. In Kotlin you can use coroutines, which serve much the same purpose, and even though their impact on your code is less significant, you still have to add the suspend modifier and use constructs like runBlocking and launch.

Loom to the Rescue!

Wouldn't it be great if you could write code in a straightforward way without having to worry about how many threads are in your thread pool and whether the code you write is blocking? That is exactly Loom's purpose. Project Loom gives us virtual threads, which an application can use just as if they were actual threads. The difference is that virtual threads are scheduled by the JVM, not the OS. If you are running a program with virtual threads, you only need as many actual (platform) threads as there are cores available. What about blocking? Therein lies the beauty: the JVM has been modified to recognize when blocking methods are called. If your code calls a blocking method, such as reading from an InputStream, the JVM can temporarily pause the execution of that virtual thread, let another virtual thread do some work, and resume the original thread once the data has become available. This can lead to a virtual thread starting on one carrier thread and completing on another, but for the developer this is all transparent.
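To make this concrete, here is a small sketch (it needs a JDK where virtual threads are available, i.e. 19+ with preview enabled or 21): ten thousand tasks, each "blocking" for 10 ms, run comfortably on a handful of carrier threads because each sleeping virtual thread is simply parked:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadsDemo {
    public static void main(String[] args) {
        AtomicInteger completed = new AtomicInteger();
        // One cheap virtual thread per task; the JVM multiplexes them
        // onto a small pool of carrier (platform) threads.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                executor.submit(() -> {
                    Thread.sleep(10); // blocking call: this virtual thread is parked,
                                      // the carrier thread picks up other work
                    completed.incrementAndGet();
                    return null;
                });
            }
        } // close() waits for all submitted tasks to finish
        System.out.println("Completed: " + completed.get());
    }
}
```

Try the same loop with one platform thread per task and watch memory usage; virtual threads cost a few hundred bytes each instead of a megabyte-sized stack.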

What does this all mean?

It means that, once Loom has been embraced by all of our favorite frameworks and libraries, we can write code without having to worry about whether it blocks. An HTTP request handled by a virtual thread that ends up in some blocking code does not hinder the handling of any other request. It sounds too good to be true, so surely there must be caveats? Yes, there are some: synchronized blocks still pin the carrier thread, and native calls can do the same. There are workarounds, such as replacing synchronized blocks with reentrant locks, but these need to be adopted by library developers.
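The reentrant-lock workaround mentioned above looks roughly like this (a minimal sketch, not taken from any particular library): a critical section guarded by a ReentrantLock instead of a synchronized block, so a virtual thread waiting for the lock can release its carrier:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    private static final ReentrantLock lock = new ReentrantLock();
    private static int counter = 0;

    // Instead of `synchronized (this) { counter++; }`, which would pin a
    // virtual thread to its carrier, guard the section with a ReentrantLock:
    static void increment() {
        lock.lock();
        try {
            counter++;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) increment(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("Counter: " + counter); // prints "Counter: 2000"
    }
}
```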

Structured Concurrency

In our article about coroutines we had a look at structured concurrency, a way to give limited scope to tasks and variables. This is coming with Loom as well. With it in place, we can finally get rid of ThreadLocal variables and write parallel code that is far more composable and readable.

Try it now

If, like me, you don't want to wait, you can try out Loom now: create a standard Spring Boot application with Tomcat and configure it to run on an early-access build of OpenJDK 21. Then add the following customization:

import org.apache.coyote.ProtocolHandler
import org.springframework.boot.web.embedded.tomcat.TomcatProtocolHandlerCustomizer
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import java.util.concurrent.Executors

@Configuration
class TomcatConfiguration {

    @Bean
    fun protocolHandlerVirtualThreadExecutorCustomizer(): TomcatProtocolHandlerCustomizer<*> {
        // One new virtual thread per request instead of Tomcat's platform-thread pool
        val virtualThreadFactory = Thread.ofVirtual().name("tomcat-loom-", 0).factory()
        return TomcatProtocolHandlerCustomizer { protocolHandler: ProtocolHandler ->
            protocolHandler.executor = Executors.newThreadPerTaskExecutor(virtualThreadFactory)
        }
    }
}

Good Times Ahead

At the time of this writing, Java 19 is the current version, carrying the first preview of Loom. It will get another preview in Java 20, and hopefully (but no promises) it will be finalized in version 21, coming this October. If everything aligns, we could use it in production in late 2023 or early 2024. It's an exciting time to be a JVM programmer: whether you use Java or Kotlin, Loom will benefit both. And if Project Valhalla also starts delivering features and the various garbage collectors keep improving (such as generational ZGC), we can only expect the JVM to get faster and faster while writing code becomes easier and easier! I'm very much looking forward to using Loom in production.