Adaptive, just-in-time (JIT) compilation provides a substantial performance improvement for JVM-based applications compared to using an interpreter alone. The downside is that the JVM must compile frequently used methods while the application is running, which can reduce throughput and slow response times during warm-up. Another drawback is that each time an application starts, it must repeat the same analysis to identify hot methods and compile them.
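To make the warm-up cost concrete, here is a minimal sketch of how invocation-counter-driven compilation triggering works. It is loosely modeled on the hot-method detection described above; the class name, threshold value, and methods are illustrative assumptions, not the JVM's real internals.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: a JVM counts invocations per method and hands a
// method to the JIT compiler once it crosses a "hot" threshold. All
// names and the threshold are hypothetical.
public class HotSpotSketch {
    static final int COMPILE_THRESHOLD = 10_000; // illustrative; real JVMs use tiered thresholds

    private final Map<String, Integer> invocationCounts = new HashMap<>();
    private final Map<String, Boolean> compiled = new HashMap<>();

    // Record one invocation; mark the method "compiled" once it becomes hot.
    void onInvoke(String methodName) {
        int count = invocationCounts.merge(methodName, 1, Integer::sum);
        if (count >= COMPILE_THRESHOLD && !compiled.getOrDefault(methodName, false)) {
            compiled.put(methodName, true); // stand-in for queueing the method for JIT compilation
        }
    }

    boolean isCompiled(String methodName) {
        return compiled.getOrDefault(methodName, false);
    }

    public static void main(String[] args) {
        HotSpotSketch jit = new HotSpotSketch();
        for (int i = 0; i < 10_000; i++) jit.onInvoke("hotMethod");
        jit.onInvoke("coldMethod");
        System.out.println(jit.isCompiled("hotMethod"));  // true
        System.out.println(jit.isCompiled("coldMethod")); // false
    }
}
```

Because these counters start from zero on every launch, the same methods must be rediscovered and recompiled each time, which is exactly the repeated start-up work described above.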
When running an application in the cloud, the elasticity of compute resources opens up new ways to rethink how and where JIT compilation is performed.
In this session, we'll look at Azul's work to move the JIT compiler into a centralized service shared by many JVMs. This provides several advantages, such as caching compiled code for instant delivery when the same application restarts or when new instances of the same service spin up. In addition, it offloads compilation work from individual JVMs, freeing them to deliver more transactions per second of application work. Finally, a centralized service can apply considerably more compute resources, enabling complex optimizations that would be impractical in a single JVM.
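The caching idea can be sketched as a shared cache keyed by a fingerprint of a method's bytecode, so a second JVM (or a restarted instance) requesting the same method gets already-compiled code without triggering a new compilation. The names here (CompileService, the fingerprint format) are hypothetical illustrations, not Azul's actual API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of the shared compilation cache a centralized
// JIT service could maintain across many JVM clients.
public class CompileService {
    // Compiled code keyed by a fingerprint of the method's bytecode.
    private final Map<String, byte[]> codeCache = new HashMap<>();
    private int compilations = 0; // counts how many real compilations ran

    // Return cached code if present; otherwise compile once and cache it.
    byte[] getOrCompile(String bytecodeFingerprint, Function<String, byte[]> compiler) {
        return codeCache.computeIfAbsent(bytecodeFingerprint, fp -> {
            compilations++; // expensive work happens only on a cache miss
            return compiler.apply(fp);
        });
    }

    int compilations() { return compilations; }

    public static void main(String[] args) {
        CompileService service = new CompileService();
        Function<String, byte[]> compiler = fp -> fp.getBytes(); // stand-in for real code generation

        // Two JVM instances requesting the same method: only one compilation runs.
        service.getOrCompile("com.example.Foo#bar:abc123", compiler);
        service.getOrCompile("com.example.Foo#bar:abc123", compiler);
        System.out.println(service.compilations()); // 1
    }
}
```

In this model the second and subsequent requests are pure cache hits, which is what makes restarts and scale-out of identical service instances fast.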