Interview: Why Java is the future of cloud applications
Because ARM64 processors consume less energy, more servers can be packed into the same volume of datacentre space than is possible with x86 hardware.
If workloads can run on ARM64 hardware, there is potentially more processing power available per datacentre rack: an ARM-based rack consumes less power and requires less cooling infrastructure than an equivalent rack of x86 servers.
Scott Sellers is CEO of Azul, a company that offers Azul Platform Core, an alternative to the Oracle Java Development Kit (JDK) for developing and running enterprise Java applications. In an interview with Computer Weekly, Sellers discusses the impact of processor architectures on enterprise software development and why Java’s original “write once, run anywhere” mantra is more important than ever.
It is no longer the case that the only target platform for enterprise applications is an Intel- or AMD-powered x86 server. Graphics processing units (GPUs) from Nvidia and the availability of ARM-based server chips mean the choice of target platform is an important decision when deploying enterprise applications.
The rise of ARM
“There’s no question that the innovation on the ARM64 architecture is having a profound impact on the market,” says Sellers. For instance, he points out, Amazon has made significant investments in developing ARM64-based server architectures for Amazon Web Services (AWS), while Microsoft and Google also have ARM server initiatives.
“It’s an inherently more cost-effective platform compared to x86 servers,” he adds. “At this point in time, performance is equal to, if not better than, x86, and the overall power efficiency is materially better.”
According to Sellers, there is a lot of momentum behind ARM64 workloads. Public clouds generally support multiple programming languages, including Python, Java, C++ and Rust, but languages that are compiled ahead of time for a specific target platform mean revisiting the build, and potentially the source code, when migrating between x86 and ARM-based servers. Languages such as Python and Java, which run on an interpreter or virtual machine and are compiled “just in time” as the application runs, do not need to be recompiled for the new architecture.
“The beauty of Java is that the application doesn’t have to be modified. No changes are necessary. It really does just work,” he says.
According to Sellers, replatforming efforts usually involve a lot of work and a lot of testing, which makes it far more difficult for organisations to migrate cloud workloads from x86 servers onto ARM64. “If you base your applications on Java, you’re not having to make these bets. You can make them dynamically based on what’s available,” he says.
This effectively means that, in public cloud infrastructure as a service, a Java developer writes the code once and the Java runtime’s just-in-time compiler generates machine code for whichever processor the application runs on. IT decision-makers can assess cost and performance dynamically, and choose a processor architecture based on cost or the level of performance they need.
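As a minimal illustration of that point (a sketch only, not part of Azul’s tooling; the class name is hypothetical), the same compiled Java bytecode can be copied between architectures and simply report whichever processor it finds itself running on:

public class ArchCheck {
    public static void main(String[] args) {
        // Typically reported as "amd64" or "x86_64" on x86 servers
        // and "aarch64" on ARM64 servers
        System.out.println("CPU architecture: " + System.getProperty("os.arch"));
        System.out.println("JVM: " + System.getProperty("java.vm.name")
                + " " + System.getProperty("java.vm.version"));
        // No per-architecture build step is needed: the bytecode is identical
        // on both platforms, and the JVM's just-in-time compiler emits native
        // instructions for whichever processor it is running on.
    }
}

Compile it once with javac and the resulting class file runs unchanged on an x86 or ARM64 instance; only the Java runtime itself needs to be built for the host architecture.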
Sellers claims Java runs exceptionally well on both x86 and ARM64 platforms. He says Azul customers are seeing a 30% to 40% performance benefit using the company’s Java runtime engine. “That’s true of both x86 and ARM64,” he adds.
Sellers says IT leaders can take advantage of the performance and efficiency boost available on the ARM64 platform without the need to make any changes to the target workload. In the public cloud, he says this not only saves money – since the workload uses less cloud-based processing to achieve the same level of performance – but the workload also runs faster.
The decision on which platform to deploy a workload is something Sellers feels should be assessed as part of a return on investment calculation. “For the same amount of memory and processing capability, an ARM64 compute node is typically about 20% cheaper than the x86 equivalent,” he says. This, he adds, is good for the tech sector. “Frankly, it keeps Intel and AMD honest.”
He adds: “Some of our bigger customers now simply have hybrid deployments in the cloud, and by hybrid, what I mean is they’re running x86 and ARM64 simultaneously to get the best of all worlds.”
What Sellers is referring to is the fact that, while customers may indeed want to run workloads on ARM64, there is still far more x86 kit deployed in the public cloud.
While this is set to change over time, according to Sellers, many of Azul’s biggest customers cannot purchase enough ARM64 compute nodes from public cloud providers, which means they have to hedge their bets a bit. Nevertheless, Sellers regards ARM64 as something that will inevitably become a dominant force in public cloud computing infrastructure.
Why it is not always about GPUs
Nvidia has seen huge demand for its GPUs to power artificial intelligence (AI) workloads in the datacentre. GPUs pack hundreds of relatively simple processor cores into a single device, which can be programmed to run in parallel, providing the acceleration required for AI inference and machine learning workloads.
Sellers describes AI as an “embarrassingly parallel” problem, which can be solved using a high number of GPU cores, each running a relatively simple set of instructions. This is why the GPU has become the engine of AI. But it does not make GPUs suitable for all applications that require a high degree of parallelism, such as those where several complex tasks are programmed to run simultaneously.
For one of Azul’s customers, financial exchange LMAX Group, Sellers says GPUs would never work. “They would be way too slow and the LMAX use case is nowhere near as inherently parallel as AI.”
GPUs, he says, are useful in accelerating a very specific type of application, where a relatively simple piece of processing can be distributed across many processor cores. But a GPU is not suitable for enterprise applications that require complex code to be run in parallel across multiple processors.
Beyond the hardware debate over whether to use GPUs in enterprise applications, Sellers believes the choice of programming language is an important consideration when coding AI software that targets GPUs.
While people are familiar with programming AI applications in Python, he says: “What people don’t recognise is that Python code is not really doing anything. Python is just the front end to offload work to the GPUs.”
Sellers says Java is better suited than other programming languages for developing and running traditional enterprise applications that require a high degree of parallelism.
While Nvidia offers CUDA for programming its GPUs, Sellers says that, for writing traditional enterprise applications, Java is the only programming language with true vector and massive multithreading capabilities. According to Sellers, these make Java a better language for programming applications that require parallel computing. With virtual threads, which became standard in Java 21, it becomes easier to write, maintain and debug high-throughput concurrent applications.
“Threading, as well as vectorisation, which enables more than one computer operation to be run simultaneously, have become a lot better over the last few Java releases,” he adds.
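As a rough sketch of the virtual threads Sellers mentions (standardised in Java 21), the snippet below submits thousands of blocking-style tasks to a virtual-thread-per-task executor; the class name and task body are illustrative only:

import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class VirtualThreadSketch {
    public static void main(String[] args) {
        // Each submitted task gets its own lightweight virtual thread, so
        // straightforward, blocking-style code can scale to very high
        // concurrency without hand-rolled asynchronous plumbing.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i ->
                    executor.submit(() -> {
                        // Simulate a blocking call such as I/O; the underlying
                        // carrier thread is freed while this task is parked.
                        Thread.sleep(Duration.ofMillis(100));
                        return i;
                    }));
        } // the executor's close() waits for all submitted tasks to complete
    }
}

The vector capabilities Sellers refers to are exposed through Java’s Vector API (jdk.incubator.vector), which at the time of writing is still incubating and has to be enabled explicitly with the --add-modules jdk.incubator.vector flag.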
Given Azul’s product offerings, Sellers is clearly going to extol the virtues of the Java programming language. However, there is one common thread in the conversation that IT decision-makers should consider. Assuming the future of enterprise IT is dominated by cloud-native architecture, even when some workloads must run on-premises, IT leaders need to address a new reality: x86 is not the only game in town.