More than two decades ago, the Java programming language, originally developed by Sun Microsystems, offered developers the promise of building an application once and then running it on any operating system.
Greg Lavender, Intel’s CTO, remembers Java’s original promise better than most, having worked at Sun for over a decade. Rather than having to build applications separately for different hardware and operating systems, Java promised a more unified and streamlined development process.
However, the ability to build once and run anywhere is still not a reality across the computing landscape in 2022. It’s a situation Intel wants to help change, at least when it comes to accelerated computing and the use of GPUs.
The need for a unified, Java-like language for GPUs
“Nowadays, in the accelerated computing and GPU world, you can use CUDA and then you can run only on an Nvidia GPU, or you can run AMD’s CUDA equivalent on an AMD GPU,” Lavender told VentureBeat. “You can’t use CUDA to program an Intel GPU, so what do you use?”
That’s where Intel is making a big contribution to the open source SYCL specification (SYCL is pronounced “sickle”) that is intended to do for GPU and accelerated computing what Java did for application development decades ago. Intel’s investment in SYCL isn’t entirely selfless and isn’t just about supporting an open source effort; it’s also about helping drive more development toward the recently released consumer and data center GPUs.
SYCL is a data-parallel programming approach in the C++ language and is very similar to CUDA, according to Lavender.
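To make that concrete, here is a minimal sketch of what SYCL code looks like: a vector addition expressed as standard C++, where the `parallel_for` plays roughly the role of a CUDA kernel launch. This is an illustrative example, not code from Intel; it assumes a SYCL 2020 implementation such as Intel's DPC++ compiler is installed, and the same source can then target Intel, AMD, or Nvidia devices depending on the available backend.

```cpp
// Minimal SYCL sketch (assumes a SYCL 2020 toolchain, e.g. DPC++).
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    sycl::queue q;  // selects a default device: a GPU if one is present, else the CPU
    {
        sycl::buffer buf_a(a), buf_b(b), buf_c(c);
        q.submit([&](sycl::handler& h) {
            sycl::accessor A(buf_a, h, sycl::read_only);
            sycl::accessor B(buf_b, h, sycl::read_only);
            sycl::accessor C(buf_c, h, sycl::write_only);
            // The device kernel: analogous to a CUDA kernel launch
            h.parallel_for(n, [=](sycl::id<1> i) { C[i] = A[i] + B[i]; });
        });
    }  // buffers go out of scope here, so results are copied back to the vectors

    std::cout << c[0] << "\n";
}
```

Because the device selection happens at runtime through the queue, the same binary can run on whichever accelerator the installed SYCL backend exposes, which is the portability point Lavender is making.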
Intel supports standardization: one codebase to rule them all
To date, the development of SYCL has been managed by the Khronos Group, a multi-stakeholder organization that helps develop standards for parallel computing, virtual reality, and 3D graphics. On June 1, Intel acquired the Scottish development company Codeplay Software, one of the main contributors to the SYCL specification.
“We should have an open programming language with extensions to C++ that are standardized, that can run on Intel, AMD and Nvidia GPUs without changing your code,” Lavender said.
Automated tool for converting CUDA to SYCL
Lavender is also a realist, and he knows that a lot of code has already been written specifically for CUDA. That’s why Intel developers built an open source tool called SYCLomatic, which aims to migrate CUDA code to SYCL. Lavender claimed that SYCLomatic covers about 95% of the functionality present in CUDA today; the remaining 5%, he noted, consists of capabilities specific to Nvidia hardware.
With SYCL, Lavender said, there are code libraries that developers can use that are device agnostic. Once a developer has written the code, a SYCL compiler can build it for whatever architecture is needed, be it an Nvidia, AMD, or Intel GPU.
Looking ahead, Lavender said he is hopeful that SYCL can become a Linux Foundation project, further enabling participation in and growth of the open source effort. Intel and Nvidia are both members of the Linux Foundation and support multiple efforts there, including the Open Programmable Infrastructure (OPI) project, which aims to provide an open standard for infrastructure processing units (IPUs) and data processing units (DPUs).
“We should have write once, run everywhere for accelerated computing, and then let the market decide which GPU to use, and level the playing field,” Lavender said.