Presented by Microsoft + NVIDIA
Despite numerous challenges, some of the most successful examples of moving innovative AI applications into production come from healthcare. In this VB Spotlight event, learn how organizations in every industry can follow proven practices and use cloud-based AI infrastructure to accelerate their AI efforts.
From pilot to production, AI is a challenge for every industry. But as a highly regulated, high-stakes industry, healthcare faces particularly complex obstacles. Cloud-based infrastructure that is “purpose-built” and optimized for AI has emerged as an important foundation for innovation and operationalization. Leveraging the agility of the cloud and high-performance computing (HPC), enterprises in every industry are successfully extending proofs of concept (PoCs) and pilots into production workloads.
VB Spotlight brought together Silvain Beriault, AI strategy lead and principal investigator at Elekta, a leading global innovator of precision radiotherapy systems for cancer treatment, and John K. Lee, AI platform and infrastructure lead at Microsoft Azure. They joined VB Consulting Analyst Joe Maglitta to discuss how cloud-based AI infrastructure has driven enhanced collaboration and innovation for Elekta’s global R&D efforts to improve and expand the company’s brain imaging and MR-guided radiotherapy around the world.
The big three benefits
According to Lee, elasticity, flexibility and simplicity are the key benefits of end-to-end, on-demand, cloud-based infrastructure-as-a-service (IaaS) for AI.
Since enterprise AI usually starts with a PoC, Lee says, “the cloud is a perfect place to start. You can get started with a single credit card. As models become more complex and the need for additional compute capacity increases, the cloud is the perfect place to scale up that task.” That includes scaling up (increasing the number of GPUs interconnected to a single host to boost server capacity) and scaling out (increasing the number of host instances to improve overall system performance).
The flexibility of the cloud allows organizations to manage workloads of all sizes, from massive corporate projects to smaller efforts that require less computing power. In either case, purpose-built cloud infrastructure services deliver a much faster time to value and better total cost of ownership (TCO) and ROI than building an on-premises AI architecture from scratch, Lee explains.
As for simplicity, Lee says pre-tested, pre-integrated, pre-optimized hardware and software stacks, platforms, development environments, and tools make it easy for enterprises to get started.
COVID accelerates Elekta’s cloud-based AI journey
Elekta is a medical technology company developing image-driven clinical solutions for the treatment of brain disorders and improved cancer care. As the COVID pandemic forced researchers out of their labs, business leaders saw an opportunity to accelerate and expand an effort, begun a few years earlier, to move AI R&D to the cloud.
The division’s AI head knew that a more robust, accessible cloud-based architecture to enhance its suite of AI-powered solutions would help Elekta advance its mission to improve access to healthcare, including in underserved countries.
On the cost side, Elekta also knew it would be difficult to estimate its current and future high-performance computing needs, and it weighed the expense and limitations of maintaining on-premises AI infrastructure. The total cost and complexity extend well beyond purchasing GPUs and servers, Beriault notes.
“Trying to do that on your own can get difficult pretty quickly. With a framework like Azure and Azure ML, you get much more than just access to GPUs,” he explains. “You get a whole ecosystem for doing AI experiments, documenting your AI experiments, sharing data between different R&D centers. You have a common MLOps tool.”
The pilot was simple: automating the contouring of organs in MRI images to speed up the task of delineating the treatment target, as well as the organs at risk that must be spared from radiation exposure.
The ability to scale up and down was critical to the project. In the past, “there were times when we launched as many as ten training experiments in parallel to perform some hyperparameter tuning of our model,” recalled Beriault. “Other times we just waited for data curation to finish, so we didn’t train at all. That flexibility was very important to us, as we were quite a small team at the time.”
Since the company was already using the Azure framework, they turned to Azure ML for their infrastructure, as well as critical support as teams learned to use the platform portal and APIs to begin launching jobs in the cloud. Microsoft worked with the team to build a data infrastructure that was very specific to the company’s domain and addressed critical data security and privacy issues.
“As of today, we have expanded auto-contouring, all using cloud-based systems,” Beriault says. “By using this infrastructure, we were able to expand our research activities to more than 100 organs for multiple tumor sites. In addition, scaling has enabled us to expand into other, more complex AI research in RT beyond simple segmentation, increasing the potential to positively impact patient treatment in the future.”
Choosing the right infrastructure partner
Ultimately, Beriault says adopting cloud-based architecture allows researchers to focus on their work and develop the best possible AI models instead of building and “babysitting” AI infrastructure.
Choosing a partner who can provide that kind of service is crucial, Lee noted. A capable provider maintains strategic partnerships that help keep its products and services up to date; he says Microsoft’s partnership with NVIDIA to lay the foundation for enterprise AI would be critical for customers like Elekta. But there are other considerations, he adds.
“You have to remind yourself that it’s not just about the product offering or the infrastructure. Do they have the entire ecosystem? Do they have community? Do they have the right people to help you?”
In this event, you’ll learn:

- First-hand experience and advice on the best ways to accelerate the development, testing, deployment, and operation of AI models and services
- The critical role AI infrastructure plays in the transition from POCs and pilots to production workloads and applications
- How a cloud-based, “AI-first” approach and front-line proven best practices can help your organization, regardless of industry, scale AI faster and more effectively across departments or across the globe
Presenters:

- Silvain Beriault, AI strategy lead and principal investigator, Elekta
- John K. Lee, AI platform and infrastructure lead, Microsoft Azure
- Joe Maglitta, host and moderator, VentureBeat