The High Performance Computing (HPC) Summit is a specialised industry gathering that brings together leaders in advanced compute, AI infrastructure, data centre engineering, and scientific research to discuss the next generation of high-density computing.
This lead-up to the main day will showcase a dedicated summit that integrates AI workloads and data centre infrastructure into HPC, making it the first HPC summit in Southeast Asia to address this convergence. The HPC Summit focuses on the rapidly evolving ecosystem surrounding HPC workloads powered by GPUs, accelerators, high-speed interconnects, and liquid-cooled systems, and on how they are transforming data centre design, cloud platforms, and capabilities.
HPC summits typically unite hyperscalers, chipmakers, GPU cloud providers, data centre operators, system integrators, and research institutions to explore topics such as cluster architecture, distributed AI training, cooling and power requirements for 100 kW+ racks, optical networking, and the convergence of HPC with AI, quantum, and edge technologies.
As demand for compute accelerates across industries such as finance, biotech, manufacturing, and climate research, the HPC Summit serves as a critical platform for collaboration, roadmap alignment, and knowledge sharing. It will provide organisations with strategic insight into where the compute landscape is heading, how infrastructure needs to evolve, and what opportunities exist across cloud, hardware, and digital infrastructure markets.
Opening remarks from W.Media introducing the HPC Summit Southeast Asia 2026 – setting the tone for a day of open collaboration among open standard communities.
A visionary session highlighting how open compute standards and cross-border collaboration are driving the next wave of AI and supercomputing infrastructure across Asia.
Innovations in 48V busbars, liquid-cooling integration, and hyperscale deployment readiness for GPU clusters.
Explores the new v2 specification, emphasizing modularity, rapid deployment, and sustainability in existing 19-inch rack environments.
Outlines how China’s rack standard supports its AI Compute Network initiative, creating interoperability and supply-chain alignment across hyperscalers.
Industry experts and consortium representatives discuss the possibility of a universal design language for racks, power, and cooling systems supporting global AI and HPC growth.
OEMs and power specialists explain how 48V systems are becoming the new baseline for AI clusters, enabling higher efficiency and interoperability between open rack formats.
A hyperscaler or integrator shares practical insights from deploying an open compute-based HPC facility using liquid cooling and modular power infrastructure.
Discussion on how renewable integration, heat reuse, and green financing intersect with HPC’s high-density requirements.
Vendors and operators examine how open standards are enabling interoperable liquid-cooling ecosystems at scale.
Leading ODMs such as Wiwynn, Foxconn, and Quanta discuss manufacturing agility and ecosystem collaboration for open rack systems.
Policy leaders and research institutions explore how open architectures can underpin national HPC programs across Southeast Asia.
A technical session covering the emerging role of high-bandwidth, low-latency interconnects (InfiniBand, Ethernet, CXL) in open HPC system design.
Experts from hyperscalers, OEMs, and engineering firms envision how open, modular, and liquid-cooled data centers will evolve to serve AI workloads.
Industry leaders discuss a roadmap for global interoperability and regional working groups for AI infrastructure harmonization.