Around the world, governments and industries are moving quickly to secure leadership in Artificial Intelligence (AI). For many countries, this momentum intersects with pressing demographic realities: an aging population, a shrinking workforce, and the urgent need to reinvent productivity. We cannot afford to fall behind. And the rise of agentic AI promises to accelerate this transformation.
Unlike traditional AI models, agentic AI does not just respond to queries - it reasons, plans, and takes actions across systems. For example, instead of simply answering a question on travel recommendations, an agentic system would book your flights, update your calendar, send reminders, and even adjust your itinerary based on weather or delays - all without being prompted for each step. This marks a shift from passive AI responses to proactive, collaborative systems that work alongside humans. The rise of agentic AI will require significantly more compute power - not just for single tasks or queries, but for extended workflows that involve reasoning, planning, and continuous adaptation.
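That reason-plan-act loop is simple to sketch in code. In the minimal example below, plan_next_step stands in for a model call, and book_flight and update_calendar are hypothetical tool integrations; a real agent framework would replace all three.

```python
# Minimal agentic loop: the system plans, picks a tool, observes the result,
# and repeats until the goal is met. Tool names here are hypothetical stand-ins.

def book_flight(destination: str) -> str:
    return f"Flight to {destination} booked."

def update_calendar(event: str) -> str:
    return f"Calendar updated with: {event}"

TOOLS = {"book_flight": book_flight, "update_calendar": update_calendar}

def plan_next_step(goal: str, history: list[str]) -> tuple[str, str] | None:
    """Stand-in for an LLM call that decides the next action."""
    if not any("Flight" in h for h in history):
        return ("book_flight", goal)
    if not any("Calendar" in h for h in history):
        return ("update_calendar", f"Trip to {goal}")
    return None  # goal satisfied, stop acting

def run_agent(goal: str) -> list[str]:
    history: list[str] = []
    while (step := plan_next_step(goal, history)) is not None:
        tool, arg = step
        history.append(TOOLS[tool](arg))  # act, then observe the result
    return history

print(run_agent("Bangkok"))
```

Even in this toy form, the loop shows why agentic workloads are compute-hungry: every iteration is another round of reasoning, not a single one-shot response.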
As the technology for agentic AI matures and adoption expands, the world is effectively adding billions of virtual users into the compute fabric. The question for every country, including Thailand, is whether its AI infrastructure is ready to support this scale and complexity.
AI is more than GPUs
High-performance graphics processing units (GPUs) often dominate AI conversations, especially for training and running large-scale models. But central processing units (CPUs) are just as critical in powering AI systems behind the scenes - handling essential tasks such as data movement, memory management, thread coordination, and GPU workload orchestration.
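A simplified sketch of that CPU-side role, with worker threads standing in for data preparation and a consumer standing in for GPU dispatch:

```python
# Simplified CPU-side orchestration: threads preprocess and queue batches
# (data movement, coordination) while a consumer stands in for the GPU.
import queue
import threading

batch_queue: "queue.Queue[list[float]]" = queue.Queue(maxsize=4)

def preprocess_worker(shard: range) -> None:
    batch = [x * 0.5 for x in shard]   # stand-in for real preprocessing
    batch_queue.put(batch)             # hand the batch off to the accelerator

def gpu_consumer(num_batches: int) -> None:
    for _ in range(num_batches):
        batch = batch_queue.get()      # a real system would launch a kernel here
        print(f"dispatched batch of {len(batch)} items")
        batch_queue.task_done()

shards = [range(i, i + 8) for i in range(0, 32, 8)]
workers = [threading.Thread(target=preprocess_worker, args=(s,)) for s in shards]
consumer = threading.Thread(target=gpu_consumer, args=(len(shards),))
consumer.start()
for w in workers:
    w.start()
for w in workers:
    w.join()
consumer.join()
```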
In fact, many AI workloads - including language models with up to 13 billion parameters, image recognition, fraud detection, and recommendation systems - can run efficiently on CPU-only servers, particularly when powered by high-performance CPUs like the AMD EPYC(TM) 9005 Series processors.
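As a minimal sketch of what CPU-only inference looks like in practice, the snippet below pins a Hugging Face Transformers pipeline to the CPU; the model name is illustrative, and any model that fits in system memory follows the same pattern.

```python
# CPU-only inference sketch using Hugging Face Transformers.
# The model name is illustrative; any model that fits in RAM works the same way.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="gpt2",    # small demo model; swap in a larger one as memory allows
    device=-1,       # -1 pins the pipeline to the CPU
)

result = generator("AI infrastructure in Thailand", max_new_tokens=30)
print(result[0]["generated_text"])
```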
As AI models evolve into more modular architectures - such as the mixture-of-experts systems popularized by DeepSeek and others - the need for smarter resource orchestration grows. CPUs must deliver high instructions per clock (IPC), fast input/output (I/O), and the ability to manage multiple concurrent tasks with precision.
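To make the mixture-of-experts idea concrete, here is a minimal PyTorch sketch of top-k expert routing; the layer sizes and expert count are arbitrary, and production routers add load balancing that is omitted here.

```python
# Minimal mixture-of-experts routing in PyTorch: a gate scores each token and
# only the top-k experts run, so compute scales with k rather than the total
# number of experts. Sizes are arbitrary, for illustration only.
import torch
import torch.nn as nn

num_experts, top_k, d_model = 8, 2, 64
experts = nn.ModuleList([nn.Linear(d_model, d_model) for _ in range(num_experts)])
gate = nn.Linear(d_model, num_experts)

def moe_forward(x: torch.Tensor) -> torch.Tensor:
    scores = gate(x)                                   # (tokens, num_experts)
    weights, idx = torch.topk(scores.softmax(-1), top_k, dim=-1)
    out = torch.zeros_like(x)
    for slot in range(top_k):                          # dispatch to chosen experts
        for e in range(num_experts):
            mask = idx[:, slot] == e                   # tokens routed to expert e
            if mask.any():
                out[mask] += weights[mask, slot].unsqueeze(-1) * experts[e](x[mask])
    return out

tokens = torch.randn(16, d_model)
print(moe_forward(tokens).shape)  # torch.Size([16, 64])
```

The orchestration burden the paragraph describes is visible here: the routing, masking, and dispatch logic is exactly the kind of branchy, latency-sensitive work that falls to the CPU.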
Equally critical is connectivity, the "glue" that binds modern AI systems together. Advanced networking components, such as smart network interface controllers (NICs), help route data efficiently and securely between components, offloading traffic from GPUs and reducing latency. High-speed, low-latency interconnects help ensure data flows seamlessly across systems, while a scalable fabric ties nodes together into powerful distributed AI clusters.
In the age of agentic AI, heterogeneous system design becomes critical. AI infrastructure must go beyond raw compute - it must integrate CPUs, GPUs, networking, and memory in a flexible and scalable way. Systems built this way can deliver the speed, coordination, and throughput needed to support the rapid, real-time interactions of billions of intelligent agents. As adoption scales, rack-level optimization - where compute, storage, and networking are tightly co-designed - will be key to delivering the next wave of performance and efficiency.
Why openness matters in the AI race
As AI systems grow more complex and distributed, the need for openness - in software, hardware, and systems design - becomes a strategic imperative. Closed ecosystems risk vendor lock-in, limit flexibility, and can constrain innovation at a time when adaptability is key to scaling AI.
This is why open software stacks like AMD ROCm(TM) are essential. ROCm gives developers and researchers the freedom to build, optimize, and deploy AI models across a wide range of environments. It supports popular frameworks like PyTorch and TensorFlow, includes advanced tools for performance tuning, and offers portability across hardware - all available as open source. In the context of Thailand's ambition to foster innovation across academia, startups, and industry, open AI software offers broader accessibility, faster iteration, and lower barriers to entry.
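As a simple illustration of that portability, PyTorch built for ROCm exposes AMD GPUs through the same torch.cuda API used for other accelerators, so device-agnostic code needs no changes. A minimal sketch:

```python
# Device-agnostic PyTorch: the same code path runs on ROCm (AMD GPUs are
# exposed through the torch.cuda API via HIP), on CUDA, or on plain CPU.
import torch

if torch.cuda.is_available():
    backend = "ROCm/HIP" if getattr(torch.version, "hip", None) else "CUDA"
    device = torch.device("cuda")
else:
    backend, device = "CPU", torch.device("cpu")

x = torch.randn(1024, 1024, device=device)
y = x @ x.T                      # identical call on every backend
print(f"ran matmul on {backend}: {y.shape}")
```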
Similarly vital is openness at the hardware and systems level. As AI compute evolves toward large-scale, heterogeneous deployments, rack-scale architecture becomes foundational. Open standards such as the Open Compute Project (OCP) support modular system design, while emerging collaborations like the Ultra Accelerator Link (UALink) aim to create open, high-bandwidth connections between AI accelerators across servers. Meanwhile, the Ultra Ethernet Consortium (UEC) is defining next-generation networking standards purpose-built for AI - enabling low-latency, high-throughput data movement across distributed systems.
These open initiatives give cloud and data center operators the ability to build flexible, interoperable infrastructure that keeps pace with AI's explosive growth. For Thailand, embracing an open ecosystem positions the country to benefit from global innovation while cultivating local differentiation. It enables governments and businesses to build infrastructure that is performant, energy-efficient, and tailored to domestic needs - without being locked into proprietary limitations.
In the upcoming era defined by multi-agent AI, openness is not just a philosophy - it is a prerequisite for scale, sovereignty, and sustained leadership.
Looking ahead to 2026
As agentic AI reshapes how work gets done, the focus must go beyond GPUs to encompass CPUs, high-speed interconnects, and smart networking - all equally essential for orchestrating the complex, real-time decisions AI agents make at scale. Just as critical is an open ecosystem - with open software like ROCm, industry standards for rack-scale design, and collaborative efforts such as UALink and UEC enabling greater flexibility, faster innovation, and interoperability from edge to cloud.
This is why AMD is advancing its vision with "Helios" - a next-generation rack-scale reference design for AI infrastructure, slated for release in 2026, that unifies high-performance compute, open software, and scalable architecture to meet the demands of agentic AI.
For Thailand, building open, heterogeneous, and scalable infrastructure like this is more than a technology choice - it is a strategic foundation for national competitiveness. The country is already leading this charge with NSTDA's "LANTA" supercomputer - ASEAN's No. 1 HPC system. By leveraging the flexibility of AMD EPYC(TM) processors, NSTDA can handle diverse workloads, from LLM training to precision medicine. This choice is delivering real-world results today, such as slashing PM2.5 prediction times from 11 hours to 45 minutes and cutting energy costs by 30% through liquid cooling - evidence that Thailand's infrastructure is powerful, sustainable, and ready for 2026. As the country navigates rising automation needs and growing regional AI ambitions, future-ready AI infrastructure will be essential to unlocking sustainable growth, innovation, and resilience.