GIGABYTE AI TOP ATOM vs NVIDIA DGX Spark: Desktop AI power head-to-head

Last update: 27/10/2025
Author: Isaac
  • Both are based on the NVIDIA GB10 Grace Blackwell superchip and target 1 petaFLOP of FP4 performance with 128 GB of unified memory.
  • AI TOP ATOM scales from 200B to ~405B parameters by joining two nodes with ConnectX‑7 (up to 200 Gbps).
  • Full software stack: NVIDIA DGX OS/Ubuntu support and AI TOP utility for downloads, inference, RAG and ML.
  • 1-liter format with 10GbE, HDMI 2.1a and USB 3.2 Gen 2x2; availability and pricing already listed on Newegg.

GIGABYTE AI TOP ATOM and NVIDIA DGX Spark comparison

The duel between desktop mini-supercomputers is getting interesting: GIGABYTE AI TOP ATOM vs NVIDIA DGX Spark. Both share the heart of NVIDIA's new era of AI computing, but they come in different "packages": one as a reference platform and the other as a turnkey product ready to put to work on the desk. If you're looking for serious AI performance without setting up a rack, there's a lot to analyze here.

We're talking about computers of just one liter that promise performance figures that until recently were unthinkable in such a small PC: 1 petaFLOP of AI performance in FP4 (1,000 TOPS), high-speed unified memory, and the entire NVIDIA software ecosystem to generate, fine-tune, and run inference on next-generation models. And yes, all this with low power consumption and network connections that scale when needed.

What's each: DGX Spark platform and GIGABYTE's mini PC

To situate ourselves: NVIDIA DGX Spark is NVIDIA's desktop reference platform (formerly known as Project DIGITS), born from a collaboration with MediaTek. It's the "mold" around which different manufacturers can build their compact AI devices. Within that framework, brands like MSI and ASUS have already showcased their offerings, and GIGABYTE is joining in with its own twist.

That twist is the GIGABYTE AI TOP ATOM, a 1-liter mini PC that takes the technical foundation of the DGX Spark and packages it into a final product with clear specifications, its own utilities, and commercial availability. In other words, Spark sets the standard and AI TOP ATOM is GIGABYTE's concrete interpretation to bring that power to developer desktops, research centers, and classrooms.

Architecture: NVIDIA GB10 Grace Blackwell from top to bottom

nvidia dgx spark

The common denominator is the NVIDIA GB10 Grace Blackwell superchip, a SoC that merges CPU and GPU with unified high-bandwidth memory. On the CPU side, we find a 20-core Armv9 processor, with 10 high-performance Cortex-X925 cores and 10 efficiency-oriented Cortex-A725 cores. This combination provides agility both in preprocessing and orchestration tasks and in complex AI pipelines.

The GPU implements the Blackwell architecture with the latest generation of NVIDIA's specialized cores: 5th-generation Tensor Cores to accelerate AI and 4th-generation RT Cores for ray tracing. In addition, the GB10 integrates 1 NVENC and 1 NVDEC, useful if you work with video streams in analytics or multimodal applications. All of this is tied together through NVLink-C2C, the ultra-high-speed "glue" between CPU and GPU that far exceeds what PCI Express 5.0 offers within the system.

AI performance and model sizes

The figure that draws attention is the low-precision compute for generative AI: 1,000 TOPS in FP4, equivalent to 1 petaFLOP. In practice, this means that, on a tiny chassis, you can efficiently run and tune contemporary language and multimodal models, perform rapid prototyping, and deploy demanding inference into production without relying on the cloud.
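To put 1 petaFLOP in perspective, a quick back-of-envelope sketch: a dense decoder-style model needs roughly 2 FLOPs per parameter per generated token (a common rule of thumb, not a figure from the sources), so the compute-bound token-rate ceiling follows directly. The utilization factor is an assumption; real workloads sit well below 100%.

```python
# Back-of-envelope: compute-bound token rate at 1 petaFLOP (FP4).
# Assumption (not from the spec sheet): a dense transformer needs roughly
# 2 FLOPs per parameter per generated token.
PEAK_FLOPS = 1e15  # 1 petaFLOP in FP4

def compute_bound_tokens_per_s(params: float, utilization: float = 1.0) -> float:
    flops_per_token = 2 * params  # rough dense-transformer rule of thumb
    return PEAK_FLOPS * utilization / flops_per_token

# e.g. a hypothetical 70B-parameter model at perfect utilization:
print(round(compute_bound_tokens_per_s(70e9)))  # ideal ceiling, not real-world
```

In practice generation is usually memory-bandwidth-bound rather than compute-bound, so this number is only an upper ceiling.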


As for model sizes, the sources are not monolithic: on the one hand, local support is indicated for models with up to 200 billion parameters on a single unit; on the other, there are mentions of support for LLMs of "up to 70 billion and 200 billion parameters." The discrepancy stems from the ambiguity between Spanish "billones" and English "billions," but the core message remains: the working threshold of these machines sits fully within the top leagues for current LLMs.

When two units are interconnected, the joint limit scales up to approximately 405 billion parameters, opening the door to even larger models and distributed workloads. For those exploring extended contexts, RAG with large indexes, or multi-stage pipelines, that extra headroom makes all the difference.
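The 200B-per-node and ~405B-per-pair figures line up with simple arithmetic on the unified memory, assuming FP4 weights take 0.5 bytes per parameter (the remainder goes to KV cache, activations, and runtime overhead):

```python
# Why ~200B parameters fit in one 128 GB node, and ~405B across two.
# Assumption: FP4 weights occupy 0.5 bytes per parameter; the rest of the
# 128 GB is left for KV cache, activations, and runtime overhead.
BYTES_PER_PARAM_FP4 = 0.5
GiB = 1024**3

def weight_footprint_gib(params: float) -> float:
    return params * BYTES_PER_PARAM_FP4 / GiB

print(weight_footprint_gib(200e9))  # one node: ~93 GiB of weights in 128 GB
print(weight_footprint_gib(405e9))  # two nodes: ~189 GiB across 256 GB
```

The margins are tight but workable, which is presumably why the vendor quotes those exact thresholds.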

This kind of performance is not just for laboratories: development, education, research, science, smart cities, and robotics all appear as natural scenarios. Having a petaFLOP of AI on the desk accelerates testing, reduces operational costs, and enables agile iteration without waiting in cloud queues.

Unified memory, storage and bandwidth

System memory is a strong point: 128 GB of LPDDR5X on a 256-bit bus, with a bandwidth of around 273 GB/s. Because it is unified between CPU and GPU, it simplifies programming and avoids redundant copies, which is critical for moving large tensors and batches in lightweight training or high-concurrency inference.

For storage, GIGABYTE offers up to a 4 TB PCIe Gen5 SSD, although there are commercial configurations that start at 1 TB on PCIe 4.0. Beyond capacity, the point is that bottlenecks are minimized when loading bulky models, copying large files, or streaming collections of embeddings and vision or audio datasets at a good pace, keeping the system in its performance "sweet spot."
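The practical difference between Gen4 and Gen5 shows up in model-load times. A rough sketch, assuming sustained sequential reads of ~14 GB/s for a PCIe Gen5 drive and ~7 GB/s for Gen4 (drive-dependent assumptions, not figures from the sources):

```python
# Rough time to load model weights from SSD into memory.
# Throughput figures are assumed sustained sequential-read rates.
def load_seconds(model_gb: float, throughput_gb_per_s: float) -> float:
    return model_gb / throughput_gb_per_s

model_gb = 93  # ~200B parameters at FP4 (0.5 bytes/param)
print(load_seconds(model_gb, 14))  # Gen5-class drive: a handful of seconds
print(load_seconds(model_gb, 7))   # Gen4-class drive: roughly double
```

Either way, loads stay in the seconds range rather than minutes, which is what keeps iteration agile.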

The NVLink-C2C internal interconnect marks another key difference. By offering clearly higher bandwidth than PCIe 5.0 for CPU–GPU communication within the SoC, it reduces latencies and exploits unified memory in transfer-sensitive scenarios, which are precisely those that dominate generative AI and ML.

Connectivity and expansion to go further

On the ports and connections front, it doesn't fall short. As standard, the system integrates 10GbE for high-performance wired networking, Wi‑Fi 7 and Bluetooth 5.3 for modern wireless connectivity, plus HDMI 2.1 and several USB 3.2 Gen 2x2 Type-C ports (including a PD IN port for power). Multi-channel audio is output via HDMI, so it also covers multimedia needs without extra accessories.

The piece that allows growth is the NVIDIA ConnectX‑7 SmartNIC. It interconnects two systems with links of up to 200 Gbps, enabling low-latency cluster computing. The sources also describe this interface as a ConnectX‑7 InfiniBand NIC, a way of underscoring the focus on minimal latency and sustained bandwidth. In practice, this allows scaling from 200B to ~405B parameters when joining two nodes.
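It helps to put the 200 Gbps link in context against local memory, remembering that network speeds are quoted in bits while memory bandwidth is quoted in bytes. A sketch of the ratios (protocol overhead ignored, so these are best-case estimates):

```python
# Putting the 200 Gbps ConnectX-7 link in context against local memory.
# Network is quoted in gigaBITS/s; memory in gigaBYTES/s.
LINK_GBPS = 200                 # gigabits per second
link_gb_per_s = LINK_GBPS / 8   # raw payload ceiling in GB/s
LOCAL_MEM_GB_PER_S = 273        # quoted LPDDR5X bandwidth

print(link_gb_per_s)                         # 25 GB/s between nodes
print(LOCAL_MEM_GB_PER_S / link_gb_per_s)    # local memory is ~11x faster
# e.g. shipping 1 GB of activations across nodes takes, at best:
print(1 / link_gb_per_s)                     # seconds, ignoring overhead
```

The ~11x gap between local and inter-node bandwidth is why two-node setups favor partitioning schemes that minimize cross-node traffic.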


Beyond this pairing of two units, from an operational point of view it is possible to stack multiple AI TOP ATOM units and link them with ConnectX‑7 so they work simultaneously and keep network folders in sync. For research workshops or advanced classrooms, having "blocks" that add up in a modular way is especially practical.

Software, operating systems and utilities

There are no compromises in software: the system supports NVIDIA DGX OS and Ubuntu Linux, and integrates with the NVIDIA AI software stack, that is, the set of tools, frameworks, and libraries used to build and deploy cutting-edge AI projects. This lowers the barrier to entry both for those already in the CUDA ecosystem and for teaching teams, and allows, for example, updating firmware on Linux as standard.

GIGABYTE also adds its AI TOP Utility, a layer that simplifies common flows: model downloads, inference, RAG, and machine learning from a more direct interface. The proposal emphasizes local data privacy and security, something that suits organizations that must comply with regulations or that simply prefer to keep their sensitive information in-house.
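The AI TOP Utility's internals aren't public, so purely as a sketch of what the retrieval step of a local RAG flow involves: embed the documents, embed the query, and return the closest chunks. A toy bag-of-words "embedding" stands in for a real embedding model so the example runs entirely offline; every name here is illustrative, not from the utility.

```python
# Generic local RAG retrieval sketch (hypothetical, not AI TOP Utility code).
# A bag-of-words Counter stands in for a real embedding model.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "GB10 pairs a Blackwell GPU with a 20-core Arm CPU",
    "ConnectX-7 links two nodes at up to 200 Gbps",
    "The chassis measures roughly one liter",
]
print(retrieve("ConnectX-7 link speed Gbps", docs))
```

In a real pipeline the retrieved chunks would be prepended to the prompt of a locally hosted LLM, which is exactly the kind of flow that benefits from keeping data on-premises.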

Physical design, consumption and format

The whole package fits in a chassis of approximately 1 liter. It is a surprisingly compact box with official measurements of 150 × 150 × 50.5 mm and a weight of around 1.2 kg. Power is supplied by an external 240 W adapter, enough to sustain peak performance without complications, and, as detailed by the sources, the system works from standard household outlets without the need for special infrastructure.

Beyond size, GIGABYTE has thought about rapid and orderly deployment: it is viable to physically stack multiple units and connect them with the appropriate SmartNIC, which helps build a small cluster in a very small space. For project teams, being able to "add boxes" and remove nodes as needed is pure gold.

Key specifications of the GIGABYTE AI TOP ATOM

GIGABYTE AI TOP ATOM

  • SoC – NVIDIA GB10
    • CPU – 20 Armv9 cores with 10× Cortex‑X925 and 10× Cortex‑A725
    • Architecture – NVIDIA Grace Blackwell
    • GPU – Blackwell Architecture
    • CUDA Cores – Blackwell Generation
    • Tensor Cores – 5th generation
    • RT cores – 4th generation
    • Tensor Performance – 1,000 AI TOPS (FP4)
    • VPU – 1× NVENC, 1× NVDEC
  • System memory – 128 GB 256-bit LPDDR5X (≈273 GB/s)
  • Storage – Up to 4 TB PCIe Gen5 SSD
  • Display – HDMI 2.1a output
  • Audio – Multi-channel audio via HDMI
  • Network
    • 1× 10GbE RJ45
    • SmartNIC ConnectX‑7 to join two systems up to 200 Gbps
    • Wi‑Fi 7 and Bluetooth 5.3
  • USB
    • 1× USB 3.2 Gen 2×2 Type‑C (PD IN)
    • 3× USB 3.2 Gen 2×2 Type‑C (up to 20 Gbps)
  • Consumption – 240 W external adapter
  • Dimensions – 150 × 150 × 50.5 mm
  • Weight – 1.2 kg

Price and Availability

The GIGABYTE AI TOP ATOM is now available in stores such as Newegg, with a starting price of approximately $3,499.99 for the version with a 1 TB PCIe 4.0 SSD. Above it sit the 4 TB PCIe 4.0 option (about $3,899.99) and the top variant with a 4 TB PCIe 5.0 SSD (around $3,999.99). As always, it's worth confirming availability with local distributors and consulting the official product page and press release for configurations and lead times.


Ecosystem and alternatives on the same basis

This platform's arrival is not limited to GIGABYTE. MSI and ASUS have already shown their own mini PCs on the same NVIDIA DGX Spark reference, which confirms that we are facing a de facto standard in "desktop AI." For the end user, this translates into choosing the builder that best fits in terms of design, ports, service, and extra utilities, knowing that the technical core (GB10) is the same.

Direct comparison: AI TOP ATOM vs DGX Spark

With all the above, the comparison is easier to understand. DGX Spark is the concept and the reference platform: it defines the 1-liter format, the use of the GB10 Grace Blackwell, and the goal of bringing a petaFLOP of FP4 to the desk. On that basis, GIGABYTE's AI TOP ATOM is a final product that crystallizes the idea with 128 GB of unified memory, fast storage, dedicated ports, and its own utility to simplify AI flows.

Thus, in terms of raw power and model range we are looking at practically equivalent figures (1,000 TOPS FP4 and up to 200B parameters on one node, with ~405B across two connected nodes). The practical differences lie in the concrete implementation: included utilities (AI TOP Utility), the exact combination of ports, storage options and, of course, price and availability. If you're looking for an OEM-branded "out-of-the-box" experience, the ATOM fits the bill.

To close the circle, it's worth recalling what the Spanish-language sources pointed out: GIGABYTE explicitly speaks of stacking multiple units and of uses ranging from education to the scientific world, through smart cities and robotics. The Spark spirit remains intact: real AI power in a minimal format, with immediate scalability when more muscle is needed.

It is clear that we are talking about a generational leap in "desktop AI": a mini PC like the GIGABYTE AI TOP ATOM inherits the essence of NVIDIA DGX Spark (formerly Project DIGITS), concentrates 1 petaFLOP of FP4 in one liter, offers 128 GB of unified memory with NVLink‑C2C, scales up to ~405B parameters when pairing two nodes via ConnectX‑7, and brings the NVIDIA stack with DGX OS/Ubuntu plus the AI TOP Utility to facilitate downloads, inference, RAG, and ML. If we add 10GbE, HDMI 2.1, USB 3.2 Gen 2x2, and the prices already published on Newegg, we have a very solid proposal for research, classrooms, and teams that want to accelerate their AI cycles without setting up a dedicated server.
