
The New Performance Equation

Performance = Tokens / (Watts + $)
AI infrastructure is evaluated in real-world metrics that industry leaders are calling the new datacenter currency: tokens generated per watt plus tokens generated per dollar invested.

LIQID solutions enable unmatched performance, agility, and efficiency. LIQID is proven to deliver 2x more tokens/watt and 50% higher tokens/dollar.
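The two metrics behind these claims are simple ratios; the following sketch computes them with purely illustrative numbers (the token, watt, and dollar figures are hypothetical, not measured Liqid results):

```python
def tokens_per_watt(tokens_generated: float, watts_consumed: float) -> float:
    """Operational efficiency: tokens generated per watt of power draw."""
    return tokens_generated / watts_consumed

def tokens_per_dollar(tokens_generated: float, dollars_invested: float) -> float:
    """Cost efficiency: tokens generated per dollar of infrastructure spend."""
    return tokens_generated / dollars_invested

# Illustrative comparison: two systems generating the same token volume,
# one at half the power draw of the other (hypothetical numbers).
baseline = tokens_per_watt(1_000_000, 10_000)   # 100 tokens/W
efficient = tokens_per_watt(1_000_000, 5_000)   # 200 tokens/W
print(efficient / baseline)  # 2.0 -> "2x more tokens per watt"
```

The same ratio applied to dollars invested instead of watts consumed yields the tokens-per-dollar figure.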

The AI Revolution

Enterprise success in modern AI, especially inference, reasoning, and agents, requires dense, high-performance, scale-up infrastructure before building out horizontally.


Early Wave

Pre – 2021
  • AI in Off-the-Shelf Apps
  • Static Servers & GPUs

TRAINING LLMS

2021 – 2025
  • GPU shortages driven by the hyperscaler 'land grab' for LLM training
  • LLMs go mainstream

ON-PREM INFERENCE

2025 – BEYOND
  • Inferencing - AI Moves to Edge
  • Thinking - Reasoning, RAG, tuning
  • Acting - Agents, MCP

LIQID: Bridging the Widening Enterprise GPU Infrastructure Gap

N-Way Server

8-Way Server: 1–8 GPUs (600W ea.)
  • Simplified Management
  • Targeted Deployments
  • Fixed, Rigid
  • Power Hungry

vs

Liqid

Up to 30 GPUs (600W ea.)
  • Scale Up and Out
  • Enterprise Inference
  • Composable Architecture
  • Power-, Cost-Efficient

vs

AI Factory

Up to 72 GPUs (~125 kW / rack)
  • Superior LLM Training
  • Massive Unified Computing
  • Optimized for High-End
  • Power Hungry

Scale Up and Scale Out

LIQID leads the way in building and enabling the most power- and cost-efficient Scale Up and Scale Out infrastructure solutions.

Enterprises are no longer limited to server instances that max out at 8 GPUs. With Liqid, you can scale up your AI infrastructure by adding up to 30 GPUs to an industry-standard server while simultaneously scaling memory to support even the largest models.

Liqid’s open architecture lets you leverage any east-west scale-out fabric as your needs grow, clustering multiple systems together to achieve the performance you require without moving to costly AI Factories designed for large operators.

LIQID: Scale Up

Dynamically pool and scale from as little as one GPU up to 30, including 600W devices such as the NVIDIA H200 or new RTX Pro 6000, Intel Gaudi 3, Groq, or d-Matrix, without clustering and bandwidth-robbing interconnects.

Deliver massive GPU and memory capacity to a single CPU host, maximizing performance while reducing hardware, power, and cooling costs.

Deploy GPUs, memory, and storage in seconds via Liqid Matrix Software to match your workload needs and precisely assign the necessary resources in real time so you can get to market faster.

LIQID’s RDMA Peer-to-Peer technology delivers a 30% increase in performance through direct GPU-to-GPU and GPU-to-NVMe storage data exchange without CPU or host memory involvement; critical for minimizing latency and maximizing bandwidth for AI and HPC workloads.

Built for seamless scale-out using open, standards-based building blocks that can be extended across racks and systems, enabling AI factories to scale.


LIQID: Scale Out

Designed with open, standards-based technology, Liqid imposes no lock-in: enterprises can scale out using existing networking hardware and interoperable protocols.

Built-in northbound APIs integrate seamlessly with orchestration tools such as Kubernetes, VMware, and OpenShift, job schedulers like Slurm, and automation frameworks such as Ansible to build AI Factories and pool and share resources across the entire datacenter.

Supports Ethernet (RoCE v2) and InfiniBand scale-out fabrics now, with future-proof alignment for emerging fabric technologies such as NVIDIA NVLink Fusion, Ultra Accelerator Link (UALink), and Ultra Ethernet (UEC) as they become available.
 
Scale beyond 30 devices and implement multi-tenancy across departments, enabling each to run its own inference models or fine-tune LLMs, while gaining high availability (HA) and recovery, distributed management, and enhanced GPU and memory sharing.
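As a conceptual sketch only (a toy model, not Liqid's actual software), the pool-plus-tenant idea behind composable multi-tenancy can be pictured like this:

```python
class ComposableGPUPool:
    """Toy model of a shared GPU pool composed out to multiple tenants.

    Hypothetical illustration of the composability concept; the class,
    method names, and sizes are invented for this example.
    """

    def __init__(self, total_gpus: int = 30):
        self.free = list(range(total_gpus))        # unassigned device IDs
        self.assigned: dict[str, list[int]] = {}   # tenant -> device IDs

    def compose(self, tenant: str, count: int) -> list[int]:
        """Assign `count` GPUs from the shared pool to a tenant."""
        if count > len(self.free):
            raise RuntimeError("not enough free GPUs in the pool")
        devices = [self.free.pop() for _ in range(count)]
        self.assigned.setdefault(tenant, []).extend(devices)
        return devices

    def release(self, tenant: str) -> None:
        """Return a tenant's GPUs to the pool for reuse by others."""
        self.free.extend(self.assigned.pop(tenant, []))

pool = ComposableGPUPool(30)
pool.compose("inference-team", 8)
pool.compose("fine-tuning-team", 16)
print(len(pool.free))  # 6 GPUs still free for other departments
```

The point of the sketch is that assignment is a software operation against a shared pool, so departments can grow, shrink, or hand back capacity without physically moving hardware.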

First and Only Single UI for Composable Scale-up

LIQID Matrix is the first and only single UI for GPU and Memory composability, enabling simplified scale-up and a software-defined foundation for scale-out.

Power off slots and virtually replace failed devices, like GPUs and DIMMs, non-disruptively via software to minimize downtime and maximize operational efficiency.

Command and control critical AI resources in real time to meet the demand of diverse and dynamic workloads and achieve balanced, 100% utilization.

Compose from LIQID Matrix GUI, or leverage built-in REST APIs that seamlessly connect with Kubernetes, Slurm, OpenShift, VMware, Ansible, and others.
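To make the REST-driven workflow concrete, here is a minimal sketch of what an orchestration tool's request might look like. The endpoint path and payload schema are hypothetical, invented for illustration; consult the actual Liqid Matrix API documentation for the real interface:

```python
import json
import urllib.request

def build_compose_request(base_url: str, host: str, gpu_count: int):
    """Build (but do not send) an HTTP request asking a fabric manager
    to attach `gpu_count` GPUs to `host`. Endpoint and payload shape
    are hypothetical examples, not the real Liqid Matrix schema."""
    payload = {"host": host, "resources": {"gpu": gpu_count}}
    return urllib.request.Request(
        f"{base_url}/compose",                      # hypothetical path
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_compose_request("https://matrix.example.local", "node-01", 8)
print(req.get_method(), req.full_url)
```

Because the operation is just an HTTP call, the same request can be issued from a Kubernetes operator, a Slurm prolog script, or an Ansible task.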


LIQID: Our Mission | Redefining Efficiency for On-Prem & Edge AI Infrastructure

LIQID Enables Power- & Cost-Efficient AI Infrastructure
2X More Tokens per Watt (Operational Efficiency)
50% Higher Tokens per Dollar (Cost Efficiency)

Let’s Get Started

Speak with one of our product experts and learn how Liqid can drive greater agility and value from your datacenter.

Schedule a demo