
On the second full day of Dell Technologies World 2024, the mood was electric, with excitement around AI unlike any attendees had seen at previous shows. As one attendee put it, “It seems like, for the first time, AI is something real that people are doing and not just sci-fi fantasy or marketing speak.”

Echoing that sentiment, executives from Dell, Meta, Hugging Face, Broadcom, and others took the stage to keynote on how the Dell AI Factory will come together and what that architecture will look like.

Liqid Cofounder Sumit Puri presents on how Liqid UltraStack GPU accelerators revolutionize AI economics.

As reported in IT Pro, Dell outlined five core beliefs related to AI:

  • Data is the differentiator that should drive businesses, as 83% of all data is on-premises and 50% is generated at the edge
  • AI will move to the data, not the other way around (aka data gravity)
  • There is no one-size-fits-all approach, as AI will need to adapt from edge computing to hyperscale
  • Firms will need open, modular infrastructure to adopt AI
  • The sector needs an open ecosystem to take advantage of these powerful new technologies

What does this AI Factory look like? That depends on the nature of the work, as infrastructure will need to be dynamic enough to meet the shifting demands of AI workloads. It will need to accommodate data gravity, remaining flexible enough to scale the right resources where the data lives, not where you’d like it to be.

Open solutions like those provided via the leading AI platform Hugging Face are key to delivering the expansive coverage organizations need to innovate in ways that make sense for their individual AI strategies. Dell has partnered with the French startup to offer the Dell Enterprise Hub, a new portal on Hugging Face for easy deployment of open-source AI models as containers that work out of the box.

Sy Choudhury, Meta’s Director of AI Partnerships, took to the stage to discuss the latest version of Meta's large language model, Llama 3, released as open source, underlining the importance of open standards for AI proliferation. The new LLM boasts up to 70 billion parameters, and Meta says it is designed to redefine processing power, versatility, and accessibility.

Choudhury underlined the "very permissive" license for Llama 3, which he says allows businesses to use it how they want and to fine-tune it on their own data. (Check out the video to learn more about Liqid's work with Meta on GPU aggregation for cutting-edge VR).

Excitement over the current realities and future possibilities for AI extended onto the showroom floor, where one longtime Liqid sales leader noted, “it’s the busiest booth we’ve ever had!”

Built for AI, Liqid delivers flexible architecture, GPU acceleration, and modular scalability for Dell PowerEdge solutions 

The flexible architecture Dell and its partners envision for AI Factories will sound familiar to those who know Liqid. Liqid’s platform was built from the ground up to deliver flexible, scalable infrastructure for AI, accelerating compute by enabling users to pool GPU, storage, networking, and other PCIe-deployed devices, busting silos and enabling a living infrastructure that can adjust to meet the shifting demands of AI workloads.

On-prem, in the cloud, or at the edge, Liqid offers infrastructure designed to meet the demands of the AI data center, delivering 50 percent improvements in data performance while reducing power, cooling, and other operational costs by up to 50 percent compared to similar solutions.

At Dell Tech World this year, we’ve been showcasing our ability to scale up to 30 GPUs on a single Dell PowerEdge server. Further, Liqid has highlighted its qualification of Dell PowerEdge modular infrastructure solutions, including the PowerEdge C-Series, with its Liqid SmartStack solution.

Going Mod with Liqid SmartStack for Dell PowerEdge modular servers

This qualification delivers the ability to attach up to 20 GPUs to a single modular server. SmartStack enables dynamic GPU provisioning and reallocation, helping to meet the acceleration demands of modern AI, graphics-intensive, and VDI workloads on Dell PowerEdge modular solutions.

One attendee in the IT services space said upon understanding the solution: “After you explained how I can get better usage from my GPUs with your SmartStack solution, I get so frustrated, coming into the office and seeing very low utilization across all of our GPU servers.”

Liqid customers in attendance saw their frustrations lifted upon deployment of Liqid solutions, with one tech infrastructure provider saying, “you guys are the best on-prem GPU deployment solution,” and another, “we love it! You just set it up and it works.”

As Dell and its partners have been stressing throughout the show, wide-scale AI deployment will require a fundamental rethinking of how we architect our data center infrastructure, and there are many ways to build an AI Factory. We’re excited to be at the leading edge of infrastructure for these remarkable and promising new solutions.

Follow us on LinkedIn to stay up to date on the latest from Dell Technologies World 2024 and to hear from the Liqid team on all things AI.

Written by
George Wagner, Director, Product Marketing, Liqid
Posted on
May 22, 2024
Artificial Intelligence
