AI Workstation of the Deep Learning Elite

This quiet, fast, reliable and versatile multi-GPU deep learning machine beats every other solution on the market

AI Server for Infinite Inference

A cost-effective solution that delivers exceptional performance and scalability for AI inference needs
GRANDO WORKSTATION

DEEP LEARNING PRODUCT LINE

The Comino Deep Learning multi-GPU workstation line is designed and produced with a single focus: enabling the fastest, most efficient and most stable machine learning operation on the market. Our machines are tested with the most popular frameworks, such as PyTorch and TensorFlow. Choose Grando systems for the best results in deep learning.
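As a hypothetical illustration of the multi-GPU PyTorch workloads these workstations are built for (not part of the Grando software stack), the sketch below runs a large matrix multiplication on every visible GPU, which is a quick way to confirm that all four cards are usable from PyTorch:

```python
# Minimal multi-GPU sanity check (illustrative example only, not Comino software):
# runs a large matrix multiplication on every visible GPU and reports the timing.
import time
import torch

def benchmark_gpu(device: torch.device, size: int = 8192, repeats: int = 10) -> float:
    # Allocate two large random matrices directly on the target GPU.
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    torch.cuda.synchronize(device)
    start = time.perf_counter()
    for _ in range(repeats):
        a @ b
    torch.cuda.synchronize(device)
    return (time.perf_counter() - start) / repeats

if __name__ == "__main__":
    assert torch.cuda.is_available(), "No CUDA GPUs visible to PyTorch"
    for i in range(torch.cuda.device_count()):
        device = torch.device(f"cuda:{i}")
        name = torch.cuda.get_device_name(i)
        seconds = benchmark_gpu(device)
        print(f"GPU {i} ({name}): {seconds * 1000:.1f} ms per 8192x8192 matmul")
```

Roughly similar per-GPU timings suggest all cards are healthy and fully usable from the framework.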

GRANDO AI DL BASE

Multi-GPU Workstation
4x Nvidia RTX 4090 GPUs
1x AMD Threadripper Pro 7975WX CPU
Comino Liquid Cooling
Comino Grando RM Platform

BUY NOW
GRANDO AI DL PRO

Multi-GPU Workstation
4x Nvidia L40S GPUs
1x AMD Threadripper Pro 7985WX CPU
Comino Liquid Cooling
Comino Grando RM Platform

BUY NOW
Datasheet
GRANDO AI DL MAX

Multi-GPU Workstation
4x Nvidia A100 / H100 GPUs
1x AMD Threadripper Pro 7995WX CPU
Comino Liquid Cooling
Comino Grando RM Platform

BUY NOW
Datasheet

Talk to an Engineer

Let's talk

Grando AI DL Product Specifications

Please contact our sales team if you need a custom setup
Specs | GRANDO AI DL BASE | GRANDO AI DL PRO | GRANDO AI DL MAX
GPU | 4x Nvidia RTX 4090 | 4x Nvidia L40S | 4x Nvidia A100 / H100
GPU MEMORY | TOTAL: 96 GB | TOTAL: 192 GB | TOTAL: 320 GB
CPU | AMD Threadripper PRO 7975WX (32 cores) | AMD Threadripper PRO 7985WX (64 cores) | AMD Threadripper PRO 7995WX (96 cores)
SYSTEM POWER USAGE | UP TO 2.6 KW | UP TO 2.2 KW | UP TO 2.2 KW
MEMORY | 256 GB DDR5 | 512 GB DDR5 | 1024 GB DDR5
NETWORKING | DUAL-PORT 10Gb, 1Gb IPMI | DUAL-PORT 10Gb, 1Gb IPMI | DUAL-PORT 10Gb, 1Gb IPMI
STORAGE (OS) | DUAL 1.92TB M.2 NVME DRIVE | DUAL 1.92TB M.2 NVME DRIVE | DUAL 1.92TB M.2 NVME DRIVE
STORAGE (DATA/CACHE) | ON REQUEST | DUAL 7.68TB U.2 NVME DRIVE | DUAL 7.68TB U.2 NVME DRIVE
COOLING SYSTEM | CPU & GPU LIQUID COOLING | CPU & GPU LIQUID COOLING | CPU & GPU LIQUID COOLING
SYSTEM ACOUSTICS | MEDIUM | LOW | LOW
OPERATING TEMPERATURE RANGE | UP TO 30 °C | UP TO 30 °C | UP TO 30 °C
OS COMPATIBILITY | UBUNTU / WINDOWS | UBUNTU / WINDOWS | UBUNTU / WINDOWS
SIZE | 439 x 177 x 681 MM | 439 x 177 x 681 MM | 439 x 177 x 681 MM
CLASS | WORKSTATION | WORKSTATION | WORKSTATION
GRANDO SERVER

INFERENCE PRODUCT LINE

The Grando AI Inference Server product line stands out as a cost-effective solution that delivers exceptional performance and scalability for AI inference. For businesses deploying AI models at scale, choosing the right server means balancing performance, scalability, power efficiency, and overall cost without breaking the bank. Whether the workload is image recognition, natural language processing, or any other AI application, Grando servers offer an affordable and efficient solution without compromising on quality or reliability.
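As a hypothetical sketch of the kind of batched inference workload such a server handles (the placeholder model and round-robin dispatch are illustrative assumptions, not Comino software), the example below replicates a PyTorch model onto every GPU in the box and spreads incoming batches across them:

```python
# Illustrative round-robin batched inference across several GPUs.
# The nn.Sequential network is a stand-in for a real, trained model.
import torch
from torch import nn

def make_model() -> nn.Module:
    # Placeholder network; in practice you would load your own trained weights.
    return nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10))

@torch.inference_mode()
def run_inference(batches: list[torch.Tensor]) -> list[torch.Tensor]:
    num_gpus = torch.cuda.device_count()
    assert num_gpus > 0, "No CUDA GPUs visible"
    # One replica of the model per GPU.
    replicas = [make_model().to(f"cuda:{i}").eval() for i in range(num_gpus)]
    results = []
    for idx, batch in enumerate(batches):
        gpu = idx % num_gpus                      # round-robin dispatch
        out = replicas[gpu](batch.to(f"cuda:{gpu}"))
        results.append(out.cpu())
    return results

if __name__ == "__main__":
    batches = [torch.randn(64, 1024) for _ in range(12)]
    outputs = run_inference(batches)
    print(f"Processed {len(outputs)} batches on {torch.cuda.device_count()} GPU(s)")
```

In production this pattern would normally sit behind a serving framework, but the round-robin replication is the basic idea for keeping all six GPUs busy.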

GRANDO AI INFERENCE BASE

Multi-GPU Server
6x Nvidia RTX 4090 GPUs
1x AMD Threadripper Pro 7975WX CPU
Comino Liquid Cooling
Comino Grando RM Platform

BUY NOW
GRANDO AI INFERENCE PRO

Multi-GPU Server
6x Nvidia L40S GPUs
1x AMD Threadripper Pro 7985WX CPU
Comino Liquid Cooling
Comino Grando RM Platform

BUY NOW
GRANDO AI INFERENCE MAX

Multi-GPU Server
6x Nvidia A100 / H100 GPUs
1x AMD Threadripper Pro 7995WX CPU
Comino Liquid Cooling
Comino Grando RM Platform

BUY NOW
expert review

"INFINITE Inference Power for AI"

Unlock the power of performance with Sentdex!

"A lot of inference power comes from this Powerhouse machine from Comino which has not one, not two, not three - it has six 4090s inside!
Harrison Kinsley, the coding maestro aka Sentdex, dives into the ultimate tech thrill with the Comino Grando Server featuring a mind-blowing 6x RTX 4090s!

Talk to an Engineer

Let's talk

Grando AI Inference Product Specifications

Please contact our sales team if you need a custom setup
Specs | GRANDO AI INFERENCE BASE | GRANDO AI INFERENCE PRO | GRANDO AI INFERENCE MAX
GPU | 6x Nvidia RTX 4090 | 6x Nvidia L40S | 6x Nvidia A100 / H100
GPU MEMORY | 6x 24 GB | 6x 48 GB | 6x 80 GB
CPU | AMD Threadripper PRO 7975WX (32 cores) | AMD Threadripper PRO 7985WX (64 cores) | AMD Threadripper PRO 7995WX (96 cores)
SYSTEM POWER USAGE | UP TO 3.6 KW | UP TO 3.0 KW | UP TO 3.0 KW
MEMORY | 256 GB DDR5 | 512 GB DDR5 | 512 GB DDR5
NETWORKING | DUAL-PORT 10Gb, 1Gb IPMI | DUAL-PORT 10Gb, 1Gb IPMI | DUAL-PORT 10Gb, 1Gb IPMI
STORAGE (OS) | DUAL 1.92TB M.2 NVME DRIVE | DUAL 1.92TB M.2 NVME DRIVE | DUAL 1.92TB M.2 NVME DRIVE
STORAGE (DATA/CACHE) | ON REQUEST | ON REQUEST | ON REQUEST
COOLING SYSTEM | CPU & GPU LIQUID COOLING | CPU & GPU LIQUID COOLING | CPU & GPU LIQUID COOLING
SYSTEM ACOUSTICS | HIGH | HIGH | HIGH
OPERATING TEMPERATURE RANGE | UP TO 38 °C | UP TO 38 °C | UP TO 38 °C
OS COMPATIBILITY | UBUNTU / WINDOWS | UBUNTU / WINDOWS | UBUNTU / WINDOWS
SIZE | 439 x 177 x 681 MM | 439 x 177 x 681 MM | 439 x 177 x 681 MM
CLASS | SERVER | SERVER | SERVER
testimonials

Praised by top tech leaders worldwide

jesse woolston

"The main factor as to why I love the Grando RM is its ability to be diverse with training and modelling, where I can give it any and all assignments and I am able to just utilise the tools and focus on the art".

linus sebastian

"God of computers".
"On this machine, compute take such little time, that I've been having trouble getting all GPUs to get fully loaded".
"It appears to be rock freaking solid stable".

harrison kinsley

"This is the coolest deep learning machine that I have ever had the opportunity to use. It’s the most power in the smallest form factor also, that I’ve ever used, and finally, it also runs the coolest temperatures, that I’ve ever used"

trusted by
are you ready?

join the elite
of Grando Professionals

order your Grando now