DEEP LEARNING PRODUCT LINE
Comino Grando AI Deep Learning workstations are designed for on-premise training and fine-tuning of complex deep learning neural networks on large datasets, with a focus on Generative AI but not limited to it. They provide top-tier, unique multi-GPU configurations to accelerate training and fine-tuning of compute-hungry Diffusion, Multimodal, Computer Vision, Large Language (LLM) and other models.
Multi-GPU Workstation
4x Nvidia RTX 4090 GPUs
1x AMD Threadripper Pro 7975WX CPU
Comino Liquid Cooling
The Grando AI DL MAX workstation hosts FOUR liquid-cooled NVIDIA H100 GPUs with 376GB of HBM memory and a 96-core Threadripper PRO CPU running at up to 5.1GHz, providing up to 50% more performance than similar air-cooled solutions. In addition to unrivalled performance, Comino solutions come with up to a 3-year maintenance-free period, maintenance as easy as on air-cooled systems, and the remote Comino Monitoring System (CMS), ready to be integrated into your software stack via API.
Grando DL workstations are pre-tested to run the cuDNN, PyTorch, TensorFlow, Keras and JAX frameworks & libraries, and are equipped with FOUR NVIDIA GPUs (A100 / H100 / L40S / RTX 4090) paired with the most modern high-frequency multi-core CPUs, providing best-in-class Machine and Deep Learning performance combined with silent operation, even for the most demanding and versatile workflows built around Stable Diffusion, Midjourney, Hugging Face, Character.AI, QuillBot, DALL·E 2, etc.
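Before launching a multi-GPU training job on a workstation like this, it is worth confirming that the framework actually sees every card. A minimal sketch using PyTorch (assuming it is installed; the function degrades gracefully to 0 when PyTorch or CUDA is absent):

```python
def visible_gpu_count() -> int:
    """Report how many CUDA devices PyTorch can see.

    Returns 0 when PyTorch is not installed or no CUDA runtime
    is available, so the check is safe to run on any machine.
    """
    try:
        import torch
    except ImportError:
        return 0
    return torch.cuda.device_count() if torch.cuda.is_available() else 0


if __name__ == "__main__":
    n = visible_gpu_count()
    # On a four-GPU Grando DL configuration this should report 4.
    print(f"CUDA devices visible: {n}")
```

If the reported count is lower than the installed GPU count, the usual suspects are driver/CUDA version mismatches or a `CUDA_VISIBLE_DEVICES` environment variable restricting the process.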
Talk To Engineer
Grando AI DL Product Specifications
INFERENCE PRODUCT LINE
Comino Grando AI INFERENCE servers are designed for high-performance, low-latency inference and fine-tuning of pre-trained machine learning and deep learning Generative AI models such as Stable Diffusion, Midjourney, Hugging Face, Character.AI, QuillBot, DALL·E 2, etc. Unique cost-optimised, adjustable multi-GPU configurations are perfect for scaling on-premise or in a data center.
Multi-GPU Server
6x Nvidia RTX 4090 GPUs
1x AMD Threadripper Pro 7975WX CPU
Comino Liquid Cooling
Multi-GPU Server
6x Nvidia L40S GPUs
1x AMD Threadripper Pro 7985WX CPU
Comino Liquid Cooling
Multi-GPU Server
6x Nvidia A100 / H100 GPUs
1x AMD Threadripper Pro 7995WX CPU
Comino Liquid Cooling
The Grando AI INFERENCE BASE server is a unique, highly cost-effective solution hosting SIX liquid-cooled NVIDIA RTX 4090 GPUs with 24GB of GDDR6X memory each, widely considered a sweet spot for the majority of real-life inference tasks. Efficient cooling eliminates thermal throttling, providing up to 50% more performance than similar air-cooled solutions. In addition to unrivalled performance, Comino solutions come with up to a 3-year maintenance-free period, maintenance as easy as on air-cooled systems, and the remote Comino Monitoring System (CMS), ready to be integrated into your software stack via API.
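The CMS API schema is not documented on this page, so the field names below are illustrative assumptions only. As a sketch of the kind of integration described, this standard-library-only snippet parses a hypothetical JSON telemetry payload and flags any GPU running above a chosen temperature threshold:

```python
import json

# Hypothetical telemetry payload -- the real Comino Monitoring System (CMS)
# API schema is not published here, so node/field names are assumptions.
SAMPLE_TELEMETRY = """
{
  "node": "grando-inference-01",
  "gpus": [
    {"id": 0, "model": "RTX 4090", "temp_c": 54, "util_pct": 97},
    {"id": 1, "model": "RTX 4090", "temp_c": 61, "util_pct": 99},
    {"id": 2, "model": "RTX 4090", "temp_c": 83, "util_pct": 98}
  ]
}
"""


def gpus_over_threshold(payload: str, limit_c: int = 80) -> list:
    """Return the IDs of GPUs reporting a temperature above limit_c."""
    data = json.loads(payload)
    return [gpu["id"] for gpu in data["gpus"] if gpu["temp_c"] > limit_c]


if __name__ == "__main__":
    hot = gpus_over_threshold(SAMPLE_TELEMETRY)
    print(hot)  # → [2]: only GPU 2 exceeds the 80 °C alert threshold
```

In a real deployment, the payload would come from an HTTP request to the CMS endpoint rather than an inline string; the parsing and alerting logic would remain the same.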
Grando INFERENCE servers are pre-tested to run the cuDNN, PyTorch, TensorFlow, Keras and JAX frameworks & libraries, and are equipped with SIX NVIDIA GPUs (A100 / H100 / L40S / RTX 4090) paired with the most modern high-frequency multi-core CPUs, providing best-in-class inference performance and throughput for the most demanding and versatile workflows.
"INFINITE Inference Power for AI"
Unlock the power of performance with Sentdex!
"A lot of inference power comes from this powerhouse machine from Comino, which has not one, not two, not three - it has six 4090s inside!"
Harrison Kinsley, the coding maestro aka Sentdex, dives into the ultimate tech thrill with the Comino Grando Server featuring a mind-blowing 6x RTX 4090s!
Talk To Engineer
Let's talkGrando AI Inference Product Specifications
Praised by the Top Tech Leaders worldwide
"The main factor as to why I love the Grando RM is its ability to be diverse with training and modelling, where I can give it any and all assignments and I am able to just utilise the tools and focus on the art."
"God of computers".
"On this machine, compute take such little time, that I've been having trouble getting all GPUs to get fully loaded".
"It appears to be rock freaking solid stable".
"This is the coolest deep learning machine that I have ever had the opportunity to use. It's the most power in the smallest form factor also, that I've ever used, and finally, it also runs the coolest temperatures, that I've ever used."