Design of high-performance compute environments for machine learning

Until now, high-performance computing (HPC) has been the exclusive preserve of governmental institutions, universities and a handful of industries such as aerospace and oil exploration.

Traditionally used for physics simulations and the analysis of extreme data volumes, HPC saw little commercial use outside those specific domains of science and industry.

This is about to change, big time. Industry followers are talking about the democratization of high-performance computing. With the advent of specialized co-processors such as GPUs and FPGAs, HPC is now within reach of even the smallest companies. It is now possible to run traditional HPC-style simulations on a workstation or a small grid of servers. Workloads that a decade ago would have required a purpose-built data center filled with expensive computer systems can now run in a few racks of servers, aided by the exponential growth in GPU performance.

Today we see HPC being applied to an ever-growing range of new domains: financial trading, DNA analysis in bioinformatics, sustainable-energy simulations such as optimizing wind turbine placement, and computational chemistry such as protein folding for pharmaceuticals.

But the biggest buzz has to be around the AI discipline of machine learning.

A single approach, deep learning built on deep neural networks, has profoundly changed the AI landscape by enabling computers to surpass human capability in a number of fields, and that number grows by the day. Classifying objects in a video stream in real time, translating written and spoken language accurately and driving cars would all have seemed like far-out science fiction just a few years ago, but now it is the new reality. Numerous startups have sprung up on the simple idea of applying machine learning to existing industries, and we are probably not far from seeing the first of those companies seriously disrupt incumbents across these industry verticals.

Although the underlying neural-network algorithms were developed back in the 1980s, there simply wasn't enough compute power available to train them at scale until recently. Five years ago the necessary compute power and specialized knowledge were available only to big IT companies such as Google and Facebook, who have near-infinite compute capacity in their giant data centers and employ thousands of PhDs.

With the advent of standardized AI frameworks such as Microsoft Cognitive Toolkit, Caffe2 and TensorFlow, as well as affordable GPU-based HPC environments, deep learning is now something even a startup company can afford.
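To give a feel for how low the barrier has become, here is a minimal sketch, assuming a recent TensorFlow installation with GPU support; the tiny model and synthetic data are purely illustrative, not a real workload:

```python
# Minimal sketch: verify GPU visibility and train a tiny model.
# Assumes TensorFlow 2.x with GPU support; model and data are illustrative only.
import numpy as np
import tensorflow as tf

# List the GPU accelerators TensorFlow can see on this machine.
gpus = tf.config.list_physical_devices("GPU")
print(f"Visible GPUs: {gpus}")

# Synthetic stand-in data: 1,000 samples, 32 features, 10 classes.
x = np.random.rand(1000, 32).astype("float32")
y = np.random.randint(0, 10, size=(1000,))

# A small fully connected classifier; Keras places it on a GPU when one is available.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=64)
```

A few years ago, even this trivial exercise would have required access to a specialist cluster; today it runs on a single workstation with a consumer GPU.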

arqitekta has the knowledge required to build modern HPC environments from the ground up, and to leverage public cloud providers to train the models behind your deep learning initiatives.

Designing HPC environments for deep learning is a non-trivial task requiring expert knowledge of compute nodes, GPU accelerators, high-speed interconnects, deployment tools and computing frameworks. We have that knowledge and are happy to share it.
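As a small illustration of how a computing framework ties several GPU accelerators together within a single node, here is a hedged sketch using TensorFlow's MirroredStrategy; the model, data and batch size are assumptions chosen for demonstration only:

```python
# Minimal sketch: scale training across all GPUs in a single node.
# Assumes TensorFlow 2.x; MirroredStrategy mirrors the model onto each visible GPU
# and synchronises gradients across the node's internal interconnect.
import numpy as np
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
print(f"Replicas in sync: {strategy.num_replicas_in_sync}")

# Model variables must be created inside the strategy scope so each GPU gets a replica.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Illustrative synthetic data; each global batch is split evenly across the replicas.
x = np.random.rand(4096, 32).astype("float32")
y = np.random.randint(0, 10, size=(4096,))
model.fit(x, y, epochs=3, batch_size=256)
```

Scaling beyond a single node is where the high-speed interconnects and deployment tooling mentioned above come into play, and where careful HPC design makes the difference.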

October 24th, 2017