What we do

Published 2017-11-01T18:37:06+00:00


This illustration is a simplified view of our transformation services. They are all targeted either at reducing the total cost of ownership of keeping the lights on in your data center infrastructure, or at introducing new technologies to your business.

The following diagram explains how our transformation services can free up resources to allow you to spend more on innovation by intelligently applying architectural transformations to your legacy infrastructure.

All transformations begin with a thorough discovery phase. Experience shows that CMDBs and manually maintained spreadsheets or databases do not describe a company’s infrastructure accurately enough, or in a timely enough fashion.

This is why arqitekta has developed a machine-assisted, rapid wall-to-wall discovery method that finds all endpoints, inventories servers, network and storage, and maps logical relationships between systems. This is all done without installing agents on servers and is highly non-intrusive.

From the start of discovery, it typically takes 2-3 weeks to reach a 98% accurate picture of the environment. By looking at communication between production systems and identifying the protocols used, we can derive the application type as well as the integrations between disparate systems.

Using the relationship information, we can group servers into systems and systems into landscapes, and thus begin to describe the entire application portfolio of the customer’s data center.
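As a rough illustration of the grouping step (with invented server names and flows, not our actual tooling), servers that communicate can be treated as nodes of a graph, and systems then fall out as the connected components:

```python
from collections import defaultdict

# Hypothetical observed communication pairs (server_a, server_b),
# e.g. derived from network flows captured during discovery.
flows = [
    ("web01", "app01"), ("web02", "app01"),
    ("app01", "db01"),
    ("etl01", "dwh01"),  # a separate, unrelated system
]

def group_into_systems(flows):
    """Group servers into systems by finding connected components
    in the undirected communication graph."""
    graph = defaultdict(set)
    for a, b in flows:
        graph[a].add(b)
        graph[b].add(a)
    seen, systems = set(), []
    for node in graph:
        if node in seen:
            continue
        # Depth-first walk collects one connected component.
        component, stack = set(), [node]
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            component.add(n)
            stack.extend(graph[n] - seen)
        systems.append(component)
    return systems

systems = group_into_systems(flows)
# Two systems emerge: {web01, web02, app01, db01} and {etl01, dwh01}.
```

In practice the grouping also weighs in protocols, ports and traffic volumes, but the graph view above captures the core idea.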

This provides us the basis for further analysis in planning the transformations.

Many customers today have an ambition to utilize private and public cloud, but need help defining the strategy for when to use cloud and which workloads to move there, based on technology constraints, cost savings and compliance.

arqitekta can assist in developing cloud guidelines, which are tailored specifically to the type of industry you are in, what countries your company operates in and who the consumers of your digital services are.

Often, industry or national regulations decide where you can place your data and thus your systems. This in turn dictates the use of private and public cloud, as well as which providers you can use.

When this is done it is time to develop the transformation storyboard. This is a strategy and conceptual design for how the different systems and infrastructure platforms will be transformed.

This storyboard will be the guide for the future transformation phases.

When the discovery has been performed, we use our domain experience to pinpoint areas of interest in the environment. These could be legacy systems, which traditionally have a high maintenance cost, or collections of non-virtualized servers that need to be understood.

Combining this with our newly gained architectural knowledge of the systems, we then find the corresponding financial book value or maintenance cost of those systems.

This enables us to build a TCO model of the systems, so we know our baseline before transformation.
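At its simplest, such a baseline is arithmetic over yearly cost categories. A minimal sketch, with made-up placeholder figures rather than real customer data:

```python
# Illustrative yearly cost categories for one legacy system (EUR).
# All figures are invented placeholders for the sake of the example.
costs = {
    "hardware_depreciation": 40_000,
    "software_maintenance": 90_000,
    "power_and_cooling": 12_000,
    "operations_staff": 55_000,
}

def yearly_tco(costs):
    """Baseline total cost of ownership: the sum of all yearly categories."""
    return sum(costs.values())

baseline = yearly_tco(costs)
# baseline is 197_000 EUR/year before transformation
```

Any proposed transformation can then be judged by how far it moves this number.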

Taking an architectural approach, we suggest transformations that will lower the burn rate of the infrastructure needed to host an application.

Often it is possible to choose a more modern platform to host these workloads.

The preferred platform today is x86 running either Linux or Windows. Transforming the legacy systems to this platform enables virtualization and even private and public cloud hosting.

Often, applications can be re-platformed because they rely on middleware and databases that function identically on x86 platforms. The enabling factor is to test the application on newer versions of the middleware, or to get the ISV to support these.

For remaining workloads, code modifications and testing are required.

For workloads coded directly against legacy operating system APIs, there are code refactoring factories that specialize in transforming source code from legacy platforms to x86.

Most private cloud solutions being sold today, whether hyper-converged, a vendor engineered stack or a homegrown virtualization environment, are nothing more than a bunch of hypervisors with the addition of an orchestration layer that automates a few high-level provisioning processes. For hyper-converged stacks, a software layer has been introduced to enable scale-out storage, and a rare few might use software-defined networking too.

But not all workloads are virtualizable…

First of all, not all workloads are x86-based, and even when they are, there may be reasons why they cannot be virtualized. Some of the primary ones are licensing restrictions, hardware affinities such as dongles, real-time applications that need fine-grained timer access, workloads needing many GBs of RAM that would be too costly on a hypervisor, and scale-out workloads where it simply does not make sense to virtualize. On top of this, branch office servers and some DMZ servers using micro-segmentation will not have the critical mass required to make a hypervisor viable for such deployments.

Across the industry, actual virtualization rates are in the 50-60% range and increase only slightly, by about 3-4% yearly. This means 40-50% of server instances are not virtual.

Going with even conservative VM-to-hypervisor ratios, this also means that 70-80% of all physical servers are not running hypervisors, but rather are bare-metal OS installs. Asking your local server vendor will most likely confirm that insight.
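The arithmetic behind that claim is easy to check. A small sketch, where the 55% virtualization rate and the conservative 5:1 VM-to-hypervisor ratio are assumptions chosen for illustration:

```python
def bare_metal_share(total_instances, virt_rate, vms_per_host):
    """Share of *physical* servers that are bare-metal OS installs,
    given a virtualization rate and a VM consolidation ratio."""
    vms = total_instances * virt_rate
    bare_metal = total_instances * (1 - virt_rate)
    hypervisor_hosts = vms / vms_per_host
    physical_total = bare_metal + hypervisor_hosts
    return bare_metal / physical_total

# 55% of 1000 server instances virtualized, a conservative 5 VMs per host:
share = bare_metal_share(1000, 0.55, 5)
# share is roughly 0.80, i.e. about 80% of physical boxes run no hypervisor.
```

Higher consolidation ratios only push the bare-metal share further up, since fewer physical hosts are needed to carry the VMs.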

To be fair, a lot of new applications are now born cloud-native, especially when being developed as part of a DevOps life-cycle.

The issue is that a lot of legacy workloads simply cannot easily migrate to cloud.

This is where the true software defined data center comes into play. By supporting bare-metal workloads, as well as being able to fully orchestrate the compute, networking and storage layers for both bare-metal and virtualized environments, it is possible to reach automation levels close to 80% across all data center processes.

arqitekta architects have been involved in designing virtualization and cloud environments for more than 10 years and have designed multiple generations of SDDC. We have direct contacts with vendors’ product teams across the compute, networking and storage layers, and have worked with these vendors on features and the prioritization of their roadmaps.

This knowledge is now available for our customers to leverage.

The new mantra of digital enterprises is to instrument your business. This means deploying sensors throughout the supply chain in the shape of IoT devices, streaming the collected data in near real time, and finally having enough compute power to perform analysis on this data.

arqitekta can help design the infrastructure for all three areas.

We can assist in down-selecting the right type of IoT devices, with the relevant sensors for your business, rugged enough for your environment and within your budget.

Furthermore, we can design the delivery network and mechanism for transporting the data to your data center or to a public cloud provider.

Finally, we have extensive design experience with regards to scale-up and scale-out database environments to give you the compute power you require for your analysis work. Whether you have OLAP or Hadoop type workloads, we can design the infrastructure that meets your requirements.

The latest addition to big data is to layer machine learning on top of your fast or slow data pipeline.

Until now, high-performance computing has been the exclusive domain of governmental institutions, universities and select industries such as aerospace and oil exploration.

Traditionally used for physics simulations and data analysis of extreme data volumes, HPC typically had little commercial use outside its specific domains of science or industry.

This is about to change BIG TIME. Industry followers are talking about the democratization of high performance computing. With the advent of specialized co-processors such as GPUs and FPGAs, HPC is now within the reach of even the smallest companies. It is now possible to run traditional HPC-type simulations on a workstation or a small grid of servers. Workloads that a decade ago would require a purpose-built data center filled with expensive computer systems can now run within a few racks of servers, aided by the exponential growth of, for instance, GPU performance.

Today we see HPC being applied to new domains in an ever-increasing manner. Examples include workloads within financial trading, DNA analysis in bioinformatics, sustainable energy simulations such as optimizing wind turbine deployment, and computational chemistry such as protein folding for pharmaceuticals.

But the biggest buzz has to be in the AI discipline of machine learning.

A single family of algorithms, known as Deep Learning (deep neural networks), has profoundly changed the AI landscape by enabling computers to surpass human capability in a number of fields, and that number grows by the day. Classifying objects in a video stream in real time, translating written and spoken language accurately and driving cars would all have seemed like far-out science fiction just a few years ago, but now it is the new reality. Numerous startups have sprung up on the simple idea of applying machine learning to existing industries, and we are probably not far from seeing the first of those companies seriously disrupt incumbents across these industry verticals.

Although the underlying neural network algorithms were developed in the 1980s, there simply wasn’t enough compute power available to train them until recently. Five years ago, the compute power and specialized knowledge were available only to big IT companies such as Google and Facebook, who have almost infinite compute power in their giant data centers and who employ thousands of PhDs.

With the advent of standardized AI frameworks such as Microsoft Cognitive Toolkit, Caffe2 and TensorFlow as well as affordable HPC environments using GPUs, Deep Learning is now something even a startup company can afford.
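What all of those frameworks do at scale can be shown in a deliberately tiny, plain-Python sketch: adjusting weights by gradient descent. The toy data, single weight and learning rate below are invented for the example and stand in for the millions of parameters a real network trains.

```python
import random

# Toy training set: learn y = 2*x (made-up data for illustration).
data = [(x, 2 * x) for x in [0.0, 1.0, 2.0, 3.0]]

random.seed(0)
w = random.random()      # single weight, randomly initialized
lr = 0.05                # learning rate, chosen arbitrarily

# Gradient descent on squared error for a one-weight "network":
# repeatedly nudge w against the gradient of (w*x - y)^2.
for _ in range(200):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x   # d/dw of (w*x - y)^2
        w -= lr * grad

# After training, w has converged very close to 2.0.
```

A framework such as TensorFlow or Caffe2 automates exactly this loop, differentiating through millions of weights and spreading the work across GPUs, which is where the affordable HPC environments come in.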

arqitekta has the knowledge required to build modern HPC environments from the ground up, and to leverage public cloud providers to train your Deep Learning initiatives.

Designing HPC environments for Deep Learning is a non-trivial task requiring expert knowledge of compute nodes, GPU accelerators, high-speed interconnects, deployment tools and computing frameworks. We have that knowledge and are willing to share.

Depending on the industry, storage amounts to 15-30% of the total infrastructure budget. Furthermore, storage consumption grows 20-40% yearly, while the cost of capacity decreases by only 12-18% a year. This means that careful planning is needed when laying out a strategy for a corporation’s storage platform, to ensure that the cost is contained and well understood.
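The net effect of those two opposing trends is easy to quantify. A sketch using mid-range assumptions from the figures above (30% consumption growth, 15% yearly price decline):

```python
def storage_budget(years, growth=0.30, price_decline=0.15, base=1.0):
    """Relative storage spend after `years`, when consumption grows by
    `growth` per year while cost per capacity falls by `price_decline`."""
    return base * ((1 + growth) * (1 - price_decline)) ** years

# Net yearly change: 1.30 * 0.85 = 1.105, i.e. spend grows ~10.5% per year.
spend_5y = storage_budget(5)
# After five years the storage bill is roughly 65% higher,
# despite steadily falling unit prices.
```

Falling capacity prices, in other words, do not by themselves keep the storage budget flat; only deliberate planning does.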

In most cases this is not done, and most corporations battle with increasing storage costs and too low a utilization of their storage assets.

CIOs need to understand the difference in TCO between buying an asset and leasing it, as well as the new options of public cloud storage and even on-premise utility storage.

This is where arqitekta has unique knowhow as we have spent half a career designing storage environments and service offerings, and we understand the full storage economy life-cycle.

This includes areas like end-of-lease negotiations, public cloud exit cost scenarios and migration costs.

We can assess the complete environment, its utilization rate and cost, and suggest changes and refresh scenarios according to the overall IT strategy and budget constraints.

Most IT shops have an Oracle footprint. Whether it is only Oracle databases or the whole enchilada of Oracle middleware, analytics and Oracle engineered stacks such as SuperCluster, Exadata or Exalogic, no one seems to be able to escape Oracle as a vendor.

With Oracle’s ever-changing licensing rules and policies, it is increasingly difficult to estimate licensing cost over time. The latest of these changes was on January 23, 2017, when public cloud licensing costs doubled overnight, unless of course you were an Oracle Cloud customer.

This rather aggressive enticement model, designed to push customers further into the Oracle ecosystem, does not sit well with most customers, many of whom are already spending a disproportionately large share of their licensing budget on Oracle licenses.

Thus, minimizing the dependency on Oracle, or at least minimizing the footprint consumed, is a priority for many IT shops.

arqitekta has met this demand with a novel architectural approach. By using the latest technologies, such as NVDIMMs, NVMe over Fabrics and containers, combined with re-platforming middleware and databases to x86, we can use ultrafast storage to reduce the number of CPU cores required to run workloads.

Typically we see a 15-25% reduction in core count, which can amount to significant savings on the IT budget.
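Because per-core licensing dominates the cost, the savings scale linearly with the cores removed. A hedged example, where the 128-core estate and the 10,000 EUR per-core yearly cost are invented placeholders, not Oracle list prices:

```python
def yearly_license_savings(cores_before, reduction, cost_per_core):
    """Yearly savings from cutting the licensed core count by `reduction`
    (a fraction), at a given per-core yearly license and support cost."""
    cores_saved = cores_before * reduction
    return cores_saved * cost_per_core

# 128 licensed cores, a 20% reduction (mid-range of the 15-25% we see),
# 10_000 EUR/core/year as an illustrative placeholder cost:
savings = yearly_license_savings(128, 0.20, 10_000)
# savings is approximately 256_000 EUR per year
```

Plugging in your actual core counts and contracted per-core rates gives a first-order estimate of what the re-platforming is worth.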