Univa Announces Universal Resource Broker for Hyperscale Workload Optimization
Sometimes it is hard to remember that all of those racks with blinking lights inside massive data centers are working for a reason. They are not just working in the conventional sense of being powered on; they are doing the hard and complex work of efficiently and effectively serving up applications and services, and storing and analyzing the data, that are the lifeblood of enterprises in an increasingly software-controlled, application- and data center-centric world.
The workloads running on those racks—interconnected via high-speed intra- and inter-data center networks—need to be optimized. This is particularly true in hyperscale data centers, not only because of their current scope, but also to accommodate the exponentially growing data storms heading their way thanks to containerization, big data and the Internet of Things (IoT), to name just a few drivers.
Univa, a Hoffman Estates, Illinois-based specialist in data center compute workload optimization, understands the growing requirements facing hyperscale data center operators, as well as their need for simplicity in managing all of their compute resources in an increasingly complex environment. It has aimed its new Universal Resource Broker (URB), an enterprise-class workload optimization solution for high-performance, containerized and shared data centers, squarely at meeting those needs.
As Univa says, the Universal Resource Broker—powered by the company’s workload-optimizing Grid Engine—enables organizations to manage and optimize distributed applications, data center services, legacy applications and big data frameworks in a single, dynamically shared compute pool. In addition, hyperscale data center operators will not face a forklift upgrade to get where they want to go. Univa says the solution “works seamlessly with legacy systems for IT ease and control of creating a single compute pool out of distributed data center resources.”
"Univa GridEngine has long been recognized as the solution of choice for the large scale and complex clusters typically found in HPC and Big Data communities," said Fritz Ferstl, CTO at Univa Corporation. "Grid Engine is the most widely deployed workload optimization solution and is used today in more than 10,000 data centers across thousands of applications and use cases. We're now using our expertise and superior technology as the engine for even higher level of computing needs that exist today - like Big Data distributed applications and frameworks - while melding new and legacy systems into one shared resource."
"ActiveState was an early adopter of the most recent generation of containers", said Bernard Golden, Vice President Strategy, ActiveState Software and author of Amazon Web Services for Dummies, "and it's clear from our experience that they will drive much larger computing footprints as their ease and efficiency becomes more accepted. This means that managing the much larger underlying infrastructure environments will be a critical challenge. Univa's Universal Resource Broker will enable organizations to meet that challenge and prepare for a future of container-based application portfolios."
How the Universal Resource Broker works on data center workloads
At a high level, the attraction here is the URB’s ability to create a single, virtual, high-throughput and high-performance compute pool out of distributed data center resources by integrating batch and low-latency workloads, data center services and Big Data frameworks. In addition, data center operators are likely to appreciate its ability to easily integrate distributed application environments that consume dynamic partitions of the cluster.
Aptly named because of its brokering capabilities, the URB is also versatile, supporting heterogeneous machine types and architectures, allowing organizations to run and manage bare-metal servers or leverage virtual machines, hybrid cloud and containers such as Docker.
Univa cites the Universal Resource Broker benefits as including:
- Advanced policies and lifecycle management
- Centralized arbitration and resource allocation
- Centralized management of distributed applications, data center services and Big Data frameworks
- Unified accounting and reporting
- The ability to run and manage existing Apache Mesos frameworks without modification, with high availability and automatic service failover
- Deployment on Unix and Linux servers, on x86_64 or RISC architectures
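To make the batch side of such a shared pool concrete, a Grid Engine-style job script gives a flavor of what gets brokered. The directives below use standard Grid Engine `#$` syntax, but the job name, resource values and the `run_analysis.sh` script are illustrative assumptions for this sketch, not documented URB configuration:

```shell
#!/bin/sh
# Illustrative Grid Engine-style batch job, submitted to the shared pool
# with a command such as `qsub job.sh`. Values below are examples only.
#$ -N analytics-job            # job name (hypothetical)
#$ -l h_rt=01:00:00            # request a one-hour wall-clock limit
#$ -pe smp 4                   # request 4 slots in a shared-memory environment
#$ -o analytics.out            # file for standard output
#$ -e analytics.err            # file for standard error
./run_analysis.sh              # hypothetical workload script
```

The scheduler then arbitrates this request against the policies and resource allocations described above, placing the job wherever capacity exists in the shared pool.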
As has been observed across almost every recent product or service offering in the data center industry, the three biggest needs and trends are “visibility,” “scalability/agility” and “control.” The last refers more specifically to automated and increasingly virtualized capabilities being placed under software control, which in turn relates back to those capabilities being visible and adaptable to dynamic needs. All of this is work: work that needs to be optimized and, because of the competition for resources, brokered just like other transactional situations.
In fact, resource brokerage and orchestration are the order of the day in hyperscale data centers with optimization always the ultimate objective.
Edited by Dominick Sorrentino