The convergence of HPC and Big Data has long appeared an elusive, if not impossible, promise, with both technologies remaining distant, frozen and immobile in their respective markets. The Evolve project, in a very pragmatic way, is actually pushing things forward… And yes, it moves!

By injecting HPC technologies such as hardware counter monitoring and NUMA-aware allocation, we improve the execution of containerized workloads under node consolidation, with an average speed-up of 20%. Evolve provides this acceleration by paying attention to the ‘bare-metal’ aspects, without altering the simplicity of deploying containers. In the same spirit of extracting more performance from the platform, Evolve is investigating and implementing a scheme for GPU sharing, in which a single GPU can be shared between jobs. This fine-grained resource allocation is an important step forward for data-center efficiency, in terms of CAPEX as well as OPEX.
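Evolve's exact mechanism is not detailed here, but the underlying idea of NUMA-aware allocation is to constrain a workload to the cores (and memory) local to a single NUMA node. A minimal, Linux-only sketch in Python is shown below, pinning the current process to CPU 0 purely for illustration; a real container deployment would instead use the runtime's own knobs, e.g. Docker's `--cpuset-cpus` and `--cpuset-mems` flags.

```python
import os

# Minimal, Linux-only sketch: restrict this process to a chosen CPU set.
# A NUMA-aware scheduler would select the cores local to one memory node;
# here we pin to CPU 0 only, as an illustration.
target_cpus = {0}
os.sched_setaffinity(0, target_cpus)  # 0 = the current process

# Verify that the kernel accepted the affinity mask.
print(sorted(os.sched_getaffinity(0)))
```

Keeping a container's threads and memory on one node avoids remote-memory accesses and cross-node traffic, which is where consolidation speed-ups of this kind typically come from.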

The use cases are taking advantage of the platform, not only from the hardware standpoint but also, of course, through the burgeoning converged software stack. Our partners in the automotive market are able to train neural networks over a database of more than one hundred million records. More importantly, use-case providers can now deploy their containerized big-data pipelines on the HPC testbed, with the ability to reach extreme levels of performance.

Data management is making progress as well, with Spark running over flash-native storage now in production.
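The integration details are beyond this update, but as an illustration of the direction, one common way to let Spark benefit from local flash is to point its shuffle and spill directories at NVMe devices. The fragment below is a sketch only; the mount paths are assumptions, not Evolve's actual configuration.

```
# spark-defaults.conf — illustrative settings; the NVMe mount paths are assumed
spark.local.dir             /mnt/nvme0/spark,/mnt/nvme1/spark
spark.shuffle.file.buffer   1m
spark.io.compression.codec  lz4
```

`spark.local.dir` is where Spark writes shuffle files and spilled data, so placing it on fast local flash directly accelerates the I/O-heaviest stages of a pipeline.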

At last, and we are really proud of it, the complete Data Life Cycle Framework from IBM, supported and partially developed within the Evolve project, is now open source.

Bringing value back to the community is a key focus of the Evolve project… Let’s keep moving!