Pure Storage expert makes predictions for 2020

Assaad El Saadi, Regional Director – Middle East, Pure Storage, says 2020 looks to be the year that most enterprises evolve their private and hybrid cloud platforms beyond VMs, deploying an enterprise-wide container strategy.

Customers will demand a subscription to innovation with As-a-Service business models

As-a-Service models have existed since the beginning of public cloud. For most consumers of storage, hybrid cloud is the reality and the future, and they want the best of both worlds: the simplicity and automation of the cloud applied to their on-premise infrastructure, so they can manage it the way they manage the cloud, and the same enterprise capabilities and control in the cloud that they have on-premise – both in a flexible, subscription-based As-a-Service model.

In 2020, demand for As-a-Service storage will increase, and organisations are speaking with their wallets through greater investment in OPEX models. Successful models, however, need to balance both the operations and the purchasing aspects. From an operations perspective, key attributes include standardisation (versus snowflakes), on-demand access, API-driven management and limitless scale.

On the consumption side, key traits include a pay-for-what-you-use model, bursting capabilities (flex up or down as needed) and a non-disruptive, evergreen experience in which services can be grown and evolved over time without disruption – all of it delivered as a 100% pay-per-month OPEX service.
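To make the "API-driven management" attribute concrete, here is a minimal sketch of provisioning a volume on demand through a REST call rather than a manual ticketing workflow. The endpoint, token and payload below are hypothetical, not any particular vendor's API.

```python
# A hedged illustration of API-driven storage management: provisioning
# a volume on demand through a REST call. The endpoint, token and
# payload fields are hypothetical examples, not a real vendor API.
import requests

resp = requests.post(
    "https://storage.example.internal/api/v1/volumes",
    headers={"Authorization": "Bearer <token>"},
    json={"name": "app-vol-01", "size_gb": 500, "tier": "performance"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # e.g. the new volume's ID and connection details
```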

The re-emergence of Object Storage 

Object Storage has shaken off its roots as cheap-and-deep cold storage and has started to emerge as the new form of primary storage. Originally conceived to support the management of extremely large datasets (beyond what traditional file systems could handle), Object Storage has become the storage standard for cloud-native applications – for its ability to support highly parallel and distributed access to large data sets.

As applications are developed or re-platformed for cloud-friendly architectures, Object Storage will become the natural choice for decoupling and disaggregating applications and their compute resources from a pool of shared storage. This pattern has taken hold not only in custom software development but also with large software vendors such as Splunk and Vertica.
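As a concrete illustration of that decoupling, the sketch below shows stateless application code reading and writing shared state through the S3 object API. It assumes Python with boto3; the endpoint URL and bucket name are hypothetical.

```python
# A minimal sketch of an application using the S3 object API to share
# state through a common storage pool. The endpoint and bucket name
# ("analytics-data") are hypothetical; any S3-compatible store works.
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.example.internal")

# Any number of stateless compute workers can write and read objects in
# parallel; the storage pool scales independently of the compute tier.
s3.put_object(Bucket="analytics-data", Key="results/run-001.json",
              Body=b'{"status": "complete"}')

obj = s3.get_object(Bucket="analytics-data", Key="results/run-001.json")
print(obj["Body"].read())
```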

Modern analytics has reached rocketship status

Fuelling the growth of modern analytics are more affordable infrastructure options: more powerful CPUs, consumption-based infrastructure available both on-premise and in the public cloud, and lower-priced flash memory.

There is also significant growth in stream analytics platforms, both open source (Apache Flink, Apache Beam and Spark Streaming) and commercial (Splunk DSP), replacing more and more batch-based processing platforms; a minimal sketch follows below. Modern analytics can now reach larger scale with cloud-native analytics architectures composed of stateless servers and containers plus high-performance S3 object stores. Additionally, the unbridled growth of data sources, including smart devices (smart home, wearables, connected cars, the industrial Internet, etc.), will drive the adoption of modern analytics to extract more insights.
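As an illustration of the shift from batch to stream processing, here is a minimal Spark Structured Streaming sketch that aggregates records continuously as they arrive instead of over a fixed batch input. It assumes a Spark installation with the Kafka connector package available; the broker address and topic name are hypothetical.

```python
# A minimal stream-processing sketch with Spark Structured Streaming
# (one of the open-source platforms named above). Requires the
# spark-sql-kafka connector package; broker and topic are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import window, col

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

# Read a continuous stream rather than a fixed batch input.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "events")
          .load())

# Aggregate over sliding event-time windows as records arrive.
counts = (events
          .selectExpr("CAST(value AS STRING) AS value", "timestamp")
          .groupBy(window(col("timestamp"), "1 minute"))
          .count())

# Emit running counts to the console as the stream progresses.
query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```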

Flash will defy the impossible with next-gen media like QLC

Since the introduction of flash, it has largely been confined to Tier 1, performance-centric applications. But with new solid-state technologies such as Storage Class Memory (SCM) and QLC coming online and stratifying the memory space, flash is poised to break out and address whole new swaths of data.

At the high end, the combination of SCM and high-speed protocols like NVMe-oF means shared storage arrays can now deliver performance comparable to server-based storage for the most latency-sensitive applications.

These applications are among the last holdouts still sitting on direct-attached storage (DAS); they can now get all of the data services common to shared storage (data protection, data reduction, etc.) without giving up top-end performance. At the same time, the impending introduction of QLC is bringing flash to tiers of storage that have largely stayed on magnetic disk to date.

This cost reduction enables all applications to take advantage of the benefits of flash beyond performance: simplicity, reliability, and reduced data centre power and space.

AI operations will go from advisory roles to automated action as customers want a hands-free approach 

Organisations will be more open to letting AI make decisions for them. Customers want to set policies and let vendors implement them, a shift driven in part by the declarative nature of Kubernetes and container management.

The simplicity of containers will enable organisations to define a desired state, with the container platform acting as the catalyst. The technology should then drive and deliver insights across the whole environment. AI will be applied to efficiently find where a predictive model performs poorly and to augment the data for that region of the feature space, as sketched below. This is critical if AI applications such as anomaly detection and automatic root-cause analysis are to scale and be applicable in more contexts.
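Here is a hedged sketch of that idea, using an illustrative scikit-learn model and synthetic data rather than anything from the article: bucket the test data by a feature, measure accuracy per bucket, and flag poorly performing buckets as candidates for targeted data augmentation.

```python
# Illustrative only: find regions of the feature space where a model
# performs poorly, then target those regions for more data. The
# dataset and model below are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Bucket the test set along one feature and measure per-bucket accuracy.
buckets = np.digitize(X_te[:, 0], bins=np.linspace(-2, 2, 5))
for b in np.unique(buckets):
    mask = buckets == b
    acc = model.score(X_te[mask], y_te[mask])
    print(f"bucket {b}: accuracy={acc:.2f}, n={mask.sum()}")
    # Buckets with low accuracy are candidates for targeted data
    # augmentation or additional labelled examples.
```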

Containers are breaking into the mainstream, requiring persistent storage options

Containers were born to make deploying stateless applications as simple and low-overhead as possible. But as the emergence of Kubernetes and VMware's endorsement of containers rapidly expand container usage towards mainstream applications, delivering persistent storage for containers becomes critical to enabling databases and applications to re-platform onto containers.
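As a sketch of what persistent storage for containers looks like in practice, the snippet below uses the official Kubernetes Python client to declare a PersistentVolumeClaim. The storage class name "fast-ssd" is hypothetical; any CSI-backed class would work the same way.

```python
# A minimal sketch of requesting persistent storage for a container on
# Kubernetes via the official Python client. The storage class name
# "fast-ssd" is hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="fast-ssd",
        resources=client.V1ResourceRequirements(
            requests={"storage": "100Gi"}),
    ),
)

# The claim is declarative: we state the storage we want, and the
# platform provisions and binds a matching volume.
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc)
```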

The year 2020 looks to be the year that most enterprises evolve their private and hybrid cloud platforms beyond VMs, deploying an enterprise-wide container strategy, including building the storage foundation that enables stateful, mission-critical applications to embrace containers.
