Steve McDowell, Author at Moor Insights & Strategy
https://staging3.moorinsightsstrategy.com/author/steve-mcdowell/
MI&S offers unparalleled advisory and insights to businesses navigating the complex technology industry landscape.

RESEARCH PAPER: Infinidat Brings Mission-Critical Storage To Every Enterprise
https://moorinsightsstrategy.com/research-papers/research-paper-infinidat-brings-mission-critical-storage-to-every-enterprise/ Wed, 31 Aug 2022 05:00:00 +0000

Infinidat’s journey through the storage industry has been unique. Where other enterprise infrastructure providers may begin with a modest offering and scale up over time, Infinidat has taken the opposite approach. The company serves the high-capacity, high-performance needs of the mission-critical workloads prevalent across the Fortune 1000.

You can download the paper by clicking on the logo below:

Table Of Contents

  • Infinidat’s Momentum
  • InfiniBox
  • InfiniGuard With InfiniSafe
  • Flexible Consumption Models
  • The Analyst’s Take

Companies Cited

  • Infinidat

RESEARCH PAPER: VAST Data Set To Become the Foundation Of An AI-Powered World
https://moorinsightsstrategy.com/research-papers/research-paper-vast-data-set-to-become-the-foundation-of-an-ai-powered-world/ Thu, 03 Mar 2022 06:00:00 +0000

VAST Data was founded only five years ago by a group of storage veterans who realized that rapidly evolving storage technology wasn’t being utilized to its fullest potential. The group believed that they could do better and, as a result, defined VAST’s mission “to bring an end to decades of complexity and application bottlenecks.” That’s an audacious mission statement, but it’s one that VAST has demonstrated it can live up to.

You can download the paper by clicking on the logo below:

Table Of Contents

  • Founded To Disrupt Traditional Enterprise Infrastructure
  • Data Drives The Business
  • VAST Universal Storage
  • Figure 1: VAST DASE Universal Storage Architecture
  • VAST’s Unprecedented Success And Momentum
  • The Analyst’s Perspective

Companies Cited

  • Agoda
  • Athinoula A. Martinos Center for Biomedical Imaging
  • Dell Technologies Capital
  • Mellanox Capital
  • Norwest Venture Partners
  • NVIDIA
  • VAST

RESEARCH PAPER: Inside The World’s Fastest Database Machine – Oracle Exadata X9M
https://moorinsightsstrategy.com/research-papers/research-paper-inside-the-worlds-fastest-database-machine-oracle-exadata-x9m/ Thu, 13 Jan 2022 06:00:00 +0000

It may seem obvious to state that a database that supports a business-critical application should run on a platform tuned to exploit the full capabilities of that database. Yet time and again, IT organizations host their critical database applications on general-purpose servers, hoping that the performance is good enough to keep those business-critical applications running.

You can download the paper by clicking on the logo below:

Table Of Contents:

  • The Oracle Exadata X9M
  • Exadata Cloud@Customer X9M
  • The Analyst’s View
  • Table 1: Oracle Exadata Cloud@Customer X9M vs. Top CSP On-Premises Database-As-A-Service/Hybrid DBAAS Cloud

Companies Cited:

  • Intel
  • Oracle

RESEARCH PAPER: Deploying Multi-Cloud With Nutanix And Lenovo ThinkAgile HX
https://moorinsightsstrategy.com/research-papers/research-paper-deploying-multi-cloud-with-nutanix-and-lenovo-thinkagile-hx/ Wed, 27 Oct 2021 05:00:00 +0000

IT infrastructure has become incredibly complex. Enterprise architecture has evolved from the traditional hub-and-spoke model of a centralized data center into a rich mix of on-premises servers and storage, public cloud, private cloud, and even edge computing.

You can download the paper by clicking on the logo below:

Table Of Contents

  • The Evolving IT Landscape
  • Managing Multi-Cloud With Nutanix
  • Lenovo ThinkAgile HX
  • Figure 1: Top Cloud Challenges
  • Figure 2: Three-Tier vs HCI Architecture
  • Figure 3: Nutanix Product Portfolio
  • Figure 4: Addressing Security & Compliance Challenges
  • Figure 6: Lenovo ThinkAgile HX
  • Figure 7: Benefits Of Lenovo TruScale As-A-Service

Companies Cited

  • Intel
  • Lenovo
  • Nutanix

RESEARCH PAPER: Delivering The AI-Enabled Edge With Dell Technologies
https://moorinsightsstrategy.com/research-papers/research-paper-delivering-the-ai-enabled-edge-with-dell-technologies/ Tue, 24 Aug 2021 05:00:00 +0000

Nurses in a hospital depend on edge devices to monitor and interpret signals from dozens of sensors that alert them when a patient needs attention.

Edge devices in a chemical refinery predict, in real time, equipment failures before they occur by continuously analyzing hundreds of data points.

Cameras inside a retail store notice that inventory is running low and automatically dispatch an employee to replenish it with merchandise from the backstock.

These examples of AI-enabled edge computing just scratch the surface of what’s possible when extending IT infrastructure beyond the traditional walls of the enterprise.

Deploying IT infrastructure at the edge enables organizations to quickly generate insights and deliver value where data is generated. As a result, edge computing extends the reach of enterprise IT, enabling new and impactful use cases that change the way many businesses operate.

You can download the paper by clicking on the logo below:

Table Of Contents

  • Enterprise At The Edge
  • The AI-Enabled Edge
  • Edge Computing With Dell Technologies
  • Real World Example
  • Concluding Thoughts
  • Figure 1: Edge Computing Applications
  • Figure 2: Example Edge Applications That Benefit From AI
  • Figure 3: The AI Life Cycle
  • Figure 4: Dell Technologies At The Edge

Companies Cited

  • Dell Technologies
  • EDAG Group

RESEARCH PAPER: Oracle Exadata Cloud Service X8M
https://moorinsightsstrategy.com/research-papers/research-paper-oracle-exadata-cloud-service-x8m/ Tue, 04 May 2021 05:00:00 +0000

It’s not an exaggeration to say that IT organizations across nearly every business segment suffer from an overabundance of data. Although data has become an essential basis of competitive differentiation, organizations generate it faster than they can consume it, and this volume of data stresses the systems that store and process it.

You can download the paper by clicking on the link below:

Table Of Contents:

  • The Oracle Exadata Cloud Service X8M
  • Completely Positioned
  • The Analyst View
  • Table 1: Oracle Exadata Cloud Service X8M vs. Top CSP Database-As-A-Service
  • Table 2: Oracle Exadata Cloud Service X8M vs. Top-Tier Storage OEM

Companies Cited:

  • Oracle

RESEARCH PAPER: Overcoming The Risks & Complexities Of Legacy Data Migration
https://moorinsightsstrategy.com/research-papers/research-paper-overcoming-the-risks-complexities-of-legacy-data-migration/ Fri, 15 Jan 2021 06:00:00 +0000

Migrating data off legacy storage systems is a necessary part of maintaining an evolving IT infrastructure. Migrations can occur for any number of reasons: managing technical debt, reducing maintenance fees on legacy equipment, or upgrading storage systems to meet new and increasing capacity or performance requirements.

You can download the paper by clicking on the logo below:

Table Of Contents:

  • Executive Summary
  • The Daunting Task Of Legacy Data Migration
  • Elements Of A Successful Legacy Data Migration
  • The Professional Services Approach
  • Summary
  • Figure 1: Stages Of A Data Migration Project

Companies Cited:

  • Pure Storage

RESEARCH PAPER: Avoiding Needless Cost And Complexity For Amazon RDS Data Protection
https://moorinsightsstrategy.com/research-papers/research-paper-avoiding-needless-cost-and-complexity-for-amazon-rds-data-protection/ Wed, 08 Jul 2020 05:00:00 +0000

Cloud architectures require data protection services designed from the ground up to meet the unique demands of the cloud environment. Using tools designed for on-premises workloads to address the unique needs of a cloud deployment can lead to unwanted (and often unexpected) cost and complexity.

There is a gap between data protection tools designed for the traditional data center and solutions that provide a fully integrated cloud-native experience across a range of cloud services. Nowhere is this gap more evident than with Amazon Relational Database Service (Amazon RDS).

You can download the paper by clicking on the logo below:

Table Of Contents:

  • Abstract
  • Cloud Is A Reality
  • Data Protection Challenges For Today’s IT Organization
  • Amazon RDS Snapshots
  • Clumio’s Cloud-Native Data Protection Services
  • Concluding Thoughts
  • Table 1: Essential Elements For Data Protection
  • Table 2: Stakeholder Challenges
  • Figure 1: Snapshots & Snapshot Managers Drive Cost & Complexity
  • Figure 2: Clumio Integrated Dashboard
  • Figure 3: Clumio Backup As A Service For AWS RDS
  • Figure 4: Granular Record Retrieval
  • Figure 5: Clumio Cost VS Snapshot-Based Approaches

Companies Cited:

  • Amazon
  • AWS
  • Clumio
  • Microsoft
  • VMware

RESEARCH PAPER: Scale Computing HC3: A Qualitative Evaluation
https://moorinsightsstrategy.com/research-papers/research-paper-scale-computing-hc3-a-qualitative-evaluation/ Wed, 22 Jan 2020 06:00:00 +0000

Simplicity is valued in nearly every area of enterprise IT, but it is not often easy to deliver effectively. The past decade has seen the rise of a new kind of IT solution in which compute, storage and networking are simplified into a single, easy-to-use system, delivered with enterprise-class scalability and reliability, too.

You can download the paper by clicking on the logo below:

Table of Contents:

  • Hyperconverged Infrastructure
  • Scale Computing
  • Scale Computing And HCI
  • HC3 HyperCore Architecture
  • Scale Computing HC3 Product Family
  • Evaluating Scale Computing HC3
  • Summary
  • Figure 1: High Availability Configuration
  • Figure 2: HC3 Cluster Configuration Screen
  • Figure 3: Overall Management Screen To Configure Virtual Machine
  • Figure 4: Virtual Machine Configuration Options

Companies Cited:

  • Scale Computing

RESEARCH PAPER: Storage For Container Deployments
https://moorinsightsstrategy.com/research-papers/research-paper-storage-for-container-deployments/ Thu, 09 Jan 2020 06:00:00 +0000

Containers provide a nearly unparalleled ability to efficiently deploy and manage application workloads and give enterprises the tools required to quickly, safely and easily deploy workloads across multi-site, multi-cloud infrastructure. Built on a foundation of simplicity, isolation and efficient resource sharing, they have become an indispensable tool for IT administrators and DevOps practitioners.

You can download the paper by clicking on the logo below:

Table Of Contents:

  • Containers In The Enterprise
  • Container Architecture
  • Storage & Containers
  • Developing A Container Storage Strategy
  • Concluding Thoughts
  • Figure 1: Container Configuration Diagram
  • Figure 2: OpenShift Kubernetes Architecture
  • Figure 3: Kubernetes Storage Diagram

Companies Cited:

  • IBM
  • Red Hat

NVIDIA Brings AI To DC
https://moorinsightsstrategy.com/nvidia-brings-ai-to-dc/ Thu, 14 Nov 2019 06:00:00 +0000

Nearly every enterprise is experimenting with artificial intelligence and deep learning. It seems like every week there’s a new survey out detailing the ever-increasing amount of focus that IT shops of all sizes put on the technology. If it’s true that data is the new currency, then it’s artificial intelligence that mines that data for value. Your C-suite understands that, and it’s why they continually push to build AI and machine learning capabilities.

Nowhere is AI/ML more impactful than in the world of government and government contractors. It’s not just the usual suspects of defense and intelligence who demand these capabilities—AI/ML is fast becoming a fact-of-life across the spectrum of government agencies. If you’re a government contractor, then you’re already seeing AI/ML in an increasing number of RFP/RFQs.

AI impacts everything

I’m a storage analyst. I don’t like to think about AI. I like to think about data. I advise my clients on how storage systems and data architecture must evolve to meet the needs of emerging and disruptive technologies. These days, those technologies all seem to be some variation of containerized deployments, hybrid-cloud infrastructure and enterprise AI. There’s no question that artificial intelligence is the most disruptive.

High-power GPUs dominate machine learning. Depending on the problem you’re trying to solve, that may be one GPU in a data scientist’s workstation, or it may be a cluster of hundreds of GPUs. It’s also a certainty that your deployment will scale over time in ways that you can’t predict today.

That uncertainty forces you to architect your data center to support the unknown. That could mean deploying storage systems that have scalable multi-dimensional performance that can keep the GPUs fed, or simply ensuring that your data lakes are designed to reduce redundancies and serve the needs of all that data’s consumers.

These aren’t problems of implementing AI, but rather of designing an infrastructure that can support it. Most of us aren’t AI experts. We manage storage, servers, software or networking. These are all things that will be disrupted by AI in the data center.

The single best way to prepare for the impacts of AI in the data center is to become educated on what it is and how it’s used. The dominant force in machine learning and GPU technology for AI is NVIDIA. Thankfully, NVIDIA has a conference to help us all out.

NVIDIA’s GPU technology conference for AI

Every spring NVIDIA hosts its massive GPU Technology Conference (GTC) near its headquarters in Silicon Valley. It’s there where 6,000+ attendees gather to hear about all aspects of what NVIDIA’s GPUs can do. This ranges from graphics for gaming and visualization, to inference at the edge, to deep learning in the enterprise. It’s one of my favorite events each year (read my recap of the most recent GTC here, if interested).

Next week NVIDIA brings a more focused version of its GTC conference to Washington, DC.  Gone are the sessions and talks about gaming, with the focus instead on the business of deploying artificial intelligence with a purpose. NVIDIA’s GTC DC focuses on autonomous machines, cybersecurity, computer vision, HPC, robotics and augmented reality. These are the topics that dominate discussions of AI around the DC beltway.

As much as I enjoy watching NVIDIA’s CEO Jensen Huang give a keynote, I’m much more excited to hear Ian Buck’s keynote at GTC DC. Ian Buck is NVIDIA’s vice president of accelerated computing. He’s also the man who invented CUDA, the programming framework that allows GPUs to be used for machine learning.

Ian won’t be talking about CUDA much. I expect he’ll focus on the real problems that can be solved by the technology. According to NVIDIA, he’ll talk extensively about how organizations of all types can best utilize the power of artificial intelligence to boost their competitiveness.

Beyond NVIDIA, there are speakers from the US OMB, the White House Office of Science and Technology Policy, the NIH and more. Exhibitors will include over fifty companies showing off AI, robotics, and high-performance computing. Key exhibitors include powerhouse government contracting players Booz Allen Hamilton, Lockheed Martin and Dell Technologies.

Summary

Artificial intelligence is in the data center with a rapidly growing footprint. It’s already impacting every area of IT architecture. IT practitioners, whether directly involved in AI or not, need to understand how that affects their areas of expertise.

I don’t work in AI, but I’ll still be in Washington, DC, next week for NVIDIA’s GTC DC event. It’s critical that I understand how these technologies impact the world that I live in. The best way to prepare for the future is to understand it.

NVIDIA is inventing the future of machine learning right in front of our eyes. You should come to DC next week and take a look for yourself. You’ll be in good company.

IBM Launches New Storage Solutions Geared Towards AI And Container Environments
https://moorinsightsstrategy.com/ibm-launches-new-storage-solutions-geared-towards-ai-and-container-environments/ Thu, 14 Nov 2019 06:00:00 +0000

IBM lives at the intersection of disruptive technology and real-world solutions. There are no two more disruptive technologies in the enterprise today than artificial intelligence and containerized applications.

Artificial intelligence is in the enterprise. Nearly every IT organization has either already deployed, or is preparing to deploy, some sort of AI solution. The problem with AI in the data center is that we’re all still figuring out exactly what to do with it. Today’s experiments will blossom into tomorrow’s major deployments. AI requires multi-dimensional performance, scalability to meet the needs of applications over time and sophisticated tools to manage the onslaught of data that feeds AI.

Containers live in a similar space. The technology provides the means to safely and reliably deploy applications and serverless workloads to both end-users and DevOps teams. Containers can become overwhelming, and orchestration tools such as Kubernetes and Red Hat’s OpenShift help alleviate that complexity. Missing from the container ecosystem, however, is a means by which to intelligently manage the underlying storage.

IBM this past week made announcements that will help IT organizations build lasting solutions that scale and mature with AI implementations and container deployments. Let’s take a look at what the company is doing.

The appliance approach of IBM’s new Elastic Storage System 3000 allows for very fast storage deployments. Instead of installing a FlashSystem array with separately installed Spectrum Scale software, there is now a single point of installation. IBM demonstrated a turnkey deployment that took less than three hours, from delivery to serving data.

The Elastic Storage System 3000 is designed from the ground up for scalability, meeting the needs of AI and analytics deployments of nearly any size. The system scales from a base 40Gbps to multiple terabytes per second of bandwidth, all with the low latency that’s inherent in NVMe storage.

Simplicity in the IT world is generally a good thing. IBM delivered just that. Pairing the proven high-throughput NVMe performance of IBM’s FlashSystem 9100 with IBM’s highly scalable Spectrum Scale software just makes good sense. I’m not aware of another offering on the market that contains the functionality that the Elastic Storage System 3000 delivers in an appliance form-factor.

Containers

Managing storage for container data is a complex business. It’s important to remember (and easy to forget!) that container data is enterprise data and should be treated as such. Kubernetes, and its cousin Red Hat OpenShift, require a storage strategy that relies on shared storage and integration with advanced data services. This is an often-overlooked component in many container deployments.

Kubernetes provides the Container Storage Interface (CSI) to connect external data services to containers running within a cluster. This provides raw data services. It’s up to storage vendors to provide drivers that integrate with CSI to allow mapping between containers within a Kubernetes cluster and their storage arrays. Nearly all storage vendors at this point have basic CSI driver support. Integration with complex data services is a different story.
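
To make that mapping concrete, here is a minimal sketch (in Python, using the official Kubernetes client library) of how an application team typically requests CSI-backed storage. The storage class name "csi-block-gold" is a hypothetical placeholder for whatever class a vendor’s CSI driver registers; it is not a specific IBM product name.

  # Request a persistent volume through a CSI-backed StorageClass.
  # Assumes a kubeconfig is available and that a CSI driver has registered
  # the (hypothetical) "csi-block-gold" storage class.
  from kubernetes import client, config

  config.load_kube_config()
  core = client.CoreV1Api()

  pvc = client.V1PersistentVolumeClaim(
      metadata=client.V1ObjectMeta(name="app-data"),
      spec=client.V1PersistentVolumeClaimSpec(
          access_modes=["ReadWriteOnce"],
          storage_class_name="csi-block-gold",
          resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
      ),
  )
  core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)

The CSI driver, not Kubernetes itself, is what turns that claim into a volume carved out of the backing array.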

IBM leapfrogs its competitors by providing enterprise-grade data protection for Kubernetes and OpenShift deployments. IBM’s Spectrum Protect Plus now utilizes the CSI snapshot interface to allow developers to back up, recover and retain persistent volumes using predefined policies in Kubernetes and, soon, OpenShift environments. IBM Spectrum Protect Plus allows enterprises to treat container data as enterprise data, with the same data protection capabilities demanded by an organization’s most critical data.
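
For illustration only, the kind of CSI snapshot request that such tooling drives looks roughly like the sketch below (Python, Kubernetes client). The claim name, snapshot class, and namespace are hypothetical, and the policy scheduling layered on top is not shown.

  # Take a point-in-time snapshot of a persistent volume claim through the
  # CSI snapshot API (snapshot.storage.k8s.io). All names are illustrative.
  from kubernetes import client, config

  config.load_kube_config()
  snapshot = {
      "apiVersion": "snapshot.storage.k8s.io/v1",
      "kind": "VolumeSnapshot",
      "metadata": {"name": "app-data-snap-001"},
      "spec": {
          "volumeSnapshotClassName": "csi-snapclass",
          "source": {"persistentVolumeClaimName": "app-data"},
      },
  }
  client.CustomObjectsApi().create_namespaced_custom_object(
      group="snapshot.storage.k8s.io",
      version="v1",
      namespace="default",
      plural="volumesnapshots",
      body=snapshot,
  )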

The new Spectrum Protect capabilities extend IBM’s already aggressive delivery of CSI drivers to allow its storage solutions to be efficiently deployed into hybrid multi-cloud environments. IBM’s CSI work delivers security through intelligent volume mapping, dynamic provisioning, persistent data volumes, and infrastructure agility to IT teams looking to deploy Kubernetes or OpenShift.

This is just the beginning. IBM is aggressively attacking the OpenShift market, and we’re seeing rapid integrations between IBM products and OpenShift and Kubernetes. It won’t surprise me at all to see the IBM Spectrum Storage suite embracing containers across the board as the next year unfolds.

Summary

IBM’s storage team likes to make big multi-part announcements, and this announcement day was no different. Beyond what I’ve mentioned already, the company released a new virtual tape library, an update to Spectrum Scale to support erasure coding, an update to Spectrum Discover that allows it to discover what’s in your backups, a myriad of software updates and even new storage-as-a-service offerings.

These are all solid announcements that both round out IBM’s portfolio and keep it fresh, but it’s IBM’s new storage appliance and its embracing of containers that has me most excited. The impact of AI on storage architecture is just beginning to be understood, and IBM’s Elastic Storage System 3000 is an appliance that allows IT organizations to easily and quickly scale data services as AI and analytics solutions evolve.

Containers need an intelligent storage strategy. It’s a hard, unsolved problem. IBM’s integration of CSI with its Spectrum Storage software begins to take us there with enterprise-class data services. The industry will follow, but once again IBM leads the way.

Dell’s PowerMax Wears An Optane Bow-Tie
https://moorinsightsstrategy.com/dells-powermax-wears-an-optane-bow-tie/ Thu, 03 Oct 2019 05:00:00 +0000

You’ll notice my friend James Myers, from Intel, when you inevitably see him roaming the halls of your next tech conference. He’s hard to miss, with a warm smile and distinctive bow tie that draw you instinctively closer. If it’s his friendly demeanor that attracts you, it’s the depth of his passion for a specific set of technologies that holds your attention. James is a man on a mission. Just let him talk to you about Intel Optane.

Optane’s promise

Intel’s Optane persistent memory technology seemed set up to fail. The expectations were simply set too high. From the outset, it was described in terms that sounded too good to be true. Based on a new semiconductor technology that its inventors at Intel and Micron Technology dubbed 3D XPoint (pronounce that “3D cross-point”), it was designed to overcome the limitations of traditional NAND-based SSDs. The technology is also blindingly fast, with near-zero access latencies. Optane was designed as the ultimate storage media.

Optane, like 3D NAND before it, provides persistence. Optane takes a different approach from NAND, though, exposing persistence with a byte-level addressability that’s more akin to how memory is used than how traditional SSDs are usually accessed. It’s that byte addressability, coupled with those nearly non-existent latencies, that give Optane its magical powers.

When Intel first introduced an Optane SSD in 2017 it made the strategic mistake of letting its consumer marketing teams sell the device. An Optane SSD could speed up the disk accesses of your next game of Call of Duty, or make your video editing run a little smoother, but it came at an early adopter cost that consumers weren’t quite ready to swallow. The new semiconductor technology committed the dual sins of being both more expensive than 3D NAND, while also delivering less capacity. The home PC market thrives on the promise of more for less, and the reviews in the consumer technology press were unkind.

The problem, as James will tell you, was that the technology was misunderstood and a victim of its own hype. Try thinking about it this way: instead of replacing 3D NAND SSDs, Optane introduces a new storage tier into your system’s architecture. The guys in Silicon Valley who like to make up acronyms call it Storage Class Memory, or SCM.

Optane SSDs are for your hot storage: data that needs persistence, but that also needs to be accessed quickly and frequently. If you need something hotter than hot, then there’s Optane DC Persistent Memory. Intel Optane PM exploits the byte-addressability of the technology and puts Optane into a DIMM slot, right next to your main memory and CPU. Traditional NAND-based SSDs don’t go away—rather, they become the repository for your workaday storage. Behind those fall traditional hard drives (and, soon, QLC NAND SSDs).
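
To see why byte addressability matters, consider this minimal sketch of an application updating a few bytes in place on persistent media. It assumes an Optane (or other persistent memory) namespace exposed as a DAX-mounted filesystem at the hypothetical path /mnt/pmem0; on ordinary SSD-backed storage the same code still runs, but every update would travel the block I/O path instead.

  # Byte-addressable persistence: update a handful of bytes in place,
  # with no block-sized read-modify-write cycle in the application.
  # Assumes a DAX-mounted pmem filesystem at /mnt/pmem0 (hypothetical path).
  import mmap
  import os

  path = "/mnt/pmem0/hot_records.bin"
  size = 4096

  fd = os.open(path, os.O_CREAT | os.O_RDWR)
  os.ftruncate(fd, size)              # size the backing region once
  buf = mmap.mmap(fd, size)

  buf[0:16] = b"txn-00042:commit"     # a 16-byte in-place persistent update
  buf.flush()                         # push the stores toward persistent media
  buf.close()
  os.close(fd)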

The problem is that this new storage tiering doesn’t just happen when you plug in an Optane device. It requires a system engineered to accept it, along with software intelligent enough to understand what data is hot enough for Optane. It’s a hard problem to solve. If you watched the storage market in 2018, you noticed nearly every storage vendor touting their arrays as “SCM Ready,” but with a fair amount of hand-waving in terms of what it all really meant. The storage vendors who have implemented SCM, such as Hewlett Packard Enterprise in its 3PAR line, are using the technology primarily for read caching. This still provides a benefit, but not the level that full use of the technology as a persistent memory store would yield.

The hand-waving about SCM can now end. This week Dell Technologies opened its garage door and demonstrated to the world what a souped-up SCM-enabled array can do.

PowerMax: Intelligent Storage

Dell introduced its PowerMax storage architecture just last year at its Dell Technologies World conference in Las Vegas. A powerful beast of a machine, the PowerMax delivered the full potential of an all-flash system enabled with latency-killing NVMe. It yielded performance numbers that pushed the limits of what was achievable in a storage array. During its introduction at the conference, Dell’s Jeff Clarke said that PowerMax had “artificial intelligence” within it to help “adapt” to the workloads it was servicing. Nobody from Dell mentioned SCM, but I still walked away with that familiar feeling of having watched a marketer’s hands waving.

This past week Dell announced an updated PowerMax, with features that make it quite clear where its embedded AI is focused. It’s also a textbook illustration of the benefits of properly-tiered Optane memory. The PowerMax now supports dual-ported Optane SCM. To unlock the power of Optane, the PowerMax’s artificial intelligence engine watches data patterns and makes decisions about which data belongs in SCM and which belongs on NAND-based SSDs. This is similar to how a hybrid-flash array provides SSD-like speeds by intelligently placing hot data on SSDs and colder data on spinning drives (but without the spinning drives).
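
Dell has not published the internals of that engine, but the general idea of heat-based placement can be sketched in a few lines. The toy code below simply counts accesses per extent and promotes the hottest extents to the SCM tier; it illustrates the technique, not Dell's implementation.

  # Illustrative heat-based tiering: promote the most frequently accessed
  # extents to SCM and leave the rest on NAND flash. Not Dell's algorithm.
  from collections import Counter

  SCM_CAPACITY_EXTENTS = 2   # kept tiny so the example output is obvious

  def place_extents(access_log):
      heat = Counter(access_log)                  # extent id -> access count
      hottest = {e for e, _ in heat.most_common(SCM_CAPACITY_EXTENTS)}
      return {e: ("scm" if e in hottest else "nand") for e in heat}

  log = ["ext7", "ext7", "ext3", "ext9", "ext7", "ext3"]
  print(place_extents(log))   # ext7 and ext3 land on SCM; ext9 stays on NAND

A production array layers far more onto this: decay of stale heat, read-versus-write weighting, SLA rules, and the cost of actually moving data between tiers.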

The machine learning engine in the PowerMax isn’t just about intelligent data placement. The software also makes decisions that keep its performance at steady levels by, for example, temporarily disabling in-line deduplication during busy periods. The AI engine will also enforce rules and perform dynamic tuning to meet application SLAs that the storage administrator defines.

Beyond intelligently tiering Optane SCM and traditional SSDs, Dell added NVMe-over-Fabrics to its offering. The PowerMax now supports 32Gbps FC-NVMe to bring low-latency clustering between controllers. This continues a trend that we’re seeing across the industry, with Fibre Channel a natural early target for NVMe interconnect. There’s a lot of expensive existing SAN infrastructure out there, and this is a great path forward to maintain that investment.

The only real measure of a product like this is in its performance numbers. Do Optane and machine learning-based tuning make a difference? Dell’s numbers seem to suggest that they do. In one published benchmark, the inclusion of Optane delivered 26% lower latency, showing a 0.21 ms response time on a 100K IOPS random read workload. The big win, though, is in write performance. A write-intensive mixed workload showed that Optane outperformed traditional SSDs by nearly 500% (that’s not a typo).

If you’re interested in raw numbers and the nitty-gritty of the benchmarks and capabilities of the new PowerMax, it’s all here on Dell’s website to find.

Wrapping up

Dell’s PowerMax is an impressive family of devices. The introduction of its latest models is well-timed, landing a week ahead of Pure Storage’s annual Accelerate Conference, and just days before IBM showed off its new mainframe-class array. I’m sure both marketing teams had an uncomfortable day updating their competitive analysis charts.

Pure Storage is expected to announce updates to its product line-up this week at Pure Accelerate. We don’t know yet what will be announced, but there’s already strong speculation in the technology press that it will include adoption of SCM technology. Optane is quickly finding a home in storage systems.

Optane’s time is near. While the Dell EMC PowerMax nicely demonstrates what the technology can do in a storage array, the developers of memory-intensive software are also beginning to show off the potential of Intel Optane DC Persistent Memory in application architecture. For example, Optane in a DIMM slot shows recovery times for applications like SAP HANA that are orders-of-magnitude faster than what is achievable with SSDs. Persistent memory sitting next to the CPU changes the conversation about what is possible, whether for databases or big key-value stores.

I haven’t talked to James since Dell released its new PowerMax, but I imagine he’s smiling brighter, with his tie straightened up just a little bit. Dell’s demonstration of how Optane should be used has made his job a little easier. If you want to know where else Optane is changing the world, just ask him—he’s easy to pick out. Look for the bow tie.

Is The Fastest Storage Array In The World Now From IBM?
https://moorinsightsstrategy.com/is-the-fastest-storage-array-in-the-world-now-from-ibm/ Thu, 03 Oct 2019 05:00:00 +0000

There are very few architectures in the computing world like a mainframe. It delivers obscenely high levels of performance while simultaneously scaling towards the upper reaches of what you’d believe is feasible. It does this without any performance hiccups.

Mainframes are built for continuous up-time. They set the benchmark for the industry in information security. Mainframes are deployed into the most mission-critical environments, powering everything from the hundreds of millions of transactions that flow through our financial markets each day, all the way to scalable hybrid cloud running hundreds of Linux instances on the hundreds of cores within the computer. It is that hybrid cloud world in which IBM is most strongly positioning the technology.

The IBM Z series mainframe computer is architected differently from the x86-based servers that populate the racks of your datacenter. A mainframe is full of processors dedicated to managing I/O, to prevent any bottlenecks as data traverses the paths into and out of the system. Those I/O channels also free up the 190 client-configurable cores from having to worry about servicing interrupts at the expense of your workload. I/O moves fast, without much latency, while compute workloads execute without stalling for data.

IBM’s just-announced z15 mainframe, which my colleague Patrick Moorhead described today also in Forbes, ups the stakes in the mainframe game. The z15 delivers 20% more I/O channels and 50% more physical Coupling Facility connections than its older brother, the z14. It also sports a cryptographic processor that’s twice as fast.

What does all this have to do with storage? Simple. You can’t deliver on the promise of the performance of a machine like the z15 without a storage array that can keep the z15’s hungry I/O controllers fed with data. These are not inexpensive machines. You don’t want all of that performance sitting around just waiting for data.

IBM understands this better than anyone else. The new IBM DS8900F data system is deliberately designed to meet the needs of the new z15.  It might just be the fastest storage array on the market.

Mainframe-class storage

The single most meaningful metric of any storage system is latency.  The amount of time it takes to service a read or write request from a storage array directly impacts the performance of the workloads requiring that data. A machine like the z15 computer consumes a constant stream of data.

IBM’s DS8900F has some of the lowest latencies ever delivered by a storage array. IBM promises 18 microseconds latency to the mainframe, which is more than 5X better than its closest competition. The latency goes up to 90 microseconds in a distributed environment. These numbers are at the core of a 50% reduction in transaction time on an IBM Db2 real-world workload.

The second most meaningful metric of any storage system is availability. We’ve spent decades now talking about “5 nines” of availability. IBM’s DS8900F delivers seven nines (that’s 99.99999% uptime per year) with its HyperSwap technology. IBM also promises 3- and 4-site replication with an RPO of only 3-5 seconds. These are the kind of numbers that almost define mission-critical.
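
Those nines translate into very concrete numbers. A quick back-of-the-envelope calculation shows the gap between the familiar five nines and the DS8900F's seven nines:

  # Allowed downtime per year at a given availability level.
  SECONDS_PER_YEAR = 365.25 * 24 * 3600

  for availability, label in [(0.99999, "five nines"), (0.9999999, "seven nines")]:
      downtime = SECONDS_PER_YEAR * (1 - availability)
      print(f"{label}: about {downtime:.1f} seconds of downtime per year")

  # five nines  -> about 315.6 seconds (a bit over five minutes)
  # seven nines -> about 3.2 seconds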

Looking at capacity, the DS8900F family scales from 12 to 5,898 TB of flash storage, delivering up to 2,320K IOPS (measured with a standard 4K R/W mix). That’s a lot of data but, remember, this is an array built for a mainframe.

Mainframe-class security and manageability

The new IBM z15 mainframe hosts mission-critical applications. Delivering data into those mission-critical applications in today’s environment means protecting against constant cyber-attacks with both security and data protection features. It’s here where the DS8900F shines.

IBM Safeguarded Copy is a collection of features that provides up to 500 safeguarded backups to prevent sensitive point-in-time copies of data from being modified or deleted due to malicious software, ransomware, or the still-common user error. IBM DS8900F Safeguarded Copy implements dual management control for an increased level of security, while also integrating with external HA and DR capabilities.

Working in conjunction with features inherent in the IBM Z and LinuxONE systems, the DS8900F also provides both encryption at rest and encryption in flight. It’s that in-flight encryption that is hard to find outside of mainframe environments today. The encryption capabilities are powered by hardware accelerators, so no performance is stolen from the array, or from the Z series, to implement them.

Bringing it all together is IBM’s AI-driven cloud-based management and support system, IBM Storage Insights. IBM Storage Insights delivers a consistent set of management and diagnostic capabilities not just on the DS8900F system, but across all of your enabled IBM storage products. IBM Storage has always been a software-first story, and the new capabilities delivered with the new DS8900F only solidify that.

Wrapping up: not just for mainframes

The reality is that, for all of its benefits, most IT organizations will never touch a mainframe computer. Those that do will deploy it into isolated applications. Maybe it’s the backbone of an enterprise hybrid-cloud architecture, or perhaps it continues to power the on-line transaction processing system that it always has.

The beautiful thing about the technology world is that it always seems to find a way to take the best and most exotic ideas and deliver them as more accessible products. The mainframe world has given us everything from spinning disk drives to virtualization, technologies that today we can find in our phones. That innovation isn’t stopping, and the storage industry continues to learn.

The enterprise storage market is at its most competitive point in decades. Pure Storage, Dell EMC, Hewlett Packard Enterprise, and even Lenovo are in a brutal fight for your storage dollars. Each product release from any of these players seems more extraordinary than the last. There are almost no wrong choices for an IT buyer in this never-ending game of leapfrog.

IBM Storage, however, is a different creature. It leverages everything that it learns from building architectures like the DS8900F and folds those learnings into the industry’s most substantial and robust storage portfolio. It’s a portfolio that seamlessly marries hardware with a broad suite of software with its Spectrum Storage products, all designed for seamless tiering across the data center and into the cloud. The new DS series product doesn’t merely raise the bar; it continues a long tradition.

It won’t surprise me when we see the performance, security, and data protection features that define this new product delivered into IBM’s more mainstream FlashSystem arrays somewhere down the road.   There’s an old saying in enterprise IT that nobody ever got fired for buying IBM. There’s a good reason for that.

RESEARCH PAPER: Enterprise Machine & Deep Learning With Intelligent Storage
https://moorinsightsstrategy.com/research-papers/research-paper-enterprise-machine-deep-learning-with-intelligent-storage/ Wed, 26 Jun 2019 05:00:00 +0000

Fueled by data, infrastructure advances, and the ubiquity of machine learning and deep learning (ML/DL) toolkits, artificial intelligence (AI) solutions are fast becoming a mainstay in the enterprise data center. AI turns data into insights across a broad swath of enterprise verticals as diverse as automotive, healthcare, life sciences, finance, technology, retail, and beyond. Data is now a competitive advantage in industries such as insurance, where predictive AI removes risk from underwriting; finance, where real-time deep learning recognizes fraud as it happens; and even data center management, where patterns are analyzed to predict failures and scalability issues.

You can download the paper by clicking on the logo below:

Table Of Contents

  • Summary
  • Deep Learning Is Changing The Enterprise
  • Architecting For Deep Learning In The Data Center
  • Data In A Deep Learning Environment
  • Dell EMC: Delivering Storage For Deep Learning
  • Dell EMC: Full Stack Deep Learning
  • Conclusion
  • Figure 1: The Relationship Between AI, ML, and DL
  • Figure 2: Typical Machine Learning / Deep Learning Pipeline
  • Table 1: Examples Of Some Of The Available Ready Solutions And Reference Architectures

Companies Cited

  • Dell EMC
  • NVIDIA

HPE Delivers Storage And Convergence At Discover 2019
https://moorinsightsstrategy.com/hpe-delivers-storage-and-convergence-at-discover-2019/ Tue, 18 Jun 2019 05:00:00 +0000

HPE CEO Antonio Neri kicks off HPE Discover 2019 in Las Vegas (photo: Steve McDowell)

It seems like it’s been a long time since either storage or HCI were front and center at an HPE event. Hewlett Packard Enterprise fixed that this week with a flurry of announcements during its keynote at its marquee HPE Discover event in Las Vegas.

HPE has a robust storage portfolio. Its Nimble Storage and 3PAR products are healthy and meeting the needs of the market. HPE delivered market-beating growth during the first quarter of this year. HPE supplemented that strength with a new high-end storage offering called Primera.

HPE also continues its investment in its SimpliVity HCI portfolio. Despite many analysts (including this one) reading much into HPE’s budding relationship with Nutanix, HPE is doubling down on SimpliVity. There are two substantial additions to the SimpliVity portfolio, as well as news that HPE’s InfoSight predictive analytics platform now supports SimpliVity.

HCI is also at the root of a fascinating product announcement from HPE’s Nimble Storage team called HPE Nimble Storage dHCI. This new product is designed to provide the flexibility of composable infrastructure with the benefits of HCI.

Let’s jump into the details of what HPE announced, along with what it all means.

Primera, a new high-end storage solution

The big storage news is the announcement of a new high-end storage array called Primera. Positioned above 3PAR in HPE’s line-up, Primera is designed to compete against Dell’s PowerMax and Pure’s FlashArray//X at the upper edges of the storage market. These are the competitors that HPE sees most often in deals.

Primera is a new product, both hardware and software, but its legacy derives directly from 3PAR. Primera’s software stack is based on 3PAR’s operating system. It uses an ASIC based on 3PAR’s latest generation technology.  But don’t let its legacy fool you. There are some reasonably substantial architectural changes within this product.

The Primera uses dual active controllers (or ‘active-active’) based on new electronics to facilitate near-instant failover times when a controller goes down. HPE is also using container technology to keep non-critical data services on the array isolated.

Primera’s container approach should bring an increased level of reliability to the table. This is the direction that the industry is heading, with even Dell Technologies executives indicating in interviews that the company’s upcoming storage architecture, Midrange.next, will leverage containers for precisely the same set of benefits. This is a nice win for HPE.

3PAR isn’t going away. HPE is using its brand distinction with Nimble, 3PAR, and now Primera, to align with natural market tiering for storage arrays. It’s a brand strategy that makes sense. It will help IT customers distinguish products. It also removes expectations about backward compatibility as Primera evolves.

HPE is incorporating Primera into its Synergy Composable Cloud offering, as well as its Google Anthos hybrid-cloud solutions.  These are integration efforts, not new products, but I’m happy to see it all available at launch.

Primera is a strong offering that will keep HPE competitive at the high-end of the storage market.  3PAR and Nimble are both healthy and competitive, and Primera complements those products nicely.

Simplivity at the edge

HPE wants to be clear. It views SimpliVity as HPE’s preferred HCI offering.

There have been recent joint announcements from HPE and Nutanix detailing joint offerings, but HPE’s lead dog in the HCI race is its SimpliVity line. Nutanix exists in the portfolio to provide choice, HPE executives tell us. Supporting this position are three updates to the SimpliVity line.

The HPE SimpliVity 325 is a new offering that brings the power of SimpliVity to a dense 1U form-factor powered by AMD EPYC processors. HPE is smartly targeting the SimpliVity 325 at edge applications, as well as remote office and space-constrained environments. Edge is a natural application space for HCI, and SimpliVity is likewise a good fit.

The company also announced a SimpliVity archive node. The archive node provides a mix of traditional hard disk drives and SSDs in a 2U package to provide long-term storage options for a SimpliVity HCI environment. The SimpliVity archive node is a much-needed offering, as not every HCI implementation exists within a classic IT infrastructure with baked-in archival storage.

It should come as no surprise to see that HPE InfoSight now supports SimpliVity. InfoSight, HPE’s predictive analytics solution, continues to expand across HPE’s data center portfolio. The AI-driven predictive capabilities inherent in InfoSight, once applied to SimpliVity, will become a strong competitive differentiator. I’m thrilled to see this integration happen.

These add up to yield a great set of updates to SimpliVity. HPE is aggressively attacking the HCI market and is not slowing down with its direct investments in this space.

Disaggregating HCI with Nimble

HPE Nimble Storage dHCI (the “d” stands for “disaggregated”) is a bit of an odd duck, but one sitting at a unique place within the HPE portfolio. This software was first developed at Nimble before its acquisition by HPE, but it has been updated to meet the current needs of the market. The offering is a bit of a departure for HPE in that it’s an HCI-like offering that is entirely unrelated to the software powering HPE’s SimpliVity HCI.

If not SimpliVity, then what is it? HPE’s dHCI is a software stack providing the utility of HCI, including single-pane-of-glass manageability for storage and compute through a sophisticated vCenter plug-in. Instead of a dedicated appliance, dHCI runs on a combination of Nimble arrays and HPE ProLiant servers.

HPE Nimble Storage dHCI provides the simplified experience that has driven the success of HCI, while at the same time providing the flexibility of HPE composable infrastructure. It allows compute and storage to scale independently, delivering the right mix of resources for the hosted workloads. Its Nimble legacy ensures that InfoSight integration is there day one, bringing nearly a decade of predictive analytics models to the storage nodes.

HPE is targeting Nimble Storage dHCI at the mid-market, where traditional composable infrastructure may be more than is needed and traditional HCI too limiting. It's an unexpected move for HPE, and we'll be watching to see how the market responds.

The bottom line

There are a few consistent themes embedded within HPE's announcements. IT is a world filled with many clouds. The software-defined data center is a reality, whether delivered through hybrid cloud, composable infrastructure, some flavor of HCI, or even the new Nimble Storage disaggregated HCI. Underpinning it all lies a series of performant products. It's hard to disagree with any of that, and I like the announcements.

There are gaps. Late last year, when HPE brought server-class memory to its 3PAR products with its "Flash Accelerated Memory," the company declared that the Nimble-branded products would be updated with the technology sometime this year. I'm anxious to see how (or whether) it makes a difference in those arrays, but that's a minor disappointment in the face of what was delivered.

There were no updates to its file and object storage offerings. HPE has the StoreEasy line for NAS, but StoreEasy isn’t a strong competitor against either NetApp’s scalable NAS systems or Dell EMC’s Isilon series. Given NetApp’s current struggle to close deals, there’s a real opportunity for someone to come in and steal its existing customers. We’re seeing many of these NetApp deals going to Dell’s Isilon. HPE shouldn’t let itself lose too many sales to Isilon before deciding to address the gap.

HPE also has strong stories around both multi-cloud and its GreenLake flexible capacity model. GreenLake, in particular, is one of the best in the industry for capacity-on-demand. The stories are strong, but they suffer where they meet storage. Beyond incorporating Primera into Synergy, no new announcements touched on cloud storage or storage-on-demand.

HPE has offerings here, but it is being out-innovated in cloud storage capabilities and storage-on-demand by both Pure Storage and IBM. HPE is in good company, as Dell EMC is equally lagging in these spaces (I touched on Dell's differing strategy in a recent column). Good company or not, HPE needs a fresh vision in cloud storage.

Gaps aside, HPE storage delivered a good set of announcements at Discover. The core of data storage remains block storage, and HPE is making a strong play with the new Primera solution. Primera is an excellent product with smart positioning by HPE. The company also demonstrated that it's committed to SimpliVity with substantial updates to the line. InfoSight continues its march to provide predictive analytics across the HPE-powered data center.

HPE's sales teams are hitting it out of the park when it comes to storage. The company grew its external storage revenue more than 14% year over year in the first calendar quarter of this year. That is more than twice Dell's growth rate, trailing only the perpetually on-fire Pure Storage.

Dell may dominate the storage market, but it is a market that still clearly values choice. The numbers demonstrate that HPE’s global sales teams have figured out how to sell HPE storage against formidable competitors. That’s fantastic for HPE, and this week’s announcements should do nothing but make that job easier.

NVIDIA Is Coming For Your Data Center https://moorinsightsstrategy.com/nvidia-is-coming-for-your-data-center/ Fri, 12 Apr 2019 05:00:00 +0000

Article by Steve McDowell.

NVIDIA’s GPU Technology Conference (GTC) took place a few weeks ago near the company’s headquarters in San Jose, California. Be careful how you say “GPU,” though—NVIDIA’s founder and CEO Jensen Huang was clear that it’s a term he avoids, preferring to use product names.  NVIDIA’s processing technology, you see, is about much more than just graphics.

“NVIDIA is a data center company,” the usually hyperkinetic Jensen Huang said, very matter-of-factly to a room full of industry analysts. He went on to say that NVIDIA is “focusing on the big problems of data center scale computing,” and “delivering an accelerated computing platform.”

Jensen's vision aligns with where the world is going. Enterprise workloads are increasingly being enabled by AI technologies, and corporate data is being mined for insights by data center machine learning stacks. Edge compute and the rise of 5G are going to accelerate the need to deliver real-time analytics and insights, enabled by exactly the kinds of technologies that NVIDIA delivers.

Artificial intelligence, whether it is client-side inference, or driven by deep machine learning technologies, is the future of compute. It’s a future that runs on specialized hardware enabled by a consistent software stack.

Heading into the datacenter

NVIDIA built a machine learning supercomputer that it calls DGX. Jensen told the industry analysts at GTC that he really didn’t want to build the DGX, explaining that it was a big expensive project that the tier one OEMs would have been better at. The problem was he couldn’t find an OEM who shared his vision—so he built it himself. Then the OEMs came calling.

Machine learning in the data center requires a different way of thinking about data and storage. Machine learning pipelines are hungry beasts that need to be fed. NVIDIA recognized this and teamed with innovative players in the storage space to bring hyper-converged machine learning solutions to market, ready-made for the various OEMs' channel partners.

Pure Storage last year introduced its AIRI platform, coupling its high-performance FlashBlade with the DGX. IBM recently released its Spectrum Storage for AI, marrying IBM storage with NVIDIA's DGX series. Even Dell EMC partnered with the company, very recently announcing a coupling of NVIDIA's DGX with Dell EMC Isilon. There is no shortage of solutions.

NVIDIA's DGX is a stellar offering, but it's a very specialized piece of gear. To truly penetrate the data center, it's critical that the server OEM world is engaged and building products around the solution. To that end, NVIDIA introduced a number of server designs at GTC, all of which are being built by the server OEM community. Dell EMC, Hewlett Packard Enterprise, Inspur, Fujitsu, Sugon, and Lenovo are all building workstations and servers based around NVIDIA specifications and validation suites.

This is a critical step forward, not just for NVIDIA and the server supplier world, but for enterprise IT. The strength of these relationships allows enterprise buyers to trust the solutions that they are deploying for AI and ML workloads.

It’s all about that software

NVIDIA's machine learning success comes from the intelligent choices the company makes in enabling the software ecosystem. I don't know whether AI and machine learning were always part of NVIDIA's vision when it delivered CUDA to the world over a decade ago, or whether it was a happy accident. It's likely that the company's GPUs were simply well suited to solving machine learning problems, and CUDA gave computer scientists the right set of tools to exploit the raw capabilities.

It's hard to remember that far back, but NVIDIA's only real competition when it delivered CUDA was ATI, which had just been acquired by AMD. AMD seriously considered joining NVIDIA in enabling a CUDA ecosystem (I know this because I was on the AMD corporate strategy team at the time), but instead made the fateful decision to place its bets on OpenCL.

The machine learning world rallied around the CUDA ecosystem, allowing NVIDIA to dominate that space today. Not resting on that dominance, NVIDIA significantly stepped up its development tools offerings at GTC. NVIDIA released CUDA-X, which packages a number of enterprise-ready libraries with the CUDA ecosystem, for easy deployment in most of the popular machine learning frameworks.

Container vision

NVIDIA also enhanced RAPIDS, a suite of open-source software libraries for data science and analytics pipelines accelerated by CUDA. RAPIDS is well named, as it quickly enables enterprise-level analytics.
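
For readers who haven't touched RAPIDS, the appeal is that GPU-accelerated dataframes look nearly identical to the pandas code data teams already write. Below is a minimal, illustrative sketch, assuming a CUDA-capable GPU with the cuDF library installed; the file path and column names are hypothetical placeholders, not an NVIDIA example.

```python
# Illustrative RAPIDS/cuDF sketch -- assumes a CUDA-capable GPU with cuDF installed.
# The CSV path and column names are hypothetical placeholders.
import cudf

# Load a CSV directly into GPU memory, much like pandas.read_csv()
transactions = cudf.read_csv("transactions.csv")

# Filter and aggregate entirely on the GPU
high_value = transactions[transactions["amount"] > 1000]
totals_by_region = high_value.groupby("region")["amount"].sum()

# Pull the (small) result back to the CPU for reporting
print(totals_by_region.to_pandas())
```

The point isn't this particular query; it's that a pandas-style API lets existing analytics code move onto GPUs with little rewriting.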

Beyond CUDA and RAPIDS, NVIDIA is driving a vision of container-based workflows—where each step of an AI workflow is in a container that’s positioned where it needs to be in the data center. It’s not an accident that NVIDIA scooped Mellanox up from a pending Intel Corporation acquisition. NVIDIA sees flexible workload deployment as the optimal architecture for complex real-time AI and ML/DL workflows, a vision that relies on fast and reliable interconnects.
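
To make that container-based vision concrete, here is a hedged sketch of what scheduling one GPU-accelerated pipeline step might look like using the standard Kubernetes Python client and NVIDIA's device-plugin resource name. The container image and namespace are invented for illustration; this is not an NVIDIA-prescribed workflow.

```python
# Illustrative only: schedule one GPU-backed pipeline step as a Kubernetes pod.
# The container image and namespace are hypothetical; requires the `kubernetes` package.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig for cluster credentials

container = client.V1Container(
    name="inference-step",
    image="example.com/pipeline/inference:latest",  # hypothetical image
    resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="ai-pipeline-step"),
    spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```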

Beyond tools: jumpstarting solutions

It's a simple matter to deliver tools to the industry and hope that the tools are used to build something great. It's an order of magnitude harder to deliver entire pieces of a long-term vision in order to jump-start the future. NVIDIA has long been associated with autonomous cars. Its DRIVE program is a targeted set of technologies to help autonomous vehicle companies jump-start their development. GTC saw the introduction of a slew of new capabilities for DRIVE, including a hardware platform, new software tools, and DRIVE Constellation, which provides autonomous vehicle simulation. The icing on the cake for NVIDIA's DRIVE announcements was news of its collaboration with Toyota, which will leverage NVIDIA's DRIVE tools as it moves its autonomous vehicle program forward.

Autonomous cars are interesting, but more world-changing is the potential of deep learning in the medical industry. NVIDIA Clara is a set of targeted software tools, including training sets, that enable developers to build medical imaging workflows using NVIDIA CUDA-enabled processors to deliver AI-enabled medical diagnostics.

These are just a few examples of how NVIDIA is going deep with software to enable machine learning. There were also announcements and activities around robotics, embedded ML, real-time ray tracing, accelerated computing, design and visualization, and more.

Concluding thoughts

NVIDIA’s greatest contribution to machine learning and AI is not its GPUs, or even its software tools. NVIDIA has positioned itself as a lighthouse, illuminating its vision for how ML-driven AI can revolutionize every industry. Yes, build your solution with us, NVIDIA says, but let us show you the power of where it will all lead.

NVIDIA preaches a vision where AI and ML look at our world, interpret it, and help us make better sense of it all. This is true across the spectrum, from technologically interesting applications such as robotics and autonomous vehicles, to business-enhancing services such as autonomous customer relations. This all leads to humanity-impacting intelligent systems such as those that can look at your medical image and help you live a longer and healthier life.

NVIDIA is both ambitious and successful. I like where the company is going, and how it's taking us there. NVIDIA could easily relax and enjoy its near monopoly in machine learning, but instead Jensen has decided to push hard for an ambitious future. We're all just getting started in this space, and I'm happy with what NVIDIA is doing to move us forward.

Steve McDowell is a Moor Insights & Strategy Senior Analyst covering storage technologies.

Inside IBM's New FlashSystem 9100 https://moorinsightsstrategy.com/inside-ibms-new-flashsystem-9100/ Tue, 10 Jul 2018 05:00:00 +0000

It’s been a crowded season of storage industry announcements. It seemed like things were finally quieting down, but today IBM shook things up again with its announcement of a new end-to-end NVMe powerhouse storage solution, the IBM FlashSystem 9100. I’ve written previously about IBM’s current cadence of innovation. In addition to the new IBM FlashSystem 9100, IBM made several other announcements this week—all of which deliver on its data-centric strategy to flexibly put data where it is best served.

This data could be on-site, feeding traditional data processing, virtual machines, or containers. To that end, IBM announced it is stepping up its support of Docker and Kubernetes. This data could also live across a multi-cloud environment. On this front, IBM provides key cloud technologies and is now delivering blueprints for tighter integration. IBM has also updated its Storage Insights tool to support the new platforms. Storage Insights provides predictive analytics, storage resource management, and support integration.

The new IBM FlashSystem 9100

IBM has been a long-time proponent of NVMe in the storage world. Just over a year ago, in May 2017, IBM announced its intention to bring NVMe to the FlashSystem line. It has been executing on that plan, demonstrating NVMe in December 2017, and launching it in February 2018. Now, just over a year after its original announcement comes the FlashSystem 9100. Built around IBM’s own storage technology, the IBM FlashCore NVMe Module, the new FlashSystem is one of the fastest storage arrays in production.

The FlashSystem 9100 promises to deliver some of the lowest latencies in the industry—as low as 100 microseconds, half the latency of its nearest competitor. The throughput offered by the new products is equally impressive, with a maximum of 34GB/second on a single system, and an incredible 136GB/second and 10M IOPS in a four-way cluster. IBM claims that it is nearly two times faster than a Pure Storage //X90 single system, and nearly 7.5 times faster than an //X90 four-way cluster. If these numbers prove out, it's going to be a hard system to beat.

IBM is also upping its compression and data deduplication game. The new offering promises 5X data reduction, allowing for 2PB of usable capacity in a 2U form factor while scaling to 32PB in a single rack. That level of data reduction is at the high end of what's available in the industry today. I'm anxious to see if the promises hold up in real-world usage.
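
The arithmetic behind "effective" capacity claims is straightforward: usable capacity is raw flash multiplied by the data-reduction ratio. A back-of-the-envelope sketch follows, using only figures implied by the claims above; the raw-capacity number is my inference from those claims, not an IBM specification.

```python
# Back-of-the-envelope math behind effective-capacity claims.
# The raw-capacity figure is inferred from the "2PB usable at 5:1" claim, not an IBM spec.

def effective_capacity_tb(raw_tb: float, reduction_ratio: float) -> float:
    """Usable (effective) capacity given raw flash and a data-reduction ratio."""
    return raw_tb * reduction_ratio

raw_per_2u_tb = 400      # implied raw flash per 2U enclosure
reduction = 5            # the promised 5:1 data reduction

print(effective_capacity_tb(raw_per_2u_tb, reduction))  # 2000 TB, i.e. ~2PB usable per 2U
```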

At launch, the product will be available in two flavors, both sporting dual active controllers and shared NVMe FlashCore modules. The FS9110 offers dual 8-core processors with up to 1.5TB of cache per enclosure, while the higher-end FS9150 delivers greater performance with dual 14-core Intel processors. In short, these are powerful machines.

IBM Storage Insights

Storage in the modern software-defined datacenter requires intelligence. Computational workloads are fluid. Containers and virtualization enable the dynamic reconfiguration of workloads across the infrastructure. Data, at the same time, has gravity. This gravity makes it difficult to efficiently scale and migrate without intelligent tools in place to assist IT administrators.

Analytics-driven capabilities, like those found in IBM’s Storage Insights, bring intelligent management and flexibility to storage. Its predictive analytics allow preemptive problem solving and proactive scalability.  IBM has enhanced Storage Insights to support the functionality delivered by the new FlashSystem 9100.
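
As a deliberately simplified illustration of what "preemptive problem solving" can mean for storage, a predictive tool might extrapolate capacity consumption and warn administrators weeks before an array fills. The toy sketch below uses a plain linear fit over invented sample data; it is not how Storage Insights is actually implemented.

```python
# Toy capacity-forecast sketch -- not IBM's actual model; sample data is invented.
import numpy as np

days = np.arange(10)                                   # observation window (days)
used_tb = np.array([100, 104, 109, 113, 118, 122, 127, 131, 136, 140])
capacity_tb = 200                                      # total usable capacity

slope, intercept = np.polyfit(days, used_tb, 1)        # simple linear growth trend
days_until_full = (capacity_tb - intercept) / slope

print(f"Projected days until the array is full: {days_until_full:.0f}")
```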

IBM isn’t alone in offering AI-driven predictive analytics.  Hewlett Packard Enterprise provides its InfoSight capability on its Nimble and 3PAR arrays, while Dell EMC is incorporating AI-driven decision making directly into its new PowerMax platform. Pure Storage has its Pure1. IBM Storage Insights is hugely competitive against these offerings.

Clouds and containers

Cloud isn't replacing the enterprise data center, but it is becoming another facet of how IT delivers services. Building an infrastructure that is cloud-aware and enabled for the flexible integration of cloud-provided services and on-site capabilities is critical. IBM continues to excel in this category, offering an impressive array of data management functions through the IBM Spectrum software suite. Additionally, IBM is providing many blueprints that customers can follow to implement multi-cloud solutions for data reuse, protection, business continuity, and the like.

Containers, too, are becoming an increasingly common part of enterprise data center workflows. Supporting that trend, IBM is offering configuration blueprints and functionality within the Spectrum software suite to integrate seamlessly with Docker and Kubernetes environments. The approach of offering customers validated configurations and blueprints for cloud integration is becoming more common in the storage industry, and IBM is doing a good job here.

Concluding thoughts

IBM’s announcement of the FlashSystem 9100 comes on the heels of many other similar announcements from other vendors. At Dell Technologies World, Dell EMC released its PowerMax (which we’ve previously written about). At Pure Accelerate, Pure Storage released its new line-up of the Pure FlashArray //X (which we have also written about).

This week's announcements keep IBM Storage in the top tier of storage providers. IBM's Storage Insights capabilities continue the industry trend of building intelligence into infrastructure, something that will be required in the modern data center. IBM has continually demonstrated that it is a serious player in enterprise storage, and this week's news only serves to solidify that position.

Pure Storage Keeps Up Momentum With New Offerings https://moorinsightsstrategy.com/pure-storage-keeps-up-momentum-with-new-offerings/ Wed, 06 Jun 2018 05:00:00 +0000

Pure Storage has been an unlikely leader in the storage industry. It came out of seemingly nowhere, introducing the industry to the power of all-flash arrays. IT shops responded and began adopting Pure Storage’s all-flash solutions into their performance applications. The entire storage industry scrambled to respond to the threat of all-flash arrays, and over just the past two years, all-flash arrays have become a standard part of every vendor’s catalog. This industry is littered with stories of one-hit wonders who drive new technology into the market, only to be stomped upon and demolished. Pure Storage isn’t one of those companies. Pure has a second act.

Pure shook things up yet again with its very high-performance FlashBlade product and architecture. We've written about FlashBlade previously; it challenges the way the industry thinks about storage and high-performance computing. Pure Storage threw itself a party last month with an event called Pure//Accelerate. Accelerate celebrated Pure's achievements and demonstrated that the company isn't resting on its laurels. Pure is determined to keep the industry moving forward.

Democratizing NVMe

Storage is about balancing throughput with latency. The industry attacked both of these attributes with the introduction of NVMe, an interconnect technology that removes the overhead of the legacy disk subsystem from the data path, letting processors talk directly to flash memory. NVMe improves every aspect of disk subsystem performance for SSD-based storage systems.
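
One way to see why removing the legacy disk subsystem from the data path matters is to sketch the latency budget of a single flash read. The numbers below are rough illustrative assumptions, not measurements from any vendor:

```python
# Rough, illustrative latency budget for a single flash read (microseconds).
# All figures are assumptions for illustration, not benchmark results.

flash_media_read_us = 80        # time spent in the NAND media itself

legacy_stack_us = {             # SCSI/SAS-era software and controller overhead
    "scsi_software_stack": 25,
    "hba_and_protocol": 30,
}
nvme_stack_us = {               # NVMe speaks to flash over PCIe with a much thinner stack
    "nvme_software_stack": 5,
    "pcie_transfer": 10,
}

legacy_total = flash_media_read_us + sum(legacy_stack_us.values())
nvme_total = flash_media_read_us + sum(nvme_stack_us.values())
print(f"Legacy path: ~{legacy_total} us per read, NVMe path: ~{nvme_total} us per read")
```

The media time doesn't change; what NVMe shrinks is everything wrapped around it, which is why it helps across the whole product line rather than only at the top.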

NVMe is just beginning to take hold. Providers today are targeting the technology at the very high end of storage systems. Dell EMC announced its first NVMe-based storage platform, the PowerMax, last month at Dell Technologies World, dubbing it the "world's fastest storage array." A few weeks later, NetApp announced its own "world's fastest array," the AFF A800.

Pure Storage doesn't think about the market the same way its competitors do. Pure doesn't view NVMe as a performance enabler only at the high end of the storage stack. Instead, Pure will tell you it believes NVMe can change the performance equation for every storage array in production, and that every storage array should be "the world's fastest" within its class.

With this in mind, Pure Storage recently released an all-new series of all-NVMe arrays, called FlashArray//X, that spans the gamut of the performance-capacity curve from entry level to high performance. The //X10 and //X20 are entry-level arrays that can be configured with either SATA or NVMe. The //X50 and //X70 hit the sweet spot of mid-range storage, scaling to 650TB and 1PB effective capacity, respectively. The FlashArray//X90 is the beast of the bunch, providing up to 3PB effective capacity in 6U with more than double the performance of its predecessor. It's an excellent line-up. More impressively, Pure Storage claims there is zero premium for adopting NVMe in its arrays; the price per byte doesn't budge.

Accompanying the new arrays is an updated management suite, Pure1. Pure1 supplements its existing predictive analytics and management capabilities with new functionality for workload planning, workload analytics with full-stack visibility into VMware vSphere environments, and new abilities to manage fleets of storage arrays across the enterprise. Pure is keeping up with, and in some cases beating, the competition with its management suite.

Pure Storage released its first NVMe-enabled flash array in 2017.  Less than a year later, NVMe is enabled across its line.  No other vendor has moved this quickly. This is a serious wake-up call for Pure’s competitors.

A hybrid world

Pure Storage stands with NetApp as one of the two strongest pure-play providers in the storage industry. This leadership position forces a reliance on strong partnerships to compete with full-stack providers such as Hewlett Packard Enterprise, Dell EMC, and IBM.

The tier-one technology providers are moving the world towards solutions that benefit the portfolios offered by those same technology providers. It’s becoming a software-defined world, full of convergence and composability. Servers, storage, and networking are becoming interconnected in ways that border on the proprietary.

The reality is that the future of enterprise data is one of hybrid solutions. Private and public cloud will co-exist. Hyper-converged infrastructure will sit next to traditional direct attached storage. Emergent workloads will appear, such as enterprise AI and machine learning, which resist traditional architectures.

It is in this hybrid world that the pure-play storage vendors will excel. Their success will depend on sound strategy, and sounder partnerships. The tier-one providers are in no danger of fading away, and solutions from each will co-exist with the other. However, there is an opportunity for the pure-play providers to make a difference.

New solutions

Pure Storage announced several full-stack solutions at Pure//Accelerate that illustrate its resourcefulness in competing in this new world. The AIRI Mini shrinks the AIRI architecture engineered jointly by Pure Storage and NVIDIA, which we wrote about in April, into a more accessible configuration for data scientists. The AIRI Mini combines two NVIDIA DGX-1 compute servers with the power of Pure Storage FlashBlade to deliver two petaFLOPS of deep-learning performance, making it a converged solution targeted at deep learning.

In this same converged vein, Pure Storage unveiled FlashStack CI for Oracle, which marries its Cisco-powered FlashStack convergence bundle with the ability to efficiently run and manage Oracle instances. The solution includes LicenseFortress capabilities to manage Oracle licenses and copy automation tools that reduce Oracle copy times by up to 90%.

Lastly, Pure is the first storage vendor we're aware of to react with a new offering to the new financial accounting guidelines that will change how IT leases affect corporate budgeting. To oversimplify, beginning in January 2019, equipment leases will no longer be treated as a budget-friendly operating expense (OpEx), but rather as a capital expense (CapEx) with wide-ranging impact. In practical terms, your accounting department is going to start paying a lot more attention to your equipment leases.

Pure Storage has responded to this new burden for enterprise IT with the introduction of its on-premises storage-as-a-service, which it is dubbing the Pure Evergreen Storage Service (ES2).  ES2 is a simple concept: Pure Storage will install its equipment on a customer site, manage that equipment, and allow the customer access to data on a pay-as-you-go basis. ES2 provides the performance and security benefits of on-site leased equipment, with the budget-friendliness and manageability offered by a cloud services model. It’s a great solution.
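
To make the consumption-model point concrete, the difference is largely about when money is spent rather than how much. A simplified, hypothetical comparison follows; every price and capacity figure is invented for illustration and says nothing about actual ES2 pricing.

```python
# Hypothetical comparison of an upfront array purchase vs. pay-as-you-go storage.
# All prices and capacities are invented for illustration; not actual ES2 pricing.

months = 36
purchase_price = 500_000                  # one-time CapEx for a fixed-size array
rate_per_tb_month = 30                    # assumed consumption price ($/TB per month)

# Usage that grows from 200 TB toward ~500 TB over three years
usage_tb = [200 + month * 8.5 for month in range(months)]

pay_as_you_go_total = sum(tb * rate_per_tb_month for tb in usage_tb)

print(f"Upfront purchase: ${purchase_price:,.0f} in month one")
print(f"Pay-as-you-go:    ${pay_as_you_go_total:,.0f} spread over {months} months")
```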

Leaders and followers

Less than a year ago, there was a swirl of rumors that Pure Storage was going to be acquired. Nimble Storage had just given up the fight and sold itself to Hewlett Packard Enterprise. EMC was a year into its new marriage with Dell. The age of the stand-alone storage company looked perilous. Those rumors have since quieted.

It is not easy being a pure-play technology provider; history hasn't been kind to that business model. There is no question that Pure Storage will face challenges competing against its larger rivals. It will be increasingly dependent on strong partnerships, a technology vision that exceeds the industry average, and the fearlessness to take on seemingly insurmountable obstacles. So far, Pure has shown that it has all of those things.

The technology industry has always been a balance of leaders, pushing the boundaries of the status quo, and followers, who defend their entrenched positions before finally following those leaders. Pure Storage brought the storage world kicking and screaming into the age of the all-flash array. Now it is doing it again with NVMe and parallel storage. I have no idea where it will all end up, but it’s fun to watch.
