Storage Unpacked Podcast

A weekly podcast on deploying and managing enterprise storage and data

  • 48 minutes 3 seconds
    Storage Unpacked 264 – Hitachi Vantara Infrastructure-as-a-Service

    In this episode, Chris is in conversation with Jeb Horton, SVP Global Services at Hitachi Vantara, discussing the capabilities of Hitachi Vantara’s Global Services offerings, which deliver infrastructure management and infrastructure as a service to its customers.

    In addition to EverFlex, Hitachi Vantara has a long history of managed services capabilities that span more than just outsourced storage. As Jeb explains, the company also manages storage infrastructure from other vendors, in addition to non-storage systems.

    The interesting aspect of this discussion is the complex nature of the interaction between customers and Hitachi. Service offerings aren’t merely “transactional”, but have a human aspect and are tailored to meet the specific goals of the customer. This conversation explores some of the nuances of working with customers to transfer the burden of infrastructure management to Hitachi, enabling businesses to focus on more strategic opportunities.

    To learn more about Hitachi Vantara check out the Infrastructure as a Service section on the Hitachi website – https://www.hitachivantara.com/en-us/services/infrastructure-as-a-service.

    Elapsed Time: 00:48:02

    Timeline

    • 00:00:00 – Intros
    • 00:01:43 – What is “Infrastructure as a Service”?
    • 00:03:25 – What else do customers want from a service (other than cost saving)?
    • 00:05:20 – Public cloud has increased the appetite for service-based consumption
    • 00:06:24 – What is the core of the Hitachi Vantara services offering?
    • 00:07:14 – Hitachi added automation into a “services platform”
    • 00:10:26 – The human aspect involves skills but also relationships
    • 00:12:20 – A service contract involves a detailed commercial model
    • 00:13:51 – Service also means service levels and agreements
    • 00:16:53 – Cloud is transactional, what is Hitachi’s “value add”?
    • 00:19:45 – Data has value, which is the focus of service offerings
    • 00:22:26 – How does Hitachi help government institutions?
    • 00:26:50 – What sort of data issues does Hitachi deal with?
    • 00:28:33 – Data and AI will be a key issue to manage
    • 00:30:40 – How does the engagement process work with Hitachi (and what is EverFlex)?
    • 00:37:15 – What are real-world examples of Hitachi customers and requirements?
    • 00:46:51 – Wrap Up

    Copyright (c) 2016-2024 Unpacked Network. No reproduction or re-use without permission. Podcast episode #4d3x

    22 November 2024, 12:00 pm
  • 39 minutes 59 seconds
    Storage Unpacked 263 – The HYCU State of SaaS Data Resilience Report 2024

    In this recording, Chris talks to Subbiah Sundaram, SVP of Products at HYCU, Inc. about the 2024 edition of the HYCU State of SaaS Data Resilience Report. The report surveys customers to understand the gaps in perceived and actual data protection for SaaS platforms and the results are quite surprising. Subbiah walks through the top four findings, covering the understanding of the pervasive nature of SaaS in modern business, perceptions of data protection and the unexpected risks created by SaaS platforms.

    HYCU provides a robust and comprehensive approach to SaaS data protection, called R-Graph, part of R-Cloud. We’ve covered these products in previous podcasts, shown in the related content section below. We recommend downloading the report, which can be found here – The State of SaaS Data Resilience in 2024. Details on R-Graph can be found here – R-Graph.

    Elapsed Time: 00:39:59

    Timeline

    • 00:00:00 – Intros
    • 00:01:39 – What is the SaaS Resiliency Report for 2024?
    • 00:02:23 – There are over 35,000 global SaaS applications
    • 00:04:11 – SaaS has become embedded in business process
    • 00:05:02 – Businesses underestimate SaaS applications by 10x
    • 00:06:29 – Businesses don’t realise SaaS data isn’t protected like on-premises
    • 00:09:20 – 61% of data breaches occur through SaaS platforms
    • 00:13:40 – Businesses assume cloud platforms protect their data
    • 00:15:18 – The reasons for data restoration are multi-fold and business related
    • 00:17:47 – 75% of critical infrastructure (identity management) was not being protected
    • 00:19:21 – All credentials management systems operate slightly differently
    • 00:22:30 – Business process creates historical security exceptions
    • 00:26:09 – Use R-Graph to discover your application dependencies
    • 00:27:42 – Protect your identity management systems
    • 00:31:09 – R-Cloud enables anyone to add data integrations for backup
    • 00:32:26 – Protect your endpoints, protect your data, protect your customer data
    • 00:34:18 – Where does SaaS data protection go next? Tracking behaviour
    • 00:37:14 – R-Cloud can be used for cross-environment data seeding
    • 00:39:12 – Wrap Up

    Copyright (c) 2016-2024 Unpacked Network. No reproduction or re-use without permission. Podcast episode #vcxz

    4 November 2024, 12:00 pm
  • 52 minutes 17 seconds
    Storage Unpacked 262 – The Ethics and Regulation of AI

    In this podcast episode, Chris is in conversation with Jeffries Briginshaw (Head of EMEA Government Relations at NetApp) and Adam Gale (CTO for AI & Cyber Security, NetApp) discussing the EU AI Act and the regulation of artificial intelligence across the world. The EU AI Act is an early step in regulating the use of AI by businesses in their engagements and interactions with customers. As explained in this conversation, there are classifications of AI types and, within those, restrictions on what businesses are permitted to implement based on those categorisations. Some AI usage will be banned, while other uses will require human intervention and close monitoring.

    How should your business engage with AI and ensure compliance with the act? Listen to the discussion for more details. As mentioned in the recording, for details on what NetApp can offer, point your favourite browser to https://www.netapp.com/artificial-intelligence/ to learn more.

    Elapsed Time: 00:52:17

    Timeline

    • 00:00:00 – Intros
    • 00:01:19 – Why should we be regulating AI?
    • 00:02:30 – What will the impacts of AI be on personal and work life?
    • 00:03:55 – What if we get regulation wrong?
    • 00:05:30 – What happens if AI goes wrong, such as data poisoning?
    • 00:09:04 – Existing EU/UK law has been successful at regulation (GDPR)
    • 00:10:25 – What is the EU AI Act?
    • 00:11:46 – “Prohibited Practices” will be banned from 2025
    • 00:14:00 – How will the use of AI in business be regulated?
    • 00:18:05 – The EU AI Act appears to focus on protection for individuals
    • 00:20:56 – EU citizens are broadly positive about AI – if it is successfully regulated
    • 00:21:52 – Compliance has an overhead – in terms of hard costs (developers)
    • 00:25:20 – What are the penalties for not complying with the EU AI Act?
    • 00:29:50 – What about the rest of the world – the US and elsewhere?
    • 00:35:10 – Could we see “cross-border” complexity?
    • 00:37:40 – What are the technology implications for AI regulation?
    • 00:40:07 – Should businesses be demonstrating their AI compliance?
    • 00:44:03 – What does NetApp offer customers to help AI compliance?
    • 00:47:38 – AI will require a “big red stop button”
    • 00:50:00 – Wrap Up

    Copyright (c) 2016-2024 Unpacked Network. No reproduction or re-use without permission. Podcast episode #dfsx

    18 October 2024, 11:00 am
  • 38 minutes 33 seconds
    Storage Unpacked 261 – Pure Storage Platform Announcements at Accelerate 2024 (Sponsored)

    In this podcast episode, Chris discusses the platform update announcements from Pure Accelerate 2024 with Prakash Darji, VP and GM of the Digital Experience BU at Pure Storage. The new features focus on usability and operational enhancements, including AI-based features and support for AI workloads. Highlighted in this discussion are:

    • Fusion automation enhancements for fleet management and individual arrays
    • New Generative AI Copilot for storage to provide querying capabilities and advice
    • Evergreen//One for AI – an AI-level tier of Storage-as-a-Service
    • NVIDIA SuperPOD Ethernet Certification
    • Secure application Workspaces using Portworx
    • Cyber Recovery and Resilience SLAs
    • Security Assessment SLA
    • AI-Powered Anomaly Detection enhancements
    • Site Rebalance SLA
    • AI-Powered Reserve Expansion recommendations

    As the list shows, there are lots of new updates to make the management and operation of a Pure Storage fleet easier and more efficient. As Prakash explains the reasoning behind the features, it is clear that AI is being used to deliver simplicity, while the platform will provide support for customers wanting to build AI-focused workloads.

    To learn more, follow the news from Pure Accelerate 2024 here (link). Prakash mentions two blog posts, which can be found here – Ransomware is a Darwinian Problem That Will Never Be Solved and Editorial: Why Centralised Storage Refuses to Go Away.

    Elapsed Time: 00:38:33

    Timeline

    • 00:00:00 – Intros
    • 00:00:51 – It’s not all about AI!
    • 00:01:34 – What changes have been announced to the Pure Storage platform?
    • 00:02:37 – New features include cybersecurity enhancements and simplicity of management
    • 00:03:30 – How do we manage systems at scale?
    • 00:04:27 – Applications need policy management
    • 00:05:08 – Fusion has been enhanced to enable array or fleet management at the same time
    • 00:08:10 – Pure is introducing a GenAI Copilot in preview
    • 00:12:19 – Evergreen now has an AI storage-as-a-service tier
    • 00:14:00 – Pay for performance and capacity is a feature of Evergreen
    • 00:15:55 – SuperPod certification for Ethernet is coming to Pure Storage arrays
    • 00:16:40 – There must be many Jensen clones
    • 00:18:27 – Pure is introducing secure application workspaces using Portworx
    • 00:22:32 – New cybersecurity features include a security assessment for configuration settings
    • 00:23:19 – There is also a security SLA for fixing and certificating security settings
    • 00:24:01 – The AI Copilot will also recommend security improvements
    • 00:24:32 – Anomaly detection is now performance-based, looking at typical profiles
    • 00:30:45 – Reserve expansion recommendation is now AI-powered
    • 00:31:55 – Reserve commit across sites can now be rebalanced once per year
    • 00:33:40 – It’s easy for storage to become fragmented between sites
    • 00:36:27 – When will the new features be made available?
    • 00:37:45 – Wrap Up

    Copyright (c) 2016-2024 Unpacked Network. No reproduction or re-use without permission. Podcast episode #

    19 June 2024, 1:00 pm
  • 15 minutes 29 seconds
    Storage Unpacked 260 – Hitachi VSP One Updates with Dan McConnell

    In this podcast episode, Chris catches up with Dan McConnell, Senior VP for Product Management at Hitachi Vantara. The company recently announced VSP One Block, a new mid-range appliance for block storage. This follows on from two product announcements in April, which we covered in this Research Note, and the restructuring of Hitachi Vantara announced towards the end of last year (see this Research Note).

    Dan discusses VSP One Block, an appliance that targets mid-range storage requirements. He also covers VSP One SDS, a software-defined solution which runs in AWS and on-premises. The third product announcement covers file, with VSP One File, the latest iteration of the technology that came from the BlueArc acquisition over a decade ago.

    You can find out more about the Block Storage Appliance here (link and here). Details on the VSP One SDS announcement can be found here (link), which includes details on VSP One File.

    Elapsed Time: 00:15:29

    Timeline

    • 00:00:00 – Intros
    • 00:01:18 – April 2024 announcement – VSP One SDS & VSP File
    • 00:02:00 – Hitachi block products use SVOS
    • 00:03:13 – VSP One SDS is scale-out
    • 00:03:51 – VSP File is the evolution of previous file-based products
    • 00:05:16 – The VSP One family introduces consistent management & hybrid support
    • 00:06:24 – EverFlex introduces multiple consumption models
    • 00:08:30 – VSP Block 20 is the next generation mid-range storage array
    • 00:10:10 – Dynamic Carbon Reduction optimises power usage by workload demand
    • 00:12:03 – What comes next?
    • 00:13:15 – Cloud storage products shouldn’t be a “lift and shift”
    • 00:14:47 – Wrap Up

    Copyright (c) 2016-2024 Unpacked Network. No reproduction or re-use without permission. Podcast episode #4dcx

    14 June 2024, 11:00 am
  • 52 minutes 8 seconds
    Storage Unpacked 259 – Sustainable Storage in the World of AI with Shawn Rosemarin (Sponsored)

    In this episode, Chris discusses the topic of building sustainable storage solutions with Shawn Rosemarin, Global VP of Customer Engineering at Pure Storage. AI and specifically Generative AI (GenAI) has become a hot topic over the past 12 months. Businesses are looking at projects to use AI internally for productivity gains, but also to drive additional business.

    However, AI is still relatively expensive and requires huge volumes of training data. Training is an ongoing process that must react to changes in the data landscape, such as rights and permissions, and government regulation. With AI hardware being so expensive, it’s important to get the storage piece right, and that means having a scalable and cost effective solution. Shawn details how Pure Storage has focused on two aspects. First, the hardware, where DFMs (direct flash modules) have reached 75TB, with commitments to deliver 150TB and 300TB drives in the next few years. Second, the software management capability delivered through Purity, the operating system of Pure Storage hardware.

    It’s clear that building cost and power-efficient flash devices will be a challenge for the wider industry, where the focus lies with consumer devices. Pure Storage believes it is well positioned to help customers and potentially hyper-scalers in their goals to deliver efficient storage for AI.

    As Shawn highlights, this topic and more will be discussed at Pure Accelerate, to be held in Las Vegas from 18-21 June 2024. Check out the website where you can learn more.

    Elapsed Time: 00:52:08

    Timeline

    • 00:00:00 – Intros
    • 00:01:44 – We’ve been quiet on the topic of AI
    • 00:03:10 – AI has become cost-effective (sort of)
    • 00:04:00 – Efficient AI is a 10-15 year journey
    • 00:05:22 – AI technology needs to be efficient due to the resource demands
    • 00:06:41 – Data is currently growing at 30% per annum
    • 00:07:31 – Early mover may not be the best move with AI
    • 00:08:16 – 149 foundational models were released in 2023
    • 00:09:10 – Businesses will want to merge public and private data
    • 00:10:40 – Results accuracy is super-important
    • 00:13:30 – Trusted AI will be adopted in areas like security & vehicle evasive manoeuvres
    • 00:15:10 – Where will AI models be developed?
    • 00:16:37 – Model retraining will be required due to changing data ownership & permissions
    • 00:18:30 – Model training also needs to be resource efficient
    • 00:19:49 – $100 million to do the basic training of an AI model
    • 00:22:26 – How do you feed GPUs with adequate data to run at 100%?
    • 00:24:10 – Edge devices could be used for AI processing
    • 00:25:18 – How will data centres need to evolve for AI?
    • 00:28:08 – Sustainability, regulation and jobs will all be issues in AI deployment
    • 00:31:05 – With HPC, many users built bespoke systems and that’s a problem for AI
    • 00:33:45 – How will businesses “industrialise” their AI projects?
    • 00:36:46 – Storage density will help resolve the operational issues of AI storage
    • 00:38:19 – SSD vendors’ main market is 2TB consumer SSDs
    • 00:39:32 – 300TB drives are great, but how will software manage the hardware?
    • 00:41:51 – Pure Storage DFMs will grow exponentially in capacity
    • 00:42:43 – Hardware engineering is cool again!
    • 00:45:15 – How will the hyper-scalers deal with massive storage growth?
    • 00:51:30 – Wrap Up

    Copyright (c) 2016-2024 Unpacked Network. No reproduction or re-use without permission. Podcast episode #xs2w

    31 May 2024, 11:00 am
  • 47 minutes 27 seconds
    Storage Unpacked 258 – Introducing Infinidat G4, InfuzeOS 8 and InfiniSafe ACP (Sponsored)

    In this episode, Chris talks to Infinidat CMO, Eric Herzog. Infinidat has announced one of the biggest upgrades in eight years, with the release of InfiniBox and InfiniBox SSA G4, the fourth generation of enterprise-class storage. Accompanying the new hardware is an upgrade to InfuzeOS, the Infinidat storage operating system, and a new feature for InfiniSafe – Automated Cyber Protection, or ACP.

    Infinidat has upgraded both the InfiniBox and InfiniBox SSA platforms with a generation 4 release that includes a switch to AMD processors. Using the EPYC 9554P enables Infinidat to use a single-socket design, while gaining from the move to DDR5 system memory and PCIe 5.0 I/O. The savings to the customer are space, power and cooling. The AMD move also enables Infinidat to release new hardware configurations, including a 14U rack-mount solution for edge data centres, rather than just the custom rack used to ship existing products.

    InfuzeOS gains an upgrade to version 8, with support for InfuzeOS in the public cloud on Microsoft Azure (AWS was announced last year). The new hardware and software improvements result in a 2x performance gain for customers. One final announcement covers InfiniSafe and the ability to automate snapshots through the integration of cyber-detection technology with InfiniBox and InfiniBox SSA. Customers can now automate the creation of immutable snapshots if their SIEM or SOAR platform detects malicious activity. This capability reduces the size of the threat window and the potential volume of data needing recovery, should a breach occur.

    There’s a lot more detail in the podcast, so go ahead and listen! For more information on any of the announcements in this podcast episode, visit https://www.infinidat.com/.

    Elapsed Time: 00:47:27

    Timeline

    • 00:00:00 – Intros
    • 00:01:00 – What’s new with Infinidat? G4 hardware, InfuzeOS updates and InfiniSafe ACP
    • 00:01:40 – G4 – a new platform, both hybrid & all-flash, using AMD processors
    • 00:02:50 – Processor choice is available, but software is the key
    • 00:04:00 – InfuzeOS 8.0 is compatible with previous hardware generations
    • 00:04:40 – How has the physical specification of systems changed?
    • 00:07:15 – 14U option now available for use in standard racks
    • 00:10:00 – Why new form factor? Increased TAM
    • 00:11:00 – New controller upgrade programme introduced – Mobius
    • 00:12:15 – In-place upgrades are more practical with flash systems
    • 00:15:29 – What is InfiniVerse?
    • 00:18:25 – Fleet Management is now table stakes – and a differentiator
    • 00:20:17 – InfuzeOS is now available in AWS and Azure
    • 00:24:45 – Why use a cloud SDS solution – portability
    • 00:27:09 – InfiniSafe – what is Automated Cyber Protection?
    • 00:29:02 – Guaranteed immutable snapshots & recovery times
    • 00:33:39 – Dynamic snapshots based on threat identification reduce the threat window
    • 00:37:00 – ACP provides a holistic approach to data security
    • 00:40:41 – InfiniSafe cyber-protection now scans VMware virtual machine datastores
    • 00:44:00 – What is the availability of all the new offerings?
    • 00:45:30 – Live demos and Webinars are coming over the next few months
    • 00:46:42 – Wrap Up

    Copyright (c) 2016-2024 Unpacked Network. No reproduction or re-use without permission. Podcast episode #xs2w

    22 May 2024, 1:00 pm
  • 49 minutes 53 seconds
    Storage Unpacked 257 – The Future of Data Storage in the Enterprise (Sponsored)

    In this sponsored episode, Chris talks to Fred Lherault and Larry Touchette from Pure Storage on the evolution of storage in the enterprise and the impacts on storage administration. The conversation is divided into three areas focusing on the customer, the administrator and the business.

    From the customer’s perspective, the requirements of on-premises data centre storage have changed significantly. Users expect resources to be deployed on demand, using APIs, CLIs or a GUI, without the intervention of a storage administrator. The self-service aspect is also aligned with 100% availability, an expectation that has evolved from the public cloud. End users have less interest in the hardware itself, but instead focus on metrics (IOPS, latency, throughput) and see storage as an endpoint to be consumed.

    The role of the storage administrator has evolved to be one similar to that of a product manager. The administration role is much more focused on ensuring storage is available and operating efficiently, rather than on the mundane task of provisioning resources. This means keeping close control on capacity growth, upgrades and patching.

    For the business, costs and efficient consumption models are key. With 30-40% annual growth in consumed terabytes, year-on-year costs need to decline, while systems must become more power, space and cooling efficient. Pure Storage has introduced Pure1 and Fusion, tools for the business and administrators to ensure that the storage infrastructure operates efficiently and meets the SLAs expected by internal customers.

    During the discussion, we highlight Pure Storage’s annual user conference, Accelerate, which will take place in Las Vegas between June 18th and 21st. Here is a list of some useful related content that discusses the evolution of storage in the data centre.

    Elapsed time: 00:49:53

    Timeline

    • 00:00:00 – Intros
    • 00:02:25 – How has storage management changed over the last two decades?
    • 00:03:07 – What are the modern storage requirements of enterprise customers?
    • 00:04:32 – The speed and agility of the public cloud is driving on-premises expectations
    • 00:06:40 – There is a mix of customer maturity in the enterprise
    • 00:09:48 – Customers expect less focus on hardware and more on metrics of delivery
    • 00:12:00 – Sustainability – including power costs – is increasingly important to customers
    • 00:13:05 – Automation – via GUI, API and CLI – is expected, to reduce delivery times
    • 00:14:51 – Businesses expect 100% uptime, with no downtime requirement for upgrades
    • 00:17:01 – Storage “arrays” are now virtual, as data outlives the hardware
    • 00:18:46 – Do storage administrators now have an easier job?
    • 00:20:33 – Pure Storage takes some of the admin burden off the customer
    • 00:22:13 – Admins need to manage infrastructure, while providing access to the technology
    • 00:24:52 – How do businesses manage the financial demands of growing storage needs?
    • 00:27:27 – Modern consumption models are driven by architectural features
    • 00:29:16 – Pure Storage has operational processes to manage customer on-demand consumption
    • 00:33:15 – Efficient resource management is analogous to retail stock control
    • 00:34:00 – Pure1 and analytics tools provide the capability to efficiently model workload placement
    • 00:36:53 – Modern storage has many internal management functions that need AI/ML planning
    • 00:39:00 – So what should storage vendors be delivering, as minimum functional requirements?
    • 00:40:39 – Pure hardware and software is intrinsically linked
    • 00:41:45 – As flash improves, vendors like Pure can address many more performance & cost use cases
    • 00:45:00 – Pure systems started at 5.5TB, now into multi-petabytes
    • 00:48:10 – Wrap Up

    Copyright (c) 2016-2024 Unpacked Network. No reproduction or re-use without permission. Podcast episode #khv9.

    26 April 2024, 11:00 am
  • 33 minutes 9 seconds
    Storage Unpacked 256 – Hyper-scalers and SAS with Rick Kutcipal

    In this episode, Chris chats to Rick Kutcipal, “At-Large Director” with the SCSI Trade Association. The topic of conversation is the adoption of SAS media (both HDDs and SSDs) by hyper-scale customers that include public cloud vendors and companies such as Meta. Market perception implies that NVMe-based drives are taking over the world, but that’s far from the truth. As Rick explains, some 90% of exabytes shipped on SSDs and HDDs are still using the SAS interface. SAS scales much better (in terms of drives in systems) than NVMe, while offering a competitive price point when looking at “slot cost”.

    There’s a lot of detail to digest in this discussion. It touches on some novel HDD features, for example Depop and Command Duration Limits. What is clear from the conversation is the longevity of SAS into the future, even as the transition to flash-based media continues.

    To learn more about the SCSI Trade Association, check out their website at https://www.snia.org/groups/sta-forum. You can also find them on LinkedIn – here.

    Elapsed Time: 00:33:09

    Timeline

    • 00:00:00 – Intros
    • 00:01:30 – Hyper-scalers are big users of SAS devices
    • 00:02:55 – Refresher – What are SAS and SATA?
    • 00:04:55 – What are storage requirements for Hyper-scalers?
    • 00:05:40 – Requirements differ by area (engineers, operations and architects)
    • 00:07:15 – Small percentage savings make a big difference to Hyper-scalers
    • 00:08:45 – SAS scales to thousands of drives, with built-in management
    • 00:10:00 – Certain features have been added specifically for Hyper-scalers
    • 00:12:10 – I/O density continues to decline with HDD capacity increases
    • 00:13:55 – Drive systems can be a mix of NVMe and SAS/SATA drives
    • 00:15:30 – Reliability is critical, to avoid data centre interventions
    • 00:18:15 – Scale is only achievable with SAS
    • 00:19:35 – The supplanting of HDDs by SSDs is debatable
    • 00:21:30 – Large-scale SSDs are seeing the same issue as large HDDs
    • 00:23:30 – Tiering will continue to be important within the storage industry
    • 00:25:00 – Shipped exabytes still show that 90% remains on SAS infrastructure
    • 00:27:50 – Power comparisons between SSD and HDD are not clear cut
    • 00:29:35 – Hyper-scalers focus on “slot cost”
    • 00:30:30 – Which businesses are using SAS solutions? Meta
    • 00:31:55 – Wrap Up

    Related Podcasts & Blogs

    • #238 – SAS 24GB+ Updates with Rick Kutcipal
    • #74 – All About Serial Attached SCSI with Rick Kutcipal

    Copyright (c) 2016-2024 Unpacked Network. No reproduction or re-use without permission. Podcast episode #fr3a.

    23 February 2024, 12:00 pm
  • 31 minutes 37 seconds
    Storage Unpacked 254 – Announcing VSP One and Hitachi Vantara Reorganisation with Gary Lyng

    In this live episode, recorded at Hitachi Exchange in Paris, Chris chats to Gary Lyng, VP of Products and Solutions at Hitachi Vantara. The company recently announced the VSP One platform, plus some organisational changes that will take Hitachi Vantara back to a focus on core infrastructure. This recording dives into the strategy behind VSP One, before new products and services follow in 2024.

    We covered the VSP One announcement and reorganisation details in a blog post, available here – https://www.architecting.it/blog/hitachi-vsp-one/

    More information on Hitachi Vantara is available through our X-Ray eBook (subscription required).

    Elapsed Time: 00:31:37

    Timeline

    • 00:00:00 – Intros
    • 00:01:50 – The 100% availability guarantee is 20+ years old
    • 00:02:40 – What is the VSP One announcement?
    • 00:04:10 – With many silos, infrastructure has become complex
    • 00:07:05 – The current announcement is a strategy, products due in 2024
    • 00:08:10 – Modern storage requirements have evolved
    • 00:09:55 – Reliability, consistency and availability are key attributes of modern systems
    • 00:13:30 – GenAI and analytics are driving data volumes
    • 00:14:55 – Can data be culled or at least tidied?
    • 00:15:50 – Humans like to keep “stuff”
    • 00:19:45 – Modern IT systems can never be offline
    • 00:21:20 – Cloud now has high performance instances that can build virtual SANs
    • 00:24:20 – DLM is back, but in a new way, as data value can increase over time
    • 00:25:05 – Keeping data “forever” doesn’t really mean forever
    • 00:27:00 – Hitachi Vantara has restructured to move capabilities to Hitachi Digital Services
    • 00:30:50 – Wrap Up

    Copyright (c) 2016-2023 Unpacked Network. No reproduction or re-use without permission. Podcast episode #r4dx.

    10 November 2023, 12:00 pm