Gestalt IT

The On-Premise IT Roundtable gathers the best independent enterprise IT voices, puts them around a table, and gets them talking about a single topic. It breaks down the traditional IT silos, taking on topics from across the isolated realms of servers, networking, storage, cloud, and mobility.

  • AI Doesn’t Make App Dev Any Better

    Generative AI is transforming many industries where people create content. Software development is no different; AI agents are in almost every development platform. But is AI improving application development and software quality? This episode of the Tech Field Day Podcast looks at some of the issues revolving around AI and App Dev with Alastair Cooke, Guy Currier, Jack Poller, and Stephen Foskett. The ultimate objective of a software development team is to deliver an application that fulfills a business need and helps the organization be more successful. An AI that can recommend basic code snippets doesn’t move that needle far. More sophistication is needed to get value from AI in the development process. The objective should be to have AI handle the repetitive tasks and allow humans to focus on innovative tasks where generative AI is less capable. AI agents must handle building tests and reviewing code for security and correctness to enable developers to concentrate on building better applications that help organizations.

    Apple Podcasts | Spotify | Overcast | Amazon Music | YouTube Music | Audio

    Learn more about AppDev Field Day 2 on the Tech Field Day website, including information on presenting companies, delegates, analysts, and more.

    AI Doesn’t Make App Dev Any Better

    The ultimate objective of a software development team is to deliver an application that fulfils a business need and helps the organization be more successful. An AI that can recommend basic code snippets doesn’t move that needle far. More sophistication is needed to get value from AI in the development process. The objective should be to have AI handle the repetitive tasks and allow humans to focus on innovative tasks where generative AI is less capable. A vital first step is making the AI aware of the unique parts of the organization where it is used, such as its standards, existing applications, and data. A human developer becomes more effective as they learn more about the team and organization where they work, and the same is true of an AI assistant.

    One of the ways AI can be used to improve software development is in data normalization, taking a diverse set of data and presenting it in a way that allows simple access to that data. An example is a data lake with social media content, email archives, and copies of past transactions from our sales application, all in one place. An AI tool can read the unstructured social media posts and emails and present them as more structured data for SQL-based querying. Handling these types of low-precision data is an ideal generative AI task; reporting on the exact data in the sales records is not somewhere we want hallucinations. Generative AI might also be great for working out my address from my vague description rather than demanding that I enter my street address and postcode precisely as they are recorded in the postal service database.
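
    As a rough sketch of this normalization pattern, and not a description of any specific product, the Python below assumes a hypothetical extract_record() helper standing in for an LLM call; it turns free-form posts and emails into rows that can then be queried with ordinary SQL via SQLite.

    ```python
    import sqlite3

    def extract_record(text: str) -> dict:
        """Hypothetical wrapper around an LLM call that turns unstructured
        text (a social post or email) into structured fields. A real
        implementation would prompt a model to return JSON; here we return
        a canned example so the sketch runs on its own."""
        return {"author": "unknown", "sentiment": "positive",
                "product": "widget", "source_text": text}

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE feedback (author TEXT, sentiment TEXT, product TEXT, source_text TEXT)")

    unstructured = [
        "Loving the new widget, shipping was fast!",
        "Email: the widget arrived broken, please advise.",
    ]
    for text in unstructured:
        rec = extract_record(text)
        conn.execute("INSERT INTO feedback VALUES (:author, :sentiment, :product, :source_text)", rec)

    # Low-precision, AI-normalized data can now sit alongside exact sales
    # records and be queried with plain SQL.
    for row in conn.execute("SELECT sentiment, COUNT(*) FROM feedback GROUP BY sentiment"):
        print(row)
    ```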

    Software testing is another place where AI assistants or agents can help by taking care of routine and tedious tasks. Testing every new feature is essential to automating software development and deployment, but writing tests is much less satisfying than writing new features. An AI agent that creates the tests from a description of how the feature should work is a massive help to a developer and ensures code quality through good test coverage. Similarly, AI-based code review can reduce the effort required to ensure new developers write good code and implement new features well. Reviews for style, correctness, and security are all critical for software quality. Both testing and code review are vital parts of good software development and take considerable developer effort. Reducing these tedious tasks would leave more time for developers to work on innovation and align better with business needs.
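
    To make the idea concrete, here is the kind of pytest-style test suite an agent might generate from a one-line feature description such as “discounts over 50% require manager approval”; the apply_discount() function and its behaviour are hypothetical, included only to show the shape of generated tests.

    ```python
    import pytest

    # Hypothetical application code the feature description refers to.
    def apply_discount(price: float, percent: float, manager_approved: bool = False) -> float:
        if percent > 50 and not manager_approved:
            raise PermissionError("Discounts over 50% require manager approval")
        return round(price * (1 - percent / 100), 2)

    # Tests an AI agent might generate from the feature description.
    def test_small_discount_applies_without_approval():
        assert apply_discount(100.0, 10) == 90.0

    def test_large_discount_requires_manager_approval():
        with pytest.raises(PermissionError):
            apply_discount(100.0, 60)

    def test_large_discount_allowed_when_approved():
        assert apply_discount(100.0, 60, manager_approved=True) == 40.0
    ```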

    The challenge with AI agents and assistants is that we don’t yet trust the results and still need a human to review any changes proposed by the AI. Tabnine reports that up to 50% of the changes suggested by its AI are accepted without modification. That leaves 50% of suggestions that aren’t wholly acceptable, and the acceptance rate must be much higher before an AI can operate without human oversight. Ideally, the AI could identify which changes are likely to be accepted and flag a confidence rating. Over time, we might set a confidence threshold below which human review is required. Similarly, we might take a manufacturing approach to code reviews and tests: allow the AI to operate autonomously and spot-check the resulting code every ten or one hundred changes.
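
    A minimal sketch of this gating idea, assuming each suggestion arrives with a self-reported confidence score; the threshold, the sampling interval, and the Suggestion shape are illustrative rather than any vendor’s actual API.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Suggestion:
        change_id: int
        confidence: float  # 0.0 - 1.0, self-reported by the AI

    CONFIDENCE_THRESHOLD = 0.9   # below this, always ask a human
    SAMPLE_EVERY_N = 100         # spot-check even high-confidence changes

    def needs_human_review(s: Suggestion) -> bool:
        if s.confidence < CONFIDENCE_THRESHOLD:
            return True
        # Manufacturing-style sampling: audit every Nth auto-accepted change.
        return s.change_id % SAMPLE_EVERY_N == 0

    queue = [Suggestion(i, conf) for i, conf in enumerate([0.95, 0.62, 0.99], start=1)]
    for s in queue:
        action = "route to human review" if needs_human_review(s) else "auto-accept"
        print(f"change {s.change_id}: confidence={s.confidence:.2f} -> {action}")
    ```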

    Podcast Information:

    Alastair Cooke is a Tech Field Day Event Lead, now part of The Futurum Group. You can connect with Alastair on LinkedIn or on X/Twitter and you can read more of his research notes and insights on The Futurum Group’s website.

    Guy Currier is the VP and CTO of Visible Impact, part of The Futurum Group. You can connect with Guy on X/Twitter and on LinkedIn. Learn more about Visible Impact on their website. For more insights, go to The Futurum Group’s website.

    Jack Poller is an industry-leading cybersecurity analyst and Founder of Paradigm Technica. You can connect with Jack on LinkedIn or on X/Twitter. Learn more on Paradigm Technica’s website.

    Stephen Foskett is the Organizer of the Tech Field Day Event Series, now part of The Futurum Group. Connect with Stephen on LinkedIn or on X/Twitter.

    Thank you for listening to this episode of the Tech Field Day Podcast. If you enjoyed the discussion, please remember to subscribe on YouTube or your favorite podcast application so you don’t miss an episode and do give us a rating and a review. This podcast was brought to you by Tech Field Day, home of IT experts from across the enterprise, now part of The Futurum Group.

    © Gestalt IT, LLC for Gestalt IT: AI Doesn’t Make App Dev Any Better

    12 November 2024, 3:00 pm
  • AI is the Enabler of Network Innovation

    Artificial Intelligence is creating the kind of paradigm shifts not seen since the cloud revolution. Everyone is changing the way their IT infrastructure operates in order to make AI work better. In this episode of the Tech Field Day Podcast, Tom Hollingsworth is joined by John Freeman, Scott Robohn, and Ron Westfall as they discuss how AI is driving innovation in the networking market. They talk about how the toolsets are changing to incorporate AI features as well as how the need to push massive amounts of data into LLMs and generative AI constructs is creating opportunities for companies to show innovation. They also talk about how Ethernet is becoming ascendant in the AI market.

    Apple Podcasts | Spotify | Overcast | Amazon Music | YouTube Music | Audio

    Learn more about Networking Field Day 36 and the presenting companies on the Tech Field Day website.

    AI is the Enabler of Network Innovation

    Artificial Intelligence is creating the kind of paradigm shifts not seen since the cloud revolution. Everyone is changing the way their IT infrastructure operates in order to make AI work better. In this episode of the Tech Field Day Podcast, Tom Hollingsworth is joined by John Freeman, Scott Robohn, and Ron Westfall as they discuss how AI is driving innovation in the networking market. They talk about how the toolsets are changing to incorporate AI features as well as how the need to push massive amounts of data into LLMs and generative AI constructs is creating opportunities for companies to show innovation. They also talk about how Ethernet is becoming ascendant in the AI market.

    Modern network operations and engineering teams have a bevy of tools they need to leverage like Python, GitHub, and cloud platforms. AI is just another one of those tools, such as using natural language conversational interfaces to glean information from a dashboard. This can also be seen in the way that AI is having a societal impact on the way that we live and work. The move toward incorporating AI into every aspect of software can’t help but sweep up networking as well.

    Large amounts of data are being sent to large language models (LLMs) for storage and processing. Much like the big data craze of years gone by, we’re pushing more and more information into systems that will operate on it to discover context and meaning. Even more pressing than before, however, is the need to deliver that data to the AI compute clusters that perform the operations. The idea of data gravity is lost when the AI clusters have an even stronger pull. That means the network must be optimized more than ever before.

    Ethernet is quickly becoming the preferred alternative to traditional InfiniBand. While InfiniBand retains clear advantages in some use cases, its dominance is waning as Ethernet fabrics gain ground in performance. When you add in the ease with which Ethernet can scale to hundreds of thousands of nodes, you can see why providers, especially those offering AI-as-a-Service, would prefer to install Ethernet today instead of spending money on a technology that has an uncertain future.

    Lastly, we discuss what happens if the AI bubble finally bursts and what may drive innovation in the market from there. This isn’t the first time that networking has faced a shift in the drivers of feature development. It wasn’t that long ago that OpenFlow and SDN were the hottest ticket around and everything was going to be running in software sooner or later. While that trend has definitely cooled, we now see the benefits of the innovation it spurred and how we can continue to create value even if the primary driver for that innovation is now a footnote.

    Podcast Information:

    Tom Hollingsworth is the Networking Analyst for The Futurum Group and Event Lead for Tech Field Day. You can connect with Tom on LinkedIn and X/Twitter. Find out more on his blog or on the Tech Field Day website.

    Ron Westfall is the Research Director at The Futurum Group, specializing in Digital Transformation, 5G, AI, Security, Cloud Computing, IoT, and Data Center, as well as the host of The 5G Factor webcast. You can connect with Ron on LinkedIn and on X/Twitter and see his work on The Futurum Group’s website.

    Scott Robohn is the VP of Technology at Cypress Consulting and Cofounder of Network Automation Forum. You can connect with Scott on X/Twitter or on LinkedIn. Learn more on the Network Automation Forum.

    John Freeman is an equity analyst at Ravenswood Partners. Connect with John on LinkedIn and learn more about him over on Ravenswood Partners’ website.

    Thank you for listening to this episode of the Tech Field Day Podcast. If you enjoyed the discussion, please remember to subscribe on YouTube or your favorite podcast application so you don’t miss an episode and do give us a rating and a review. This podcast was brought to you by Tech Field Day, home of IT experts from across the enterprise, now part of The Futurum Group.

    © Gestalt IT, LLC for Gestalt IT: AI is the Enabler of Network Innovation

    5 November 2024, 3:00 pm
  • Edge Computing is a Melting Pot of Technology

    Edge computing is one of the areas where we see startup vendors offering innovative solutions, enabling applications to operate where the business operates rather than where the IT team sit. This episode of the Tech Field Day podcast focuses on the melting pot of edge computing and features Guy Currier, John Osmon, Ivan McPhee, and host Alastair Cooke, all of whom attended the recent Edge Field Day in September. To accommodate the unique nature of the diverse and unusual locations where businesses operate, many different technologies are brought together to form the melting pot of edge computing. Containers and AI applications are coming from the massive public cloud data centres to a range of embedded computers on factory floors, industrial sites, and farm equipment. ARM CPUs, sensors, and low-power hardware accelerators are coming from mobile phones to power applications in new locations. Enterprise organizations must still control and manage data and applications across these locations and platforms. Security must be built into the edge from the beginning; edge computing often happens in an unsecured location and often with no human oversight. This melting pot of technology and innovation makes edge computing an innovative part of IT.

    Apple Podcasts | Spotify | Overcast | Amazon Music | YouTube Music | Audio

    Edge Computing is a Melting Pot of Technology

    The edge computing landscape sometimes feels like a cross between the public cloud and ROBO, yet edge computing is neither of these things. The collection of unique drivers bringing advanced applications and platforms to ever more remote locations requires a unique collection of capabilities. Edge computing is a melting pot of existing technologies and techniques, with innovation filling the gaps to bring real business value. 

    The original AI meme application, Hotdog or Not, has become a farming application, Weed or Crop. An AI application runs on a computer equipped with cameras and mounted to a tractor as it drives down the rows in a field, identifying whether the plants it sees are the desired crop or an undesirable weed. The weeds get zapped with a laser, so there is no need for chemical weed killers as the tractor physically targets individual pest plants. The AI runs on a specialized computer designed to survive hostile conditions on a farm, such as dust, rain, heat, and cold. The tractor needs some of the capabilities of a mobile phone, connectivity back to a central control and management system, plus operation on a limited power supply. Is there enough power to run an NVIDIA H100 GPU on the tractor? I doubt it. This Weed or Crop AI must run on a low-power accelerator on the tractor. Self-driving capabilities get melted into the solution; a tractor that drives itself can keep roaming the field all day. Freed from the limitations of a human driver, the tractor can move slower and may even use solar power for continuous operation.

    There is an argument that the edge is the same as the cloud, a tiny cloud located where the data is generated and a response is required. This often has a foundation in attempts to solve edge problems by being cloud-first and reusing cloud-native technologies at edge locations. From the broader business perspective, cloud and edge are implementation details for gaining insight, agility, and profit. The implementation details are very different. Simply lifting methodologies and technologies from a large data centre and applying them to every restaurant in your burger chain is unlikely to end well. Containerization of applications has also been seen as a cloud technology that is easily applied to the edge. Containers are a great way to package an application for distribution, and the edge is a very distributed use case. At the edge, we often need these containers to run on small and resource-limited devices. Edge locations usually have little elasticity, which is a core feature of public cloud infrastructure. Container orchestration must be lightweight and self-contained at the edge. Management through a cloud service is good, but disconnected operation is essential.

    Surprisingly, edge locations also lack the ubiquitous connectivity part of the NIST cloud definition. Individual edge sites seldom have redundant network links and usually have low-cost links with low service levels. Applications running at an edge location must be able to operate when there is no off-site network connectivity. The edge location might be a gas station operating in a snowstorm; the pumps must keep running even if the phone lines are down. This is closer to the use case of a laptop user, where the device may be disconnected and IT support is usually remote. Device fleet management is essential for edge deployments. A thousand retail locations will have more than a thousand computers, so managing the fleet through policies and profiles is far better than managing devices one by one.
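
    As an illustration of policy-based fleet management that tolerates disconnected sites, the sketch below uses a hypothetical push_policy() routine and a stand-in reachability check; it is not modeled on any particular edge platform.

    ```python
    import json
    from pathlib import Path

    # Hypothetical policy profile applied to every site in a retail fleet,
    # rather than configuring each device one by one.
    POLICY = {"log_level": "warn", "app_version": "2.4.1", "offline_grace_hours": 72}

    def site_reachable(site: str) -> bool:
        # Stand-in for a real health check; assume a snowstorm at site-042.
        return site != "site-042"

    def push_policy(site: str, policy: dict, cache_dir: Path) -> str:
        """Try to deliver the policy; an unreachable site keeps running
        from its locally cached last-known-good copy."""
        cache = cache_dir / f"{site}.json"
        if site_reachable(site):
            cache.write_text(json.dumps(policy))
            return "updated"
        return "offline - using cached policy" if cache.exists() else "offline - no policy yet"

    cache_dir = Path("edge-policy-cache")
    cache_dir.mkdir(exist_ok=True)
    for site in ["site-001", "site-042", "site-777"]:
        print(site, "->", push_policy(site, POLICY, cache_dir))
    ```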

    Security at the edge also differs from data centre and cloud security; edge locations seldom have physical security controls. Even staff working for minimum wage at these locations may not be trusted. The idea of zero trust gets melted into many edge computing solutions: every part of the device and application startup is validated to ensure nothing has been tampered with or removed. Zero trust may extend to the device’s supply chain when it is sent to the edge location. Many edge platform vendors pride themselves on the ability of an untrained worker to deploy the device at the edge, a long way from the safe-hands deployments we see in public cloud and enterprise data centres.

    Edge computing has a unique set of challenges that demand multiple technologies combined in new ways to fulfil business requirements. This melting pot of technologies is producing new solutions and unlocking value in new use cases. 

    Podcast Information

    Alastair Cooke is a Tech Field Day Event Lead, now part of The Futurum Group. You can connect with Alastair on LinkedIn or on X/Twitter and you can read more of his research notes and insights on The Futurum Group’s website.

    John Osmon is a consultant and a network designer / coordinator. You can connect with John on Twitter or on LinkedIn and check out his writing on Miscreants in Action.

    Guy Currier is the VP and CTO of Visible Impact, part of The Futurum Group. You can connect with Guy on X/Twitter and on LinkedIn. Learn more about Visible Impact on their website. For more insights, go to The Futurum Group’s website.

    Ivan McPhee is a Senior Security and Networking Analyst at GigaOm. You can connect with Ivan on LinkedIn and on X/Twitter. You can learn more about his work on the GigaOm website.

    Thank you for listening to this episode of the Tech Field Day Podcast. If you enjoyed the discussion, please remember to subscribe on YouTube or your favorite podcast application so you don’t miss an episode and do give us a rating and a review. This podcast was brought to you by Tech Field Day, home of IT experts from across the enterprise, now part of The Futurum Group.

    © Gestalt IT, LLC for Gestalt IT: Edge Computing is a Melting Pot of Technology

    29 October 2024, 2:00 pm
  • There are Too Many Clouds

    Public cloud computing is a large part of enterprise IT alongside on-premises computing. Many organizations that had a cloud-first approach are now gaining value from on-premises private clouds and seeing their changing business needs lead to changing cloud use. This episode of the Tech Field Day podcast delves into the complexity of multiple cloud providers and features Maciej Lelusz, Jack Poller, Justin Warren, and host Alastair Cooke, all attendees at Cloud Field Day. The awareness of changing business needs is causing some re-thinking of how businesses use cloud platforms, possibly moving away from cloud-vendor-specific services to bare VMs. VMs are far simpler to move from one cloud to another, or between public cloud and private cloud platforms. Over time, the market will speak, and if there are too many cloud providers, we will see mergers, acquisitions, or failures of smaller specialized cloud providers. In the meantime, choosing where to put which application for the best outcome can be a challenge for businesses.

    Apple Podcasts | Spotify | Overcast | Amazon Music | YouTube Music | Audio

    There are Too Many Clouds

    Public cloud computing is a large part of enterprise IT alongside on-premises computing. Many organizations that had a cloud-first approach are now gaining value from on-premises private clouds and seeing their changing business needs lead to changing cloud use. Whether it is a return to on-premises private clouds or moving applications between cloud providers, mobility and choice are important for accommodating changing needs.

    In the early days of public cloud adoption, on-premises cloud was more of an aspiration than a reality. Over the years, private cloud has become a reality for many organisations, even if the main service delivered is a VM rather than rich application services. If VMs are the tool of mobility between public clouds, then VMs are quite sufficient for mobility to private clouds. The biggest challenge in private cloud is that VMware by Broadcom has refocussed and repriced the most common private cloud platform. The change provides an opportunity for VMware to prove its value and for competing vendors to stake their claim to a large on-premises virtualization market.

    Beyond the big three, four, or five public cloud providers, there is a plethora of smaller public clouds that offer their own unique value. Whether it is DigitalOcean with an easy consumption model or OVH jumping into the GPU-on-demand market for AI training, there is a public cloud platform for many different specialised use cases. Each cloud provider makes a large up-front investment in its platform, its technology, and often its real estate. That investment only generates a return for the founders if the market adopts their services; if it doesn’t, the provider’s lifespan is very finite. Sooner or later, the market will drive towards a sustainable population of cloud providers delivering the services that help their clients.

    One challenge to using multiple clouds is that there is little standardization of services across clouds. In fact, public cloud providers aim to lock customers into their cloud by providing unique features and value. The unique value may be in providing developer productivity or in offering unique software licensing opportunities. Anywhere a business uses this unique cloud value to provide business value, the cost of leaving that specific cloud provider increases. There is an argument that using the lowest common denominator of cloud, the virtual machine or container, is a wise move to allow cloud platform choice. A database server in a VM is much easier to move between clouds than migrating from one cloud’s managed database service to a different provider. If the ability to do cloud arbitrage is important, then you need your applications to be portable and not locked to one cloud platform by its unique features and value.

    Whether there are too many clouds is a matter of perspective and opinion. Time will tell whether there are too many cloud providers and whether standardization of cloud services will evolve. Right now, some companies will commit to a single cloud provider and seek to gain maximum value from that one cloud, while other companies play the field and seek to gain separate value from each cloud. We are certainly seeing discussions about private cloud as an option for many applications, and concern as the incumbent primary provider changes its approach. Will we see more clouds over time or fewer?

    Podcast Information

    Alastair Cooke is a Tech Field Day Event Lead, now part of The Futurum Group. You can connect with Alastair on LinkedIn or on X/Twitter and you can read more of his research notes and insights on The Futurum Group’s website.

    Justin Warren is the Founder and Chief Analyst at PivotNine. You can connect with Justin on X/Twitter or on LinkedIn. Learn more on PivotNine’s website. See Justin’s website to read more.

    Jack Poller is an industry-leading cybersecurity analyst and Founder of Paradigm Technica. You can connect with Jack on LinkedIn or on X/Twitter. Learn more on Paradigm Technica’s website.

    Maciej Lelusz is the Founder & CEO of evoila Poland. You can connect with Maciej on LinkedIn and on Twitter. Learn more about him on his website.

    Thank you for listening to this episode of the Tech Field Day Podcast. If you enjoyed the discussion, please remember to subscribe on YouTube or your favorite podcast application so you don’t miss an episode and do give us a rating and a review. This podcast was brought to you by Tech Field Day, home of IT experts from across the enterprise, now part of The Futurum Group.

    © Gestalt IT, LLC for Gestalt IT: There are Too Many Clouds

    22 October 2024, 2:00 pm
  • You Don’t Need Post-Quantum Crypto Yet

    With the advent of quantum computers, there is a real possibility that modern encryption will be invalidated. New standards from NIST have arrived that usher in the post-quantum era. You don’t need to implement them yet, but you do need to be familiar with them. Tom Hollingsworth is joined by Jennifer Minella, Andrew Conry-Murray, and Alastair Cooke in this episode to discuss why post-quantum algorithms are needed, why you should be readying your enterprise to use them, and how best to plan your implementation strategy.

    Apple Podcasts | Spotify | Overcast | Amazon Music | YouTube Music | Audio

    You Don’t Need Post-Quantum Crypto Yet

    With the advent of quantum computers, there is a real possibility that modern encryption will be invalidated. New standards from NIST have arrived that usher in the post-quantum era. You don’t need to implement them yet, but you do need to be familiar with them. Tom Hollingsworth is joined by JJ Minella, Drew Conry-Murray, and Alastair Cooke in this episode to discuss why post-quantum algorithms are needed, why you should be readying your enterprise to use them, and how best to plan your implementation strategy.

    The physics behind quantum computers may be complicated, but the implications for RSA-based cryptography are easy to figure out. Once these computers reach a level of processing power and precision that allows them to quickly factor very large numbers, the current methods of encryption key generation will be invalidated. That means that any communication using RSA-style keys will be vulnerable.
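
    As a toy illustration of why factoring matters, the sketch below builds an RSA-style key pair from deliberately tiny primes (nothing like real key sizes) and shows that anyone who can factor the public modulus can reconstruct the private key.

    ```python
    # Toy RSA with tiny primes to show why fast factoring breaks it.
    p, q = 61, 53
    n = p * q                      # public modulus
    phi = (p - 1) * (q - 1)
    e = 17                         # public exponent
    d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

    message = 42
    ciphertext = pow(message, e, n)
    assert pow(ciphertext, d, n) == message   # normal decryption works

    # An attacker who can factor n recovers p and q, then derives d.
    recovered_d = pow(e, -1, (p - 1) * (q - 1))
    assert pow(ciphertext, recovered_d, n) == message
    print("factoring n =", n, "recovers the private key:", recovered_d == d)
    ```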

    Thankfully, the tech industry has known about this for years. The push to have NIST standardize new encryption algorithms has been under way for several years. The candidates were finalized in mid-2024, and we’re already starting to see companies adopting them. This is encouraging because it means we will be familiar with the concepts behind these methods before the threshold is reached that forces us to use the new algorithms.

    Does this mean that you need to move away from using traditional RSA methods today? No, it doesn’t. What it does mean is that you need to investigate the new NIST standards and understand when and how they can be implemented in your environment and whether or not any additional hardware will be needed to support that installation.

    As discussed, the time to figure this out is now. You have a runway to get your organization up to speed on these new technologies without the pain of a rushed implementation. Quantum computers may not be ready to break encryption today, but the rate at which they are improving means it is only a matter of time before you will need to switch over to prevent chaos with your encrypted data and communications.

    Podcast Information:

    Tom Hollingsworth is the Networking Analyst for The Futurum Group and Event Lead for Tech Field Day. You can connect with Tom on LinkedIn and X/Twitter. Find out more on his blog or on the Tech Field Day website.

    Alastair Cooke is a Tech Field Day Event Lead, now part of The Futurum Group. You can connect with Alastair on LinkedIn or on X/Twitter and you can read more of his research notes and insights on The Futurum Group’s website.

    Jennifer “JJ” Minella is Founder and Principal Advisor of Network Security at Viszen Security and the co-host of Packet Protector for Packet Pushers. You can connect with JJ on X/Twitter or on LinkedIn. Learn more about her on her website.

    Drew Conry-Murray is the Content Director at Packet Pushers Interactive and host of the Network Break podcast. You can connect with Drew on X/Twitter or on LinkedIn. Learn more about Packet Pushers on their website.

    Thank you for listening to this episode of the Tech Field Day Podcast. If you enjoyed the discussion, please remember to subscribe on YouTube or your favorite podcast application so you don’t miss an episode and do give us a rating and a review. This podcast was brought to you by Tech Field Day, home of IT experts from across the enterprise, now part of The Futurum Group.

    © Gestalt IT, LLC for Gestalt IT: You Don’t Need Post-Quantum Crypto Yet

    15 October 2024, 2:00 pm
  • Network Automation Is More Than Just Tooling

    The modern enterprise network automation strategy is failing. This is due in part to a collection of tools masquerading as an automation solution. In this episode, Tom Hollingsworth is joined by Scott Robohn, Bruno Wollmann, and special guest Mike Bushong of Nokia to discuss the current state of automation in the data center. They discuss how tools are often improperly incorporated as well as why organizations shouldn’t rely on just a single person or team to effect change. They also explore ideas around Nokia Event-Driven Automation (EDA), a new operations platform dedicated to solving these issues.

    Apple Podcasts | Spotify | Overcast | Amazon Music | YouTube Music | Audio

    Network Automation is More Than Just Tooling

    For most enterprises, the focus on “work reduction” in automation projects has a very short lifespan. As soon as people are satisfied that they have saved themselves some time in their daily work, they have a hard time translating that into a more strategic solution. Stakeholders want automation to save time and money, not just make someone’s job easier.

    Also at stake is the focus on specific tools instead of platforms. Tools can certainly make things easier, but there is very little integration between them. This means that when a new task needs to be automated or a new department wants to integrate with the system, more work is required for the same level of output. Soon, the effort that goes into maintaining the automation code exceeds that of the original task that was supposed to be automated.

    The guests in this episode outline some ideas that can help teams better take advantage of automation, such as ensuring the correct focus is on the end goal and not just the operational details of the work being done. They also discuss Nokia Event-Driven Automation (EDA), which is a new operations platform that helps reimagine how data center network operations should be maintained and executed. The paradigm shift under the hood of Nokia EDA can alleviate a lot of the issues that are present in half-hearted attempts at automation and lead to better network health and more productive operations staff.

    Podcast Information:

    Tom Hollingsworth is a Networking and Security Specialist at Gestalt IT and Event Lead for Tech Field Day. You can connect with Tom on LinkedIn and X/Twitter. Find out more on his blog or on the Tech Field Day website.

    Scott Robohn is the VP of Technology at Cypress Consulting and Cofounder of Network Automation Forum. You can connect with Scott on X/Twitter or on LinkedIn. Learn more on the Network Automation Forum.

    Bruno Wollmann is a Network Architect and Networking Expert. You can connect with Bruno on LinkedIn and learn more about him on his website.

    Mike Bushong is the Vice President of Data Center at Nokia. You can connect with Mike on X/Twitter or on LinkedIn. Learn more about Nokia on their website.

    Thank you for listening to this episode of the Tech Field Day Podcast. If you enjoyed the discussion, please remember to subscribe on YouTube or your favorite podcast application so you don’t miss an episode and do give us a rating and a review. This podcast was brought to you by Tech Field Day, home of IT experts from across the enterprise, now part of The Futurum Group.

    © Gestalt IT, LLC for Gestalt IT: Network Automation Is More Than Just Tooling

    8 October 2024, 2:00 pm
  • Data Infrastructure Is A Lot More Than Storage

    The rise of AI and the importance of data to modern businesses have driven us to recognize that data matters, not storage. This episode of the Tech Field Day podcast focuses on AI data infrastructure and features Camberley Bates, Andy Banta, David Klee, and host Stephen Foskett, all of whom will be attending our AI Data Infrastructure Field Day this week. We’ve known for decades that storage solutions must provide the right access method for applications, not just performance, capacity, and reliability. Today’s enterprise storage solutions have specialized data services and interfaces to enable AI workloads, even as capacity has been driven beyond what we’ve seen in the past. Power and cooling are another critical element, since AI systems are optimized to make the most of expensive GPUs and accelerators. AI also requires extensive preparation and organization of data as well as traceability and records of metadata for compliance and reproducibility. Another question is interfaces, with modern storage turning to object stores or even vector database interfaces rather than traditional block and file. AI is driving a profound transformation of storage and data.

    Infrastructure Beyond Storage

    The rise of AI has fundamentally shifted the way we think about data infrastructure. Historically, storage was the primary focus, with businesses and IT professionals concerned about performance, capacity, and reliability. However, as AI becomes more integral to modern business operations, it’s clear that data infrastructure is about much more than just storage. The focus has shifted from simply storing data to managing, accessing, and utilizing it in ways that support AI workloads and other advanced applications.

    One of the key realizations is that storage, in and of itself, is not the end goal. Data is what matters. Storage is merely a means to an end, a place to put data so that it can be accessed and used effectively. This shift in perspective has been driven by the increasing complexity of AI workloads, which require not just vast amounts of data but also the ability to access and process that data in real-time or near real-time. AI systems are highly dependent on the right data being available at the right time, and this has led to a rethinking of how data infrastructure is designed and implemented.

    In the past, storage systems were often designed with a one-size-fits-all approach. Whether you were running a database, a data warehouse, or a simple file system, the storage system was largely the same. But AI has changed that. AI workloads are highly specialized, and they require storage systems that are equally specialized. For example, AI systems often need to access large datasets quickly, which means that traditional storage systems that rely on spinning disks, or even on slower classes of SSD, may not be sufficient. Instead, AI systems are increasingly turning to high-performance storage solutions that can deliver the necessary bandwidth and low latency.

    Moreover, AI workloads often require specialized data services that go beyond simple storage. These include things like data replication, data reduction, and cybersecurity features. AI systems also need to be able to classify and organize data in ways that make it easy to access and use. This is where metadata management becomes critical. AI systems need to be able to track not just the data itself but also the context in which that data was created and used. This is especially important for compliance and reproducibility, as AI systems are often used in regulated industries where traceability is a legal requirement.

    Another important aspect of AI data infrastructure is the interface between the storage system and the AI system. Traditional storage systems often relied on block or file-based interfaces, but AI systems are increasingly turning to object storage or even more specialized interfaces like vector databases. These new interfaces are better suited to the needs of AI workloads, which often involve large, unstructured datasets that need to be accessed in non-linear ways.
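
    To show how a vector-style interface differs from block or file access, here is a minimal in-memory sketch, not any particular product: documents are stored as embeddings and retrieved by similarity rather than by path or block address. The three-dimensional vectors are placeholder embeddings; a real system would generate them with a model.

    ```python
    import math

    # Toy in-memory "vector store": data is addressed by similarity,
    # not by file path or block number.
    store = []  # list of (document, embedding) pairs

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    def add(doc, embedding):
        store.append((doc, embedding))

    def query(embedding, k=1):
        ranked = sorted(store, key=lambda item: cosine(item[1], embedding), reverse=True)
        return [doc for doc, _ in ranked[:k]]

    # Placeholder embeddings standing in for model output.
    add("GPU cluster cooling guide", [0.9, 0.1, 0.0])
    add("Quarterly sales report",    [0.1, 0.8, 0.3])
    add("Switch firmware notes",     [0.2, 0.1, 0.9])

    print(query([0.85, 0.15, 0.05]))   # -> ['GPU cluster cooling guide']
    ```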

    Power and cooling are also critical considerations in AI data infrastructure. AI systems are highly resource-intensive, particularly when it comes to GPUs and other accelerators. These systems generate a lot of heat and consume a lot of power, which means that the data infrastructure supporting them needs to be optimized for energy efficiency. This has led to a shift away from traditional spinning disks, which consume a lot of power, and towards more energy-efficient storage solutions like SSDs and even tape for long-term storage.

    The rise of AI has also blurred the lines between storage and memory. With the advent of technologies like CXL (Compute Express Link), the distinction between memory and storage is becoming less clear. AI systems often need to access data so quickly that traditional storage solutions are not fast enough. In these cases, data is often stored in memory, which offers much faster access times. However, memory is also more expensive and less persistent than traditional storage, which means that data infrastructure needs to be able to balance these competing demands.

    In addition to the technical challenges, AI data infrastructure also needs to address the growing need for traceability and compliance. As AI systems are increasingly used to make decisions that impact people’s lives, whether in healthcare, finance, or other industries, there is a growing need to be able to trace how those decisions were made. This requires not just storing the data that was used to train the AI system but also keeping detailed records of how that data was processed and used. This is where metadata management becomes critical, as it allows organizations to track the entire lifecycle of the data used in their AI systems.

    In conclusion, AI is driving a profound transformation in the way we think about data infrastructure. Storage is no longer just about performance, capacity, and reliability. It’s about managing data in ways that support the unique needs of AI workloads. This includes everything from specialized data services and interfaces to energy-efficient storage solutions and advanced metadata management. As AI continues to evolve, so too will the data infrastructure that supports it, and organizations that can adapt to these changes will be well-positioned to take advantage of the opportunities that AI presents.

    Apple Podcasts | Spotify | Overcast | Amazon Music | YouTube Music | Audio

    Learn more about AI Data Infrastructure Field Day 1 on the Tech Field Day website. Watch the event live on LinkedIn or on Techstrong TV.

    Podcast Information:

    Stephen Foskett is the Organizer of the Tech Field Day Event Series, now part of The Futurum Group. Connect with Stephen on LinkedIn or on X/Twitter.

    Camberley Bates is the VP and Practice Lead at The Futurum Group. You can connect with Camberley on LinkedIn and her podcast Infrastructure Matters through The Futurum Group.

    Andy Banta is a consultant at MagnitionIO and a storage expert promoting simplicity and economy. You can connect with Andy on X/Twitter or on LinkedIn. Learn more about Andy on his Substack.

    David Klee is the Founder at Heraflux Technologies. You can connect with David on X/Twitter or on LinkedIn. Learn more about David on his personal website or about Heraflux Technologies on their website.

    Thank you for listening to this episode of the Tech Field Day Podcast. If you enjoyed the discussion, please remember to subscribe on YouTube or your favorite podcast application so you don’t miss an episode and do give us a rating and a review. This podcast was brought to you by Tech Field Day, home of IT experts from across the enterprise, now part of The Futurum Group.

    © Gestalt IT, LLC for Gestalt IT: Data Infrastructure Is A Lot More Than Storage

    1 October 2024, 2:00 pm
  • AI and Cloud Demand a New Approach to Cyber Resilience featuring Commvault

    As companies are exposed to more and more attackers, they’re realizing that cyber resilience is increasingly important. On this episode of the Tech Field Day Podcast, presented by Commvault, Senior Director of Product and Ecosystem Strategy Michael Stempf joins Justin Warren, Karen Lopez, and Stephen Foskett to discuss the growing challenges companies face in today’s cybersecurity landscape. As more organizations transition to a cloud-first operation, they’re recognizing the heightened exposure of their data protection strategies to global compliance mandates like DORA and SOCI. Adding to this complexity is the emerging threat of AI, raising important questions about how businesses can adapt and maintain resilience in the face of these evolving risks.

    Apple Podcasts | Spotify | Overcast | Amazon Music | YouTube Music | Audio

    Cyber Resilience in the Cloud-First World

    In today’s rapidly evolving cybersecurity landscape, companies are increasingly recognizing the importance of cyber resilience, especially as they transition to cloud-first operations. The shift to cloud environments has exposed organizations to new risks, including compliance mandates like DORA and SOCI, which require more stringent data protection strategies. Additionally, the rise of AI introduces further complexities, as businesses must now consider how AI can both enhance and threaten their cybersecurity efforts. The conversation around cyber resilience is no longer just about preventing attacks but ensuring that organizations can recover quickly and effectively when breaches inevitably occur.

    One of the key challenges in achieving cyber resilience is the lack of a clear, standardized definition of what it means to be resilient in the face of cyber threats. Unlike disaster recovery, which has well-established methodologies, cyber resilience is still a moving target. The nature of cyberattacks, which are often malicious and unpredictable, makes it difficult to apply traditional disaster recovery strategies. For example, while a natural disaster like a tornado may damage infrastructure, it doesn’t actively seek to corrupt data or systems. In contrast, a cyberattack forces organizations to question the integrity of their entire environment, from networks to cloud architectures. This uncertainty underscores the need for continuous testing and preparedness to ensure that recovery is possible after an attack.

    The complexity of modern IT environments, particularly with the widespread adoption of hybrid and multi-cloud setups, further complicates the task of maintaining cyber resilience. As organizations spread their data across various cloud platforms and on-premises systems, the number of moving parts increases, making it difficult for administrators to manage and protect everything manually. Automation and orchestration tools are becoming essential to handle the scale and complexity of these environments. Solutions like Commvault’s clean room recovery, which allows for dynamic scaling in the cloud and cross-platform data restoration, are helping to simplify the recovery process and reduce the time it takes to bounce back from a cyber incident.

    Compliance is another critical factor in the conversation about cyber resilience. With regulations varying across jurisdictions and industries, organizations must navigate a complex web of requirements to ensure they are protecting their data appropriately. The involvement of legal teams in discussions about data protection is becoming more common, as companies recognize the legal and financial risks associated with non-compliance. Tools that can help organizations track and manage their compliance obligations, without exposing sensitive data, are becoming increasingly valuable. Commvault’s approach, which focuses on analyzing metadata rather than customer data, allows organizations to stay compliant while minimizing the risk of data exposure.

    Finally, the role of AI in cybersecurity cannot be ignored. While AI offers powerful tools for automating tasks and identifying threats, it also presents new risks, particularly when it comes to data privacy and security. Responsible AI practices, like those advocated by Commvault, emphasize the importance of using AI in a way that respects customer data and focuses on operational improvements rather than invasive data scanning. By leveraging AI to enhance breach management and compliance tracking, organizations can improve their cyber resilience without compromising the integrity of their data. As AI continues to evolve, it will be crucial for companies to adopt thoughtful, responsible approaches to integrating these technologies into their cybersecurity strategies.

    Podcast Information:

    Stephen Foskett is the Organizer of the Tech Field Day Event Series, now part of The Futurum Group. Connect with Stephen on LinkedIn or on X/Twitter.

    Michael Stempf is the Senior Director of Product and Ecosystem Strategy at Commvault. You can connect with Michael on LinkedIn. Learn more about Commvault and Commvault Shift on their website.

    Justin Warren is the Founder and Chief Analyst at PivotNine. You can connect with Justin on X/Twitter or on LinkedIn. Learn more on PivotNine’s website. See Justin’s website to read more.

    Karen Lopez is a Senior Project Manager and Architect at InfoAdvisors. You can connect with Karen on X/Twitter or on LinkedIn.

    Thank you for listening to this episode of the Tech Field Day Podcast. If you enjoyed the discussion, please remember to subscribe on YouTube or your favorite podcast application so you don’t miss an episode and do give us a rating and a review. This podcast was brought to you by Tech Field Day, home of IT experts from across the enterprise, now part of The Futurum Group.

    © Gestalt IT, LLC for Gestalt IT: AI and Cloud Demand a New Approach to Cyber Resilience featuring Commvault

    24 September 2024, 2:00 pm
  • Hardware Still Matters at the Edge

    Hardware innovation at the edge is driven by diverse and challenging environments found outside traditional data centers. This episode of the Tech Field Day podcast features Jack Poller, Stephen Foskett, and Alastair Cooke considering the special requirements of hardware in edge computing prior to Edge Field Day this week. Edge locations, including energy, military, retail, and more, demand robust, tamper-resistant hardware that can endure harsh conditions like extreme temperatures and vibrations. This shift is fostering new hardware designs, drawing inspiration from industries like mobile technology, to support real-time data processing and AI applications. As edge computing grows, the interplay between durable hardware and adaptive software, including containerized platforms, will be crucial for maximizing efficiency and unlocking new capabilities in these dynamic environments.

    Apple Podcasts | Spotify | Overcast | Amazon Music | YouTube Music | Audio

    Mobile and AI Feeds Edge Hardware Design

    In the new world of edge computing, hardware innovation is rapidly emerging. Unlike the standardized, controlled environments of data centers, edge locations present a diverse array of challenges that necessitate unique hardware solutions. This diversity is driving a wave of innovation in server and infrastructure hardware that hasn’t been seen in traditional data centers for quite some time.

    The edge is essentially defined as any location that is not a data center or cloud environment. This could range from the top of a wind turbine to a main battle tank on a battlefield, a grocery store, or even underneath the fryers at a quick-serve restaurant. Each of these locations has distinct physical and operational requirements, such as varying power supplies, cooling needs, and network connectivity. Unlike data centers, where the environment is tailored to be conducive to server longevity and performance, edge environments are often hostile, with factors like extreme temperatures, vibrations, and even potential tampering by humans.

    This necessitates a shift in design paradigms. Edge hardware must be robust enough to withstand these harsh conditions. For instance, the vibrations in a main battle tank are far more severe than what typical data center hardware can endure. Additionally, edge devices must be secure against physical tampering and theft, considerations that are not as critical in the controlled environment of a data center.

    Interestingly, the concept of edge computing is not entirely new. Decades ago, mini-computers were deployed in grocery stores, often encased in large, durable boxes to protect against spills and physical damage. Today, the resurgence of edge computing is driven by the explosion of data and the need for real-time processing, particularly with the advent of AI. In scenarios like oil and gas exploration, where seismic data needs to be processed immediately, edge computing offers significant efficiency gains by eliminating the need to transport vast amounts of data back to a central location.

    The hardware used at the edge often borrows from other industries. For example, the form factors of edge servers are reminiscent of industrial computers and fixed wireless devices, featuring big heat sinks, die-cast chassis, and power-over-Ethernet capabilities. These designs are optimized for durability and low power consumption, essential for edge environments.

    Moreover, advancements in mobile technology are influencing edge hardware. Mobile devices, with their powerful yet low-power GPUs and neural processing capabilities, are paving the way for AI applications at the edge. This convergence of technologies means that edge servers are increasingly resembling high-performance laptops, repurposed to handle the unique demands of edge computing.

    On the software side, virtualization and containerization are transforming how applications are deployed at the edge. However, these technologies must be adapted to the constraints of edge environments, such as intermittent connectivity and limited computational resources. Traditional assumptions about network reliability and computational power do not hold at the edge, necessitating innovative approaches to software development and deployment.

    The synergy between hardware and software is crucial for the success of edge computing. As edge locations become more general-purpose, capable of running multiple applications over their lifetime, the need for flexible, containerized platforms grows. However, managing these platforms in intermittently connected environments poses significant challenges in terms of distribution and control.

    AI at the edge is a particularly hot topic. The need to process data locally to avoid the inefficiencies of transporting it to a central location is driving the development of edge AI hardware. These devices must balance power consumption, cooling, and data throughput within compact, durable form factors. The IT industry’s relentless drive to make technology smaller, more powerful, and more efficient is enabling these advancements.

    The edge represents a dynamic and challenging frontier for IT innovation. The unique requirements of edge environments are driving significant advancements in hardware design, influenced by technologies from various fields. As AI and other data-intensive applications move to the edge, the synergy between innovative hardware and adaptive software will be key to unlocking new efficiencies and capabilities.

    Podcast Information

    Alastair Cooke is a Tech Field Day Event Lead, now part of The Futurum Group. You can connect with Alastair on LinkedIn or on X/Twitter and you can read more of his research notes and insights on The Futurum Group’s website.

    Jack Poller is an industry-leading cybersecurity analyst and Founder of Paradigm Technica. You can connect with Jack on LinkedIn or on X/Twitter. Learn more on Paradigm Technica’s website.

    Stephen Foskett is the Organizer of the Tech Field Day Event Series, now part of The Futurum Group. Connect with Stephen on LinkedIn or on X/Twitter.

    Thank you for listening to this episode of the Tech Field Day Podcast. If you enjoyed the discussion, please remember to subscribe on YouTube or your favorite podcast application so you don’t miss an episode and do give us a rating and a review. This podcast was brought to you by Tech Field Day, home of IT experts from across the enterprise, now part of The Futurum Group.

    © Gestalt IT, LLC for Gestalt IT: Hardware Still Matters at the Edge

    17 September 2024, 2:00 pm
  • AI Solves All Our Problems

    Although AI can be quite useful, it seems that the promise of generative AI has led to irrational exuberance on the topic. This episode of the Tech Field Day podcast, recorded ahead of AI Field Day, features Justin Warren, Alastair Cooke, Frederic van Haren, and Stephen Foskett considering the promises made about AI. Generative AI was so impressive that it escaped from the lab, being pushed into production before it was ready for use. We are still living with the repercussions of this decision on a daily basis, with AI assistants appearing everywhere. Many customers are already frustrated by these systems, leading to a rapid push-back against the universal use of LLM chatbots. One problem the widespread misuse of AI has solved already is the search for a driver of computer hardware and software sales, though this already seems to be wearing off. But once we take stock of the huge variety of tools being created, it is likely that we will have many useful new technologies to apply.

    Apple Podcasts | Spotify | Overcast | Amazon Music | YouTube Music | Audio

    Which Problems Does AI Solve?

    There is a dichotomy in artificial intelligence (AI) between the hype surrounding generative AI and the practical realities of its implementation. While AI has the potential to address various challenges across industries, the rush to deploy these technologies has often outpaced their readiness for real-world applications. This has led to a proliferation of AI systems that, while impressive in theory, frequently fall short in practice, resulting in frustration among users and stakeholders.

    Generative AI, particularly large language models (LLMs), has captured the imagination of marketers and technologists alike. The excitement surrounding these tools has led to their rapid adoption in various sectors, from customer service to content creation. However, this enthusiasm has not been without consequences. Many organizations have integrated AI into their operations without fully understanding its limitations, leading to a backlash against systems that fail to deliver on their promises. The expectation that AI can solve all problems has proven to be overly optimistic, as many users encounter issues with accuracy, reliability, and relevance in AI-generated outputs.

    The initial excitement surrounding AI technologies can be likened to previous hype cycles in the tech industry, where expectations often exceed the capabilities of the technology. The current wave of AI adoption is no different, with many organizations investing heavily in generative AI without a clear understanding of its practical applications. This has resulted in a scenario where AI is seen as a panacea for various business challenges, despite the fact that many tasks may be better suited for human intervention or simpler automation solutions.

    One of the critical issues with the current AI landscape is the tendency to automate processes that may not need automation at all. This can lead to a situation where organizations become entrenched in inefficient practices, making it more challenging to identify and eliminate unnecessary tasks. The focus on deploying AI as a solution can obscure the need for organizations to critically assess their processes and determine whether they are truly adding value.

    Moreover, the rapid pace of AI development raises concerns about the sustainability of these technologies. As companies race to innovate and bring new AI products to market, there is a risk that many of these solutions will not be adequately supported or maintained over time. This could lead to a situation where organizations are left with outdated or abandoned technologies, further complicating their efforts to leverage AI effectively.

    Despite these challenges, there is a consensus that AI has the potential to drive significant advancements in various fields. The ability of AI to analyze vast amounts of data and identify patterns can lead to improved decision-making and efficiency in many areas. However, realizing this potential requires a more nuanced understanding of AI’s capabilities and limitations, as well as a commitment to responsible implementation.

    The conversation around AI also highlights the importance of data as a critical component of successful AI applications. While the algorithms and models are essential, the quality and relevance of the data fed into these systems are equally crucial. Organizations must prioritize data governance and management to ensure that their AI initiatives yield meaningful results.

    As the AI landscape continues to evolve, it is essential for stakeholders to remain vigilant and critical of the technologies they adopt. The promise of AI is significant, but it is vital to approach its implementation with a clear understanding of its limitations and the potential consequences of over-reliance on automated solutions. By fostering a culture of critical thinking and continuous improvement, organizations can better navigate the complexities of AI and harness its potential to drive meaningful change.

    Podcast Information:

    Stephen Foskett is the Organizer of the Tech Field Day Event Series, now part of The Futurum Group. Connect with Stephen on LinkedIn or on X/Twitter.

    Alastair Cooke is a Tech Field Day Event Lead, now part of The Futurum Group. You can connect with Alastair on LinkedIn or on X/Twitter and you can read more of his research notes and insights on The Futurum Group’s website.

    Frederic Van Haren is the CTO and Founder at HighFens Inc., Consultancy & Services. Connect with Frederic on LinkedIn or on X/Twitter and check out the HighFens website.

    Justin Warren is the Founder and Chief Analyst at PivotNine. You can connect with Justin on X/Twitter or on LinkedIn. Learn more on PivotNine’s website. See Justin’s website to read more.

    Thank you for listening to this episode of the Tech Field Day Podcast. If you enjoyed the discussion, please remember to subscribe on YouTube or your favorite podcast application so you don’t miss an episode and do give us a rating and a review. This podcast was brought to you by Tech Field Day, home of IT experts from across the enterprise, now part of The Futurum Group.

    © Gestalt IT, LLC for Gestalt IT: AI Solves All Our Problems

    10 September 2024, 2:00 pm
  • Ethernet is not Ready to Replace InfiniBand Yet

    AI networking is making huge strides toward standardization but Ethernet isn’t ready to displace the leading incumbent InfiniBand yet. In this episode of the Tech Field Day Podcast, Tom Hollingsworth is joined by Scott Robohn and Ray Lucchesi to discuss the state of Ethernet today and how it is continuing to improve. The guests discuss topics such as the dominance of InfiniBand, why basic Ethernet isn’t suited to latency-sensitive workloads, and how the future will improve the technology.

    Apple Podcasts | Spotify | Overcast | Amazon Music | YouTube Music | Audio

    Ethernet is not Ready to Replace InfiniBand Yet

    AI networking is making huge strides toward standardization but Ethernet isn’t ready to displace the leading incumbent InfiniBand yet. In this episode of the Tech Field Day Podcast, Tom Hollingsworth is joined by Scott Robohn and Ray Lucchesi to discuss the state of Ethernet today and how it is continuing to improve. The guests discuss topics such as the dominance of InfiniBand, why basic Ethernet isn’t suited to latency-sensitive workloads, and how the future will improve the technology.

    InfiniBand has been the dominant technology for AI networking since NVIDIA asserted itself as the leader in the market. The reasons for this are varied. NVIDIA acquired the technology through its 2019 acquisition of Mellanox. InfiniBand has been used extensively in high performance computing (HPC) systems for a number of years, so using it for AI, which is a very hungry application, was a natural fit. Since then, InfiniBand has continued to be the preferred solution due to its low latency and the lossless nature of packet exchange.

    Companies such as Cisco, Broadcom, and Intel have championed the use of Ethernet as an alternative to InfiniBand for GPU-to-GPU communications. They’ve even founded a consortium dedicated to standardizing Ethernet fabrics focused on AI. However, even though Ethernet is a very flexible technology, it is not as well suited to AI networking as InfiniBand has proven to be. Lossy transmission and high overhead are only two of the major issues that plague standard Ethernet when it comes to latency-sensitive information exchange. The Ultra Ethernet Consortium was founded to provide mechanisms to make Ethernet more competitive in the AI space, but it still has a lot of work to do to standardize the technology.

    The future of Ethernet is bright. InfiniBand is seemingly being put into maintenance mode as even NVIDIA has started to develop Ethernet options with Spectrum-X using BlueField-3 DPUs. Cloud providers offering AI services are also mandating the use of standard, cost-effective Ethernet over proprietary InfiniBand. AI workloads are also undergoing significant changes as infrastructure catches up to their needs. As the technology and software continue to develop, there is no doubt that Ethernet will eventually return to being the dominant communications technology. However, that change is still a few years away.

    Podcast Information:

    Tom Hollingsworth is a Networking and Security Specialist at Gestalt IT and Event Lead for Tech Field Day. You can connect with Tom on LinkedIn and X/Twitter. Find out more on his blog or on the Tech Field Day website.

    Ray Lucchesi is the president of Silverton Consulting and the host of Greybeards on Storage Podcast. You can connect with Ray on X/Twitter or on LinkedIn. Learn more about Ray on his website and listen to his podcast.

    Scott Robohn is the VP of Technology at Cypress Consulting and Cofounder of Network Automation Forum. You can connect with Scott on X/Twitter or on LinkedIn. Learn more on the Network Automation Forum.

    Thank you for listening to this episode of the Tech Field Day Podcast. If you enjoyed the discussion, please remember to subscribe on YouTube or your favorite podcast application so you don’t miss an episode and do give us a rating and a review. This podcast was brought to you by Tech Field Day, home of IT experts from across the enterprise, now part of The Futurum Group.

    © Gestalt IT, LLC for Gestalt IT: Ethernet is not Ready to Replace InfiniBand Yet

    3 September 2024, 2:00 pm