Alex Williams, founder of The New Stack, hosts "The New Stack Analysts," a biweekly round-table discussion covering The New Stack's latest data research, and topics related to app development and back-end services. Listen to our other TNS Podcasts on SoundCloud: The New Stack Makers: https://soundcloud.com/thenewstackmakers The New Stack Context: https://soundcloud.com/thenewstackcontext The New Stack @ Scale: https://soundcloud.com/thenewstackatscale
Wondering what we've been up to lately? Well, we've been upping our game and making moves, literally. The New Stack podcasts have been polished, upgraded and will be at thenewstack.simplecast.com Subscribe to The New Stack Makers on Simplecast and share your favorite segments with 30, 60, or 90 second Recast audiograms.
In this edition of The New Stack Analysts podcast, host Alex Williams, founder and publisher of The New Stack, and co-host Cheryl Hung, vice president of ecosystem at the Cloud Native Computing Foundation (CNCF), discuss why secrets management is essential for DevOps teams, what the tool landscape is like and why Vault was selected as the top alternative. CNCF Tech Radar contributors and featured guests were Steve Nolen, site reliability engineer at RStudio — which creates open source software for data science, scientific research and technical communication — and Andrea Galbusera, engineer and co-founder of AuthKeys, a SaaS platform provider for managing and auditing server authorizations and logins.
In this episode of The New Stack Analysts podcast, TNS founder and publisher Alex Williams virtually shared pancakes and syrup with guests to discuss how Apache Cassandra, gRPC and other tools and platforms play a role in managing data on Kubernetes. Mya Pitzeruse, software engineer and OSS contributor from effx; Sam Ramji, chief strategy officer at DataStax; and Tom Offermann, a lead software engineer at New Relic, were the guests. They offered deep perspectives about the evolution of data management on Kubernetes and the work that remains to be done.
On the last The New Stack Analysts of the year, the gang got together — remotely, obviously — to reflect on this year. And oh what a year! But for a year in tech, 2020 still had a lot of hits — and some misses. Publisher Alex Williams was joined by Libby Clark, Joab Jackson, Bruce Gain, Steven Vaughan-Nichols, and Jennifer Riggins. We looked back on the year that saw millions die, no one fly, and a lot of jobs in turmoil. It was also a year that, while many things screeched to a halt, much of the tech industry had to keep going more than ever.
Prisma Cloud from Palo Alto Networks sponsored this podcast. Identity and access management (IAM) was previously relatively straightforward. Often delegated as a low-level management task to the local area network (LAN) or wide area network (WAN) admin, the process of setting permissions for tiered data access was definitely not one of the more challenging security-related duties. However, in today's highly distributed and relatively complex computing environments, network and associated IAM are exponentially more complex. As application creation and deployment become more distributed, often among multicloud containerized environments, the resulting dependencies, as well as vulnerabilities, continue to proliferate, thus widening the scope of potential attack surfaces. How to manage IAM in this context was the main topic of this episode of The New Stack Analysts podcast, as KubeCon + CloudNativeCon attendees joined TNS Founder and Publisher Alex Williams and guests live for the latest “Virtual Pancake & Podcast.” They discussed why IAM has become even more difficult to manage than in the past and offered their perspectives about potential solutions. They also showed how enjoying pancakes — or other variations of breakfast — can make IAM challenges more manageable. The event featured Lin Sun, senior technical staff member and Master Inventor, Istio/IBM; Joab Jackson, managing editor, The New Stack; and Nathaniel “Q” Quist, senior threat researcher (Public Cloud Security – Unit 42), Palo Alto Networks. Jackson noted how the evolution of IAM has not been conducive to handling the needs of present-day distributed computing. Previously, it was “not exactly a security thing” nor a “developer problem,” and wasn't even “a security problem,” he said.
“[IAM] really almost was a network problem: if a certain individual or a certain process wants to access another process or a resource online, then you have to have the permissions in place to meet all the policy requirements about who can ask for these particular resources,” Jackson said. “And this is an entirely new problem with distributed computing on a massive and widespread scale…it's almost a mindset, number one, about who can figure out what to do and then how to go about doing it.”
KubeCon+CloudNativeCon sponsored this podcast. How to manage database storage in cloud native environments continues to be a major challenge for many organizations. Database storage also came to the fore as the issue to explore in the latest Cloud Native Computing Foundation (CNCF) Tech Radar report. In this edition of The New Stack Analysts podcast, host Alex Williams, founder and publisher of The New Stack, and co-hosts Cheryl Hung, vice president of ecosystem at CNCF, and Dave Zolotusky, senior staff engineer at Spotify, discuss stateful database storage, recent results of the report findings and perspectives from the user community. The podcast guests — who both contributed to the CNCF Tech Radar report and hail from the database storage user community — were Jackie Fong, engineering leader, Kubernetes and developer experience, at Ticketmaster, and Mya Pitzeruse, software engineer and OSS contributor at effx.
Accurics sponsored this podcast. Who doesn't love hotcakes? And to make them right, you need to wait until the batter starts to bubble up before you flip them. Immutable infrastructure management and related security challenges are also “bubbling up” these days, as many organizations make the shift to cloud native environments, with containerized, serverless and other layers. In this The New Stack Analysts podcast, TNS founder and publisher Alex Williams served up pancakes with KubeCon attendees who joined him for a “stack” at the “Virtual Pancake Breakfast and Podcast” while they offered their deep perspectives on what is at stake as immutable infrastructure security and other related concerns take hold. The guests joining the virtual breakfast were Om Moolchandani, co-founder and CTO of Accurics; Rosemary Wang, developer advocate at HashiCorp; Krishna Bhagavathula, CTO of the NBA (who also brought his own L.A. Lakers-branded spatula); Chenxi Wang, Ph.D., managing general partner of Rain Capital; and Priyanka Sharma, general manager of the Cloud Native Computing Foundation (CNCF).
KubeCon+CloudNativeCon sponsored this podcast. CERN, the European Organization for Nuclear Research, is known for its particle accelerator and its experiments on and analysis of the properties of subatomic particles, antimatter and other particle physics-related research. CERN is also considered to be where the World Wide Web (WWW) was created. Research and experiments conducted at the largest particle physics research center, home to a 27-km-long accelerator tunnel, generate massive amounts of data to manage and store. All told, CERN now manages over 500 petabytes — over half an exabyte — which, in a decade's time, is expected to total 5,000 petabytes, said Ricardo Rocha, a staff researcher at CERN. In this episode of The New Stack Analysts, we learn from Rocha how CERN is adapting as a new accelerator goes online in the next few years with the ability to generate 10x the data it manages now.
Some legacy infrastructures are certainly more difficult to manage than others when organizations make the shift to cloud native. In the case of the heavily regulated financial services industry and the deep legacy infrastructure involved when banks transition to the cloud, challenges inherent in the sector abound. Regulatory, compliance and data-management challenges are also usually amplified when the bank has an especially large international presence. In this edition of The New Stack Analysts podcast, as part of The New Stack's recent coverage of end user Kubernetes, Michael Lieberman, senior innovation engineer and vice president at Tokyo-based MUFG, discusses his company's journey to scale out architectures in a microservice and Kubernetes environment in the world of financial services. Alex Williams, founder and publisher of The New Stack, hosted the podcast with co-hosts Cheryl Hung, vice president of ecosystem at the Cloud Native Computing Foundation (CNCF), and Dave Zolotusky, senior staff engineer at Spotify.
In the serverless paradigm, the idea is to abstract away the backend so that developers don't need to deal with it. That's all well and good when it comes to servers and complex infrastructure like Kubernetes. But up till now, database systems haven't typically been a part of the serverless playbook. The assumption has been that developers will build their serverless app and choose a separate database system to connect to it — be it a traditional relational database, a NoSQL system, or even a Database-as-a-Service (DBaaS) solution. But the popularity of serverless has prompted further innovation in the data market. In this episode of The New Stack Analysts podcast, we talked about the latest developments in regards to managing data in a serverless system. My two guests were Evan Weaver, co-founder and chief technology officer of Fauna, and Greg McKeon, a product manager at Cloudflare. Fauna is building a “data API” for serverless apps so that developers don't even need to touch a database system, while Cloudflare runs a serverless platform called Cloudflare Workers.
Kubernetes is becoming boring and that's a good thing — it's what's on top of Kubernetes that counts. In this The New Stack Analysts podcast, TNS Founder & Publisher Alex Williams asked KubeCon attendees to join him for a short “stack” at our Virtual Pancake & Podcast to discuss “What's on your stack?” The podcast featured guest speakers Janakiram MSV, principal analyst, Janakiram & Associates; Priyanka Sharma, general manager, CNCF; Patrick McFadin, chief evangelist for Apache Cassandra and vice president, developer relations, DataStax; and Bill Zajac, regional director of solution engineering, Dynatrace. The group passed the virtual syrup and talked Kubernetes, which may be stateless but still leaves plenty of room for sides.
A lot of the time, it's harder to convince your friends and family than a stranger. The first group is usually more decisive and direct with you. The same goes for your work family. When you're building an internal infrastructure for autonomous teams, it becomes your job to not only provide that technical backbone, but to act as sales and customer support. Nobody said internal developer advocacy would be easy. The sixth episode of The New Stack Analysts End User Series brings together again our Publisher Alex Williams with co-hosts Cheryl Hung from the Cloud Native Computing Foundation and Ken Owens of Mastercard. In this episode they talk with Simone Sciarrati, the engineering team lead at media intelligence platform Meltwater, about the autonomous engineering culture, molding developer experience, and those tough technological decisions.
Spotify is well known worldwide for its music service. Not so well known is that its path to Kubernetes has been a road with many twists and turns. What also may be a surprise to many is that Spotify is a veteran user of Kubernetes and owes much of its product-delivery capabilities to its agile DevOps. Indeed, Spotify continues to increasingly rely on a container and microservices infrastructure and cloud native deployments for the advantages they offer. This allows its DevOps teams to continually improve the overall streaming experience for millions of subscribers. In this edition of The New Stack Analysts podcast, as part of The New Stack's recent coverage of end user Kubernetes, Jim Haughwout, head of infrastructure and operations, shares Spotify's cloud native adoption war stories and discusses its past and present Kubernetes challenges. Alex Williams, founder and publisher of The New Stack; Cheryl Hung, vice president of ecosystem at the Cloud Native Computing Foundation (CNCF); and Ken Owens, vice president, cloud native engineering, Mastercard, hosted the podcast.
KubeCon+CloudNativeCon sponsored this podcast as part of a series of interviews with Kubernetes end users. Listen to the previous stories about the ups and downs of Box's Kubernetes journey and what Wikipedia's infrastructure is like behind the firewall. EquityZen started simply enough, but soon the site needed more than a single server to keep things running. Today, EquityZen runs on Kubernetes and is considering its next moves, in particular exploring how containers as a service may serve it. In this edition of The New Stack Analysts podcast, Andy Snowden, engineering manager, DevOps, at EquityZen, discusses how he helped the company begin its cloud native journey and the challenges associated with the move. Alex Williams, founder and publisher of The New Stack; Cheryl Hung, vice president of ecosystem at the Cloud Native Computing Foundation (CNCF); and Ken Owens, vice president, cloud native engineering, Mastercard, hosted the podcast. When Snowden joined EquityZen, he immediately began to apply his background managing Kubernetes environments to help solve a chief concern the company had: the reliability of its infrastructure. “During our initial conversations, they explained to me that ‘hey, we are having these issues and we are having these big site hits where the site will go down,' and that is really bad for our customers. They also asked, ‘What have you done in your past that has worked well for you?'” said Snowden. “And knowing Kubernetes as I knew it, I said this sounds like a really good use case for it and I explained that these are the sort of things I might consider doing.” Once the company was convinced that a Kubernetes environment would both boost reliability and help it better scale its operations, making the shift was, of course, a major undertaking.
KubeCon + CloudNativeCon sponsored this podcast. Box was one of the first companies to build on Kubernetes. Box initially built its platform on PHP, and its architecture still uses some parts of that PHP stack. Today, Box serves as a case study of a software platform's cloud native journey that began a few years ago. The company also continues to rely on its legacy infrastructure dating back to the days when PHP ran on Box's bare metal servers in its data centers. In this edition of The New Stack Analysts podcast, Kunal Parmar, director of engineering at Box, discusses the evolution of the cloud content management provider's cloud native journey with hosts Alex Williams, founder and publisher of The New Stack; Cheryl Hung, vice president of ecosystem at the Cloud Native Computing Foundation (CNCF); and Ken Owens, vice president, cloud native engineering, Mastercard. Prior to Box's adoption of Kubernetes, the company sought ways to “create more services outside of the monolith in order to scale efficiently,” Parmar said. One way to do that, he explained, was to shift its legacy monolith applications into microservices. “For anybody who has [made the shift to Kubernetes], they would know this is a really long and hard journey. And so, in parallel, we have been focusing on adopting Kubernetes for all of the new microservices that we have been building outside of the monolith,” said Parmar. “And today we are at a point where we're actually now looking at also starting to migrate the monolith to run on top of Kubernetes so that we can take advantage of the benefits that Kubernetes brings.”
The Wikimedia Foundation's impact on culture and media sharing has had immeasurable benefits on a worldwide scale. As the foundation that manages the fabled Wikipedia, Wikimedia Commons, Wikisource and a number of other outlets, Wikimedia's mission is “to bring free educational content to the world.” All told, Wikipedia alone is available in about 300 languages, with more than 50 million articles reaching 1.5 billion unique devices a month at 6,000 views a second — and with 250,000 engaged editors, said Chase Pettet, senior security architect at the Wikimedia Foundation. “Editors are sort of the lifeblood of the movement,” he said. In this The New Stack Analysts podcast, hosted by Alex Williams, founder and editor-in-chief of The New Stack, and Ken Owens, vice president, cloud native engineering, Mastercard, Pettet discussed Wikimedia's infrastructure-management challenges, both past and present, and what makes one of the world's foremost providers of free information tick.
In this The New Stack Analysts podcast, hosted by Alex Williams, founder and editor-in-chief of The New Stack, and Ken Owens, vice president, cloud native engineering, Mastercard, Jennifer Strejevitch, site reliability engineer at Condé Nast, speaks about her experiences and observations on the front lines of the publishing company's infrastructure-related challenges and successes. Condé Nast is one of the most well-recognized media brands in the world, with a range of stand-out titles that include “Wired,” “The New Yorker” and “Vanity Fair.” The publishing giant also represents a case study of how a large multinational company was able to shift its entire international web and data operations to a homogeneous Kubernetes infrastructure it built and now manages with open source tools. Indeed, during the past five years, Condé Nast has been able to build a single underlying platform supporting several dozen websites spread out around the world, including Russia and China in addition to the U.S. and Europe. Its web presence now hosts more than 300 million unique digital users per month and 570 article views every second.
Thanks to the COVID-19 global pandemic, many IT systems are facing unprecedented workloads, reaching levels of usage on a daily basis that usually only happen on the busiest days of the year. The good news is that the cloud native approach has been rapidly gaining popularity with businesses large and small to help meet these sudden demands. And proper security precautions must be built into these emerging cloud native systems. Applying principles of cloud native security to the enterprise was the chief topic of discussion for our panel of experts in this virtual panel. Panelists were: Cheryl Hung, director of ecosystem, Cloud Native Computing Foundation; Carla Arend, senior program director, infrastructure software, IDC; and John Morello, vice president of product, Prisma Cloud, Palo Alto Networks. Alex Williams, founder and publisher of The New Stack, hosted the discussion. Certainly, operations have changed for most of us due to the outbreak of the COVID-19 global pandemic. But this can be a good opportunity for an organization to rethink how it approaches business continuity and resiliency, Arend noted. Those already on the digital journey are getting through this crisis much better than those just starting. Now is a great time to focus on digital innovation. Indeed, if anything, innovation is accelerating in this time, Morello agreed. Without the ability to interact in person, the tools that enable digital transformation — Kubernetes, containers — help people operate more efficiently.
What is the role that the data plane plays in a Kubernetes ecosystem? This was the theme for our latest (virtual) pancake breakfast and panel discussion, sponsored by DataStax, the keeper of the open source Cassandra database. Last month, DataStax released a Kubernetes operator, so that the NoSQL database can be more easily installed, managed and updated in Kubernetes container-based infrastructure. The panelists for this discussion were: Kathryn Erickson, DataStax senior director of partnerships; Janakiram MSV, principal analyst of Janakiram & Associates; Aaron Ploetz, Target NoSQL lead engineer; and Sam Ramji, DataStax chief strategy officer. Alex Williams, publisher of The New Stack, served as moderator for this panel, with the help of TNS Managing Editor Joab Jackson. In 2015, Ramji worked at Google and oversaw the business development around its then-newly open sourced project, Kubernetes, which was based on its internal container orchestrator, the Borg. The Borg provides Google a single control plane for dynamically managing all its many containerized workloads, and its scale-out database, Spanner, offers the same for the data plane. “The marriage of those two things made compute and data so universally addressable, so easy to access, that you could do just about anything that you could imagine,” Ramji explained.
Listen to all of our podcasts here: https://thenewstack.io/podcasts/ In this The New Stack Analysts podcast, Lee Calcote, an analyst and founder of Layer5, and Brian “Redbeard” Harrington, a principal product manager for OpenShift service mesh at Red Hat, discussed the many nuances of what the survey numbers really mean. Calcote, for example, notes how traffic management is seen as a key feature among the many different service mesh capabilities, but it's most useful to advanced users. Speaking about the use of traffic management functionalities, Calcote said: “Folks tend to be a little more advanced as they get into that, because at that point they're actually affecting traffic and then routing requests differently, as opposed to something like just purely observing or getting a ‘read-only' view in their environment.” Harrington agreed. “I'm happy that Lee kind of pointed out the specific distinction around traffic control, because among the users who I'm talking to that's the — pun intended — ‘gateway drug,'” Harrington said. Organizations with legacy bare metal environments and “pretty expensive hardware incumbencies” face challenges as they “move to dynamic environments” and “de-prioritize” some legacy hardware, and the traffic management capabilities service meshes provide can help when making the shift, Harrington said. The survey results and experience in the field also indicate organizations are still mulling the best use cases for service meshes. When asked whether an organization should adopt or how it should begin to rely on service meshes, the answer is often “irrespective of whether they're starting on the simpler…or more sophisticated [possibilities] in that spectrum,” Calcote said. “The advice is generally the same, which is you should start and adopt a bit at a time, a bit of value at a time, and what that value is is sort of dependent upon what you're looking for out of the mesh,” Calcote said.
“But between you getting comfortable with what you've deployed and getting the value out of what you've deployed, [organizations should] take the next step from there to hopefully at some point leverage all of the functionality of the mesh.”
The New Stack Editor Alex Williams sat down with Diane Patton, technical marketing engineer at NetApp; Jenny Fong, VP of marketing at Diamanti; and Sriram Subramanian, an analyst at IDC, to learn how Kubernetes has evolved into the preferred infrastructure layer for stateful environments. Listen to this episode of The New Stack Analysts to hear more about where we are heading and how we should be able to handle state in any given situation.
In this episode of The New Stack Analysts podcast, we explore how two Kubernetes Special Interest Groups (SIG) are taking into account all of the various user personas in order to help shape the user experience on Kubernetes. Guests on this episode include: Tasha Drew, co-chair of the Kubernetes Usability SIG and a product line manager for Kubernetes and Project Pacific at VMware. Lei Zhang, co-chair of the Kubernetes Application Delivery SIG and a staff engineer at Alibaba. Emily Omier, The New Stack correspondent and a content strategist for cloud native companies.
Many IT teams begin moving their applications to containers and Kubernetes after their managers mandate the switch. Then in the rush to deploy they may forget, or simply delay, some fundamentals. Only six to 12 months later does integrating security into their CI/CD pipeline become a priority. This gradual evolution toward cloud native security best practices is worrisome, but it's the norm among organizations adopting Kubernetes today. This is what we learned from a panel of cloud native security experts at The New Stack's pancake and podcast from KubeCon+CloudNativeCon North America this week. The New Stack founder and publisher Alex Williams was joined on the panel by: Keith Mokris, product marketing manager, container security, at Palo Alto Networks; Maya Kaczorowski, product manager at Google; Santiago Torres-Arias, Ph.D. student at the New York University Center for Cyber Security; Sarah Allen, co-chair of the Cloud Native Computing Foundation's (CNCF) Security Special Interest Group (SIG); and Sean M. Kerner, senior editor at InternetNews.com. Prisma by Palo Alto Networks sponsored this podcast.
The developer will certainly face new challenges when making the switch to a cloud native platform. The process might include, for example, learning how to add code to Kubernetes clusters or mastering the mechanics of etcd and kubectl. The power and scaling flexibility a cloud native platform and Kubernetes offer, among other things, are often worth more than developers' investment in time and resources when adopting these technologies. And yet, what developers are usually more concerned about are the business goals they need to achieve. They will likely care less about what the underlying infrastructure is than about whether it can be used to create code that might improve their organization's bottom line or, for a public institution, better meet the needs of a citizen. In this episode of The New Stack Analysts, recorded at the 2019 European Cloud Foundry Summit in The Hague, the Netherlands, this month, the business needs of developers and the role of the Cloud Foundry community were discussed — and debated. Hosted by Alex Williams, The New Stack founder and editor-in-chief, and co-hosted by Devin Davis, vice president of marketing, Cloud Foundry Foundation, the panelists were: Abby Kearns, executive director, Cloud Foundry Foundation; Michael Cote, marketing director, Pivotal; Tammy Van Hove, distinguished engineer, IBM; and Udo Seidel, tech writer, Heise iX.
Enterprise startups are building the tools that help their customers create an agile modern enterprise that adapts quickly to market changes. But the enterprise isn't always open to that change, or even aware of the benefits of that change, said Frederic Lardinois, writer and news editor at TechCrunch, in this episode of The New Stack Analysts recorded at TechCrunch Sessions: Enterprise held on Sept. 5 in San Francisco. This is a primary challenge for enterprise software companies today. The people and technologies that help enterprise software startups grow were the focus of this recent panel discussion at The New Stack pancake breakfast and podcast at TC Sessions: Enterprise. TNS founder and publisher Alex Williams moderated the discussion, which was sponsored by GitLab. Panelists included: 1. Frederic Lardinois / writer & news editor / TechCrunch 2. Katherine Boyle / principal / General Catalyst 3. Melissa Pancoast / founder & CEO / The Beans 4. Sameer Patel / former CEO / Kahuna 5. Sid Sijbrandij / co-founder & CEO / GitLab
What became the punk rock genre may be a harbinger of what is in store for the open source movement — but hopefully not. At any rate, open source's popularity can certainly be compared to punk rock's rise. When Linux began to be seen as a powerful and very practical alternative to Unix, and especially Windows, during the 1990s, the movement felt very...well, underground. Enthusiasts traded discs of Linux distros and shared tips on how to hack “Doom” (okay, the hacks were open source, but Doom's code obviously wasn't). But hopefully, open source will take a different path than what became of punk rock, especially for enterprises, as open source matures, or, to take the punk rock analogy further, becomes corporate muzak. The punk rock comparison was a main theme of this episode of The New Stack Makers podcast recorded during VMworld in San Francisco, with guests Tom Petrocelli, a research fellow at Amalgam Insights, and Adam Jacob, CEO at The System Initiative and former CTO of Chef Software. They described what the open source movement has become and where it's heading, and more importantly, what it all means for enterprises.
It would be a mistake to ignore the immediate impact artificial intelligence (AI) and machine learning (ML) are already having on software development processes and DevOps. While some may see AI and ML as new technologies high on the hype cycle with overall marginal influence — even though they have been available for years — many organizations are already taking advantage of how they can automate many tasks during the development process. This includes their role in performing more mundane and time-consuming tasks that developers, as well as operations staffers, would prefer not to do, by letting the machine take over. During this The New Stack Analysts podcast, two DevOps and development process experts spoke about AI's and ML's effects on DevOps, the state of algorithm development today and its impact on IT operations: Hyoun Park, CEO and chief analyst, Amalgam Insights, and Bola Rotibi, research director, software development, CCS Insight. This roundtable was hosted by Alex Williams, founder and editor-in-chief of The New Stack. AI and ML are already affecting DevOps workflows, having provided “an amazing access to computing and processing” over the past few years, Park said. They have given DevOps the ability to test a wide variety of algorithmic strategies, as well as the storage and data-management capabilities to handle the processing, the testing and the benefits associated with machine learning and artificial intelligence.
Just four years ago, industry analysts were wary of running production workloads in containers, but the industry certainly got over that fast. Numbers around Docker and Kubernetes adoption vary broadly, but it's safe to say that well over half of Fortune 100 companies have embraced containers. In this episode of The New Stack Analysts, our Editor-in-Chief Alex Williams sits down with Briana Frank, director of product management at IBM, and James Ford, independent technical strategy advisor, to reflect on the origins of containers, how Kubernetes and Docker began, and how adoption has grown so fast in only a few years. Frank said the impetus behind rapid container adoption came from Docker allowing everyone to get started quickly and simply — in about ten minutes. For her, this accessibility is a continued source of inspiration when she's creating demos and Getting Started tutorials, as this ease of use accelerates innovation. “We can attribute a lot of the popularity of Kubernetes today to the Docker beginnings,” she said.
You put three journalists from leading tech news outlets in a podcast room with the founder and editor-in-chief of the media outlet of record for the development and DevOps community. One could guess they would have rather opinionated thoughts to share and more than enough background, based on an incessant curiosity — or, arguably, an obsessional drive — to uncover and analyze the truth beneath the marketing-laden facades of the cloud native software development industry today. In fact, all of the above aptly describes the theme and atmosphere of a podcast meeting featuring most of the press corps members attending the KubeCon + CloudNativeCon conference in Barcelona in May. Speaking with Alex Williams, founder and editor-in-chief of The New Stack, the podcast guests included: Frederic Lardinois, a writer and news editor for TechCrunch; Ron Miller, an enterprise reporter for TechCrunch; and Sean Michael Kerner, senior editor for InternetNews.com. Watch on YouTube: https://www.youtube.com/watch?v=2H_Ej3OCK5k
Some might assume artificial intelligence (AI) and machine learning (ML) are among the “latest and greatest technologies” at the top of today's hype list, and that their adoption cycle, if they ever do see deployment on a large scale, is years away. After all, software developers are only just beginning to take advantage of Kubernetes and microservices on a large scale a few years after their creation. However, providers of AI and ML systems have begun to show how developers can already reap the advantages of AI and ML for their existing production pipelines for cloud native deployments in a number of ways. The concept is simple: pure AI or neural network-aided ML can assume many of what developers say are the more cumbersome, time-consuming and, ultimately, boring tasks on both the development and operations sides. The overlap between machine learning and cloud native, along with other themes, was among the topics discussed during a panel discussion at an Oracle-sponsored pancake breakfast. The New Stack organized the event at the KubeCon + CloudNativeCon conference in Barcelona in May. Maple syrup was also provided. The panelists were: Bob Quillin, vice president, cloud developer relations, Oracle; Jon Girven, co-founder and CTO, Antix and Sauce; Ant Kennedy, CTO, Gapsquare; and Bola Rotibi, research director, software development, delivery and lifecycle management, Creative Intellect Consulting. Watch on YouTube: https://youtu.be/22nYF4GOGRE
Service meshes have emerged as essential tools for managing deployments on containers and microservices. Indeed, one would be hard pressed to find DevOps teams that have successfully deployed on Kubernetes without the observability and management capabilities they offer, given the immense complexity involved in such a project. The key role service meshes play in the cloud native world also accounts for why they were a major topic discussed during KubeCon + CloudNativeCon in Barcelona, and why Envoy, Istio, Linkerd, Aspen Mesh and other projects will continue to serve as open source alternatives. Indeed, the announcements at KubeCon about Microsoft's Service Mesh Interface (SMI) specification, and how Solo.io has created what it calls “the first reference implementations” for SMI, were arguably the most newsworthy developments of the conference. Solo.io's founder and CEO Idit Levine was also on hand to put service meshes into perspective as one of the panel guests during The New Stack pancake breakfast in this podcast about service meshes, recorded on the first day of the Barcelona conference. Hosted by Alex Williams, founder and editor in chief, and co-hosted by Libby Clark, editorial director, of The New Stack, the other guests on hand included, in addition to Levine: Cliff Grossner, executive director research and technology fellow, IHS Markit; Pere Monclus, vice president and CTO, networking and security, VMware; Florian Dudouet, product owner and cloud engineer, Swisscom; and Lee Calcote, founder, Layer5, and author of “The Enterprise Path to Service Mesh Architectures.” Watch on YouTube: https://www.youtube.com/watch?v=c2v9zvBR7Ds
A blog post by Saad Ali, senior software engineer at Google, drew considerable attention earlier this year when Ali described the milestone in “Container Storage Interface (CSI) for Kubernetes GA.” In that post, Ali described how “CSI was developed as a standard for exposing arbitrary block and file storage systems to containerized workloads on Container Orchestration Systems (COs) like Kubernetes.” Among other things, CSI's backward compatibility is protected by a Kubernetes deprecation policy, Ali wrote. The implications of CSI, as well as Kubernetes storage and the evolution of containers, were the subject of an episode of The New Stack Analysts hosted by Alex Williams, founder and editor-in-chief of The New Stack, with Janakiram MSV, a TNS correspondent and principal of Janakiram & Associates. Joining Ali as a guest was Anand Babu "AB" Periasamy, co-founder and CEO at MinIO. The podcast was broadcast during DockerCon 2019, Docker's flagship user conference, which recently took place in San Francisco.
Christopher Liljenstolpe is the founder and chief technology officer of Tigera, a provider of cloud native security and networking software. He formed Tigera to offer commercial support for Project Calico, a control plane he created for cloud native applications. In this episode of The New Stack Analysts podcast, TNS Managing Editor Joab Jackson and TNS contributing analyst Janakiram MSV talk with Liljenstolpe about Calico's creation, overlay networks, service meshes and IPv6. Key Takeaways: Originally created for OpenStack, Calico was designed to make it easy to get data packets from one part of the network to another, using Internet technologies like IP routing, rather than switching, virtual networks, overlay networks or other complex approaches. Because this form of networking offers only coarse-grained isolation across nodes, Calico uses real-time distributed filtering engines to control which nodes can communicate with one another, in effect acting as a network policy enforcement tool. Anticipating containers, Calico was designed for very dynamic environments, and can manage hundreds of thousands of endpoints that can change location at any time.
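In Kubernetes, the policy enforcement role described above is commonly expressed through the standard NetworkPolicy API, which Calico can enforce. As a rough sketch only (the namespace, labels and port below are hypothetical illustrations, not from the episode), a policy limiting which pods may reach a backend might look like:

```yaml
# Hypothetical example: only pods labeled role=frontend in the same
# namespace may reach pods labeled role=backend, on TCP port 6379.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      role: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
```

With a policy engine such as Calico installed, selecting pods with any NetworkPolicy makes them deny all other ingress traffic by default, which is the fine-grained filtering Liljenstolpe describes.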
It is hard for many to believe, as it is for this writer, that Cloud Foundry has existed for more than a decade since its founding in 2008. Since its beginning, it has more than established itself as a platform as a service (PaaS) of choice for deploying and scaling open source applications. It has also played a role in the growing momentum in the adoption of Kubernetes, microservices, cloud native and other new technologies, as well as many other open source tools that continue to help transform DevOps practices. In this episode of The New Stack Analysts podcast, we recorded a panel discussion held during a pancake breakfast at Cloud Foundry Summit North America earlier this month in Philadelphia, with Cloud Foundry as the featured topic. Hosted by The New Stack's Alex Williams, founder and editor-in-chief, and co-hosted by Joab Jackson, managing editor, panel invitees discussed Cloud Foundry's evolution over the past few years and the key role it continues to play in the open source community. The panelists included: Abby Kearns, executive director of the Cloud Foundry Foundation; Cornelia Davis, vice president of technology, Pivotal; Daniel Jones, CTO, EngineerBetter; Rick Rioboli, senior vice president and CIO, Comcast; and Stephen O'Grady, an analyst with RedMonk.
The massive shift to cloud native has brought with it obvious changes in how DevOps teams manage application deployments. Key among the challenges are observability and monitoring, logging, routing and, of course, security. As a way to keep this all under control, service meshes are increasingly seen not only as extremely useful extra layers to have, but as a necessity that production pipelines, deployments and operations running on microservices and Kubernetes live and die by. This was the main theme of a The New Stack Makers podcast hosted by Alex Williams, founder and editor-in-chief of The New Stack, and co-hosted by Sriram Subramanian, founder and principal at CloudDon. Williams and Subramanian were joined by Mirko Novakovic, CEO and co-founder, and Michele Mancioppi, senior technical product manager, of Instana, which offers application performance management (APM) solutions for microservices.
It is now widely assumed that making the shift to microservices and cloud-native environments will only lead to great things. And yet, as the rush to the cloud builds momentum, many organizations are rudely waking up to more than a few difficulties along the way, such as how the Kubernetes and serverless platforms on offer often remain works in progress. “We are coming to all those communities and basically pitching them to move, right? We tell them, ‘look, monolithic is very complicated — let's move to microservices,' so, they are working very, very hard to move but then they discover that the tooling is not that mature,” said Idit Levine, founder and CEO of Solo.io. “And actually, there are many gaps in the tooling that they had or used it before and now they're losing this functionality. For instance, like logging or like debugging microservices is a big problem and so on.” Levine, whose company offers service mesh solutions, also described how service meshes were designed to “solve exactly this problem” during an episode of The New Stack Analysts hosted by Alex Williams, founder and editor-in-chief of The New Stack, with Janakiram MSV, a The New Stack correspondent and principal of Janakiram & Associates.
Monitoring and observability will also play a major role in this brave new AI/ML landscape of Kubernetes and microservices. This was the main theme of a podcast hosted by Alex Williams, founder and editor-in-chief of The New Stack, with Janakiram MSV, a The New Stack correspondent and principal of Janakiram & Associates, as the co-host. Irshad Raihan, director of product marketing at Red Hat, was the guest, speaking about the role of data and observability in AI/ML, in addition to how DevOps is changing for AI/ML, especially with the increasing availability of direct data and data streaming.
“I've got to admit it's getting better (Better) / A little better all the time (It can't get no worse).” Paul McCartney and John Lennon, respectively, sum up the two ways to look at the future of artificial intelligence (AI) and machine learning: with optimism over limitless potential, or pessimism over increased job loss and income inequality. The New Stack's Libby Clark and Jennifer Riggins sat down (via Zoom) with Martin Ford, The New York Times bestselling author of “Architects of Intelligence: The Truth About AI from the People Building It,” and Ofer Hermoni, chair of the technical advisory council for The Linux Foundation's Deep Learning Foundation projects, to talk about the current state of AI, how it will scale, and its consequences. The last year alone has seen major advancements in deep learning, machine learning and neural networks, the frameworks that allow machine learning algorithms to work together and process complex data inputs. However, as Ford points out in this podcast, we are only at the start of grappling with the ethical implications of AI, including reduced privacy, potential weaponization, and the unconscious bias that is feeding much of the data going into these models.
On today's episode of The New Stack Analysts podcast, TNS founder Alex Williams is joined by Janakiram MSV, a principal analyst with Janakiram & Associates, and Steve Burton, VP of marketing at Harness.io, to discuss not only the effects containers and Kubernetes have had on realizing our DevOps dreams, but also how machine learning is taking it to the next level with the evolution of AIOps. "In the last five years, DevOps has actually matured. So, we started with VMs, and DevOps was all about provisioning and configuration management and then, eventually CI/CD came in and Jenkins became the front and center of build management and release management, but that entire game was taken to the next level when containers became mainstream," said Janakiram. "We have evolved. Basically the current phase is driven predominantly by container orchestration managers like Kubernetes that make it extremely easy to spin up a staging environment or a test environment. And then we have Docker images as the unit of deployment. That fundamentally changes the game."
This week on The New Stack Analysts podcast, we take a closer look at the appeal of using virtual machines in Kubernetes environments. The discussion was sparked by a popular blog post penned last month by Pivotal Principal Technologist Paul Czarkowski. The problem with basic Docker-style containers is that they do not offer sufficient security in multitenant environments, where multiple deployments intermingle on the same set of Kubernetes-controlled servers. So we spoke with Czarkowski to learn more about his thinking. Linux containers all rely on a kernel shared with the host, with isolation provided by the kernel through namespaces. The Kubernetes API, however, is not secured, and most K8s components are not aware of tenants. This forces service providers to provision Kubernetes workloads for different clients as separate clusters, forgoing the full savings that Kubernetes could provide by pooling workloads on the same cluster, Czarkowski argued.
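The per-tenant isolation Czarkowski contrasts with separate clusters is often approximated today with namespaces and quotas, the "soft" multitenancy that his post argues falls short of VM-level isolation. A minimal sketch of that approach (the tenant name and limits here are hypothetical, not from the post):

```yaml
# Hypothetical soft-multitenancy setup: each tenant gets its own
# namespace with a resource ceiling, instead of a dedicated cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    pods: "20"
```

Namespaces scope names and policies and quotas cap consumption, but because every pod still shares the host kernel and the cluster-wide control plane, this is exactly the boundary that VM-backed approaches aim to harden.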
We learned a lot about our readers upon completion of our readers' survey at the end of last year. According to those who responded to the SurveyMonkey questionnaire, working mostly in development and/or DevOps or operations, the trends and topics you are especially interested in include artificial intelligence (AI) and machine learning (ML) on Kubernetes or serverless in cloud native environments. DevOps, as well as security, of course also play a big role as data is processed, managed and stored in new and exciting ways. It was with these topics in mind that Alex Williams, founder and editor-in-chief of The New Stack, along with Joab Jackson, TNS managing editor, hosted the last TNS podcast of 2018. The guests were Dillon Erb, CEO of Paperspace, which offers solutions for ML and AI deployments on the cloud, and Chenxi Wang, managing director of venture capital firm Rain Capital, which has an emphasis on next-generation security solutions.
Functions as a service (FaaS) and especially serverless are major buzzwords today, but beneath the hype they offer tremendous resource savings and scaling opportunities. As organizations shift away from monolithic platforms and rely on FaaS to scale, for example, to cloud native environments, the concepts and promise on offer can also make it easy to forget what is involved in making the jump on a hands-on, practical level. In other words, great things await your organization as it makes the transition, but getting there will require a lot of work, for what is usually a huge payoff, as FaaS and cloud providers assume much of the heavy lifting for server management and other infrastructure-related tasks. During a panel discussion hosted by Alex Williams, founder and editor-in-chief, and Joab Jackson, managing editor, of The New Stack at KubeCon + CloudNativeCon North America 2018, a panel of FaaS and serverless experts discussed their down-in-the-trenches experiences and ideas about what implementing FaaS and relying on cloud providers is really like. The panel members included: - Ara (Araceli) Pulido, Kubernetes engineering manager, Bitnami; - Chad Arimura, vice president, serverless advocacy, Oracle, and former CEO and co-founder of Iron.io; - Christopher Woods, research software engineer, University of Bristol; - Tom Petrocelli, analyst, Amalgam Insights. Watch on YouTube: https://youtu.be/UPf8sCKNb4E
The advent of service meshes can be traced back to Linkerd, a Cloud Native Computing Foundation (CNCF) project. Now, as Linkerd's adoption curve continues to accelerate, a number of other options have emerged that allow for the management and scaling of an often vast network of microservices and the applications within them. Istio, of course, is among the leading alternatives. The state of Istio and service meshes was the main topic of a panel discussion for this podcast, hosted by Alex Williams, founder and editor-in-chief, and Joab Jackson, managing editor, of The New Stack at KubeCon + CloudNativeCon North America 2018. The attendees, who were also treated to a pancake breakfast during the event, were able to put questions about service meshes and Istio to the panel of subject matter experts, consisting of: - Jason McGee, IBM fellow, vice president and CTO, IBM Cloud Platform; - Ken Owens, vice president, digital native architecture, Mastercard; - Jennifer Lin, director of product management, Google Cloud; - Simon Richard, analyst, Gartner; - Pere Monclus, vice president and CTO, network and security BU, VMware. Watch on YouTube: https://www.youtube.com/watch?v=Wx7iGgwU9DY
It would have been difficult to predict the magnitude of open source's role in today's platforms, and the explosion of choice its massive adoption has brought to today's computing world. On the industry side, IBM's purchase of Linux giant Red Hat this year for an astounding $34 billion came as an even bigger surprise. The state of open source in 2018, and especially IBM's Red Hat purchase, were discussed during a podcast with Rachel Stephens, an analyst with RedMonk, and Michael Coté, director, marketing, at Pivotal Software, hosted by Libby Clark, editorial director, and Alex Williams, founder and editor-in-chief, of The New Stack. Indeed, 2018 is even being touted as the “year of open source” in many circles, Stephens said. “The mega acquisitions just tend to really validate open source as the method of building in the future and as a viable approach for building your stack. And I think, at the same time, we contrast that with some kind of clouds on the horizon in terms of the growing tension between an ability to run an open source business in the face of cloud providers.”
Red Hat's acquisition by IBM has been the biggest story of the year, dwarfing Microsoft's acquisition of GitHub. The deal has been notable for many reasons, one of them being that it is the third-largest IT acquisition to date, after the Broadcom and Dell-EMC deals. The New Stack founder and editor-in-chief Alex Williams sat down with Lauren Cooney, founder and CEO, Spark Labs; Tyler Jewell, CEO, WSO2; and Chris Aniszczyk, CNCF CTO/COO, to discuss the repercussions of this acquisition. The core points discussed included the impact on the market, the impact on Red Hat's open source contributions, the impact on the culture within Red Hat, and the possibility of the two companies' product teams clashing over the same clients. When companies bring two different cultures together, things can go wrong.
Successful cloud native deployments largely hinge on DevOps teams' ability to break down silos and other inter-organizational barriers by directly engaging all stakeholders, including business teams, developers, operations and other otherwise separate groups, throughout the entire process. But aside from that general description, DevOps can mean a lot of different things to many different people. In some extreme cases, DevOps' role in cloud native development and operations might just represent a job ticket, such as when a developer discovers a security vulnerability in an application's code in a cloud native deployment, merely delegates fixing the security hole to the on-staff security team, and considers their own responsibility to end there. What cloud native DevOps really means, as well as its future, was a main theme of this podcast, hosted by The New Stack editor-in-chief Alex Williams at The New Stack pancake breakfast held during Cloud Foundry Summit Europe 2018. Devin Davis, vice president, marketing, for the Cloud Foundry Foundation, served as the co-host. The panel consisted of the following guests: Abby Kearns, executive director, the Cloud Foundry Foundation; Chisara Nwabara, technical program manager, Pivotal Software; Dieu Cao, director of product management, Pivotal Software; Frederic Lardinois, analyst and journalist, TechCrunch; and Julian Friedman, the Cloud Foundry project lead, IBM. Watch on YouTube: https://youtu.be/QXL0urTSN88
The divide continues to grow between people developing serverless technologies and those in DevOps roles building, deploying and managing Kubernetes. "Questions about serverless approaches and Kubernetes don't really come up with developers," said Timirah James, a developer advocate with Cloudinary, on today's episode of The New Stack Analysts podcast. James joined TNS founder Alex Williams and analyst and co-host Klint Finley, a contributor at WIRED, to discuss how organizations and developers both can help bridge this gap. Detailing her experience at Notre Dame University in Belmont, California, James noted that, "If you're pursuing education in this field and trying to go to college for it, if you're trying to go to college for anything, you want to make sure you're attending an institution that has a network, community, and foundation around what you're studying. Especially something as niche as Computer Science."
Anne Currie, chief strategist at Container Solutions, has been thinking about some of the deeper questions the software industry poses. These questions aren't just about interconnected systems and the repercussions of everyone having access to the Internet. Rather, they're about the moral and ethical obligations faced by software developers. Are developers responsible for the things done with their software? In a recent Stack Overflow survey, most developers said "No." "My job is to go out and talk to a load of people about what they're doing. I go out and talk to a lot of industry, enterprise folks, building things, finding out what's happening. What's become clear is that over the past couple of years we have enormously increased the speed with which we can get products to market and ideas to market by three orders of magnitude. It's actually quite astonishing. When people start to use the new DevOps technologies and cloud all together, and CI/CD, and everything that's coming through at the moment in DevOps, their aim is to get ideas from basically developers typing into a keyboard out to customers incredibly fast. It's more like fifteen minutes as opposed to six months. Three years ago it was six months. Now it's fifteen minutes if you're really advanced with using DevOps technology, which, eventually, all of us will be," said Currie.
Blockchain has finally gotten over the Wall Street hump. Now that Bitcoin and Ethereum are essentially old news, the actual technology behind these commodities is beginning to trickle into real-world enterprise applications. Blockchain, it seems, has many use cases out there in the business world, and with the help of the Linux Foundation and IBM, enterprises can now take advantage of the open source Hyperledger implementation of blockchain technology. On today's episode of The New Stack Analysts, recorded live at OSCON 2018, Klint Finley, a reporter with Wired and the author of the Wired Guide to Blockchain, revisited the real and hypothetical use cases from the piece he wrote a few months ago. Yet assessing the technology remains difficult: "Blockchain feels like the thing that is the most hard to parse the value out of the hype. With cloud computing it was clearly an enormously hyped concept, but there were use cases and benefits. With big data it was coming out of actual real use cases at places like Google and Yahoo! With the Internet of Things, that's probably the least well established of those at this point, but there are still established use cases for that. With Blockchain it feels really preliminary," said Finley.
Serverless computing and functions as a service are both growing in appeal for enterprises, but there are still many questions about how they fit into a standard business environment. For a start, how do you manage your dependencies in a serverless environment? How are applications themselves tracked and managed over time? And how do the various serverless and functions-as-a-service offerings handle real-world enterprise use cases? To find the answers to these questions and more, we sat down with Mike Roberts, partner at Symphonia.io, and Austen Collins, founder and CEO of Serverless, Inc. Together, they've got front-row seats to how business application developers and administrators are utilizing serverless technologies today. One of the big topics of discussion, however, still focuses on where these functions will live: in the cloud, or in Kubernetes?
Serverless compute sounds like heaven, but without the right mindset it can be hell. Between the constraints placed on serverless environments by cloud providers, the potential for proliferation of applications and variations of versions, and the general mind shift required to build for serverless, the shift to cloud functions isn't something that just happens overnight. "The interesting thing about serverless platforms is they are a constrained execution environment, which is a good thing. It means that the cloud vendor can provide capabilities for customers at a fraction of the cost of what they were if they were fully customizable... The problem is, as these platforms are still maturing, they are necessarily constrained in what kind of execution they will allow. For instance, most of them don't let you run for more than five or ten minutes... The challenge is, what if you can't run in that uniform environment? This is where containers come into play," said Donna Malayeri, Product Manager at Pulumi. "OK, we're going to write some functions, but how do we deal with our data? We have 14 petabytes of data and we can't just whimsically move it somewhere. So, introducing more environment control from that perspective is kind of a big deal to me. I keep running into these scenarios, and a lot of the times it's still a big question mark. What are we going to do? We started to build on this new serverless architecture, but we still have huge data problems," said Adron Hall, of Thrashing Code.