Podcasts about Containerization

Intermodal freight transport system

  • 126 podcasts
  • 164 episodes
  • 41m average duration
  • 1 episode every other week
  • Latest episode: May 28, 2025
Popularity trend for Containerization, 2017–2024


Latest podcast episodes about Containerization

Cloud Unplugged
Design Meets AI: OpenAI and Jony Ive, VEO AI film creation, and how to predict the weather with Microsoft

May 28, 2025 · 39:50


This week, we dive into OpenAI's $6.5B acquisition of Jony Ive's 'io' and what it means for the future of AI-native devices. We explore Google's VEO 3 and the deepfake dilemmas it raises, along with Microsoft's Aurora AI and its ability to predict the weather. Plus, Google's new try-on AI lets you see how clothes fit without leaving your house, and in a more random story, it turns out some plants can hear bees to protect their nectar. Whether you're deep in tech, cloud services, AI innovation, or market dynamics, this episode delivers sharp analysis, insightful predictions, and essential context to stay ahead in a rapidly evolving technological landscape.
Hosts: https://www.linkedin.com/in/jonathanshanks/ · https://www.linkedin.com/in/lewismarshall/

Cloud Unplugged
AI Infrastructure Boom: CoreWeave's IPO, AWS Transform, and Quantum Computing's Next Leap

May 22, 2025 · 44:18


In this episode, we dissect industry-shaping stories: CoreWeave's $35 billion IPO, AWS Transform and AI for legacy app modernisation, the intersection of quantum computing and AI, and how much Nvidia is investing in the market - are they becoming the new Microsoft, Apple or Google? Whether you're deep in tech, cloud services, AI innovation, or market dynamics, this episode delivers sharp analysis, insightful predictions, and essential context to stay ahead in a rapidly evolving technological landscape.
Hosts: https://www.linkedin.com/in/jonathanshanks/ · https://www.linkedin.com/in/lewismarshall/

Cloud Unplugged
Big Retail Cyber Attack: Amazon's AI Offensive & the Google AI Opt‑Out Illusion

May 7, 2025 · 33:16


In this 30‑minute episode, Jon and Lewis unpick the coordinated ransomware wave that struck Britain's high‑street giants. They trace the attack chain that emptied Co‑op shelves, froze M&S online orders and attempted, but failed, to extort Harrods. Lewis takes a look at Amazon's latest generative‑AI arsenal: Amazon Q's new developer‑first agents, the multimodal Nova Premier family running on Bedrock, and AWS's landmark decision to let any SaaS vendor list in Marketplace regardless of where the software runs, a direct play to become the app store for the whole cloud economy. Finally, they ask whether enterprises can really keep their data out of Google's AI engines.
Hosts: https://www.linkedin.com/in/jonathanshanks/ · https://www.linkedin.com/in/lewismarshall/

Cloud Unplugged
CTO/Co-founder Thomas Boltze: Why Your Engineering Team is Slow - and How to Fix It | Episode 40

Apr 16, 2025 · 70:13


In this episode of Cloud Unplugged, Jon talks with Thomas Boltze—CTO at Santander's PagoNxt, former CTO of Funding Circle, Agile Coach, and cloud/fintech leader with 15+ years of experience—about fixing broken tech teams. They cover rebuilding systems from scratch, cutting through technical debt, and why culture trumps code every time. Lessons from fintech, startups and hard-won engineering battles.
Guest LinkedIn: https://www.linkedin.com/in/thomasboltze/
Follow us on social media @cloudunplugged: https://www.tiktok.com/@cloudunplugged · https://twitter.com/cloud_unplugged · https://www.linkedin.com/company/cloud-unplugged-podcast/
Listen on all platforms: https://cloud-unplugged.transistor.fm/
Listen on Spotify: https://bit.ly/3y2djXa
Listen on Apple Podcasts: https://bit.ly/3mosSFT
Jon & Jay's start-up: https://www.appvia.io/
Hosts: https://www.linkedin.com/in/jonathanshanks/ · https://www.linkedin.com/in/jaykeshur/ · https://www.linkedin.com/in/lewismarshall/
Podcast sponsor inquiries, topic requests: Hello@cloudunplugged.io
Welcome to The Cloud Unplugged Podcast, where hosts Jon Shanks (CEO of a Cloud Platform Engineering and Developer Platform Company), Lewis Marshall (Developer Evangelist, AI enthusiast, and science devotee), and occasionally Jay Keshur (COO, championing business modernisation and transformation) explore the latest in cloud technology. Each week, they investigate developments in AI, data, emerging cloud platforms, and cloud growth, occasionally highlighting the geo-political and global commercial pressures shaping the industry. Drawing on their extensive experience helping customers adopt, scale, and innovate in the cloud (and managing their own Internal Developer Product), Jon, Lewis, and Jay share insights and welcome industry experts to discuss new trends, tackle business challenges, and offer practical solutions.

Autonomous IT
Hands-On IT – The Titans of Server History: People, Rivalries, and the Machines They Created, E16

Mar 20, 2025 · 64:27


This episode dives into the fascinating evolution of server technology, from room-sized mainframes to today's AI-powered cloud computing. It explores the innovations, rivalries, and key players—IBM, Microsoft, Unix pioneers, and the rise of Linux—that shaped the industry. The discussion covers the transition from minicomputers to personal computing, the impact of open-source software, and the shift toward containerization, hybrid cloud, and AI-driven infrastructure. With a focus on the forces driving technological progress, this episode unpacks the past, present, and future of server technology and its role in digital transformation.

Crazy Wisdom
Episode #440: AI Agents, Code Wizards, and What Could Possibly Go Wrong?

Mar 3, 2025 · 58:25


Stewart Alsop sat down with Nick Ludwig, the creator of Kibitz and lead developer at Hyperware, to talk about the evolution of AI-powered coding, the rise of agentic software development, and the security challenges that come with giving AI more autonomy. They explored the power of Claude MCP servers, the potential for AI to manage entire development workflows, and what it means to have swarms of digital agents handling tasks across business and personal life. If you're curious to dive deeper, check out Nick's work on Kibitz and Hyperware, and follow him on Twitter at @Nick1udwig (with a '1' instead of an 'L'). Check out this GPT we trained on the conversation!
Timestamps:
00:00 Introduction to the Crazy Wisdom Podcast
00:52 Nick Ludwig's Journey with Cloud MCP Servers
04:17 The Evolution of Coding with AI
07:23 Challenges and Solutions in AI-Assisted Coding
17:53 Security Implications of AI Agents
27:34 Containerization for Safe Agent Operations
29:07 Cold Wallets and Agent Security
29:55 Agents and Financial Transactions
33:29 Integrating APIs with Agents
36:43 Discovering and Using Libraries
43:19 Understanding MCP Servers
47:41 Future of Agents in Business and Personal Life
54:29 Educational and Medical Revolutions with AI
56:36 Conclusion and Contact Information
Key Insights:
  • AI is shifting software development from writing code to managing intelligent agents. Nick Ludwig emphasized how modern AI tools, particularly MCP servers, are enabling developers to transition from manually coding to overseeing AI-driven development. The ultimate goal is for AI to handle the bulk of programming while developers focus on high-level problem-solving and system design.
  • Agentic software is the next frontier of automation. The discussion highlighted how AI agents, especially those using MCP servers, are moving beyond simple chatbots to autonomous digital workers capable of executing complex, multi-step tasks. These agents will soon be able to operate independently for extended periods, executing high-level commands rather than requiring constant human oversight.
  • Security remains a major challenge with AI-driven tools. One of the biggest risks with AI-powered automation is security, particularly regarding prompt injection attacks and unintended system modifications. Ludwig pointed out that giving AI access to command-line functions, file systems, and financial accounts requires careful sandboxing and permissions to prevent catastrophic errors or exploitation.
  • Containerization will be critical for safe AI execution. Ludwig proposed that solutions like Docker and other containerization technologies can provide a secure environment where AI agents can operate freely without endangering core systems. By restricting AI's ability to modify critical files and limiting its spending permissions, businesses can safely integrate autonomous agents into their workflows.
  • The future of AI is deeply tied to education. AI has the potential to revolutionize learning by providing real-time, personalized tutoring. Ludwig noted that LLMs have already changed how people learn to code, making complex programming more accessible to beginners. This concept can be extended to broader education, where AI-powered tutors could replace traditional classroom models with highly adaptive learning experiences.
  • AI-driven businesses will operate at unprecedented efficiency. The conversation explored how companies will soon leverage AI agents to handle research, automate customer service, generate content, and even manage finances. Businesses that successfully integrate AI-powered workflows will have a significant competitive edge in speed, cost reduction, and adaptability.
  • We are on the verge of an "intelligence explosion" in both AI and human capabilities. While some fear AI advancements will outpace human control, Ludwig argued that AI will also dramatically enhance human intelligence. By offloading cognitive burdens, AI will allow people to focus on creativity, strategy, and high-level decision-making, potentially leading to an era of rapid innovation and problem-solving across all industries.
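
For readers who want to picture the "containerization will be critical for safe AI execution" insight above, the following Kubernetes Pod manifest is a minimal, hedged sketch of that kind of sandbox: read-only filesystem, dropped privileges, no cluster credentials, and capped resources. It is illustrative only and not from the episode; the name, image, and limits are invented.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: agent-sandbox                  # hypothetical name
spec:
  automountServiceAccountToken: false  # keep cluster credentials away from the agent
  containers:
    - name: agent
      image: registry.example.com/agent-runtime:latest  # placeholder image
      resources:
        limits:
          cpu: "1"                     # cap the CPU and memory the agent can consume
          memory: 1Gi
      securityContext:
        runAsNonRoot: true
        runAsUser: 10001
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true   # the agent cannot modify the image's files
        capabilities:
          drop: ["ALL"]
      volumeMounts:
        - name: scratch
          mountPath: /tmp              # the only writable path
  volumes:
    - name: scratch
      emptyDir: {}
```

At the plain Docker level, flags such as --read-only, --cap-drop ALL, and --memory express the same restrictions.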

Manufacturing Hub
Ep. 189 - Building Careers & SCADA Solutions with Kent Melville, Director of Sales Engineering at Inductive Automation

Jan 23, 2025 · 80:05


Welcome back to Manufacturing Hub. In this episode, we sit down with Kent Melville, Director of Sales Engineering at Inductive Automation, to explore career growth, sales engineering, and the evolving landscape of industrial automation. Kent shares his fascinating journey, starting as a computer science graduate with a background in web development, ERP systems, and industrial automation before making his way into Inductive Automation. He takes us through the challenges and opportunities he encountered as he transitioned from technical roles into sales engineering, growing from one of the first hires in his division to leading a 30-plus-person team today.
What You'll Learn in This Episode
Kent explains the role of a sales engineer and how it differs from traditional technical sales. He breaks down how sales engineers bridge technical expertise and customer engagement, ensuring that solutions meet real-world manufacturing challenges. He also discusses the growth of Inductive Automation, the company culture that has fueled his success, and how the Ignition platform has shaped the industrial automation industry. Another key topic in this discussion is the Ignition Community Conference (ICC), which has become a central event for the Ignition ecosystem. Kent shares how the Build-a-Thon, a live competition where integrators showcase their automation skills, became a major attraction and why it highlights the true power of rapid development with Ignition.
Insights on Future Industry Trends
Kent provides his perspective on where the industry is heading, especially in terms of IT-OT convergence. He discusses how containerization and DevOps principles are making their way into manufacturing and why version control and structured deployments will become the norm. He also shares insights on how Ignition's flexibility enables organizations to modernize their operations and prepare for the future.
Career Lessons and Key Takeaways
This episode is filled with valuable career advice for engineers and professionals looking to move into sales or leadership roles. Kent emphasizes the importance of working for a company that aligns with your goals rather than constantly chasing small pay increases. He discusses the need for clear communication, initiative, and the ability to adapt to different work styles. For those considering a transition from technical roles to sales engineering, Kent breaks down the key skills required, the training process, and how Inductive Automation prepares its team members for success. He also highlights the importance of building a reputation within an organization, taking on new challenges, and creating opportunities through proactive engagement.
Behind-the-Scenes Stories and Fun Moments
Beyond the technical and career discussions, Kent shares some of the most entertaining moments from his time at Inductive Automation. He talks about how an impromptu on-stage rap performance during an Ignition product launch unexpectedly boosted his visibility within the company. He also gives a behind-the-scenes look at how Inductive Automation uses its own software for internal processes, from CRM and training to office automation.
Who Should Watch This Episode?
This conversation is ideal for industrial professionals looking to understand the role of sales engineering, engineers considering a move into customer-facing roles, and manufacturing leaders exploring Ignition's capabilities. It also offers practical career insights for anyone looking to grow within their organization and stand out in the industry. If you have any questions or thoughts, feel free to share them in the comments. Make sure to like, subscribe, and follow Manufacturing Hub for weekly conversations on manufacturing, automation, and technology.
Connect with Us: Vlad Romanov · Dave Griffith · Manufacturing Hub · SolisPLC · Joltek
References
1. Inductive Automation & Ignition SCADA
  • Inductive Automation - Official Website: https://inductiveautomation.com/
  • Ignition SCADA - Overview & Features: https://inductiveautomation.com/scada/
  • Download Ignition (Free Trial & Maker Edition for Personal Use): https://inductiveautomation.com/downloads/
  • Ignition Exchange (Free Industrial Automation Templates & Modules): https://inductiveautomation.com/exchange/
  • Ignition Community Conference (ICC) – Annual Conference: https://inductiveautomation.com/resources/icc/
  • Inductive Automation's YouTube Channel (Webinars, Case Studies, Training): https://www.youtube.com/@InductiveAutomation
2. Sales Engineering & Career Development
  • The Sales Engineer Handbook: A Guide to Sales Engineering & Technical Sales (Patrick Pissang): https://www.amazon.com/Sales-Engineer-Handbook-Technical-Engineering/dp/3982171402
  • Mastering Technical Sales: The Sales Engineer's Handbook (John Care, Aron Bohlig): https://www.amazon.com/Mastering-Technical-Sales-Engineers-Handbook/dp/1608324262
  • Harvard Business Review - What Makes a Great Sales Engineer?: https://hbr.org/2019/04/what-makes-a-great-sales-engineer
  • LinkedIn Sales Engineering Community – Discussions, Networking, and Career Advice: https://www.linkedin.com/groups/8948750/
3. IT-OT Convergence & Industrial Automation Trends
  • ISA (International Society of Automation) – IT-OT Convergence Resources: https://www.isa.org/topics/it-ot-convergence
  • Industrial DevOps and Containerization in Manufacturing (Inductive Automation Blog): https://inductiveautomation.com/resources/article/modernizing-scada-with-devops/
  • Understanding Unified Namespace (UNS) and MQTT for Industrial Automation: https://cirrus-link.com/what-is-unified-namespace/
  • ISA-95 Standard – Best Practices for IT and OT Integration: https://www.isa.org/standards-and-publications/isa-standards/isa-95
4. Home Automation & Ignition for Personal Use
  • Ignition Maker Edition (Free Version for Personal & Home Automation Projects): https://inductiveautomation.com/ignition/maker-edition/
  • Home Automation with Ignition - Community Projects & Discussions: https://forum.inductiveautomation.com/tags/home-automation
  • Travis Cox on Using Ignition for Smart Home Automation (Podcast): https://www.theautomatorpodcast.com/episodes/travis-cox-home-automation-ignition
5. Kent Melville & Inductive Automation Socials
  • Kent Melville on LinkedIn: https://www.linkedin.com/in/kentmelville/
  • Inductive Automation on LinkedIn

Software Engineering Institute (SEI) Podcast Series
Securing Docker Containers: Techniques, Challenges, and Tools

Dec 16, 2024 · 39:09


Containerization allows developers to run individual software applications in an isolated, controlled, repeatable way. With the increasing prevalence of cloud computing environments, containers are providing more and more of their underlying architecture. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Sasank Venkata Vishnubhatla and Maxwell Trdina, both engineers in the SEI CERT Division, sit down with Tim Chick, technical manager of the Applied Systems Group, to explore issues surrounding containerization, including recent vulnerabilities. 
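
As a rough companion to the episode's theme, here is one widely used Dockerfile hardening pattern: start from a small base image and drop root before the container starts. This is a generic, hedged sketch rather than guidance from the SEI team; the project layout and file names are assumptions.

```dockerfile
# Generic hardening sketch (not from the SEI episode); paths and names are invented.
FROM python:3.12-slim                      # small, regularly patched base image

# Create an unprivileged user instead of running as root
RUN useradd --create-home --uid 10001 appuser
WORKDIR /home/appuser/app

# Install dependencies and copy the application as the unprivileged user's files
COPY --chown=appuser:appuser requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY --chown=appuser:appuser . .

USER appuser                               # drop root before the container runs
EXPOSE 8080
CMD ["python", "app.py"]                   # placeholder entrypoint
```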

The Six Five with Patrick Moorhead and Daniel Newman
10 Years of Amazon ECS: Powering a Decade of Containerized Innovation - Six Five On The Road at AWS re:Invent

Dec 13, 2024 · 16:59


What has a decade of containerized innovation meant for businesses? Host Keith Townsend is joined by Amazon Web Services' Nick Coult, Director of Product and Science, Serverless Compute, on this episode of Six Five On The Road at AWS re:Invent for a conversation on the 10th anniversary of Amazon ECS and its impact on containerized innovation. Their discussion covers:
  • The impact of Gen AI on customer buying decisions across different industries
  • The uniqueness of the GenAI competency within AWS and its benefits for customers
  • A decade of evolution, milestones, and why customers prefer Amazon ECS
  • Key Amazon ECS innovations announced at re:Invent to meet customer needs
  • Future visions for Amazon ECS over the next decade

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Integrating Generative AI into Existing IT Infrastructure: A Comprehensive Guide

Nov 26, 2024 · 7:15


Integrating GenAI into Existing IT Infrastructure: Integrating the GenAI tech stack into an organization's existing IT infrastructure requires strategic adaptation to leverage existing processes and technologies without a complete overhaul. Here are some ways to include GenAI in your current systems:
1. Incremental Adoption
2. Integration with Existing Data Sources
3. Leveraging APIs and Middleware
4. Using Existing Monitoring Tools
5. Cloud Hybrid Solutions
6. Containerization and Orchestration (see the sketch after this list)
7. Training and Upskilling Staff
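
As a minimal sketch of what item 6 (containerization and orchestration) can look like in practice, the Kubernetes Deployment below packages a GenAI inference component as a container with explicit resource limits. Everything here is assumed for illustration: the image name, port, model variable, and resource figures are placeholders, and the GPU line presumes a device plugin is installed in the cluster.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: genai-inference                # hypothetical service name
spec:
  replicas: 2                          # run two copies for availability
  selector:
    matchLabels:
      app: genai-inference
  template:
    metadata:
      labels:
        app: genai-inference
    spec:
      containers:
        - name: model-server
          image: registry.example.com/genai/model-server:1.0   # placeholder image
          ports:
            - containerPort: 8000
          env:
            - name: MODEL_NAME         # illustrative configuration only
              value: "example-model"
          resources:
            requests:
              cpu: "2"
              memory: 8Gi
            limits:
              memory: 8Gi
              nvidia.com/gpu: 1        # assumes a GPU device plugin is installed
```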

The Productive C# Podcast
To embrace DevOps, how important is it to be skilled in containerization and CI/CD tools?

Nov 25, 2024 · 2:48


How much do you need to know as a C# developer about Docker and CI/CD tools to embrace DevOps? Join my free Modern C# course.
About the host: Technical Lead @ Redgate Software | Former six-time MVP on C#
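
For a sense of what that baseline Docker and CI/CD knowledge covers, here is a hedged sketch of a minimal pipeline (GitHub Actions syntax) that tests a C# project and packages it as a container image. It is not from the episode; the solution name, registry, and credential handling are assumptions, and the build presumes a Dockerfile exists in the repository.

```yaml
name: ci
on: [push]

jobs:
  build:
    runs-on: ubuntu-latest             # GitHub-hosted runner with the .NET SDK preinstalled
    steps:
      - uses: actions/checkout@v4
      - name: Run unit tests
        run: dotnet test MyApi.sln --configuration Release   # hypothetical solution name
      - name: Build container image
        run: docker build -t registry.example.com/myapi:${{ github.sha }} .
      - name: Push image               # assumes registry credentials are configured separately
        run: docker push registry.example.com/myapi:${{ github.sha }}
```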

Cloud Unplugged
AI Monopoly Madness: Microsoft's Moves and the Future of ChatGPT! | Episode 39

Aug 7, 2024 · 58:23


In this episode of Cloud Unplugged, Lewis and Jon explore the latest in AI and cloud computing, discussing Microsoft's board changes and the evolving AI landscape. They also examine the practical applications of ChatGPT and its impact.
Follow us on social media @cloudunplugged: https://www.tiktok.com/@UCkCxcw9tJHd_sPtDveunGsQ · https://twitter.com/cloud_unplugged
Listen on Spotify: https://bit.ly/3y2djXa
Listen on Apple Podcasts: https://bit.ly/3mosSFT
Jon & Jay's start-up: https://www.appvia.io/
Hosts: https://www.linkedin.com/in/jonathanshanks/ · https://www.linkedin.com/in/jaykeshur/
Podcast sponsor inquiries, topic requests: Hello@cloudunplugged.io
Welcome to The Cloud Unplugged Podcast, hosted by Jon Shanks (CEO) and Jay Keshur (COO). The two co-founded software company Appvia and have backgrounds in engineering and platform development, with years of experience using Kubernetes. Here they take a light-hearted look at cloud engineering under the lens of platform teams, discussing how developers, platform engineers, and businesses can leverage cloud-native software development practices successfully.

Cloud Unplugged
CIO of Sportradar, Ian Poland: How to manage your cloud spend well, career journey to CIO | Episode 38

Jun 26, 2024 · 69:46


Follow us on social media @cloudunplugged: https://www.tiktok.com/@UCkCxcw9tJHd_sPtDveunGsQ · https://twitter.com/cloud_unplugged
Listen on Spotify: https://bit.ly/3y2djXa
Listen on Apple Podcasts: https://bit.ly/3mosSFT
Jon & Jay's start-up: https://www.appvia.io/
Hosts: https://www.linkedin.com/in/jonathanshanks/ · https://www.linkedin.com/in/jaykeshur/
Podcast sponsor inquiries, topic requests: Hello@cloudunplugged.io
Welcome to The Cloud Unplugged Podcast, hosted by Jon Shanks (CEO) and Jay Keshur (COO). The two co-founded software company Appvia and have backgrounds in engineering and platform development, with years of experience using Kubernetes. Here they take a light-hearted look at cloud engineering under the lens of platform teams, discussing how developers, platform engineers, and businesses can leverage cloud-native software development practices successfully.

Oracle University Podcast
Basics of Kubernetes

Jun 18, 2024 · 17:26


In this episode, Lois Houston and Nikita Abraham, along with senior OCI instructor Mahendra Mehra, dive into the fundamentals of Kubernetes. They talk about how Kubernetes tackles challenges in deploying and managing microservices, and enhances software performance, flexibility, and availability.   OCI Container Engine for Kubernetes Specialist: https://mylearn.oracle.com/ou/course/oci-container-engine-for-kubernetes-specialist/134971/210836   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X (formerly Twitter): https://twitter.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode.   --------------------------------------------------------   Episode Transcript:   00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Lois: Hello and welcome to another episode of the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor. Nikita: Hi everyone! We've spent the last two episodes getting familiar with containerization and the Oracle Cloud Infrastructure Registry. Today, it's going to be all about Kubernetes. So if you've heard of Kubernetes but you don't know what it is, or you've been playing with Docker and containers and want to know how to take it to the next level, you'll want to stay with us. Lois: That's right, Niki. We'll be chatting with Mahendra Mehra, a senior OCI instructor with Oracle University, about the challenges in containerized applications within a complex business setup and how Kubernetes facilitates container orchestration and improves its effectiveness, resulting in better software performance, flexibility, and availability. 01:20 Nikita: Hi Mahendra. To start, can you tell us when you would use Kubernetes?  Mahendra: While deploying and managing microservices in a distributed environment, you may run into issues such as failures or container crashes. Issues such as scheduling containers to specific machines depending upon the configuration. You also might face issues while upgrading or rolling back the applications which you have containerized. Scaling up or scaling down containers across a set of machines can be troublesome.  01:50 Lois: And this is where Kubernetes helps automate the entire process?  Mahendra: Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services that facilitates both declarative configuration and automation.  You can think of a Kubernetes as you would a conductor for an orchestra. Similar to how a conductor would say how many violins are needed, which one play first, and how loud they should play, Kubernetes would say, how many webserver front-end containers or back-end database containers are needed, what they serve, and how many resources are to be dedicated to each one. 02:27 Nikita: That's so cool! So, how does Kubernetes work?  Mahendra: In Kubernetes, there is a master node, and there are multiple worker nodes. Each worker node can handle multiple pods. Pods are just a bunch of containers clustered together as a working unit. If a worker node goes down, Kubernetes starts new pods on the functioning worker node. 
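
The pod Mahendra describes is usually written down as a small YAML manifest. The one below is purely illustrative and not taken from the course material; the name and image are arbitrary.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod          # illustrative name
  labels:
    app: hello
spec:
  containers:
    - name: web
      image: nginx:1.25    # any container image works here
      ports:
        - containerPort: 80
```
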
02:47 Lois: So, the benefits of Kubernetes are… Mahendra: Kubernetes can containerize applications of any scale without any downtime. Kubernetes can self-heal containerized applications, making them resilient to unexpected failures.  Kubernetes can autoscale containerized applications as for the workload and ensure optimal utilization of cloud resources. Kubernetes also greatly simplifies the process of deployment operations. With Kubernetes, however complex an operation is, it could be performed reliably by executing a couple of commands at the most. 03:19 Nikita: That's great. Mahendra, can you tell us a bit about the architecture and main components of Kubernetes? Mahendra: The Kubernetes cluster has two main components. One is the control plane, and one is the data plane. The control plane hosts the components used to manage the Kubernetes cluster. And the data plane basically hosts all the worker nodes that can be virtual machines or physical machines. These worker nodes basically host pods which run one or more containers. The containers running within these pods are making use of Docker images, which are managed within the image registry. In case of OCI, it is the container registry. 03:54 Lois: Mahendra, you mentioned nodes and pods. What are nodes? Mahendra: It is the smallest unit of computing hardware within the Kubernetes. Its work is to encapsulate one or more applications as containers. A node is a worker machine that has a container runtime environment within it. 04:10 Lois: And pods? Mahendra: A pod is a basic object of Kubernetes, and it is in charge of encapsulating containers, storage resources, and network IPs. One pod represents one instance of an application within Kubernetes. And these pods are launched in a Kubernetes cluster, which is composed of nodes. This means that a pod runs on a node but can easily be instantiated on another node. 04:32 Nikita: Can you run multiple containers within a pod? Mahendra: A pod can even contain more than one container if these containers are relatively tightly coupled. Pod is usually meant to run one application container inside of it, but you can run multiple containers inside one pod. Usually, it is only the case if you have one main application container and a helper container or some sidecar containers that has to run inside of that pod. Every pod is assigned a unique private IP address, using which the pods can communicate with one another. Pods are meant to be ephemeral, which means they die easily. And if they do, upon re-creation, they are assigned a new private IP address. In fact, Kubernetes can scale a number of these pods to adapt for the incoming traffic, consequently creating or deleting pods on demand. Kubernetes guarantees the availability of pods and replicas specified, but not the liveliness of each individual pod. This means that other pods that need to communicate with this application or component cannot rely on the underlying individual pod's IP address. 05:35 Lois: So, how does Kubernetes manage traffic to this indecisive number of pods with changing IP addresses? Mahendra: This is where another component of Kubernetes called services comes in as a solution. A service gets allocated a virtual IP address and lives until explicitly destroyed. Requests to the services get redirected to the appropriate pods, thus the services of a stable endpoint used for inter-component or application communication. And the best part here is that the lifecycle of service and the pods are not connected. 
So even if the pod dies, the service and the IP address will stay, so you don't have to change their endpoints anymore. 06:13 Nikita: What types of services can you create with Kubernetes? Mahendra: There are two types of services that you can create. The external service is created to allow external users to connect the containerized applications within the pod. Internal services can also be created that restrict the communication within the cluster. Services can be exposed in different ways by specifying a particular type. 06:33 Nikita: And how do you define these services? Mahendra: There are three types in which you can define services. The first one is the ClusterIP, which is the default service type that exposes services on an internal IP within the cluster. This type makes the service only reachable from within the cluster. You can specify the type of service as NodePort. NodePort basically exposes the service on the same port of each selected node in the cluster using a network address translation and makes the service accessible from the outside of the cluster using the node IP and the NodePort combination. This is basically a superset of ClusterIP. You can also go for a LoadBalancer type, which basically creates an external load balancer in the current cloud. OCI supports LoadBalancer types. It also assigns a fixed external IP to the service. And the LoadBalancer type is a superset of NodePort. 07:25 Lois: There's another component called ingress, right? When do you used that? Mahendra: An ingress is used when we have multiple services on our cluster, and we want the user requests routed to the services based on their pod, and also, if you want to talk to your application with a secure protocol and a domain name. Unlike NodePort or LoadBalancer, ingress is not actually a type of service. Instead, it is an entry point that sits in front of the multiple services within the cluster. It can be defined as a collection of routing rules that govern how external users access services running inside a Kubernetes cluster. Ingress is most useful if you want to expose multiple services under the same IP address, and these services all use the same Layer 7 protocol, typically HTTP. 08:10 Lois: Mahendra, what about deployments in Kubernetes?  Mahendra: A deployment is an object in Kubernetes that lets you manage a set of identical pods. Without a deployment, you will need to create, update, and delete a bunch of pods manually. With the deployment, you declare a single object in a YAML file, and the object is responsible for creating the pods, making sure they stay up-to-date and ensuring there are enough of them running. You can also easily autoscale your applications using a Kubernetes deployment. In a nutshell, the Kubernetes deployment object lets you deploy a replica set of your pods, update the pods and the replica sets. It also allows you to roll back to your previous deployment versions. It helps you scale a deployment. It also lets you pause or continue a deployment. 08:59 Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we're offering both the course and certification for free! So, don't miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That's https://education.oracle.com/genai. 
09:37 Nikita: Welcome back! We were talking about how useful a Kubernetes deployment is in scaling operations. Mahendra, how do pods communicate with each other?  Mahendra: Pods communicate with each other using a service. For example, my application has a database endpoint. Let's say it's a MySQL service that it uses to communicate with the database. But where do you configure this database URL or endpoints? Usually, you would do it in the application properties file or as some kind of an external environment variable. But usually, it's inside the build image of the application. So for example, if the endpoint of the service or the service name, in this case, changes to something else, you would have to adjust the URL in the application. And this will cause you to rebuild the entire application with a new version, and you will have to push it to the repository. You'll then have to pull that new image into your pod and restart the whole thing. For a small change like database URL, this is a bit tedious. So for that purpose, Kubernetes has a component called ConfigMap. ConfigMap is a Kubernetes object that maintains a key value store that can easily be used by other Kubernetes objects, such as pods, deployments, and services. Thus, you can define a ConfigMap composed of all the specific variables for your environment. In Kubernetes, now you just need to connect your pod to the ConfigMap, and the pod will read all the new changes that you have specified within the ConfigMap, which means you don't have to go on to build a new image every time a configuration changes. 11:07 Lois: So then, I'm just wondering, if we have a ConfigMap to manage all the environment variables and URLs, should we be passing our username and password in the same file? Mahendra: The answer is no. Password or other credentials within a ConfigMap in a plain text format would be insecure, even though it's an external configuration. So for this purpose, Kubernetes has another component called secret. Kubernetes secrets are secure objects which store sensitive data, such as passwords, OAuth tokens, and SSH keys with the encryption within your cluster. Using secrets gives you more flexibility in a pod lifecycle definition and control over how sensitive data is used. It reduces the risk of exposing the data to unauthorized users. 11:50 Nikita: So, you're saying that the secret is just like ConfigMap or is there a difference? Mahendra: Secret is just like ConfigMap, but the difference is that it is used to store secret data credentials, for example, database username and passwords, and it's stored in the base64 encoded format. The kubelet service stores this secret into a temporary file system. 12:11 Lois: Mahendra, how does data storage work within Kubernetes? Mahendra: So let's say we have this database pod that our application uses, and it has some data or generates some data. What happens when the database container or the pod gets restarted? Ideally, the data would be gone, and that's problematic and inconvenient, obviously, because you want your database data or log data to be persisted reliably for long term. To achieve this, Kubernetes has a solution called volumes. A Kubernetes volume basically is a directory that contains data accessible to containers in a given pod within the Kubernetes platform. Volumes provide a plug-in mechanism to connect ephemeral containers with persistent data stores elsewhere. The data within a volume will outlast the containers running within the pod. 
Containers can shut down and restart because they are ephemeral units. Data remains saved in the volume even if a container crashes because a container crash is not enough to cut off a pod from a node. 13:10 Nikita: Another main component of Kubernetes is a StatefulSet, right? What can you tell us about it?  Mahendra: Stateful applications are applications that store data and keep tracking it. All databases such as MySQL, Oracle, and PostgreSQL are examples of Stateful applications. In a modern web application, we see stateless applications connecting with Stateful application to serve the user request. For example, a Node.js application is a stateless application that receives new data on each request from the user. This application is then connected with a Stateful application, such as MySQL database, to process the data. MySQL stores the data and keeps updating the database on the user's request.  Now, assume you deployed a MySQL database in the Kubernetes cluster and scaled this to another replica, and a frontend application wants to access the MySQL cluster to read and write data. The read request will be forwarded to both these pods. However, the write request will only be forwarded to the first primary pod. And the data will be synchronized with other pods. You can achieve this by using the StatefulSets. Deleting or scaling down a StatefulSet will not delete the volumes associated with the Stateful applications. This gives you your data safety. If you delete the MySQL pod or if the MySQL pod restarts, you can have access to the data in the same volume.  So overall, a StatefulSet is a good fit for those applications that require unique network identifiers; stable persistent storage; ordered, graceful deployment and scaling; as well as ordered, automatic rolling updates. 14:43 Lois: Before we wrap up, I want to ask you about the features of Kubernetes. I'm sure there are countless, but can you tell us the most important ones? Mahendra: Health checks are used to check the container's readiness and liveness status. Readiness probes are intended to let Kubernetes know if the app is ready to serve the traffic.  Networking plays a significant role in container orchestration to isolate independent containers, connect coupled containers, and provide access to containers from the external clients. Service discovery allows containers to discover other containers and establish connections to them. Load balancing is a dedicated service that knows which replicas are running and provides an endpoint that is exposed to the clients. Logging allows us to oversee the application behavior.  The rolling update allows you to update a deployed containerized application with minimal downtime using different update scenarios. The typical way to update such an application is to provide new images for its containers. Containers, in a production environment, can grow from few to many in no time. Kubernetes makes managing multiple containers an easy task. And lastly, resource usage monitoring-- resources such as CPU and RAM must be monitored within the Kubernetes environment. Kubernetes resource usage looks at the amount of resources that are utilized by a container or port within the Kubernetes environment. It is very important to keep an eye on the resource usage of the pods and containers as more usage translates to more cost. 16:18 Nikita: I think we can wind up our episode with that. Thank you, Mahendra, for joining us today. 
Kubernetes sure can be challenging to work with, but we covered a lot of ground in this episode.  Lois: That's right, Niki! If you want to learn more about the rich features Kubernetes offers, visit mylearn.oracle.com and search for the OCI Container Engine for Kubernetes Specialist course. Remember, all the training is free, so you can dive right in! Join us next week when we'll take a look at the fundamentals of Oracle Cloud Infrastructure Container Engine for Kubernetes. Until then, Lois Houston… Nikita: And Nikita Abraham, signing off! 16:57 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
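
To tie together several of the objects discussed in this episode (a ConfigMap, a Secret, a ClusterIP Service, and a Deployment that consumes them), here is a compact, hedged sketch. All names, images, and values are invented and are not taken from the course.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_URL: "mysql://db.internal:3306/app"   # non-secret settings live here
---
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials
type: Opaque
stringData:                       # stored base64-encoded once applied
  DB_USER: "app"
  DB_PASSWORD: "change-me"
---
apiVersion: v1
kind: Service
metadata:
  name: app                       # stable name in front of the pods below
spec:
  type: ClusterIP                 # default type: reachable only inside the cluster
  selector:
    app: app                      # matches pods carrying this label
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: registry.example.com/app:1.0   # placeholder image
          ports:
            - containerPort: 8080
          envFrom:                # pulls in config and credentials as environment variables
            - configMapRef:
                name: app-config
            - secretRef:
                name: app-credentials
```

Applying the file with kubectl apply -f gives the pods a stable in-cluster endpoint (the Service) regardless of individual pod restarts, which is the point made above about services outliving pods.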

Cloud Unplugged
GitOps: The gaps it has and what can be done about it! | Episode 37

Jun 12, 2024 · 44:29


In this episode of Cloud Unplugged, Mark and Lewis join Jon to discuss the challenges of the GitOps model, what's good about it and where it is lacking!
Follow us on social media @cloudunplugged: https://www.tiktok.com/@UCkCxcw9tJHd_sPtDveunGsQ · https://twitter.com/cloud_unplugged
Listen on Spotify: https://bit.ly/3y2djXa
Listen on Apple Podcasts: https://bit.ly/3mosSFT
Jon & Jay's start-up: https://www.appvia.io/
Hosts: https://www.linkedin.com/in/jonathanshanks/ · https://www.linkedin.com/in/jaykeshur/
Podcast sponsor inquiries, topic requests: Hello@cloudunplugged.io
Welcome to The Cloud Unplugged Podcast, hosted by Jon Shanks (CEO) and Jay Keshur (COO). The two co-founded software company Appvia and have backgrounds in engineering and platform development, with years of experience using Kubernetes. Here they take a light-hearted look at cloud engineering under the lens of platform teams, discussing how developers, platform engineers, and businesses can leverage cloud-native software development practices successfully.

Engines of Our Ingenuity
Engines of Our Ingenuity 2094: The Box

Jun 6, 2024 · 3:46


Episode: 2094 Thinking inside the box: the invention of containerization.  Today, guest scientist Andrew Boyd discusses The Box.

Oracle University Podcast
What is Containerization?

Jun 4, 2024 · 14:53


Welcome to a new season of the Oracle University Podcast, where we delve deep into the world of OCI Container Engine for Kubernetes. Join hosts Lois Houston and Nikita Abraham as they ask senior OCI instructor Mahendra Mehra about the transformative power of containers in application deployment and why they're so crucial in today's software ecosystem.   Uncover key differences between virtualization and containerization, and gain insights into Docker components and commands.   Getting Started with Oracle Cloud Infrastructure: https://oracleuniversitypodcast.libsyn.com/getting-started-with-oracle-cloud-infrastructure-1   Networking in OCI: https://oracleuniversitypodcast.libsyn.com/networking-in-oci   OCI Identity and Access Management: https://oracleuniversitypodcast.libsyn.com/oci-identity-and-access-management   OCI Container Engine for Kubernetes Specialist: https://mylearn.oracle.com/ou/course/oci-container-engine-for-kubernetes-specialist/134971/210836   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X (formerly Twitter): https://twitter.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode.   ---------------------------------------------------------   Episode Transcript:   00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor.  Nikita: Hi everyone! Welcome to a new season of the Oracle University Podcast. This time around, we're going to delve into the world of OCI Container Engine for Kubernetes, or OKE. For the next couple of weeks, we'll cover key aspects of OKE to help you create, manage, and optimize Kubernetes clusters in Oracle Cloud Infrastructure. 00:58 Lois: So, whether you're a cloud native developer, Kubernetes administrator and developer, a DevOps engineer, or site reliability engineer who wants to enhance your expertise in leveraging the OCI OKE service for cloud native application solutions, you'll want to tune in to these episodes for sure. And if that doesn't sound like you, I'll bet you will find the season interesting even if you're just looking for a deep dive into this service. Nikita: That's right, Lois. In today's episode, we'll focus on concepts of containerization, laying the foundation for your journey into the world of containers. And taking us through all this is Mahendra Mehra, a senior OCI instructor with Oracle University. 01:38 Lois: Hi Mahendra! We're so glad to start our look at containerization with you today. Could you give us an overview? Why is it important in today's software world? Mahendra: Containerization is a form of virtualization, operates by running applications in isolated user spaces known as containers.  All these containers share the same underlying operating system. The container engine, pivotal in containerization technologies and container orchestration platforms, serves as the container runtime environment. It effectively manages the creation, deployment, and execution of containers. 02:18 Lois: Can you simplify this for a novice like me, maybe by giving us an analogy?  
Mahendra: Imagine a container as a fully packaged and portable computing environment. It's like a digital suitcase that holds everything an application needs to run—binaries, libraries, configuration files, dependencies, you name it. And the best part, it's all encapsulated and isolated within container. 02:46 Nikita: Mahendra, how is containerization making our lives easier today?  Mahendra: In olden days, running an application meant matching it with your machine's operating system. For example, Windows software required a Windows machine. However, containerization has rewritten this narrative. Now, it's ancient history. With containerization, you create a single software package, a container that gracefully runs on any device or operating systems. What's fascinating is that these containers seamlessly run while sharing the host operating system. The container engine is like a shadow abstracted from the host operating system with limited access to underlying resources. Think of it as a super lightweight virtual machine. The beauty of this, the containerized application becomes a globetrotter, seamlessly running on bare metal within VMs or on the cloud platforms without needing tweaks for each environment. 03:52 Nikita: How is containerization different from traditional virtualization? Mahendra: On one side, we have traditional virtualization. It's like having multiple houses on a single piece of land, and each house or virtual machine has its complete setup—wall, roofs, and utilities. This setup, while providing isolation, can be resource-intensive with each virtual machine carrying its entire operating system. Now, let's shift gears to containerization, the modern day superhero. Imagine a high-rise building where each floor represents a container. These containers share the same building or host operating system, but have their private space or isolated user space. Here's the magic. They are super lightweight, don't carry extra baggage of a full operating system and can swiftly move between different floors. 04:50 Lois: Ok, gotcha. That sounds pretty efficient! So, what are the direct benefits of containerization?  Mahendra: With containerization technology, there's less overhead during startup and no need to set up a separate guest OS for each application since they all share the same OS kernel. Because of this high efficiency, containerization is commonly used for packing up the many individual microservices that make up modern applications. Containerization unfolds a spectrum of benefits, delivering unparalleled portability as containers run uniformly across diverse platforms. This agility, fostered by open source container engines, empowers developers with cross-platform flexibility. The speed of containerized applications known for their lightweight nature reduces cost, boosts efficiency, and accelerates start times. Fault isolation ensures robustness, allowing independent operations without affecting others. Efficiency thrives as containers share the OS kernel and reusable layers, optimizing server utilization. The ease of management is achieved through orchestration platforms like Kubernetes automating essential tasks. Security remains paramount as container isolation and defined permissions fortify the infrastructure against malicious threats. Containerization emerges not just as a technology but as a transformative force, redefining how we build, deploy, and manage applications in the digital landscape. 06:37 Lois: It sure makes deployment efficient, scalability, and seamless! 
Mahendra, various components of Docker architecture work together to achieve containerization goals, right? Can you walk us through them? Mahendra: A developer or a DevOps professional communicates with Docker engine through the Docker client, which may be run on the same computer as Docker engine in case of development environments or through a remote shell. So whenever a developer fires a Docker command, the client sends them to the Docker Daemon which carries them out. The communication between the Docker client and the Docker host is usually taken place through REST APIs. The Docker clients can communicate with more than one Daemon at a time.  Docker Daemon is a persistent background process that manages Docker images, containers, networks, and storage volumes. The Docker Daemon constantly listens to the Docker API request from the Docker clients and processes them. Docker registries are services that provide locations from where you can store and download Docker images. In other words, a Docker registry contains repositories that host one or more Docker images. Public registries include Docker Hub and Docker Cloud and private registries can also be used. Oracle Cloud Infrastructure offers you services like OCIR, which is also called a container registry, where you can host your own private or public registry. 08:02 Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we're offering both the course and certification for free. So, don't miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That's https://education.oracle.com/genai. 08:39 Nikita: Welcome back! Mahendra, I'm wondering how virtual machines are different from containers. How do virtual machines work? Mahendra: A hypervisor or a virtual machine monitor is a software, firmware, or hardware that creates and runs virtual machines. It is placed between the hardware and the virtual machines, and is necessary to virtualize the server. Within each virtual machine runs a unique guest operating system. VMs with different operating systems can run on the same physical server. A Linux VM can sit alongside a Windows VM and so on. Each VM has its own binaries, libraries, and application that it services. And the VM may be many gigabytes in size. 09:22 Lois: What kind of benefits do we see from virtual machines? Mahendra: This technique provides a variety of benefits like the ability to consolidate applications into a single system, cost savings through reduced footprints, and faster server provisioning. But this approach has its own drawbacks. Each VM includes a separate operating system image, which adds overhead in memory and storage footprint. As it turns out, this issue adds complexity to all the stages of software development lifecycle, from development and test to production and disaster recovery as well. It also severely limits the portability of applications between different cloud providers and traditional data centers. And this is where containers come to the rescue.  10:05 Lois: OK…how do containers help in this situation? Mahendra: Containers sit on top of a physical server and its host operating system—typically, Linux or Windows. Each container shares the host OS kernel and usually the binaries and libraries as well. 
But the shared components are read only. Sharing OS resources such as libraries significantly reduces the need to reproduce the operating system code. A server can run multiple workloads with a single operating system installation. Containers are thus exceptionally lightweight. They are only megabytes in size and take just seconds to start. What this means in practice is you can put two or three times as many applications on a single server with containers than you can put on a virtual machine. Compared to containers, virtual machines take minutes to run and are order of magnitude larger than an equivalent container measured in gigabytes versus megabytes. 11:01 Nikita: So then, is there ever a time you should use a virtual machine? Mahendra: You should use a virtual machine when you want to run applications that specifically require a new OS, also when isolation and security are your priority over everything else. In most scenarios, a container will provide a lighter, faster, and more cost-effective solution than the virtual machines. 11:22 Lois: Now that we've discussed containerization and the different Docker components, can you tell us more about working with Docker images? We first need to know what a Dockerfile is, right?  Mahendra: A Dockerfile is a text file that defines a Docker image. You'll use a Dockerfile to create your own custom Docker image. In other words, you use it to define your custom environment to be used in a Docker container. You'll want to create your own Dockerfile when existing images won't meet your project needs to different runtime requirements, which means that learning about Docker files is an essential part of working with Docker. Dockerfile is a step-by-step definition of building up a Docker image. It provides a set of standard instructions to be used in Dockerfile that Docker will execute when you issue a Docker build command. 12:09 Nikita: Before we wrap up, can you walk us through some Docker commands? Mahendra: Every Dockerfile must start with a FROM instruction. The idea behind this is that you need a starting point to build your image. It can be from scratch or from an existing image available in the Docker registry.  The RUN command is used to execute a command and will wait till the command finishes its execution. Since most of the images are Linux-based, a good practice is to set up a directory you will work in. That's the purpose of work directory line. It defines a directory and moves you in. The COPY instruction helps you to copy your source code into the image. ENV provides default values for variables that can be accessed within the containers. If your app needs to be reached from outside the container, you must open its listening port using the EXPOSE command. Once your application is ready to run, the last thing to do is to specify how to execute it. You must add the CMD line with the same command with all the arguments you used locally to launch your application. This command can also be used to execute commands at runtime for the containers, but we can be more flexible using the ENTRYPOINT command. Labels are used in Dockerfile to help organize your Docker images.   13:20 Lois: Thank you, Mahendra, for joining us today. I learned a lot! And if you want to learn more about working with Docker images, go to mylearn.oracle.com and search for the OCI Container Engine for Kubernetes Specialist course. The course is free so you can get started right away. 
Nikita: Yeah, a fundamental understanding of core OCI services, like Identity and Access Management, networking, compute, storage, and security, is a prerequisite to the course and will certainly serve you well when leveraging the OCI OKE service. And the quickest way to gain this knowledge is by completing the OCI Foundations Associate learning path on MyLearn and getting certified. You can also listen to episodes from our first season, called OCI Made Easy, where we discussed these topics. We'll put a few links in the show notes so you can easily find them.  Lois: We're looking forward to having Mahendra join us again next week when we'll talk about container registries. Until next time, this is Lois Houston… Nikita: And Nikita Abraham signing off! 14:24 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
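
The Dockerfile walkthrough near the end of this episode maps onto a file like the one below. It is a minimal, hedged sketch built only from the instructions Mahendra lists (FROM, LABEL, WORKDIR, COPY, RUN, ENV, EXPOSE, ENTRYPOINT, CMD); the application, paths, and values are invented.

```dockerfile
# Illustrative only: a small Dockerfile using the instructions described in the episode.
FROM python:3.12-slim              # every Dockerfile starts from a base image
LABEL org.example.purpose="demo"   # labels help organize images (value is invented)

WORKDIR /usr/src/app               # set and move into the working directory
COPY requirements.txt .            # copy the dependency manifest into the image
RUN pip install --no-cache-dir -r requirements.txt   # RUN executes at build time
COPY . .                           # copy the rest of the source code

ENV PORT=8000                      # default value available inside containers
EXPOSE 8000                        # document the port the app listens on

ENTRYPOINT ["python"]              # fixed executable for the container
CMD ["server.py"]                  # default argument, overridable at run time
```

Building it with docker build -t demo-app . and running docker run -p 8000:8000 demo-app exercises the EXPOSE and CMD behaviour described above.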

Cloud Unplugged
Open Source: Tesla car takeovers and rogue washing machines with Lewis Marshall and Mark Hughes | Episode 36

Cloud Unplugged

Play Episode Listen Later May 30, 2024 39:43


In this episode of Cloud Unplugged, Mark and Lewis join Jon in the unknown of Open Source, Cyber Security and the takeover of cars and rogue washing machines! Is open source a risk for companies to adopt? What assurances are there around it? And what are we doing collectively to help support the maintenance of open-source projects that don't have enough maintainers? Follow us on social media @cloudunplugged https://www.tiktok.com/@UCkCxcw9tJHd_sPtDveunGsQ https://twitter.com/cloud_unplugged Listen on Spotify: https://bit.ly/3y2djXa Listen on Apple Podcasts: https://bit.ly/3mosSFT Jon & Jay's start-up: https://www.appvia.io/ https://www.linkedin.com/in/jonathanshanks/ https://www.linkedin.com/in/jaykeshur/ Podcast sponsor inquiries, topic requests: Hello@cloudunplugged.io Welcome to The Cloud Unplugged Podcast, hosted by Jon Shanks (CEO) and Jay Keshur (COO). The two co-founded software company Appvia, and have backgrounds in engineering and platform development, with years of experience using Kubernetes. Here they take a light-hearted look at cloud engineering under the lens of platform teams. Discussing how developers, platform engineers, and businesses can leverage cloud-native software development practices successfully.

Cloud Unplugged
Head of Engineering of Moonpig, Richard Pearson: Career Progression, Leadership and Enablement teams

Cloud Unplugged

Play Episode Listen Later May 15, 2024 58:24


Richard Pearson joins The Cloud Unplugged podcast to discuss his career journey to becoming Head of Engineering at Moonpig. We take a look at how he began his career, from studying AI to working for an ISP and now heading up Engineering at Moonpig. We also discuss what the engineering function at Moonpig looks like, how the teams are structured and what technologies are being adopted in AWS.

Smart Software with SmartLogic
"DevOps: From Code to Cloud" with Dan Ivovich

Smart Software with SmartLogic

Play Episode Listen Later May 9, 2024 43:43


In Elixir Wizards Office Hours Episode 8, hosts Sundi Myint and Owen Bickford lead an engaging Q&A session with co-host Dan Ivovich, diving deep into the nuances of DevOps. Drawing from his extensive experience, Dan navigates topics from the early days before Docker to managing diverse polyglot environments and optimizing observability. This episode offers insights for developers of all levels looking to sharpen their DevOps skills. Explore the realms of Docker, containerization, DevOps workflows, and the deployment intricacies of Elixir applications. Key topics discussed in this episode: Understanding DevOps and starting points for beginners Best practices for deploying applications to the cloud Using Docker for containerization Managing multiple programming environments with microservices Strategies for geographic distribution and ensuring redundancy Localization considerations involving latency and device specs Using Prometheus and OpenTelemetry for observability Adjusting scaling based on application metrics Approaching failure scenarios, including database migrations and managing dependencies Tackling challenges in monitoring setups and alert configurations Implementing incremental, zero-downtime deployment strategies The intricacies of hot code upgrades and effective state management Recommended learning paths, including Linux and CI/CD workflows Tools for visualizing system health and monitoring Identifying actionable metrics and setting effective alerts Links mentioned: Ansible open source IT automation engine https://www.ansible.com/ Wikimedia engine https://doc.wikimedia.org/ Drupal content management software https://www.drupal.org/ Capistrano remote server automation and deployment https://capistranorb.com/ Docker  https://www.docker.com/ Circle CI CI/CD Tool https://circleci.com/ DNS Cluster https://hex.pm/packages/dnscluster ElixirConf 2023 Chris McCord Phoenix Field Notes https://youtu.be/Ckgl9KO4E4M Nerves https://nerves-project.org/ Oban job processing in Elixir https://getoban.pro/ Sidekiq background jobs for Ruby https://sidekiq.org/ Prometheus https://prometheus.io/ PromEx https://hexdocs.pm/promex/PromEx.html GitHub Actions - Setup BEAM: https://github.com/erlef/setup-beam Jenkins open source automation server https://www.jenkins.io/ DataDog Cloud Monitoring https://www.datadoghq.com/ 
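The episode's threads on Docker and the deployment of Elixir applications map naturally onto a multi-stage container build. As a rough sketch only, not something taken from the episode, and with the application name, versions, and port being assumptions, a release-based Elixir Dockerfile often looks something like this:

# Build stage: compile a release with the full Elixir toolchain.
FROM elixir:1.16-alpine AS build
ENV MIX_ENV=prod
WORKDIR /app
RUN mix local.hex --force && mix local.rebar --force
COPY mix.exs mix.lock ./
RUN mix deps.get --only prod
COPY config config
COPY lib lib
RUN mix release

# Runtime stage: ship only the compiled release on a small base image.
FROM alpine:3.19
RUN apk add --no-cache libstdc++ openssl ncurses-libs
WORKDIR /app
COPY --from=build /app/_build/prod/rel/my_app ./
EXPOSE 4000
CMD ["bin/my_app", "start"]

Because only the release lands in the final stage, the runtime image stays small, and the incremental, zero-downtime deployments discussed in the episode become a matter of starting new containers before retiring old ones.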

Smart Software with SmartLogic
"Discovery Discoveries" with Alicia Brindisi and Bri LaVorgna

Smart Software with SmartLogic

Play Episode Listen Later Mar 28, 2024 43:26


In Elixir Wizards Office Hours Episode 2, "Discovery Discoveries," SmartLogic's Project Manager Alicia Brindisi and VP of Delivery Bri LaVorgna join Elixir Wizards Sundi Myint and Owen Bickford on an exploratory journey through the discovery phase of the software development lifecycle. This episode highlights how collaboration and communication transform the client-project team dynamic into a customized expedition. The goal of discovery is to reveal clear business goals, understand the end user, pinpoint key project objectives, and meticulously document the path forward in a Product Requirements Document (PRD). The discussion emphasizes the importance of fostering transparency, trust, and open communication. Through a mutual exchange of ideas, we are able to create the most tailored, efficient solutions that meet the client's current goals and their vision for the future. Key topics discussed in this episode: Mastering the art of tailored, collaborative discovery Navigating business landscapes and user experiences with empathy Sculpting project objectives and architectural blueprints Continuously capturing discoveries and refining documentation Striking the perfect balance between flexibility and structured processes Steering clear of scope creep while managing expectations Tapping into collective wisdom for ongoing discovery Building and sustaining a foundation of trust and transparency Links mentioned in this episode: https://smartlogic.io/ Follow SmartLogic on social media: https://twitter.com/smartlogic Contact Bri: bri@smartlogic.io What is a PRD? https://en.wikipedia.org/wiki/Productrequirementsdocument Special Guests: Alicia Brindisi and Bri LaVorgna.

Syntax - Tasty Web Development Treats
744: Docker For Developers

Syntax - Tasty Web Development Treats

Play Episode Listen Later Mar 18, 2024 25:43


Join Scott and CJ on a rapid-fire journey through Docker. From unraveling containerization to practical advice on incorporating Docker into your workflow, this quick-paced episode has everything you need to navigate the world of container technology. Show Notes 00:00 Welcome to Syntax! 01:19 Brought to you by Sentry.io. 02:20 Easily reproducible environments. 02:57 Containerization technology. Containerization OS-level Virtualization 04:42 Docker is brand name containerization, there are others. Podman Containerd Buildah LXD 05:26 Why would a web developer want to use Docker? 08:19 How do you get started with Docker? Download Docker Desktop Start With Docs Docker 101 09:14 How does Docker work? Docker Sentry Docker Registry Docker Layers 16:46 Adding Docker to an existing project. SvelteKit Dockerfile Node.js / Express CLI Runner Twitchbot Development PHP / Mongodb Dockerfile 21:37 What is Docker Compose? Docker Compose 22:50 What are some ‘gotchas' or things to look out for when setting up a project? Coding Garden Example Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott:X Instagram Tiktok LinkedIn Threads CJ: X Instagram Tiktok TwitchTV YouTube Randy: X Instagram YouTube Threads
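The show notes above touch on Docker layers and on adding Docker to an existing Node.js / Express project. As an illustrative sketch only, with the file layout, build script, and port being assumptions rather than anything from the episode, a multi-stage Dockerfile that plays well with layer caching might look like this:

# Build stage: copy the dependency manifests first so this layer is cached until package*.json changes,
# which means routine source edits do not trigger a full reinstall.
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Final stage: keep only production dependencies and the built output.
FROM node:20-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
EXPOSE 3000
CMD ["node", "dist/server.js"]

Each instruction produces a layer, so ordering the rarely changing steps first is what keeps rebuilds fast.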

Azure DevOps Podcast
Richard Lander: Containerization and Linux - Episode 289

Azure DevOps Podcast

Play Episode Listen Later Mar 18, 2024 54:35


Richard Lander is a Principal Program Manager on the .NET team at Microsoft. He's been with Microsoft since 2000, and working on .NET since 2003! Currently, he's working on runtime features, Docker container experience, blogging, and customer engagement. He's also part of the design team that defines new .NET runtime capabilities and features.   Topics of Discussion: [4:31] Richard talks about the technologies that we should already be using and what we should be looking to adopt in the near future. [6:58] Azure services. [7:22] The benefits of using Aspire, and why people should be interested in using it. [14:00] What has Richard been working on over the last several years? [14:14] Improving container image size and reducing complexity in a .NET application. [19:52] WebAssembly and WASI, the WebAssembly System Interface. [23:48] Docker containers have a spec called OCI, the Open Container Initiative. [26:50] Canonical and building chiseled containers. [36:02] Nano-framework. [36:53] Using Raspberry Pi for edge computing and density in IoT projects. [41:38] Using Linux and Windows for development work. [46:55] Improving the container image publishing experience in .NET.   Mentioned in this Episode: Clear Measure Way Architect Forum Software Engineer Forum Programming with Palermo — New Video Podcast! Email us at programming@palermo.net. Clear Measure, Inc. (Sponsor) .NET DevOps for Azure: A Developer's Guide to DevOps Architecture the Right Way, by Jeffrey Palermo — Available on Amazon! Jeffrey Palermo's Twitter — Follow to stay informed about future events! Richard Lander on the New .NET Platform What is .NET, and why should you choose it? The convenience of .NET Announcing .NET Chiseled Containers   Want to Learn More? Visit AzureDevOps.Show for show notes and additional episodes.
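As a companion to the chiseled-container discussion above, here is a hypothetical multi-stage Dockerfile sketch (the project name, port, and tags are assumptions, not taken from the episode) showing how a small ASP.NET Core image can be built on a chiseled runtime base:

# Build stage: restore and publish with the full .NET SDK image.
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY MyApp.csproj ./
RUN dotnet restore
COPY . .
RUN dotnet publish -c Release -o /app/publish

# Runtime stage: a chiseled Ubuntu base that carries only what ASP.NET Core needs, shrinking the image.
FROM mcr.microsoft.com/dotnet/aspnet:8.0-jammy-chiseled
WORKDIR /app
COPY --from=build /app/publish .
EXPOSE 8080
ENTRYPOINT ["dotnet", "MyApp.dll"]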

1010 WINS ALL LOCAL
Mayor Adams announces an expansion plan for the city's containerization of garbage. The MTA debuts new subway cars. A city councilwoman calls on the mayor to reverse the cuts on libraries.

1010 WINS ALL LOCAL

Play Episode Listen Later Feb 1, 2024 5:01


Tennessee on Supply Chain Management
S2E5: Steering through Stormy Seas with Maritime Expert Don Maier

Tennessee on Supply Chain Management

Play Episode Listen Later Jan 18, 2024 36:24 Transcription Available


In our January episode, Ted Stank and Tom Goldsby speak with maritime expert Don Maier about the state of international shipping, including shifts in trade lanes, the challenges of forecasting and capital planning, and the industry impact of issues from the Panama Canal to bubbling international conflicts. Before joining UT's faculty as an associate professor of practice, Maier served as dean for the Maine Maritime and Cal Maritime Academies. As the founding dean of the School of Maritime Transportation, Logistics, & Management at California State University-Maritime Academy, he oversaw programs in marine transportation, international logistics, and naval science. He serves on advisory boards for the International Association of Maritime and Port Executives and the Containerization & Intermodal Institute. In their opening recap, Ted and Tom also discuss holiday season spending, reports on U.S. jobs and manufacturing, and more. This episode was recorded on January 5, 2024. Related links: Americans spent a record $222 billion shopping online this holiday season; Major retailers offering returnless refunds this holiday season; U.S. employers add 216,000 jobs in a sign of continued economic growth; Manufacturing increased in December while warehouse availability surges to highest levels since pandemic; Congressional leaders reach a deal to avert shutdown; Red Sea attacks pose another threat to global economy; Panama Canal enmeshed in a crisis disrupting global trade; Don Maier writes in The Conversation about global shipping climate strategy; Progress on U.S.-Mexico rail-ferry service; China names a new naval chief as maritime tensions climb; Subscribe to GSCI's monthly newsletter; Read the latest news and insights from GSCI; Register for the 2024 SCM Leadership Academy or SC

Gestalt IT
WebAssembly Will Displace Containers For Web-Scale Applications

Gestalt IT

Play Episode Listen Later Dec 5, 2023 18:24


Containerization of applications is only a small step forward from virtualization, but WebAssembly promises a real revolution. This episode of the On-Premise IT podcast, recorded live at KubeCon 2023 in Chicago, features Nigel Poulton, Ned Bellavance, Justin Warren, and Stephen Foskett discussing the prospects for WebAssembly. WebAssembly (WASM) is lauded for its potential to be faster, smaller, and more secure than its predecessors. But skepticism surrounds its long-term adoption and development trajectory, with debates centering on whether WASM can achieve the transformative status that containers once held. While WASM applications are technically more portable, smaller, and quicker to start, adoption remains at an early stage, appealing more to developers than operations professionals. © Gestalt IT, LLC for Gestalt IT: WebAssembly Will Displace Containers For Web-Scale Applications

Data Mesh Radio
#273 An API-First World in Data Integration - An Actual Modern Data Stack - Zhamak's Corner 31

Data Mesh Radio

Play Episode Listen Later Dec 1, 2023 22:52


Key Points: The rush to categorize all of our tooling in data has caused many issues - we will see a big shake-up coming in the future much like happened in application development tooling. So much of data people's time is spent on things that don't add value themselves, it's work that should be automated. We need to fix that so the data work is about delivering value. We can learn a lot from virtualization but data virtualization is not where things should go in general. Containerization is merely an implementation detail. Much like software developers don't really care much about process containers, the same will happen in data product containers - it's all about the experience and containers significantly improve the experience. The pendulum swung towards decoupled data tech instead of monolithic offerings with 'The Modern Data Stack' but most of the technologies were not that easy to stitch together. Going forward, we want to keep the decoupled strategy but we need a better way to integrate - APIs is how it worked in software, why not in data? Sponsored by NextData, Zhamak's company that is helping ease data product creation. For more great content from Zhamak, check out her book on data mesh, a book she collaborated on, her LinkedIn, and her Twitter. Sign up for Data Mesh Understanding's free roundtable and introduction programs here: https://landing.datameshunderstanding.com/ Please Rate and Review us on your podcast app of choice! If you want to be a guest or give feedback (suggestions for topics, comments, etc.), please see here. Data Mesh Radio episode list and links to all available episode transcripts here. Provided as a free resource by Data Mesh Understanding / Scott Hirleman. Get in touch with Scott on LinkedIn if you want to chat data mesh. If you want to learn more and/or join the Data Mesh Learning Community, see here: https://datameshlearning.com/community/ All music used this episode was found on PixaBay and was created by (including slight edits by Scott Hirleman): Lesfm, MondayHopes, SergeQuadrado, ItsWatR, Lexin_Music, and/or

Smart Software with SmartLogic
Learning a Language: Elixir vs. JavaScript with Yohana Tesfazgi & Wes Bos

Smart Software with SmartLogic

Play Episode Listen Later Nov 2, 2023 42:14


This week, the Elixir Wizards are joined by Yohana Tesfazgi and Wes Bos to compare notes on the experience of learning Elixir vs. JavaScript as your first programming language. Yohana recently completed an Elixir apprenticeship, and Wes Bos is a renowned JavaScript educator with popular courses for beginner software developers. They discuss a variety of media and resources and how people with different learning styles benefit from video courses, articles, or more hands-on projects. They also discuss the current atmosphere for those looking to transition into an engineering career and how to stick out among the crowd when new to the scene. Topics Discussed in this Episode Pros and cons of learning Elixir as your first programming language Materials and resources for beginners to JavaScript and Elixir Projects and methods for learning Elixir with no prior knowledge Recommendations for sharpening and showcasing skills How to become a standout candidate for potential employers Soft skills like communication translate well from other careers to programming work Learning subsequent languages becomes more intuitive once you learn your first How to decide which library to use for a project How to build an online presence and why it's important Open-source contributions are a way to learn from the community Ship early and often, just deploying a default Phoenix app teaches deployment skills Attend local meetups and conferences for mentoring and potential job opportunities Links Mentioned https://syntax.fm/ https://fly.io/ https://elixirschool.com/en Syntax.fm: Supper Club × How To Get Your First Dev Job With Stuart Bloxham (https://syntax.fm/show/667/supper-club-how-to-get-your-first-dev-job-with-stuart-bloxham) Quinnwilton.com (https://quinnwilton.com/) https://github.com/pallets/flask https://wesbos.com/courses https://beginnerjavascript.com/ Free course: https://javascript30.com/ https://pragmaticstudio.com/ https://elixircasts.io/ https://grox.io/ LiveView Mastery YouTube Channel (https://www.youtube.com/channel/UC7T19hPLqQ-Od3Rb3T2OX1g) Contact Yohana: yytesfazgi@gmail.com

Destination Linux
338: Reverse Psychology Tech Support

Destination Linux

Play Episode Listen Later Sep 5, 2023 63:09


https://youtu.be/RcQUo-k5qlk On this episode of Destination Linux (338), we deep dive into community feedback that takes us into encryption, learning the CLI, and containerization. Then we're going to discuss some new AI tricks that might leave you feeling creeped out. Plus, we have our tips, tricks and software picks for you. Let's get this show on the road toward Destination Linux! Download as MP3 (https://aphid.fireside.fm/d/1437767933/32f28071-0b08-4ea1-afcc-37af75bd83d6/cbb59db3-d8bc-402c-9ef9-13f6c6609480.mp3) Sponsored by LINBIT = https://linbit.com Hosted by: Michael Tunnell = https://tuxdigital.com Ryan (DasGeek) = https://dasgeekcommunity.com Jill Bryant = https://jilllinuxgirl.com Want to Support the Show? Become a Patron = https://tuxdigital.com/membership Store = https://tuxdigital.com/store Chapters: 00:00 DL 338 Intro 00:45 Community Feedback 01:51 Learning Linux CLI & GUI 06:25 Full Disk Encryption on Linux (after install) 10:58 Podman for Containerization vs Docker? 11:55 Backup Solution Recommendations 30:16 Reverse Psychology Deployed 31:01 Accessibility in KDE Plasma 31:29 LINBIT (www.linbit.com) 32:47 AI is getting creepy thanks to Google's Duet AI virtual proxies 46:35 Gaming: Metroplex Zero 52:10 Software Spotlight: Mousai 56:49 Tip of the Week: sed cheatsheet 58:49 Outro

American Railroading Podcast
Supply Chain – The Relationship Between Ports & Rail with Denson White of APM Terminals

American Railroading Podcast

Play Episode Listen Later Jul 25, 2023 64:29


Welcome to the American Railroading Podcast! In this episode our host, Don Walsh, is joined by guest Denson White, CCO of APM Terminals, Pier 400 in the Port of Los Angeles/Long Beach, CA. Together they delve into Supply Chain and the relationship between the American ports and rail. They discuss the important roles that both the ports and rail play in the Supply Chain process, as well as the short-term and long-term effects that the COVID-19 pandemic had on Supply Chain, and lessons learned. Tune in to this episode now to gain valuable insights and broaden your understanding of American Railroading. You can find the episode on the American Railroading Podcast's official website at www.AmericanRailroading.net. Welcome aboard! KEY POINTS: Don shares his exciting first experience with a container ship. APM Terminals has 70 locations around the world. According to the Union Pacific Railroad website, 48% of rail traffic is generally intermodal shipments. "Containerization" as we know it today didn't exist until the 1950's, which included the standardization of shipping container dimensions. Intermodal has been the fastest growing rail segment over the last 25 years. Inland ports play a vital role in the Supply Chain process. Railroads are the most fuel-efficient way to move freight over-land. Freight railroads account for roughly 40% of U.S. long distance freight volume. According to the U.S. Federal Highway Administration, freight shipments are expected to increase by 30% by 2040. The American Railroading Podcast has merch coming soon! Including their own Challenge Coin! LINKS MENTIONED:  https://www.americanrailroading.net/  https://therevolutionrailgroup.com/  https://www.apmterminals.com/  https://www.buymeacoffee.com/dwalshX  https://www.up.com/  https://www.aar.org/

Environment Variables
Fact Check: Colleen Josephson, Miguel Ponce de Leon & AI Optimization of the Environmental Impact of Software

Environment Variables

Play Episode Listen Later May 31, 2023 41:39


In this episode of Fact Check, we ask the question: can AI always help us optimise the environmental impact of software? Host Chris Adams is joined by VMWare's Colleen Josephson and Miguel Ponce de Leon to tackle this from their unique perspectives within the industry. They talk all things sustainability in virtualization and networking and how this begins with green software, and they give us insight into how VMWare is tackling decarbonization within their own company.

Smart Software with SmartLogic
Sophie DeBenedetto on the Future of Elixir and LiveView

Smart Software with SmartLogic

Play Episode Listen Later Apr 13, 2023 51:08


In today's episode, Sophie DeBenedetto emphasizes the importance of the Elixir community's commitment to education, documentation, and tools like liveBook, fostering an environment where people with varying skill levels can learn and contribute. The discussion highlights LiveView's capabilities and the role it plays in the future of Elixir, encouraging members to share knowledge and excitement for these tools through various channels. Sophie invites listeners to attend and submit their talks for the upcoming Empex conference, which aims to showcase the best in Elixir and LiveView technologies. Additionally, the group shares light-hearted moments, reminding everyone to contribute to all types of documentation and promoting an inclusive atmosphere. Key topics discussed in this episode: • Updates on the latest release of the Programming Phoenix LiveView book • The importance of community connection in Elixir conferences • The future of documentation in the Elixir ecosystem • The Elixir community's commitment to education and documentation • LiveBook as a valuable tool for learning and experimenting • Encouraging contributions across experience levels and skill sets • Importance of sharing knowledge through liveBooks, blog posts, and conference talks • Core Components in Phoenix LiveView, and modal implementation • Creating a custom component library for internal use • Reflecting on a Phoenix LiveView Project Experience • Ease of using Tailwind CSS and its benefits in web development • Advantages of LiveView in reducing complexity and speeding up project development • LiveView's potential to handle large datasets using Streams • The role of Elixir developers in the rapidly evolving AI landscape Links in this episode: Sophie DeBenedetto – https://www.linkedin.com/in/sophiedebenedetto Programming Phoenix LiveView Book – https://pragprog.com/titles/liveview/programming-phoenix-liveview Empex NYC - https://www.empex.co/new-york SmartLogic - https://smartlogic.io/jobs Phoenix LiveView documentation: https://hexdocs.pm/phoenixliveview/Phoenix.LiveView.html Live sessions and hooks: https://hexdocs.pm/phoenixliveview/Phoenix.LiveView.Router.html#livesession/1 LiveView: https://hexdocs.pm/phoenixlive_view/Phoenix.LiveView.html Tailwind CSS: https://tailwindcss.com/ Reuse Markup With Function Components and Slots (https://fly.io/phoenix-files/function-components/) LiveView Card Components With Bootstrap (https://fly.io/phoenix-files/liveview-bootstrap-card/) Building a Chat App With LiveView Streams (https://fly.io/phoenix-files/building-a-chat-app-with-liveview-streams/) Special Guest: Sophie DeBenedetto.

Smart Software with SmartLogic
Cory O'Daniel and the Future of DevOps in Elixir Programming

Smart Software with SmartLogic

Play Episode Listen Later Mar 30, 2023 45:45


In this episode of Elixir Wizards, Cory O'Daniel, CEO of Massdriver, talks with Sundi and Owen about the role of DevOps in the future of Elixir programming. They discuss the advantages of using Elixir for cloud infrastructure and the challenges of securing cloud systems. They elaborate on their hopes for the future, including processes and automation to streamline operations so programmers can spend more time doing what they love … writing software! Major topics of discussion in the episode: Cory's ideal ratio of hot sauce to honey (recommended for chicken) Why this episode was renamed “how Cory almost killed his dad." The history of deployment with Elixir and Erlang The benefits of using Kubernetes to deploy Elixir applications The future of Elixir DevOps and Massdriver's role in solving related problems Benefits of reducing the operational burden for developers Whether Elixir is a good fit for Kubernetes How DevOps has changed over the last 10 years. The confusion about what DevOps actually means The idea of "engineers doing everything" is not sustainable A future where engineers don't need to know much about DevOps, and can focus on writing code Minimizing the operational burden for developers Monolithic application vs. microservices Why Massdriver does not use Webhooks to update configurations Security, access to source code, and potential source leaks The idea of multi-cloud, site-wide outage, and cloud agnosticism Hybrid cloud vs true multi-cloud Standardizing methods of packaging and deploying applications in the future Links mentioned in this episode: SmartLogic — https://smartlogic.io/ SmartLogic Twitter — https://twitter.com/smartlogic Massdriver — https://www.massdriver.cloud/ State of Production Survey (with Sweet Raffle Prizes) — https://blog.massdriver.cloud/surveys/state-of-production-2023/ $5000 Massdriver Credit — https://www.massdriver.cloud/partners/elixir-wizards Elephant in the Cloud Blog Post — https://startups.microsoft.com/blog/elephant-in-the-cloud/ RIAK — https://github.com/basho/riak Otel — https://hexdocs.pm/ Terraform — https://hexdocs.pm/terraform/Terraform.html DigitalOcean — https://www.digitalocean.com/ Heroku — https://www.heroku.com/ Linode — https://www.linode.com/ Docker — https://www.docker.com/ Kubernetes — https://kubernetes.io/ Webhooks — https://hexdocs.pm/elixirplaid/webhooks.html GitOps — https://hexdocs.pm/gitops/readme.html Helm — https://helm.sh/docs/ Special Guest: Cory O'Daniel.

The Nonlinear Library
LW - Beyond the moment of invention by jasoncrawford

The Nonlinear Library

Play Episode Listen Later Dec 17, 2022 3:20


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Beyond the moment of invention, published by jasoncrawford on December 16, 2022 on LessWrong. Derek Thompson has a feature in The Atlantic this week on “Why the Age of American Progress Ended”—thoughtful and worth reading. Some comments and reactions. One of the ideas explored in the article is that what matters for progress is not just the moment of invention, but what happens after that. I generally agree. Some ways I think this is true: The first iteration of an invention is generally just good enough to be practical, but far below optimal: there are decades of incremental improvements that get it to what we know today. Edison's light bulb was not as bright or long-lasting as today's bulbs. Often there is a whole system that needs to be built up around the invention. It wasn't enough to invent the light bulb, you also needed the generators and the power grid. Such a system not only has to be invented, it has to be scaled. Scaling up from a prototype to a large, efficient, reliable system is its own challenge. Again with the power grid, it was a big challenge to figure out how to efficiently serve large regions, do load balancing, etc. With railroads, there was a challenge in figuring out how to manage a schedule with many trains and routes. With the telegraph and later the telephone, a system had to be invented to route messages and calls. Merely working is not enough for wide distribution: other characteristics really matter, like cost, efficiency, and reliability. People underrate these—especially reliability, which can be a huge barrier to adoption. An invention that works and is practical, cheap and reliable still doesn't automatically sell. You have to convince people to change the way they do things, and that is tough. Sometimes you have to help people imagine uses for things: when the telegraph was first invented, they would demonstrate it by playing chess long-distance, and people would come watch this happen, but still not imagine what they personally would use a telegraph for. Often regulations and legal frameworks have to be updated. E.g., containerization: the ICC regulated rates and they set rates based on the type of cargo; with containers you want to charge based on volume and weight alone, and this clashed with the regulatory framework. This kind of thing happens all the time. Even after an invention is widely available, it can take further decades for all the implications to be worked out and for it to fundamentally change the way people do things. Electricity didn't cause factories to be reorganized until the '20s or '30s. Containerization didn't immediately change the way supply chains were organized. And then, social or regulatory barriers can block distribution. In addition to the ones discussed above, another major one is pushback from labor when jobs are going to be automated (many historical examples). A lack of good institutions in poor countries can also block distribution, which is why the world is so unevenly wealthy today. On another note, I thought this paragraph near the end of the piece was spot-on: When you add the anti-science bias of the Republican Party to the anti-build skepticism of liberal urbanites and the environmentalist left, the U.S. seems to have accidentally assembled a kind of bipartisan coalition against some of the most important drivers of human progress. 
To correct this, we need more than improvements in our laws and rules; we need a new culture of progress. Amen. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Developer Kahvesi
Emulation, Virtualization & Containerization

Developer Kahvesi

Play Episode Listen Later Oct 26, 2022 106:27


In this episode, where we chat about emulation, virtualization, and containers, Ali Özgür and Barış Akın are joined by guest Özgür Öztürk. Happy listening...

Intel IT
A Path to Cloud: Getting More out of Containerization

Intel IT

Play Episode Listen Later Oct 25, 2022 22:06


IT Best Practices: In this fourth episode of Intel's Path to Cloud series, Phil Vokins, Cloud Services Director with Intel...[…]

DataCentric Podcast
Exploring Edge Computing, with Stratus's Jason Andersen.

DataCentric Podcast

Play Episode Listen Later Sep 6, 2022 50:27


It's all about Edge Computing as hosts Matt Kimball and Steve McDowell, principal analysts at Moor Insights & Strategy, welcome Jason Andersen, VP of Strategy and Product Management at Stratus. Jason brings Matt & Steve up-to-speed on what's happening in the world of edge computing in a wide-ranging conversation about standards (or the lack thereof), the evolution of edge architectures, the fragmented edge market, the colliding worlds of IT & OT, and how Stratus helps its customers navigate through this confusing world. There is a whole lot packed into this hour-long discussion, but it's invaluable for anyone working in edge today. 00:00 Kick-it Off 01:25 Catching up with Stratus: Product updates, SGH Acquisition 03:57 Pivoting from Fault-Tolerance to the Edge 08:48 It's all about Resilience! 11:00 How edge is evolving 13:00 Fragmentation at the edge 16:21 Containerization & Cloud-Native at the edge 20:48 Simplifying the edge control-plane while delivering QoS 23:20 Revisiting 2 Different Worlds as IT & OT Converge 30:10 Security at the Edge 37:22 General attitudes impacting edge deployments 41:14 Where's Stratus focused in the near-term? 45:14 Importance of the overall solution stack 50:11 Wrapping-Up 50:26 Done Special Guest: Jason Andersen.

Educative Sessions
#119: ”What it Takes to Be A Solutions Architect” with Muhammad Sajid | Educative Sessions

Educative Sessions

Play Episode Listen Later Aug 15, 2022 21:34 Transcription Available


Get started with Educative! Follow this URL for 10% off: https://educative.io/educativelee Watch the YouTube HERE: https://youtu.be/3F1E-rhLK9A Cloud or Solutions Architects are responsible for managing an organization's cloud computing architecture and work as a trusted advisor for their customers. They have in-depth knowledge of architectural principles and services. Most SAs are hands-on and developers at heart. In this session, you will learn some traits of a high-performing architect and what employers look for in a cloud architect. ABOUT OUR GUEST   Muhammad Sajid is a high-octane cloud solutions architect with a passion for turning whiteboard drawings into fully functional cloud-native software solutions. He speaks regularly at several community and company events and conferences about cloud, architecture, and cloud-native software development.  He has a great interest in DDD, Distributed Systems, Event-Driven Microservices, and Containerization.  His superpowers are mentoring, teaching, and leading and building high-performance teams.   Visit Educative to start your journey into code ►► https://educative.io Explore the Educative Answers platform and become a contributor! ►► https://educative.io/answers Don't forget to subscribe to Educative Sessions on YouTube! ►► https://www.youtube.com/c/EducativeSessions   ABOUT EDUCATIVE   Educative (educative.io) provides interactive and adaptive courses for software developers. Whether it's beginning to learn to code, grokking the next interview, or brushing up on frontend coding, data science, or cybersecurity, Educative is changing how developers continue their education. Stay relevant through our pre-configured learning environments that adapt to match a developer's skill level. Educative provides the best author platform for instructors to create interactive and adaptive content in only a few clicks.   More Videos from Educative Sessions: https://www.youtube.com/c/EducativeSessions/   Episode 119: ”What it Takes to Be A Solutions Architect” with Muhammad Sajid | Educative Sessions

Waiting for Review
S3E9: Excuse my Language!

Waiting for Review

Play Episode Listen Later Jul 2, 2022 45:22


Life interrupted the release of this episode - but better late than never! Originally recorded in early June. Packed with all the things: * Our favourite items from WWDC22! * Daniel is back on a Magic Keyboard after his clackety diversion! * Dave is preparing GoVJ for App Store release! * Daniel is moving TelemetryDeck to Containerization! * Dave swearing!

AWS FM
Kyle Hornberg: Serverless for Startups, Containerization Use Cases, and Obscure ASR Tech

AWS FM

Play Episode Listen Later Jun 28, 2022


Kyle joins Adam to discuss why serverless is important for organizations big and small, which containerization use cases make sense (and which don't), and their shared experience with obscure speech-to-text technology.

Augmented - the industry 4.0 podcast
Episode 85: Industrial Cloud Interoperability

Augmented - the industry 4.0 podcast

Play Episode Listen Later Jun 22, 2022 51:33


This week on the podcast, (@AugmentedPod (https://twitter.com/AugmentedPod)) we have Leon Kuperman, CTO of CAST.AI (@cast_ai (https://twitter.com/cast_ai)). Futurist Trond Undheim hosts (@trondau (https://twitter.com/trondau)); this is episode #85 of Season 2 and the topic is: Industrial Cloud Interoperability. In this conversation, we talk about cloud interoperability, whether it exists, why it's needed and what it could accomplish. We also get into the technical underpinnings, such as Carita, containerization and the outlook for public, private, and hybrid clouds as well as the vendors that supply advanced infrastructures. Augmented reveals the stories behind the new era of industrial operations, where technology will restore the agility of frontline workers. Technology is changing rapidly. What's next in the digital factory? Who is leading the change? What are the key skills to learn and how to stay up to date on manufacturing and industry 4.0? Augmented is a podcast for industrial leaders, process engineers, and shop floor operators, hosted by futurist Trond Arne Undheim, and presented by Tulip, the frontline operations platform. Trond's takeaway: A.I. is a silent enabler of collaboration between systems, which by the same token affects collaboration between people and organizations. Its technical complexity often limits the debate about the subject in non-specialist circles, which is a shame given the pivotal importance of cloud infrastructure in today's computing environment. The relative progress made on interoperability will determine the course of products, flexibility, security, and productivity. Thanks for listening. If you liked the show, subscribe at augmentedpodcast.co or in your preferred podcast player and rate us with five stars. If you liked this episode, you might also like episode #17 Smart Manufacturing for All (https://www.augmentedpodcast.co/17). Hopefully you'll find something awesome in these or in other episodes. And if so, do let us know by messaging us, we would love to share your thoughts with other listeners. The Augmented podcast is created in association with Tulip, the connected frontline operations platform that connects the people, machines, devices, and the systems used in a production or logistics process in a physical location. Tulip is democratizing technology and empowering those closest to operations to solve problems. Tulip is also hiring. You can find Tulip at Tulip.co. Please share this show with colleagues who care about where industrial tech is heading. To find us on social media is easy: we are Augmented Pod on LinkedIn and Twitter, and Augmented Podcast on Facebook and YouTube: LinkedIn: https://www.linkedin.com/company/augmentedpod Facebook: https://www.facebook.com/AugmentedPodcast/ Twitter: https://twitter.com/AugmentedPod YouTube: https://www.youtube.com/channel/UC5Y1gz66LxYvjJAMnN_f6PQ See you next time. Augmented--industrial conversations that matter. Special Guest: Leon Kuperman.

Arsenal for Democracy
June 5, 2022 – Containerization (Part 3) – Arsenal For Democracy Ep. 428

Arsenal for Democracy

Play Episode Listen Later Jun 6, 2022 72:21


Bill and Rachel managed to add a third fascinating hour-plus of material on containerization, looking at present-day supply chain vulnerabilities, the rise of container freeports, product waste, and Soviet container history. Links and notes for ep. 428 (PDF): http://arsenalfordemocracy.com/wp-content/uploads/2022/06/AFD-Ep-428-Links-and-Notes-Containerization-Part-3-Bonus-on-Shipping-Containers.pdf Theme music by Stunt Bird. The post June 5, 2022 – Containerization (Part 3) – Arsenal For Democracy Ep. 428 appeared first on Arsenal For Democracy.

Whitestone Podcast
Supply Chain - McLean's Container World

Whitestone Podcast

Play Episode Listen Later May 31, 2022 14:16


So, how about that Malcolm McLean? Oh…? You haven't heard of Malcolm McLean, a man whose work has resulted in economic betterment for many hundreds of millions of people for decades? No surprise there. The story of McLean's work is buried in many items on the shelves of the nearest Walmart and has pervasively impacted the world. Join Kevin in a fascinating rendering of McLean's world of impacting others through amazing innovation—the very kind of stewardship thinking and action that God has purposed for each of us!  // Download this episode's Application & Action questions and PDF transcript at whitestone.org.

Arsenal for Democracy
May 22, 2022 – Containerization (Part 2) – Arsenal For Democracy Ep. 427

Arsenal for Democracy

Play Episode Listen Later May 22, 2022 61:27


Bill and Rachel continue the history of containerization with a look at the mature industry that emerged out of the 1970s and soon revolutionized global manufacturing supply chains, with sweeping and ruinous effects. Links and notes for ep. 427 (PDF): http://arsenalfordemocracy.com/wp-content/uploads/2022/05/AFD-Ep-427-Links-and-Notes-Containerization-Part-2-Shipping-Containers.pdf Theme music by Stunt Bird. The post May 22, 2022 – Containerization (Part 2) – Arsenal For Democracy Ep. 427 appeared first on Arsenal For Democracy.

Arsenal for Democracy
May 15, 2022 – Containerization (Part 1) – Arsenal For Democracy Ep. 426

Arsenal for Democracy

Play Episode Listen Later May 16, 2022 79:42


Originally seen as a marginal innovation on existing shipping, containerization ended up revolutionizing the global economy and destroying many local economies. This resulted directly from (and enabled) the US war in Vietnam. (Bill and Rachel.) Links and notes for ep. 426 (PDF): http://arsenalfordemocracy.com/wp-content/uploads/2022/05/AFD-Ep-426-Links-and-Notes-Containerization-Part-1-Shipping-Containers.pdf Theme music by Stunt Bird. The post May 15, 2022 – Containerization (Part 1) – Arsenal For Democracy Ep. 426 appeared first on Arsenal For Democracy.

Linux Action News
Linux Action News 235

Linux Action News

Play Episode Listen Later Apr 7, 2022 19:13


Docker surprises everyone, new Fedora tools in the works, and an old debate with a fresh take.

The Flip
Journey to the Last Mile

The Flip

Play Episode Listen Later Oct 28, 2021 40:27


As we continue our season on value chains, in this episode, we explore logistics. The cost of goods and food is disproportionately higher in Africa than anywhere else in the world, with consumers in some markets spending 50% or more of their total income on food alone. A major reason for these high prices is logistics. So how do we fix this? How do we improve the efficiency of logistics on the African continent, and ultimately drive down the cost of goods? [04:20] - On the role of containerization and efficient ports, with Jetstream Africa's Miishe Addy. [11:37] - After we get through the ports, our goods are loaded onto a truck. We hear from Omar Hagrass on how Trella is trying to improve long-haul efficiency in North Africa and the Middle East. [15:26] - From the port, we move on to the wholesale distributor. As we discuss with Daniel Yu, Sokowatch is aggregating small retailers at the fragmented last-mile and offering same-day delivery of fast-moving consumer goods. [22:37] - As the nature of retail evolves and more small merchants need logistics solutions, logistics-as-a-service providers like Sendbox are playing a role at the last-mile. We hear from its CEO, Emotu Balogun. [26:41] - But amidst all of this tech and innovation - what about infrastructure? To what extent is the problem just poor ports and roads? The Flip's b-mic, Sayo Folawiyo, and its host, Justin Norman, call up infrastructure investor Dami Agbaje for some insight. [32:42] - This episode's retrospective with Sayo and Justin. This season is sponsored by MFS Africa. All this season, we're exploring value chains. And in the payments value chain, no fintech has a wider reach on the continent than MFS Africa. Through their network of over 180 partners - MNOs, banks, NGOs, fintechs, and global enterprises - MFS Africa's API hub connects over 320 million mobile wallets across 30+ countries in Africa.

Friends Against Government
ITC 9 - Chains

Friends Against Government

Play Episode Listen Later Oct 1, 2021 53:11


As contemporary global capitalism forcibly integrates, it tightens chains and shorts circuits. In an erotic age, the fear of something else grants medical tyrannism a universal mandate to become simultaneously plate-tectonic and Platonic - in the sense of a demiurge. Forty centuries of Phoenician reductionism gives way to American intermodality; corrugated steel literally functions as Pandora's box. Containerization complexifies chains, forcing a mind-splitting commercial breakdown one century in the making. If "techno-economic interactivity crumbles social order", it does so by tightening chains. Neoclassical economics, the chief concern of which is the maintenance of the slow decline, gives way to Bilderbergian agendas - by way of restraining. World Health Organizes extermination at the terminal velocity of the crunch. Disruption triggers system shocks - complexification saturates rhizomes, dampening them. 'Root causes' turn into root rot, destroying everything from below. On this episode of Into the Cave, I talk about the supply chain crunch - how we got here and what can be done.