Podcasts about Apache Mesos

Software to manage computer clusters

  • 37 PODCASTS
  • 51 EPISODES
  • 49m AVG DURATION
  • ? INFREQUENT EPISODES
  • Dec 15, 2023 LATEST

POPULARITY (2017-2024)


Latest podcast episodes about Apache Mesos

Software Defined Talk
Episode 445: It's my sacred time

Software Defined Talk

Play Episode Listen Later Dec 15, 2023 47:00


This week, we discuss the distribution of cloud revenue, explore who is investing in A.I., and take a look back at Mesosphere DC/OS. Plus, some thoughts on the peacefulness of flying. Watch the YouTube Live Recording of Episode 445 (https://www.youtube.com/watch?v=EX5FY4MJpR0).

Runner-up Titles
  • I feel very safe on an airplane
  • We're one mishap away from Lost
  • Difference between the right decision and the safe decision
  • First rule of AI Ethics Team: Don't fake the demo
  • "Rice"
  • They got run over by the Kubernetes Truck
  • Figure out where all the yaml goes

Rundown
  • Many Different Kinds Of Cloud, Very Big Piles Of Money - IT Jungle (https://www.itjungle.com/2023/12/11/many-different-kinds-of-cloud-very-big-piles-of-money/)
  • Software Startup That Rejected Buyout From Microsoft Shuts Down, Sells Assets to Nutanix (https://www.theinformation.com/articles/a16z-backed-startup-that-once-rejected-150m-sale-to-microsoft-shuts-down)
  • Apache Mesos (https://en.wikipedia.org/wiki/Apache_Mesos)
  • Docker acquires AtomicJar, a testing startup that raised $25M in January (https://techcrunch.com/2023/12/11/docker-acquires-atomicjar-a-testing-startup-that-raised-25m-in-january/)

Relevant to your Interests
  • Just about every Windows and Linux device vulnerable to new LogoFAIL firmware attack (https://arstechnica.com/security/2023/12/just-about-every-windows-and-linux-device-vulnerable-to-new-logofail-firmware-attack/)
  • OAI staff deeply did not want to work for Microsoft (https://www.threads.net/@kalihays1/post/C0hUizEJfPV/?igshid=MzRlODBiNWFlZA==)
  • Silicon Valley Confronts a Grim New A.I. Metric (https://www.nytimes.com/2023/12/06/business/dealbook/silicon-valley-artificial-intelligence.html)
  • The Fastest Growing Brands of 2023 (https://pro.morningconsult.com/analyst-reports/fastest-growing-brands-2023)
  • Vast Data lands $118M to grow its data storage platform for AI workloads (https://techcrunch.com/2023/12/06/vast-data-lands-118m-to-grow-its-data-storage-platform-for-ai-workloads/)
  • 'No one saw this coming': Kevin O'Leary says remote work trend is now hurting sectors other than real estate - here's why he's saying certain 'banks are going to fail' (https://finance.yahoo.com/news/no-one-saw-coming-kevin-133000274.html)
  • The OpenAI Board Member Who Clashed With Sam Altman Shares Her Side (https://www.wsj.com/tech/ai/helen-toner-openai-board-2e4031ef)
  • Apple joins AI fray with release of model framework (https://www.theverge.com/2023/12/6/23990678/apple-foundation-models-generative-ai-mlx)
  • Will VMware customers balk as Broadcom transitions them to subscriptions? (https://www.constellationr.com/blog-news/insights/will-vmware-customers-balk-broadcom-transitions-them-subscriptions)
  • Broadcom to divest VMware's EUC and Carbon Black units (https://www.theregister.com/2023/12/07/broadcom_q4_2023/)
  • Apple Confirms It Shut Down iMessage for Android App Beeper Mini (https://www.macrumors.com/2023/12/10/apple-confirms-it-shut-down-beeper-mini/)
  • Tech company VMware slashing 577 jobs in Austin (https://www.kvue.com/article/money/economy/boomtown-2040/vmware-austin-layoffs/269-c65851d4-54cb-4cf3-a3db-34fb907de932)
  • The Problems with Money In (Open Source) Software | Aneel Lakhani | Monktoberfest 2023 (https://www.youtube.com/watch?v=LTCuLyv6SHo)
  • WFH levels have become "flat as a pancake" (https://x.com/I_Am_NickBloom/status/1729557222424731894?s=20)
  • Broadcom halves sub price for VMware's flagship hybrid cloud (https://www.theregister.com/2023/12/12/vmware_broadcom_licensing_changes/)
  • VMware by Broadcom Dramatically Simplifies Offer Lineup and Licensing Model (https://news.vmware.com/company/vmware-by-broadcom-business-transformation)
  • Cloud engineer gets 2 years for wiping ex-employer's code repos (https://www.bleepingcomputer.com/news/security/cloud-engineer-gets-2-years-for-wiping-ex-employers-code-repos/)
  • The Kubernetes 1.29 release interview (https://open.substack.com/pub/craigbox/p/the-kubernetes-129-release-interview?r=1s6gmq&utm_campaign=post&utm_medium=web)
  • HashiCorp Shares Drop 22% After Forecasting Slowing Sales Growth (https://www.marketwatch.com/story/hashicorp-shares-drop-22-after-forecasting-slowing-sales-growth-d3dd4d7d)
  • Oracle shares slide as revenue misses estimates (https://www.cnbc.com/2023/12/11/oracle-orcl-q2-earnings-report-2024.html)
  • Platform teams need a delightfully different approach, not one that sucks less (https://www.chkk.io/blog/platform-teams-different-approach)

Nonsense
  • Nearly Everyone Gets A's at Yale. Does That Cheapen the Grade? (https://www.nytimes.com/2023/12/05/nyregion/yale-grade-inflation.html?searchResultPosition=1)
  • It's time for the Excel World Championships! (https://www.theverge.com/2023/12/9/23995236/its-time-for-the-excel-world-championships)
  • Only in Australia: huge snake drops from roof during podcast recording - video (https://www.theguardian.com/environment/video/2023/dec/11/only-in-australia-huge-snake-drops-from-roof-during-podcast-recording-video)
  • Committing to Costco (https://twitter.com/Wes_nship/status/1734207909137989969)
  • 220-ton Nova Scotia building moved using 700 bars of soap (https://www.upi.com/Odd_News/2023/12/08/canada-Halifax-Nova-Scotia-Elmwood-building-moved-soap/2971702059148/)

Conferences
  • That Conference Texas, Jan 29, 2024 to Feb 1, 2024 (https://that.us/events/tx/2024/schedule/)
  • SCaLE 21x, March 14th to 17th, 2024 (https://www.socallinuxexpo.org/scale/21x)
  • DevOpsDays Birmingham 2024, April 17-18, 2024 (https://talks.devopsdays.org/devopsdays-birmingham-al-2024/cfp)
If you want your conference mentioned, let's talk media sponsorships.

SDT news & hype
  • Join us in Slack (http://www.softwaredefinedtalk.com/slack).
  • Get a SDT Sticker! Send your postal address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you free laptop stickers!
  • Follow us: Twitch (https://www.twitch.tv/sdtpodcast), Twitter (https://twitter.com/softwaredeftalk), Instagram (https://www.instagram.com/softwaredefinedtalk/), Mastodon (https://hachyderm.io/@softwaredefinedtalk), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk), Threads (https://www.threads.net/@softwaredefinedtalk) and YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured).
  • Use the code SDT to get $20 off Coté's book, Digital WTF (https://leanpub.com/digitalwtf/c/sdt), so $5 total.
  • Become a sponsor of Software Defined Talk (https://www.softwaredefinedtalk.com/ads)!

Recommendations
  • Brandon: The Fund (https://www.amazon.com/Fund-Bridgewater-Associates-Unraveling-Street/dp/1250276934)
  • Matt: Reese's minis unwrapped (https://amzn.to/3NoGV9i); anti-recommendation: Keychron Acoustic Kit (https://www.keychron.com/collections/keychron-add-on/products/keychron-q10-acoustic-upgrade-kit)
  • Coté: Sex Education (https://en.wikipedia.org/wiki/Sex_Education_(TV_series)), season 3. (Some follow-ups: Menewood was good, a bit too Moby Dick w/r/t the wheat and barley in the middle; Descript is still fantastic, getting even better with the AI stuff.)

Photo Credits
  • Header (https://unsplash.com/photos/blue-and-white-airplane-seats-DSOohFTAfno)
  • Artwork (https://unsplash.com/photos/gray-and-black-airplane-seats-1KcGnn5HAPU)

Startup Project
#60 Tim Chen - From Open Source Contributor to Investor in Infrastructure Startups

Startup Project

Play Episode Listen Later Jul 2, 2023 63:42


Tim Chen is the Managing Partner at Essence VC, an early-stage fund focused on data infrastructure and developer tool companies. He has over a decade of experience leading engineering in enterprise infrastructure and open source communities and companies. Prior to Essence, Tim was the SVP of Engineering at Cosmos, a popular open source blockchain SDK. Prior to Cosmos, Tim co-founded Hyperpilot with Stanford Professor Christos Kozyrakis, leveraging decades of research to disrupt the enterprise infrastructure space, which later exited to Cloudera. Prior to Hyperpilot, Tim was an early employee at Mesosphere and CloudFoundry. He is also active in the open source space as an Apache Software Foundation core member, maintainer of Apache Drill and Apache Mesos, and CNCF TOC contributor. --- Send in a voice message: https://podcasters.spotify.com/pod/show/startupproject/message

The Hacking Open Source Business Podcast
HOSB - Building a Bootstrapped Open Source Company: Insights from Jetstack Founder & President, Matt Barker

The Hacking Open Source Business Podcast

Play Episode Listen Later Mar 7, 2023 34:53 Transcription Available


On this episode of the Hacking #opensource Business podcast, we talk with Matt Barker, the founder and president of Jetstack, about his experience building a bootstrapped open source company, drawing on his previous work at Canonical and MongoDB. In the interview, Matt talks about sales, business, and open-source technology (Linux, Kubernetes, Mesos, and more!). Matt covers the challenges of selling free open source software, support as a product, retention challenges, and measuring growth and adoption in open source software. He also compares his experiences at MongoDB and Canonical, and describes starting Jetstack by choosing Kubernetes as the next big thing. The conversation delves into bootstrapping Jetstack with a services model, moving from services to a software product, comparing Kubernetes to Apache Mesos, and betting on truly open software. The interview provides valuable insights into sales, business, and open-source technology, making it an informative resource for anyone looking to learn about bootstrapping an open-source company. Two Matts and an Avi walk into a conference in London and end up in the Matt Cave... and this is the result.
00:00:00 Welcome to the Matt Cave
00:01:48 Getting to know Matt Barker and Jetstack
00:02:30 The Early Days of Canonical
00:07:17 Support as a product, retention challenges
00:13:21 Trying to go to the proprietary world from open source
00:14:28 Comparing MongoDB & Canonical
00:17:27 Bootstrapping Jetstack with Services
00:19:50 Moving from services to product
00:22:53 Convinced Kubernetes was the right space
00:24:54 Betting on truly open software instead of open source controlled by 1 company
00:29:43 Measuring growth and adoption
00:33:04 Rapid Fire

Check out our other interviews, clips, and videos: https://l.hosbp.com/Youtube
Don't forget to visit the open-source business community at: https://opensourcebusiness.community/
Visit our primary sponsor, Scarf, for tools to help analyze your #opensource growth and adoption: https://about.scarf.sh/
Subscribe to the podcast on your favorite app:
  • Spotify: https://l.hosbp.com/Spotify
  • Apple: https://l.hosbp.com/Apple
  • Google: https://l.hosbp.com/Google
  • Buzzsprout: https://l.hosbp.com/Buzzsprout

ACM ByteCast
Matei Zaharia - Episode 32

ACM ByteCast

Play Episode Listen Later Dec 13, 2022 54:27


In this episode of ACM ByteCast, Bruke Kifle hosts Matei Zaharia, computer scientist, educator, and creator of Apache Spark. Matei is the Chief Technologist and Co-Founder of Databricks and an Assistant Professor of Computer Science at Stanford. He started the Apache Spark project during his PhD at UC Berkeley in 2009 and has worked broadly on other widely used data and machine learning software, including MLflow, Delta Lake, and Apache Mesos. Matei's research was recognized through the 2014 ACM Doctoral Dissertation Award, an NSF Career Award, and the US Presidential Early Career Award for Scientists and Engineers. Matei, who was born in Romania and grew up mostly in Canada, describes how he developed Spark, a framework for writing programs that run on a large cluster of nodes and process data in parallel, and how this led him to co-found Databricks around this technology. Matei and Bruke also discuss the new paradigm shift from traditional data warehouses to data lakes, as well as his work on MLflow, an open-source platform for managing the end-to-end machine learning lifecycle. He highlights some recent announcements in the field of AI and machine learning and shares observations from teaching and conducting research at Stanford, including an important current gap in computing education.

Immigrant Computer Scientists

Episode 32: Interview with Ion Stoica, Professor at the University of California, Berkeley. Creator, leader, and founder of Apache Spark, Ray, and Apache Mesos. Founder of Databricks, Anyscale, and Conviva. ACM Fellow; SIGOPS Mark Weiser Award. Immigrant from Romania in 1994.

Being a pro
8 ways to become a better SRE today! Site Reliability Engineering | 8 tips to follow as an SRE | SRE Mindset | Non Technical SRE

Being a pro

Play Episode Listen Later Jan 12, 2022 22:02


Site reliability engineering
Site Reliability Engineering, also popularly referred to as SRE, is a role in computer science and engineering whose main purpose is to provision, maintain, monitor, and manage the infrastructure in order to provide maximum application uptime and reliability. SRE is an emerging role, but the tasks an SRE performs have existed ever since the first application was developed. The scope of software developers ends when they have written the code for the application; everything from setting up the infrastructure, the services that run on it, and the required network connectivity, to providing a platform for the application and making sure every part of it runs reliably 24x7, is the duty of an SRE. In fact, Site Reliability Engineers can be considered the strong bridge between users and a reliable application.

To explain the different responsibilities of an SRE, I have divided them into four categories: Create, Monitor, Manage, and Destroy. I have always seen SRE this way, and definitely not as some ad-hoc process. Let's dive deep into each one of them.

Create
1. Provision virtual machines / PXE bare metals. SREs are responsible for provisioning virtual machines with the requested resources in terms of CPU, memory, disks, network configuration, and operating system. They are also responsible for rack awareness during provisioning. Example operating systems include Ubuntu, CentOS, and Windows.
2. Set up services. Example technologies include NGINX, Apache, RabbitMQ, Kafka, Hadoop, Traefik, MySQL, PostgreSQL, Aerospike, MongoDB, Redis, MinIO, Kubernetes, Apache Mesos, Marathon, MariaDB, and Galera.
3. Optimize the infrastructure. Since many components and services are used in the infrastructure, there is scope for improvement in performance, efficiency, and security. The SRE optimizes components by keeping them up to date, choosing the right service for the right job, and patching the servers.
4. Write monitoring scripts. When SREs maintain an infrastructure of any size, they never underestimate any component; they write monitoring scripts to track the components and metrics of every one of them. This provides real-time alerts when any component malfunctions and a better view of the infrastructure. SREs use programming languages like Bash, Python, Golang, and Perl, and tools like daemon processes, Riemann, InfluxDB, OpenTSDB, Kafka, Grafana, Prometheus, and APIs to monitor the infrastructure.
5. Write automation scripts. If a task takes more than 10 steps and is likely to be performed more than once, the SRE never hesitates to automate it. This saves time and also prevents human error. SREs use Bash, Python, Golang, Perl, and Ansible to automate tasks.
6. Manage users on the machines
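The "write monitoring scripts" task described above can be illustrated with a minimal sketch. This Python example is not from the episode: the 90% disk-usage threshold and the printed alert are illustrative assumptions, standing in for the Bash/Python checks SREs write; a real setup would export such metrics to Prometheus, InfluxDB, or Grafana rather than print them.

```python
# Minimal sketch of a disk-usage monitoring check, as described above.
# The 90% threshold and print-based alerting are illustrative assumptions;
# production scripts usually ship metrics to Prometheus, InfluxDB, etc.
import shutil


def disk_usage_percent(path="/"):
    """Return the used percentage of the filesystem holding `path`."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total


def check_disk(path="/", threshold=90.0):
    """Return an alert message if usage crosses `threshold`, else None."""
    pct = disk_usage_percent(path)
    if pct >= threshold:
        return f"ALERT: {path} is {pct:.1f}% full (threshold {threshold}%)"
    return None


if __name__ == "__main__":
    # A cron job or daemon would run this periodically and route the
    # alert to a pager or chat channel instead of stdout.
    print(check_disk() or "disk OK")
```

In practice a script like this would be one of many checks run on a schedule, with the alert routed through whatever paging or dashboard tooling the team uses.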

Being a pro
How a Site Reliability Engineer destroys the infrastructure? SRE | Tharun Shiv

Being a pro

Play Episode Listen Later Jan 11, 2022 5:54



Being a pro
How a Site Reliability Engineer maintains the infrastructure? SRE | Tharun Shiv

Being a pro

Play Episode Listen Later Jan 10, 2022 5:48



Being a pro
How a Site Reliability Engineer monitors the infrastructure? SRE | Tharun Shiv

Being a pro

Play Episode Listen Later Jan 9, 2022 7:45



Being a pro
How a Site Reliability Engineer is a Creator of the infrastructure? SRE | Tharun Shiv

Being a pro

Play Episode Listen Later Jan 8, 2022 5:35



airhacks.fm podcast with adam bien
SGI, NCSA Mosaic, Sun, Java, JSF, Java EE, Jakarta EE and Clouds

airhacks.fm podcast with adam bien

Play Episode Listen Later Oct 20, 2021 58:32


An airhacks.fm conversation with Ed Burns (@edburns) about: the TI-99/4A with speech synthesis, the Secrets of the Rockstar Programmers book, the Apple IIc with word processing and laser mouse, Superman 2, collecting half cents as rounding errors, War Games and Tron, the Logo programming language with a turtle, enjoying playing trumpet, marching band and a binary trumpet, The Nullpointers Band, Fourier transforms for music quantification in high school, just intonation and the key changes, equal temperament on piano, retuning the keyboard on the fly, applying at Sun Microsystems, Lighthouse Design and Objective-C, working at Silicon Graphics and the nice O2 workstation, working on the NCSA Mosaic browser at NCSA, learning Pascal and C++ at university, working on the Common Client Interface in the Mosaic browser, an in-person conference system, talent vs. grit, grit over talent, Floyd Marinescu started theserverside.com, the Spyglass browser, the SGI Cosmo and VRML, the SGI IRIX operating system, commodity vs. boutique fights at SGI, joining Sun's Lighthouse Design group, building a Java-based productivity suite, building a multi-dimensional spreadsheet: Quantrix, NeXTSTEP AppKit vs. Swing, the AOL Sun-Netscape alliance, OJI - Open Java VM Interface, the SPI for applets, Project Panama - the new JNI, the popularity of Struts was the motivation for JSF, Craig McClanahan and Amy Fowler started to work on JSF, JSF's code name was moonwalk, Hans Muller and the Swing Application Framework (JSR-296), the Java Community Process passion, IETF and W3C are like the JCP, the "Innovation Happens Elsewhere" book, JSF and Spring XML-based dependency injection, ATG Dynamo jhtml, JSF 2.0 composite components, JSF was a hot technology with multiple component implementations: RichFaces, ICEfaces, PrettyFaces, Liferay, PrimeFaces and MyFaces, the initial JSF target was page-based corporate apps, the AJAX Experience conference and Ben Galbraith, Martin Marinschek from Irian, Josh Juneau and the famous blog post, building a proprietary Java-based Docker orchestration framework on top of Apache Mesos at Oracle, Java EE on Azure, riding the crest, Ed's journey from client to server to cloud. Ed Burns on Twitter: @edburns

The BreakLine Arena
Ali Ghodsi: Building Databricks

The BreakLine Arena

Play Episode Listen Later Aug 31, 2021 59:09


Please join us in The BreakLine Arena for a conversation with Ali Ghodsi, CEO and Co-founder of Databricks. As Chief Executive Officer, Ali is responsible for the growth and international expansion of the company. He previously served as the VP of Engineering and Product Management before taking the role of CEO in January 2016. In addition to his work at Databricks, Ali serves as an adjunct professor at UC Berkeley and is on the board at UC Berkeley's RiseLab. Ali was one of the original creators of the open source project Apache Spark, and ideas from his academic research in the areas of resource management, scheduling, and data caching have been applied to Apache Mesos and Apache Hadoop. Ali received his MBA from Mid Sweden University in 2003 and his PhD from KTH/Royal Institute of Technology in Sweden in 2006 in the area of distributed computing. If you like what you've heard, please like, subscribe, or follow our show. To learn more about BreakLine Education, check us out at breakline.org.

ShipTalk
A Time Before, During, and After Kubernetes - Santiago - GumGum

ShipTalk

Play Episode Listen Later Jun 28, 2021 27:55 Transcription Available


In this episode of ShipTalk we are joined by Santiago Campuzano. Santiago is a Lead DevOps Engineer at GumGum and has seen his fair share of distributed systems. Santiago used to build distributed Java/JEE applications for the financial sector and went through the early days of containerization, leveraging technologies such as Apache Mesos and Marathon before adopting Kubernetes. In the episode we describe the time right before containerization became popular, and the journey that Santiago and firms like his have been on ever since. Santiago has also published an excellent blog describing concerns and different points of view when implementing K8s at scale. Blog: https://medium.com/gumgum-tech/implementing-kubernetes-the-hidden-part-of-the-iceberg-part-1-76c3e9684d49

Software at Scale
Software at Scale 18 - Alexander Gallego: CEO, Vectorized

Software at Scale

Play Episode Listen Later Apr 27, 2021 61:41


Alexander Gallego is the founder and CEO of Vectorized. Vectorized offers a product called Redpanda, an Apache Kafka-compatible event streaming platform that's significantly faster and easier to operate than Kafka. We talk about the increasing ubiquity of streaming platforms, what they're used for, why Kafka is slow, and how to safely and effectively build a replacement. Previously, Alex was a Principal Software Engineer at Akamai and the creator of the Concord Framework, a distributed stream processing engine built in C++ on top of Apache Mesos.
Apple Podcasts | Spotify | Google Podcasts
Highlights
7:00 - Who uses streaming platforms, and why? Why would someone use Kafka?
12:30 - What would be the reason to use Kafka over Amazon SQS or Google PubSub?
17:00 - What makes Kafka slow? The story behind Redpanda. We talk about memory efficiency in Redpanda, which is better optimized for machines with more cores.
34:00 - Other optimizations in Redpanda
39:00 - WASM programming within the streaming engine, almost as if Kafka were an AWS Lambda processor.
43:00 - How to convince potential customers to switch from Kafka to Redpanda?
48:00 - What is the release process for Redpanda? How do they ensure that a new version isn't broken?
52:00 - What have we learnt about the state of Kafka and the use of streaming tools?
Get on the email list at www.softwareatscale.dev

The DevOps Kitchen Talks's Podcast
21 - Kubernetes 1.21, Terraform 0.15, goodbye Mesos, Google Logica, and miners on GitHub

The DevOps Kitchen Talks's Podcast

Play Episode Listen Later Apr 20, 2021 78:41


It took fewer than 20 episodes, but it finally happened - we launched a full video format! Now you can not only hear us, but also see us. We are very interested in your feedback on how the pilot video turned out. In this episode we discussed whether you should expect a pay raise if you learn Kubernetes, and what's new in version 1.21. Sasha Dovnar dropped by as a guest and mused on when Terraform 1.0 will arrive and why you should upgrade to 0.15. Miners on GitHub - good or bad? We debated why Apple M1 support landed in the Linux kernel, and what Google's Logica is. Timestamps:
00:20 - Greetings in video format
00:48 - Maxim out of focus
03:10 - EPAM Hiring Week (https://www.epam-group.ru/careers/data-and-devops-hiring-week/devops)
04:23 - Salaries with k8s are higher than without it (https://kube.careers/report-2021-q1)
16:01 - A list of in-demand DevOps technologies
18:46 - The main changes in Kubernetes 1.21 (https://sysdig.com/blog/whats-new-kubernetes-1-21/)
30:23 - What Terraform 0.15 brings (https://www.hashicorp.com/blog/announcing-hashicorp-terraform-0-15-general-availability)
40:16 - DevOps Days Kiev (https://devopsdays.com.ua/)
41:33 - Apache says goodbye to Mesos (https://www.opennet.ru/opennews/art.shtml?num=54926)
44:38 - Miners move onto GitHub (https://m.habr.com/ru/news/t/550684/)
48:51 - How Sasha got mined
52:57 - gh supports workflows (https://github.blog/2021-04-15-work-with-github-actions-in-your-terminal-with-github-cli/)
55:45 - A collection of tips and snippets for GitHub Actions (https://github.com/yengoteam/awesome-gha-snippets)
58:55 - Why Sasha became disillusioned with GHA
1:00:42 - Linux and M1 (https://arstechnica.com/gadgets/2021/04/apple-m1-hardware-support-merged-into-linux-5-13/)
1:09:45 - Is Google Logica needed? (https://habr.com/ru/company/selectel/blog/551576/)
News in one line:
1:13:12 - An MP proposed replacing YouTube with patriotic content (https://dev.by/news/deputat-firewall)
1:13:54 - PCs are being bought more often (https://dev.by/news/rynok-pk-vzletel-na-55-v-pervom-kvartale-prodazhi-mac-vyrosli-bolee-chem-vdvoe)
1:14:36 - There will be no more LG phones (https://www.zdnet.com/article/lg-is-ending-its-smartphone-business/)
1:15:33 - VISA will allow the use of cryptocurrencies
Say thanks: https://www.patreon.com/devopskitchentalks
Music: https://www.bensound.com/

Cloud Posse DevOps
Cloud Posse DevOps "Office Hours" (2021-04-07)

Cloud Posse DevOps "Office Hours" Podcast

Play Episode Listen Later Apr 7, 2021 59:13


Cloud Posse holds public "Office Hours" every Wednesday at 11:30am PST to answer questions on all things related to DevOps, Terraform, Kubernetes, CICD. Basically, it's like an interactive "Lunch & Learn" session where we get together for about an hour and talk shop. These are totally free and just an opportunity to ask us (or our community of experts) any questions you may have.
You can register here: https://cloudposse.com/office-hours
Join the conversation: https://slack.cloudposse.com/
Find out how we can help your company: https://cloudposse.com/quiz https://cloudposse.com/accelerate/
Learn more about Cloud Posse: https://cloudposse.com https://github.com/cloudposse https://sweetops.com/ https://newsletter.cloudposse.com https://podcast.cloudposse.com/
00:00:00 Intro
00:01:25 Terraform 0.15 RC2 just released https://github.com/hashicorp/terraform/releases/tag/v0.15.0-rc2
00:02:05 Apache Mesos probably moving to "attic" https://lists.apache.org/x/thread.html/rab2a820507f7c846e54a847398ab20f47698ec5bce0c8e182bfe51ba%40%3Cdev.mesos.apache.org%3E
00:02:45 AWS Elasticsearch Announces "Auto-Tune" https://www.infoq.com/news/2021/04/amazon-elasticsearch-autotune/
00:06:05 AWS Announces Serial Console Access (for your pet servers) https://www.theregister.com/2021/04/01/aws_vm_serial_console/
00:06:55 GitHub Actions at war with Crypto Miners https://therecord.media/github-investigating-crypto-mining-campaign-abusing-its-server-infrastructure/
00:08:57 PHP Git Repo (self-hosted) Hacked / Backdoor Installed https://www.theregister.com/2021/03/29/php_repository_infected/
00:11:00 Sponsor Cloud Posse / SweetOps / Office Hours for $1 / mo on GitHub https://github.com/sponsors/cloudposse
00:15:09 How to manage auto rotation of IAM User Access Keys within terraform
00:25:05 Istio upgrade vulnerability
00:26:16 Cloud Posse tutorials available
00:27:55 AWS CLI SDK copy AMI between regions
00:31:45 ChatOps engineering
00:36:10 Application custom metrics best practices
00:42:11 DevOps KPI metrics
00:58:28 Outro
#officehours, #cloudposse, #sweetops, #devops, #sre, #terraform, #kubernetes, #aws
Support the show (https://cloudposse.com/office-hours/)

Kubernetes Podcast from Google
Siri, Storage and Solutions, with Josh Bernstein

Kubernetes Podcast from Google

Play Episode Listen Later Jan 26, 2021 38:22


Josh Bernstein has worked in a number of infrastructure roles before recently landing at Google. He talks about migrating Siri from AWS (pre-acquisition) to VMware to Mesos, and Dell EMC's work building what would become the Container Storage Interface. Guest host Jasmine Jaksic talks with Craig about snowcreatures.
Do you have something cool to share? Some questions? Let us know:
web: kubernetespodcast.com
mail: kubernetespodcast@google.com
twitter: @kubernetespod
Chatter of the week: Episode 15, with Dan Ciruli and Jasmine Jaksic; Snowpeople and snowthings
News of the week: Multi-dimensional pod autoscaling in this week's GKE release; Hitachi: vacuum cleaners in the 1990s and Kubernetes today; Garnet.ai; kind 0.10; New Google Cloud Run networking features; Don't cross the streams; Production Kubernetes from VMware Tanzu; Serverless for Everyone Else from Alex Ellis; Episode 116; Chris Aniszczyk's 2021 predictions; Episode 134; Priyanka Sharma's 2021 predictions; Episode 107; 14 LFX interns graduate; Kubernetes honey tokens by Brad Geesaman; Bad pods: privilege escalation by Seth Art; The US Air Force are feeling supersonic
Links from the interview: Apple acquires Siri; Xserve; Siri public introduction; Apple rebuilds Siri backend with Apache Mesos using the J.A.R.V.I.S. framework; Dell EMC {code} community; REX-Ray: announcement and docs; CNCF Governing Board; CI/CD startups to watch: Harness, Armory, Shipa; Josh Bernstein on Twitter

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20VC: Databricks CEO, Ali Ghodsi on The 3 Phases of Startup Growth, How to Evaluate Risk and Downside Scenario Planning & Who, What and When To Hire When Scaling Your Go-To-Market

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later Jan 14, 2021 41:52


Ali Ghodsi is the Founder & CEO @ Databricks, bringing together data engineering, science and analytics on an open, unified platform so data teams can collaborate and innovate faster. To date, Ali has raised over $897M for the company, including from the likes of a16z, NEA, Microsoft, Battery, Coatue, Greenbay and more. Prior to Databricks, Ali was one of the original creators of the open source project Apache Spark, and ideas from his research have been applied to Apache Mesos and Apache Hadoop.
In Today's Episode You Will Learn:
1.) How Ali made his way from fleeing Iran as a refugee to living in a Swedish ghetto? What was the founding moment for Ali with Databricks?
2.) How does Ali think about and evaluate risk today? Why does Ali always make his team do downside scenario planning? How does Ali think about his relationship to money today? Why does Ali disagree with gut decisions? What is his process for making decisions effectively?
3.) Stage 1: The Search for PMF: What are the core elements included in this phase? What types of leaders thrive in this phase? What types struggle? How can leaders sustain morale in the early days when it is not up and to the right? Who are the crucial hires in this phase?
4.) Stage 2: Scale Go-To-Market: What are the core roles needed to expand GTM fast and effectively? Why should you hire sales leaders before marketing leaders? Why is hiring finance leaders so crucial here? What mistakes are most often made here? How does the board resolve them?
5.) Stage 3: Process and Efficiency: What are the first and most important processes that need to be implemented? How does Ali need to change the type of leader he is to fit this stage? How does one retain creativity and nimble decision-making at scale and with process?
Items Mentioned In Today's Episode:
Ali's Favourite Book: Good Strategy Bad Strategy: The Difference and Why it Matters
As always you can follow Harry and The Twenty Minute VC on Twitter here!
Likewise, you can follow Harry on Instagram here for mojito madness and all things 20VC.

airhacks.fm podcast with adam bien

An airhacks.fm conversation with Alex Soto (@alexsotob) about: playing desperado on spectrum, peek, poke and rem with basic, implementing a clock and drawing a line, curiosity and programming, fascination with communication, sending emails to unknown people, Netscape Composer and Microsoft Frontpage, Netscape Mail Client, the "view code" button, Netscape Mail became Mozilla's Thunderbird, adding interactivity to HTML pages with JavaScript, coding number guess game with JavaScript, the friend declaration in C++, starting with Java 1.2 and Swing, Sun Java Workshop and Java Studio Workshop, using Servlets on Orion Application Server as backend for HTML forms, using Wicket web framework, doubled income for experienced developer, building portals with JBoss 3.0, Ant and XDoclet, VoIP and Session Initiation Protocol SIP project with JBoss in 2002, Bean Managed (BMP) and Container Managed Persistence (CMP), nice BMP and CMP - comes for free, installing JDK 1.3.1 (Kestrel), controlling medical robots with Java, IoT in 2005, moving physical machines with Java, losing focus after 8 years, building electronic voting systems, Java EE 5 came with productivity boost, TomEE booted in 1 second, introducing Java EE as "The New Thing", advocating Java EE at conferences, promoting Java EE as productivity and speed optimisation, Kohsuke Kawaguchi the creator of Hudson, starting at CloudBees with Kohsuke Kawaguchi, Kohsuke started Launchable, working with Apache Mesos, starting to work with Arquillian, becoming an Arquillian committer, speaking at Devoxx, ping from Aslak Knutsen, starting at the dream company - RedHat, the Monday message from Aslak, working with fabric8, Alex Soto on twitter: @alexsotob, Alex's blog: www.lordofthejars.com and Alex on GitHub

The Podlets - A Cloud Native Podcast
Cloud Native Infrastructure (Ep 5)

The Podlets - A Cloud Native Podcast

Play Episode Listen Later Nov 13, 2019 51:39


This week on The Podlets Podcast, we talk about cloud native infrastructure. We were interested in discussing this because we've spent some time talking about the different ways that people can use cloud native tooling, but we wanted to get to the root of it, such as where code lives and runs and what it means to create cloud native infrastructure. We also have a conversation about the future of administrative roles in the cloud native space, and explain why there will always be a demand for people in this industry. We dive into the expense for companies when developers run their own scripts and use cloud services as required, and provide some pointers on how to keep costs at a minimum. Joining in, you'll also learn what a well-constructed cloud native environment should look like, which resources to consult, and what infrastructure as code (IaC) really means. We compare containers to virtual machines and then weigh up the advantages and disadvantages of bare metal data centers versus using the cloud.
Note: our show changed name to The Podlets.
Follow us: https://twitter.com/thepodlets
Website: https://thepodlets.io
Feedback: info@thepodlets.io https://github.com/vmware-tanzu/thepodlets/issues
Hosts: Carlisia Campos, Duffie Cooley, Nicholas Lane
Key Points From This Episode:
• A few perspectives on what cloud native infrastructure means.
• Thoughts about the future of admin roles in the cloud native space.
• The increasing volume of internet users and the development of new apps daily.
• Why people in the infrastructure space will continue to become more valuable.
• The cost implications for companies if every developer uses cloud services individually.
• The relationships between IaC for cloud native and IaC for the cloud in general.
• Features of a well-constructed cloud native environment.
• Being aware that not all clouds are created equal and the problem with certain APIs.
• A helpful resource for learning more on this topic: Cloud Native Infrastructure.
• Unpacking what IaC is not and how Kubernetes really works.
• Reflecting on how it was before cloud native infrastructure, including using tools like vSphere.
• An explanation of what containers are and how they compare to virtual machines.
• Is it worth running bare metal in the cloud age? Weighing up the pros and cons.
• Returning to the mainframe and how the cloud almost mimics that idea.
• A list of the cloud native infrastructures we use daily.
• How you can have your own "private" cloud within your bare metal data center.
Quotes:
"This isn't about whether we will have jobs, it's about how, when we are so outnumbered, do we as this relatively small force in the world handle the demand that is coming, that is already here." — Duffie Cooley @mauilion [0:07:22]
"Not every cloud that you're going to run into is made the same. There are some clouds that exist of which the API is people. You send a request and a human being interprets your request and makes the changes. That is a big no-no." — Nicholas Lane @apinick [0:16:19]
"If you are in the cloud native workspace you may need 1% of your workforce dedicated to infrastructure, but if you are in the bare metal world, you might need 10 to 20% of your workforce dedicated just to running infrastructure." — Nicholas Lane @apinick [0:41:03]
Links Mentioned in Today's Episode:
VMware RADIO — https://www.vmware.com/radius/vmware-radio-amplifying-ideas-innovation/
CoreOS — https://coreos.com/
Brandon Philips on LinkedIn — https://www.linkedin.com/in/brandonphilips
Kubernetes — https://kubernetes.io/
Apache Mesos — http://mesos.apache.org
Ansible — https://www.ansible.com
Terraform — https://www.terraform.io
XenServer (Citrix Hypervisor) — https://xenserver.org
OpenStack — https://www.openstack.org
Red Hat — https://www.redhat.com/
Kris Nova on LinkedIn — https://www.linkedin.com/in/kris-nova
Cloud Native Infrastructure — https://www.amazon.com/Cloud-Native-Infrastructure-Applications-Environment/dp/1491984309
Heptio — https://heptio.cloud.vmware.com
AWS — https://aws.amazon.com
Azure — https://azure.microsoft.com/en-us/
vSphere — https://www.vmware.com/products/vsphere.html
Circuit City — https://www.circuitcity.com
Newegg — https://www.newegg.com
Uber — https://www.uber.com/
Lyft — https://www.lyft.com
Transcript:
EPISODE 05
[INTRODUCTION]
[0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn't reinvent the wheel. If you're an engineer, operator or technically minded decision maker, this podcast is for you.
[EPISODE]
[0:00:41.1] NL: Hello and welcome to episode five of The Podlets Podcast, the podcast where we explore cloud native topics one topic at a time. This week, we're going to the root of everything, money. No, I mean, infrastructure. My name is Nicholas Lane and joining me this week are Carlisia Campos.
[0:00:59.2] CC: Hi everybody.
[0:01:01.1] NL: And Duffie Cooley.
[0:01:02.4] DC: Hey everybody, good to see you again.
[0:01:04.6] NL: How have you guys been? Anything new and exciting going on? For me, this week has been really interesting, there's an internal VMware conference called RADIO where we have a bunch of engineering teams across the entire company kind of get together and talk about the future of pretty much everything, and so I have been kind of sponging that up this week, and that's been really interesting, kind of talking about all the interesting ideas and fascinating new technologies that we're working on.
[0:01:29.8] DC: Awesome.
[0:01:30.9] NL: Carlisia?
[0:01:31.8] CC: My entire team is at RADIO, which is in San Francisco, and I'm not. But I'm sort of glad I didn't have to travel.
[0:01:42.8] NL: Yeah, nothing too exciting for me this week.
Last week I was on PTO and that was great, so this week it has just been kind of getting spun back up and I'm getting back into the swing of things a bit.
[0:01:52.3] CC: Were you spun back up with a script?
[0:01:57.1] NL: Yeah, I was.
[0:01:58.8] CC: With infrastructure I suppose?
[0:02:00.5] NL: Yes, absolutely. This week on The Podlets Podcast, we are going to be talking about cloud native infrastructure. Basically, I was interested in talking about this because we've spent some time talking about some of the different ways that people can use cloud native tooling, but I wanted to kind of get to the root of it. Where does your code live, where does it run, what does it mean to create cloud native infrastructure? To start us off, you know, we're going to talk about the concept. To me, cloud native infrastructure is basically any infrastructure tool or service that allows you to programmatically create infrastructure, and by that I mean like your compute nodes, anything running your application, your networking, software defined networking, storage, Ceph, object store, that sort of thing, you can just spin them up in a programmatic, contract-driven way, and then databases as well, which is very nice. Then I also kind of lump in anything that's like a managed service as part of that. Going back to databases, if you use, for instance, their RDS or RDB tooling that provides databases on the fly, then they manage it for you. Those things to me are cloud native infrastructure. Duffie, what do you think?
[0:03:19.7] DC: Yeah, I think it's definitely one of my favorite topics.
I spent a lot of my career working with infrastructure one way or the other, whether that meant racking servers in racks and doing it the old school way, figuring out power budgets and, you know, dealing with networking and all of that stuff, or whether that meant finally getting to a point where I have an API and my customer's going to come to me and say, I need 10 new servers, and I can be like, one second. Then run a script and they have 10 new servers, versus, you know, having to order the hardware, get the hardware delivered, get the hardware racked. Replace the stuff that was dead on arrival, kind of go through that whole process and yeah. Cloud native infrastructure or infrastructure as a service is definitely near and dear to my heart.
[0:03:58.8] CC: How do you feel about it if you are an admin? You work for VMware and you are a field engineer now. You're basically a consultant, but imagine you were back in that role of an admin at a company and the company was practicing cloud native infrastructure things. Basically, what we're talking about is we go back to this theme of self-sufficiency a lot. I think we're going to be going back to this a lot too, as we go through different topics. Say someone wants a server in that environment now, they can run an existing script that maybe you made for them. But do you have concerns that your job is redundant now that just one script can do a lot of your work?
[0:04:53.6] NL: Yeah, in the field engineering org, we kind of have this mantra that we're trying to automate ourselves out of a job. I feel like anyone who is like really getting into cloud native infrastructure, that is the path that they're taking as well. If I were an admin in a world that was like hybrid or anything like that, where they had like on prem or bare metal infrastructure and they had cloud native infrastructure.
I would be more than ecstatic to take any amount of the administrative work of like spinning up new servers in the cloud native infrastructure. If the people just need somewhere they can go click, I got whatever services I need and they all work together because the cloud makes them work together, awesome. That gives me more time to do other tasks that may be a bit more onerous or less automated. I would be all for it.
[0:05:48.8] CC: You're saying that if you are – because I don't want the admin people listening to this to stop listening and thinking, screw this. You're saying, if you're an admin, there will still be plenty of work for you to do?
[0:06:03.8] NL: Yeah, there's always stuff to do I think. If not, then I guess maybe it's time to find somewhere else to go.
[0:06:12.1] DC: There was a really interesting presentation that really stuck with me when I was working for CoreOS, which is another infrastructure company. It was a presentation by our CTO, his name is Brandon Philips, and Brandon put together a presentation around the idea that every single day, there are, you know, so many thousand new users of the Internet coming online for the first time. That's so many thousand people who are like going to be storing their photos there, getting emails, doing all those things that we do in our daily lives with the Internet. That globally, across the whole world, there are only about, I think it was like 250,000 or 300,000 people that do what we do, that understand the infrastructure at a level that they might even be able to automate it, you know? That work in the IT industry and are able to actually facilitate the creation of those resources on which all of those applications will be hosted, right? This isn't even taking into account the number of applications per day that are brought onto the Internet or made available to users, right? That in itself is a whole different thing.
How many people are putting up new webpages or putting up new content or what have you every single day. Fundamentally, I think that we have to think about the problem in a slightly different way, this isn't about whether we will have jobs, it's about how, when we are so outnumbered, how do we as this relatively small force in the world handle the demand that is coming, that is already here today, right? Those people that are listening, who are working infrastructure today, you're even more valuable when you think about it in those terms because there just aren't enough people on the planet today to solve those problems using the tools that we are using today, right? Automation is king and it has been for a long time but it's not going anywhere, we need the people that we have to be able to actually support much larger numbers or bigger scale of infrastructure than they know how to do today. That's the problem that we have to solve.
[0:08:14.8] NL: Yeah, totally.
[0:08:16.2] CC: Looking from the perspective of whoever is paying the bills. I think that in the past, as a developer, you had to request a server to run your app in the test environment and eventually you'll get it and that would be the server that everybody would use to run against, right? Because you're the developer in the group and everybody's developing different features and that one server is what we would use to push out changes to and do some level of manual task or maybe we'll have a QA person who would do it. That's one server or one resource or one virtual machine. Now, maybe I'm wrong but as a developer, I think what I'm seeing is I will have access to compute and storage and I'll run a script and I boot up that resource just for myself. Is that more or less expensive? You know? If every single developer has this facility to speed things up much quicker because we're not depending on IT and if we have a script.
I mean, the reality is not as easy as just, one command and you get it, but – if it's so easy and that's what everybody is doing, doesn't it become expensive for the company?
[0:09:44.3] NL: It can. I think when cloud native infrastructure really became more popular in the workplace and became more like mainstream, there was a lot of talk about the concept of sticker shock, right? It's the idea that you had this predictable amount of money that was allocated to your infrastructure before, these things cost this much and their value will degrade over time, right? The server you had in 2005 is not going to be as valuable as the server you buy in 2010, but that might be a refresh cycle of like five years or so. But you have this predictable amount of money. Suddenly, you have this script that we're talking about that can spin up an equivalent resource to one of those servers, and if someone just leaves it running, that will run for a long time, and so for irresponsible users of the cloud, or even just regular users of the cloud, it does cost a lot of money to use any of these cloud services, or it can cost a lot of money. Yes, there is some concern about the amount of money that these things cost because honestly, as we're exploring cloud native topics, one thing keeps coming up is that cloud really just means somebody else's computer, right? You're not carrying the cost of maintenance or the time it takes to maintain things, somebody else is, and you're paying that price upfront instead of doing it on like a yearly basis, right? It's less predictable and usually a bit more than people expected, right? But there is value there as well.
[0:11:16.0] CC: You're saying if the user is diligent enough to terminate clusters or the machine, that is how you don't rack up unnecessary cost?
[0:11:26.6] NL: Right. For a test, like say you want to spin up your code really quickly and just need a quick setup, like networking and compute resources, to test out your code, and you spin up a small, like a tiny instance somewhere in one of the clouds, test out your code and then kill the instance. That won't cost hardly anything. It didn't really cost you much in time either, right? You had this automated process hopefully, or had a manual process that isn't too onerous, and you get the resource and the things you needed and you're good. If you aren't a good player and you keep it going, that can get very expensive very quickly. Because it's a number of resources used per hour, I think, is how most billing happens in the cloud. That can exponentially – I mean, it won't really exponentially grow, but it will increase in time to a value that you are not expecting to see. You get a bill and you're like, holy crap, what is this?
[0:12:26.9] DC: I think it's also – I mean, this is definitely where things like orchestration come in, right? With the level of abstraction that you get from things like Kubernetes or Mesos or some other tools, you're able to provide access to those resources in a more dynamic way, with the expectation, and sometimes an explicit contract, that the workload that you deploy will be deployed on common equipment, allowing for things like bin packing, which is a pretty interesting term when it comes to infrastructure and means that you can think of the fact that like, for a particular cluster, I might have 10 of those VMs that we talk about having high value. I intend to keep those running, and then my goal is to make sure that I have enough consumers of those 10 nodes to be able to get my value out of it, and so when I split up the environments, each developer gets a namespace, right?
This gets me the ability to effectively oversubscribe those resources that I'm paying for in a way that will reduce the overall cost of ownership or cost of – not ownership, maybe cost of operation for those 10 nodes. Let's take a step back and go down, like, memory lane.
[0:13:34.7] NL: When did you first hear about the concept of IaaS or cloud native infrastructure or infrastructure as code? Carlisia?
[0:13:45.5] CC: I think in the last couple of years, it pretty much coincided with when I started to move into cloud native and Kubernetes. I'm not clear on the difference between the infrastructure as code for cloud native versus infrastructure as code for the cloud in general. Is there anything about cloud native that has different requirements and solutions? Are we just talking about the cloud, and the same applies for cloud native?
[0:14:25.8] NL: Yes, I think that they're the same thing. Like, infrastructure as code for the cloud is inherently cloud native, right? Cloud native just means that whatever you're trying to do leverages the tools and the contracts that a cloud provides. Basic infrastructure as code is basically just how do I use the cloud, and that's –
[0:14:52.9] CC: In an automated way.
[0:14:54.7] NL: In an automated way, or just in a way, right? A properly constructed cloud should have a user interface of some kind that uses an API, right? A contract to create these machines or create these resources. So that the way that it creates its own resources is the same way that you create the resources if you do it programmatically, right, with an orchestration tool like Ansible or Terraform. The API calls that it makes itself and through its UI need to be the same, and if we have that, then we have a well-constructed cloud native environment.
[0:15:32.7] DC: Yeah, I agree with that.
I think, you know, from the perspective of cloud infrastructure or cloud native infrastructure, the goal is definitely to have – it relies on one of the topics that we covered earlier in the program, around the idea of API first, or API driven, being such an intrinsic quality of any cloud native architecture, right? Because, fundamentally, if we can’t do it programmatically, then we’re kind of stuck in that old world of racking servers or going through some human managed process, and then we’re right back to the same state that we were in before, and there’s no way that we can actually scale our ability to manage these problems, because we’re stuck in a place where it’s like one to one rather than one to many. [0:16:15.6] NL: Yeah, those APIs are critical. [0:16:17.4] DC: The APIs are critical. [0:16:18.4] NL: You bring up a good point, reminded me of something. Not every cloud that you’re going to run into is made the same. There are some clouds that exist, and I’m not going to specifically call them out, but there are some clouds that exist where the API is people. You send in your request and a human being interprets your request and makes the changes. That is a big no-no. Me no like, no good, my brain stopped. That is a poorly constructed cloud native environment. In fact, I would say it is not a cloud native environment at all; it can barely call itself a cloud, in essence. Duffie, how about you? When was the first time you heard the concept of cloud native infrastructure? [0:17:01.3] DC: I’m going to take this question in the form of, what was the first programmatic infrastructure as a service that I played with.
For me, that was actually back in the Nicira days, when we were virtualizing the network and effectively providing and building an API that would allow you to create network resources, and the different targets that we were developing for were things like XenServer, which at the time had a reasonable API that would allow you to create virtual machines but didn’t really have a good virtual network solution. There were also technologies like KVM – the ability to actually use KVM to create virtual machines, again with an API – although in KVM it wasn’t quite the same as an API. That’s kind of where things like OpenStack and those technologies came along and wrapped a lot of the capability of KVM behind a RESTful API, which was awesome. But yeah, I would say XenServer was the first one, and that gave me the ability – like a command line option with which I could stand up and create virtual machines and give them so many resources and all that stuff. You know, from my perspective, Nicira was the first network API that I actually ever saw, and it was also one of the first ones that I worked on, which was pretty neat. [0:18:11.8] CC: When was that, Duffie? [0:18:13.8] DC: Some time ago. I would say like 2006 maybe? [0:18:17.2] NL: The TVs didn’t have color back then. [0:18:21.0] DC: Hey now. [0:18:25.1] NL: For me, for sort of cloud native infrastructure, it was the OpenStack days, I think, when I first heard the phrase infrastructure as a service. At the time, I didn’t even get close to grokking it.
I still don’t know if I grok it fully, but I was working at Red Hat at the time, so this was probably back in like 2012, 2013, and we were starting to leverage OpenStack more, and having this API driven toolset that could spin up these VMs or instances was really cool to me. But it’s something I didn’t really get into using until I got into Kubernetes, and specifically Kubernetes on different clouds such as AWS or Azure. We’ll touch on those a little bit later, but it was using those, and then having a CLI with an API that I could reference really easily to spin things up – I was like, holy crap, this is incredible. That must have been around the 2015, ’16 timeframe, I think. I think I actually heard the phrase cloud native infrastructure first from our friend Kris Nova’s book, Cloud Native Infrastructure. I think that really helped me wrap my brain around what it really is to have something be cloud native infrastructure, how the different clouds interact in this way. I thought that was a really handy book, I highly recommend it. Also, well written and an interesting read. [0:19:47.7] CC: Yes, I read it back to back when I joined, and I need to read it again actually. [0:19:53.6] NL: I agree, I need to go back to it. I think we’ve touched on something that we normally do, which is what does this topic mean to us. I think we kind of touched on that a bit, but is there anything else that you all want to expand upon? Any aspect of infrastructure that we’ve not touched on? [0:20:08.2] CC: Well, I was thinking to say something which reflects my first encounter with Kubernetes, when I joined Heptio and started using Kubernetes for the very first time, and I had such a misconception of what Kubernetes was. The reason I’m saying what I’m going to say is that I wanted to relate what infrastructure as code is not. [0:20:40.0] NL: That’s a very good point actually, I like that.
[0:20:42.4] CC: Maybe also what Kubernetes is not – I’m not totally clear on what it is I’m going to say. Hang with me. It’s all related. I work on Velero, and Velero runs on Kubernetes, so I do have to have Kubernetes running for me to run Velero – it can be on my machine, or on a cloud provider, or on our own prem; it’s pluggable. For Kubernetes to run, I need to have a cluster running. Not one instance – because I was used to, like, yeah, I could bring up an instance or two, I’ve done this before – but bringing up a cluster, making sure that everything’s connected? When I started having to do that, I was thinking, I thought that’s what Kubernetes did. Why do I have to bother with this? Isn’t that what Kubernetes is supposed to do? Isn’t it like, just Kubernetes, and all of that gets done? Then I had that realization, you know, that encounter with reality: no, I still have to boot up my infrastructure. That doesn’t go away because we’re doing Kubernetes. Kubernetes is this abstraction layer that sits on top of that infrastructure. Now what? Okay, I can do it manually – TGIK has a great episode where Joe goes and installs everything by hand; he does every single thing, right? Every single step that you need to do, hooking up the networking pieces. It’s brilliant; it really helps you visualize how all the stuff is brought up, because ideally you’re not doing that by hand, which is what we’re talking about here. I used, for example, CloudFormation on AWS, with a template that Heptio has in partnership with AWS. There is a template that you can use to bring up a cluster for Kubernetes, and everything’s hooked up; the proper networking piece is there. You have to do that first, and then you have Kubernetes installed as part of that template. But the take home lesson for me was that I definitely wanted to do it using some sort of automation, because otherwise I don’t have time for that.
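The CloudFormation workflow Carlisia describes follows the same declarative pattern. This is not the Heptio Quick Start template itself, just a minimal sketch of the shape of a CloudFormation template, with placeholder instance size and AMI:

```yaml
# Illustrative CloudFormation template (not the actual Heptio Quick Start).
AWSTemplateFormatVersion: "2010-09-09"
Description: Sketch of declaring infrastructure for a cluster node
Resources:
  NodeInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.medium          # placeholder size
      ImageId: ami-0123456789abcdef0   # placeholder AMI ID
```

You would submit a template like this with `aws cloudformation create-stack`; the real Quick Start wires up the VPC, subnets, and the Kubernetes control plane in the same declarative way, which is the "boot up my infrastructure first, then Kubernetes" step she is talking about.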
[0:23:17.8] DC: Ain’t nobody got time for that, exactly. [0:23:19.4] CC: Ain’t got no time for that. That’s not my job. My job is something different. I’m just bringing this stuff up to test my software. Absolutely very handy. And for people not working with Kubernetes yet, I just wanted to clarify that there is a separation: one thing is having your infrastructure up, and then you have Kubernetes installed on top of that, and then, you know, you might have your application running in Kubernetes, or you can have an external application that interacts with Kubernetes as an extension of Kubernetes, right? Which is what the project that I work on is. [0:24:01.0] NL: That’s a good point, and that’s something we should dive into, and I’m glad that you brought this up actually. That’s a good example of a cloud native application using cloud native infrastructure, and Kubernetes does a pretty good job of that all around. So the idea is that Kubernetes itself is a platform. You have a platform as a service – that’s kind of what you’re talking about – which is like, I just need to spin up a Kubernetes and then boom, I have a platform, and that comes with the infrastructure part of it. There are more and more of these managed Kubernetes offerings coming out that facilitate that function, and those are an aspect of cloud native infrastructure. Those are the managed services that I was referring to, where the administrators of the cloud are taking it upon themselves to do all that for you and then manage it for you, and I think that’s a great offering for people who just don’t want to get into the weeds or don’t want to worry about the management of their application. For instance, databases: these managed services are an awesome tool. And going back to Kubernetes a little bit as well, it is a great show of how a cloud native application can work with the infrastructure that it’s on.
For instance, with Kubernetes, if you spin up a service of type LoadBalancer and it’s connected to a cloud properly, the cloud will create that object inside of itself for you, right? A load balancer in AWS is an ELB, it’s just a load balancer in Azure, and I’m not familiar with the terms that the other clouds use, but they will create these things for you. I think that’s the dopest thing on the planet. That is so cool, where I’m just like, this tool over here – I created it in this thing and it told this other thing how to make that work in reality. [0:25:46.5] DC: That is so cool. Orchestration magic. [0:25:49.8] NL: Yeah, absolutely. [0:25:51.6] DC: I agree, and actually I kind of wanted to make a point on that as well, which is that, the way I interpreted your original question, Carlisia, was like, “What is the difference between these different models – infrastructure as code versus, you know, infrastructure as a service versus platform as a service versus containers as a service – like, what differentiates these things?” And for my part, I feel like it is effectively an evolution of the API – what the right entry point for your consumer is. So when we consider container orchestration, we consider things like Mesos and Kubernetes and technologies like that. We make the assumption that the right entry point for that user is an API in which they can define those things that we want to orchestrate, those containers or those applications, and we are going to provide, within the form of that platform, capabilities like, you know, service discovery, and being able to handle networking, and do all of those things for you, so that all you really have to do is define what needs to be deployed and what other services it needs to rely on, and away you go. Whereas when you look at infrastructure as a service, the entry point is fundamentally different, right?
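The LoadBalancer behavior Nick describes is driven by a manifest like the following (the names and ports are illustrative); on AWS, the cloud integration reacts to this object by provisioning an ELB:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web            # illustrative name
spec:
  type: LoadBalancer   # asks the cloud for a real load balancer (an ELB on AWS)
  selector:
    app: web           # routes to pods carrying this label
  ports:
    - port: 80         # external port on the load balancer
      targetPort: 8080 # port the pods listen on
```

The manifest never mentions ELBs or Azure load balancers by name; the platform's API is the entry point, and the cloud-specific object gets created for you underneath.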
We need to be thinking about what the infrastructure needs to look like. I might ask an infrastructure as a service API how many machines I have running, and what networks they are associated with, and how much memory and disk is associated with each of those machines. Whereas if I am interacting with a platform as a service, I might ask it questions about the primitives that are exposed by the platform: how many deployments do I have? What namespaces do I have access to? How many pods are running right now versus how many I asked to be running? Those kinds of questions. [0:27:44.6] NL: Very good point, and yeah, I am glad that we explored that a little bit. Cloud native infrastructure is not nothing, but it is close to useless without a properly leveraged cloud native application of some kind. [0:27:57.1] DC: These APIs all the way down give you this true flexibility and the real functionality that you are looking for in cloud, because as a developer, I don’t need to care how the networking works or where my storage is coming from or where these things are actually located. I want someone to do it for me, and the API does that – success – and that is what cloud native infrastructure gets you. Speaking of the things that you don’t care about: what was it like before cloud? What did we have before cloud native infrastructure? The things that come to mind are things like vSphere. I think vSphere is a bridge between the bare metal world and the cloud native world, and that is not to say that vSphere itself is necessarily cloud native – there are some limitations. [0:28:44.3] CC: What is vSphere? [0:28:46.0] DC: vSphere is a tool that VMware created a while back. I think it premiered back in the 2000-2001 timeframe, and it was a way to predictably create and manage virtual machines – a virtual machine being a kernel that sits on top of a kernel inside of a piece of hardware.
[0:29:11.4] CC: So is vSphere to virtual machines what Kubernetes is to containers? [0:29:15.9] DC: Not quite the same thing, I don’t think, because fundamentally the underlying technologies are very different. Another way of explaining the difference, now that you have that history, is, you know, back in the day – the 90s, the 2000s, even currently – what we have is a layer of people who are involved in dealing with physical hardware. They rack 10 or 20 servers, and before we had orchestration and virtualization and any of those things, we would actually be installing an operating system and applications on those servers. Those servers would be webservers and those servers would be database servers, and there would be a physical machine dedicated to a single task, like a webserver or a database. Where vSphere comes in, and XenServer and then KVM and those technologies, is that we think about this model fundamentally differently: instead of thinking about it like one application per physical box, we think about each physical box being able to hold a bunch of these virtual boxes that look like virtual machines. And so now those virtual machines are where we put our application code. I have a virtual machine that is a web server. I have a virtual machine that is a database server. I have that level of abstraction, and the benefit of this is that I can get more value out of those hardware boxes than I could before, right? Before, I had to buy one box for one application; now I can buy one box for 10 or 20 applications. When we take that to the next level, when we get to container orchestration, we realized, “You know what? Maybe I don’t need the full abstraction of a machine.
I just need enough of an abstraction to give me just enough isolation back and forth between those applications, such that they have their own file system but they can share the kernel, and they have their own network but they can share the physical network – that we have enough isolation between them that they can’t interact with each other except for when intent is present, right? You can see that this is a much more granular level of abstraction, and the benefit of that is that we are not actually trying to create a virtual machine anymore. We are just trying to effectively isolate processes in a Linux kernel, and so instead of 20 or maybe 30 VMs per physical box, I can get 110 processes all day long, you know, on a physical box. And again, this takes us back to that concept that I mentioned earlier around bin packing. When we are talking about infrastructure, we have been on this eternal quest to make the most of what we have, to make the most of that infrastructure. You know, what tools and tooling do we need to be able to see the most efficiency for the dollar that we spend on that hardware? [0:32:01.4] CC: That is simultaneously a great explanation of containers and of how containers compare with virtual machines. Bravo. [0:32:11.6] DC: That was off the cuff, too – that was great. [0:32:14.0] CC: Now explain vSphere please, I still don’t understand it. I don’t know what it does. [0:32:19.0] NL: So vSphere is the one that creates the many virtual machines on top of a physical machine. It gives you the capability of having really good isolation between these virtual machines, and inside of these virtual machines you feel like you have a metal box, but it is not a metal box – it is just a process running on a metal box. [0:32:35.3] CC: All right, so it is a system that holds multiple virtual machines inside the same machine.
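The bin packing Duffie keeps returning to – getting the most workloads onto the fewest boxes – can be sketched as a classic first-fit heuristic. The node capacity and workload sizes below are made up for illustration (think GB of memory):

```python
# First-fit-decreasing bin packing: place each workload on the first node
# with room, provisioning a new node only when nothing fits.

def first_fit(capacity, workloads):
    """Return a list of nodes, each a list of the workload sizes packed onto it."""
    nodes = []
    for w in sorted(workloads, reverse=True):  # largest workloads first
        for node in nodes:
            if sum(node) + w <= capacity:
                node.append(w)
                break
        else:
            nodes.append([w])  # no existing node fits; provision a new one
    return nodes

packed = first_fit(64, [30, 10, 22, 8, 4, 40, 16])
print(len(packed))           # prints 3 -- nodes needed, vs one box per workload
print([sum(n) for n in packed])  # prints [62, 64, 4] -- utilization per node
```

Seven workloads fit on three 64-unit nodes instead of seven dedicated boxes, which is the whole "more value per dollar of hardware" argument in miniature; real schedulers pack across multiple dimensions (CPU, memory, disk) but the shape of the problem is the same.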
[0:32:41.8] NL: Yeah. So think of it in the cloud native infrastructure world: vSphere is essentially the API, or you could think of it as the cloud itself – which is, in a sense, an AWS but for your datacenter. The difference being that there isn’t a particularly useful API that vSphere exposes, so it is harder for people to leverage programmatically, which makes it difficult for me to call vSphere a cloud native tool. It is a great tool, and it has worked wonders and still works wonders for a lot of companies throughout the years, but I would hesitate to lump it into cloud native functionality. So prior to cloud native, or infrastructure as a service, we had these tools like vSphere, which allowed us to make smaller and smaller VMs, or smaller and smaller compute resources, on a larger compute resource. And going back to something we were talking about – containers and how you spin up these processes – prior to containers, in this world of VMs, a lot of times what you would do is create a VM that had your application already installed into it. It is burnt into the image, so that when that VM stood up, it would spin up that process. So that would be one way that you would start processes. The other way would be through orchestration tools similar to Ansible – before it existed, or Ansible itself – that essentially just ran SSH on a number of servers to start up these processes, and that is how you’d get distributed systems prior to things like cloud native and containers. [0:34:20.6] CC: Makes sense. [0:34:21.6] NL: And so before we had vSphere, we had XenServer. Before we had virtual machine automation, which is what these tools are, we had bare metal. We just had joes like Duffie and me cutting our hands on rack equipment.
A server was never installed properly unless my blood was on it, essentially, because they are heavy and it is all metal and sharp in some capacity, and so you would inevitably squash a hand or something. And so you’d rack up the server and then you’d plug in all the things, plug in all the plugs, and then set it up manually, and then it is good to go, and then someone can use it at will – or in a logical manner, hopefully – and that is what we had to do before. It’s like, “Oh, I need a new compute resource. Okay, well, let us call Circuit City or whoever, let us call Newegg, and get a new server in, and there you go,” and then there is a process for that. And yeah, I don’t know, I’m dying of blood loss over here. [0:35:22.6] CC: And still a lot of companies are using bare metal as a matter of course, which brings up another question, if one would want to ask, which is: is it worth it these days to run bare metal if you have the clouds, right? One example is we see companies like Uber, Lyft, all of these super high volume companies, using public clouds – which is to say, paying another company for all the traffic of data, storage of data, and compute and security and everything. And you know, one could say you would save a ton of money having that in-house on bare metal, and other people would say there is no way – it costs so much to host all of that. [0:36:29.1] NL: I would say it really depends on the environment. So usually I think that if a company is using only cloud native resources to begin with, it is hard to make the transition to bare metal, because you are used to these tools being in place.
A company that is more familiar – like they came from a bare metal background and moved to cloud – may leverage both in a hybrid fashion well, because they should have tooling, or they can now create tooling, so that their bare metal environment can mimic the functionality of the cloud native platform. It really depends on the company, and also the need for security and data retention and all of these things. Like, if you need this granularity of control, bare metal might be a better approach, because if you need to make sure that your data doesn’t leave your company in a specific way, putting it on somebody else’s computer probably isn’t the best way to handle that. So there is a balancing act: how many resources are you using in the cloud, how are you using the cloud, and what does that cost look like, versus what does it cost to run a data center – to have the physical real estate, and then the electricity, the HVAC, people’s jobs, like their salaries specifically to manage that location, and your compute and network resources and all of these things. That is a question that each company will have to ask. There is normally no hard and fast answer, but for many smaller companies something like the cloud is better, because you don’t have to think of all these other costs associated with running your application. [0:38:04.2] CC: But then if you are a small company – I mean, if you are a small company it is a no brainer. It makes sense for you to go to the clouds. But then you said that it is hard to transition from the clouds to bare metal. [0:38:17.3] NL: It can be, and it really depends on the people that you have working for you. If the people you have working for you are good at creating automation and are good at managing infrastructure of any kind, it shouldn’t be too bad, but as we’re moving more and more into a cloud focused world, I wonder if those people are going to start going away.
[0:38:38.8] CC: For people who are just listening on the podcast, Duffie was heavily nodding as Nick was saying that. [0:38:46.5] DC: I was. I do, I completely agree with the statement that it depends on the people that you have, right? And fundamentally, I think the point I would add to this is that a company like Uber or a company like Lyft – how many people do they employ that are focused on infrastructure, right? I don’t know the answer to this question, but I am posing it, right? And so, this is a pretty hot market for people who understand infrastructure, so they are not going to have a ton of them. So what specific problems do they want those people that they employ that are focused on infrastructure to solve, right? And we are thinking about this as a scale problem. I have 10 people that are going to manage the infrastructure for all of the applications that Uber has – maybe 20 people – but it is going to be a small percentage compared to the number of people that I have developing those applications, or doing marketing, or elsewhere within the company. I am going to quickly find myself off balance in the number of applications that I need to actually support – that I need to operationalize to be able to get deployed, right? – versus the number of people that I have doing the infrastructure work to provide that base layer on which those applications will be deployed, right? I look around in the market and I see things like orchestration. I see things like Kubernetes and Mesos and technologies like that, that can provide kind of a multiplier – or even AWS or Azure or GCP. These things act as a multiplier for those 10 infrastructure folks that I have, right? Because now they are not trying to figure out how to store, you know, this data.
They are not worried about data centers, they are not worried about servers, they are not worried about networking, right? They can actually have ten people that are decent at infrastructure that can expand – that can spin up very large amounts of infrastructure – to satisfy the development and operational needs of a company the size of Uber, reasonably. But if we had to go back to a place where we had to do all of that in the physical world – rack those servers, deal with the power, deal with the colo space, deal with all the cabling and all of that stuff – 10 people are probably not enough. [0:40:58.6] NL: Yeah, absolutely. To put it into numbers: if you are in the cloud native workspace you may need 1% of your workforce dedicated to infrastructure, but if you are in the bare metal world, you might need 10 to 20% of your workforce dedicated just to running infrastructure, because the overhead of people’s work is so much greater, and a lot of it is focused on things that are tangible instead of things that are fundamental, like automation, right? So that 1% – for Uber, I am pulling this number totally out of nowhere – but that one percent, their job usually focuses on automating the allocation of resources and figuring out the tools that they can use to better leverage those clouds. In the bare metal environment, those people’s jobs are more like, “Oh crap, suddenly it is like 90 degrees in the data center in San Jose, what is going on?” and then having someone go out there and figure out what physical problem is going on. That is their day to day lives. And something I want to touch on really quickly as well with bare metal: prior to bare metal we had something kind of interesting that we were able to leverage from an infrastructure standpoint, and that is mainframes – and we are actually going back a little bit to the mainframe idea. The idea of a mainframe is essentially – it is almost like a cloud native technology, but not really.
So instead of, like, being able to spin up X number of resources and networking and make that work, with the mainframe everyone used the same compute resource. It was just one giant compute resource. There was no networking needed, because everyone was connected to the same resource, and they would just do whatever they needed – write their code, run it and then be done with it – and then come off. And it is a really interesting idea, I think, where the cloud almost mimics the mainframe idea – everyone just connects to the cloud, does whatever they need and then comes off, but at a different scale. Duffie, do you have any thoughts on that? [0:43:05.4] DC: Yeah, I agree with your point. I think it is interesting to go back to mainframe days, and I think the difference between what a mainframe is and what the idea of a cluster is, is kind of the domain of what you’re looking at. A mainframe considers everything to be within the same physical domain, whereas when you start getting into the larger architectures, or some of the more scalable architectures, you find that, just like any distributed system, we are spreading that work across a number of physical nodes. So we think about it fundamentally differently, but it is interesting, the parallels between the work that we are doing today versus what we were doing in mainframe times. [0:43:43.4] NL: Yeah, cool. I think we are getting close to wrap up time, but something that I wanted to touch on rather quickly: we have mentioned these things by name, but I want to go over some of the cloud native infrastructures that we use on a day to day basis. Not that we haven’t mentioned it before, but Amazon’s AWS is pretty much the number one cloud, I’m pretty sure, right? It has the most users, and they have a really well structured API.
A really good CLI piece and GUI – sometimes there are some problems with it – and it is the thing that people think of when they think cloud native infrastructure. Going back to that point again – agreed, AWS is one of the largest cloud providers and has certainly the most adoption as cloud native infrastructure, and it is really interesting to see a number of solutions out there. IBM has one; GCP, Azure – there are a lot of other solutions out there now that are really focused on trying to follow the same – maybe not the same exact patterns that AWS has, but certainly providing this consistent API for all of the same services, and maybe some differentiating services as well. So yeah, you can definitely tell it is sort of follow-the-leader. You can definitely tell that AWS stumbled onto a really great solution there, and that all of the other cloud providers are jumping on board trying to get a piece of that as well. [0:45:07.0] DC: Yep. [0:45:07.8] NL: And also, something we can touch on a little bit as well: from a cloud native infrastructure standpoint, it isn’t wrong to say that a bare metal data center can be a cloud native infrastructure. As long as you have the tooling in place, you can have your own cloud native infrastructure – your own private cloud, essentially, and I know that private cloud doesn’t actually make any sense, that is not how cloud works – but you can have a cloud native infrastructure in your own data center. It takes a lot of work, and it takes a lot of management, but it isn’t something that exists solely in the realm of Amazon, IBM, Google or Microsoft, and I can’t remember the other names – you know, the other ones that are running as well.
[0:45:46.5] DC: Yeah, agreed. And actually, one of the questions you asked earlier, Carlisia, that I didn’t get a chance to answer was: do I think it is worth running bare metal today? In my opinion the answer will always be yes, right? Especially if the line that we draw in the sand is container isolation or container orchestration, then there will always be a good reason to run on bare metal – to basically expose resources that are available to us against a single bare metal instance. Things like GPUs or other physical resources like that, or maybe we just need really, really fast disk and we want to make sure that we provide those containers access to the underlying SSDs. And there is technology, certainly introduced by VMware, that exposes real hardware from the underlying hypervisor up to the virtual machine where these particular containers might run. But, you know, I always come back to the idea that, when thinking about those levels of abstraction that provide access to resources like GPUs and those sorts of things, you have to consider that simplicity is king here, right? As we think about the different fault domains or failure domains as we are coming up with these sorts of infrastructures, we have to think about what it would look like when they fail, or how they fail, or how we actually go about allocating those resources across the machines. And that is why I think that bare metal and technologies like that are not going away – I think they will always be around. But to the next point, and as I think we covered pretty thoroughly in this episode, having an API between me and the infrastructure is not something I am willing to give up. I need that to be able to actually solve my problems at scale – even reasonable scale.
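One concrete way bare-metal resources like GPUs surface through the API layer Duffie insists on is as schedulable resources in a pod spec – assuming a device plugin such as NVIDIA's is installed on the nodes. The pod and image names here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-job                     # illustrative name
spec:
  containers:
    - name: train
      image: example/train:latest    # illustrative image
      resources:
        limits:
          nvidia.com/gpu: 1          # resource exposed by the NVIDIA device plugin
```

The workload asks for a GPU through the same declarative API as everything else, and the scheduler handles finding a physical machine that actually has one – which is the "API between me and the infrastructure" point in practice.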
[0:47:26.4] NL: Yeah, you mean you don’t want to go back to the bad old days of, like, Telnetting into a Juniper switch and setting up your – not iptables, it was your IP config commands. [0:47:39.9] DC: Did you just say Telnet? [0:47:41.8] NL: I said Telnet, yeah – or serial, serial connecting into it. [0:47:46.8] DC: Nice, yeah. [0:47:48.6] NL: All right, I think that pretty much covers it for cloud native infrastructure. Do you all have any finishing thoughts on the topic? [0:47:54.9] CC: No, this was great. Very informative. [0:47:58.1] DC: Yeah, I had a great time. This is a topic that I very much enjoy. Things like Kubernetes and the cloud native infrastructure that we exist in are always what I wanted us to get to. When I was in university I was like, “Oh man, someday we are going to live in a world with virtual machines” – I didn’t even have the idea of containers – where people can really easily play with applications. I was amazed that we weren’t there yet, and I am so happy to be in this world now. Not to say that I think we can stop improving, of course not. We are not at the end of the journey by far, but I am so happy with where we are at right now. [0:48:34.2] CC: As a developer I have to say I am too, and I had this thought in my mind as we were having this conversation, that I am so happy too that we are where we are. Obviously not everybody is there yet, but as people start practicing cloud native development they will come to realize what it is that we are talking about. I mean, I said before, I remember the days when, for me to get access to a server, I had to file a ticket and wait for somebody’s approval. Maybe I wouldn’t get an approval – and when I say me, I mean my team, or one of us would do the request. Then you had that server, and everybody was pushing to the server – like one main version of our app, and maybe every day we would get a fresh copy.
The way it is now, I don’t have to depend on anyone, and yes, it is a little bit of work to have to run this choreography to put up the clusters, but it is so good that it is just for me. [0:49:48.6] DC: Yeah, exactly. [0:49:49.7] CC: I am selfish that way. No seriously, I don’t have to wait for code to merge and be pushed. It is my code that is right there sitting on my computer. I am pushing it to the cloud, boom, I can test it. Sometimes I can test it on my machine, but some things I can’t. So when I am working with volumes and I have to push it to the cloud provider, it is amazing. I don’t know, I feel very autonomous and that makes me very happy. [0:50:18.7] NL: I totally agree, for that exact reason. Like testing things out, maybe something isn’t ready for somebody else to see. It is not ready for prime time. So I need something really quick to test it out, but also, for me, I am the avatar of unwitting chaos, meaning basically everything I touch will eventually blow up. So it is also nice that whatever I do that’s weird isn’t going to affect anybody else either, and that is great. Blast radius is amazing. All right, so I think that pretty much wraps it up for the cloud native infrastructure episode. I had an awesome time. This is always fun, so please send us your concepts and ideas in the GitHub issue tracker. You can see our existing episode suggestions and add your own at github.com/heptio/thecubelets: go to the issues tab and file a new issue or see what is already there. All right, we’ll see you next time. [0:51:09.9] DC: Thank you. [0:51:10.8] CC: Thank you, bye. [END OF INTERVIEW] [0:51:13.9] KN: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter https://twitter.com/ThePodlets and on the https://thepodlets.io website, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing. [END] See omnystudio.com/listener for privacy information.

Adventures in DevOps
DevOps 002: Castigated by Containers

Adventures in DevOps

Play Episode Listen Later Aug 6, 2019 47:33


Panel Nell Shamrell-Harrington Lee Waylin Scott Nixon Episode Summary The panelists introduce themselves. Nell is a principal engineer at Chef, Lee runs the DevOps consulting team Fuzzy Logic Systems, and Scott runs Cloud Mechanics consulting. The topic of today’s podcast is containers. They begin by defining what a container is, with each sharing their own definition. Containers aren’t really new; they’re just a hot topic because companies like Docker made them easier to use and brought them into the mainstream. The panelists agree that containers are not for storing data, only for running applications, but that data can be mounted to a container. This is one of the biggest beginner mistakes. They discuss situations that are conducive to the proper use of containers. Containers are best added to mature and large-scale projects. They discuss the pros and cons of Kubernetes. The panel gives advice on how to start using containers, suggesting new users start by containerizing one thing at a time rather than trying to do it all at once. They remind listeners that if you want something to stick around for each deploy, it should not be in the container itself, because containers are temporary. They talk about how to create a container, and about good patterns and anti-patterns found when using containers. They discuss possible security concerns with anti-patterns in containers. They finish by talking about other container orchestrators and how to get traffic to Docker boxes. Links Docker Container LAMP (Linux, Apache, MySQL, PHP) Kubernetes Habitat.io CICD Docker Swarm Apache Mesos Follow DevChat on Facebook and Twitter Picks Nell Shamrell-Harrington: Trader Joe’s Iced Coffee Big Little Lies Lee Waylin: Magic Sandbox Fall, or Dodge in Hell Whisper Room Scott Nixon:  An Absolutely Remarkable Thing The Book of Beautiful Questions
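The panel's reminder that anything which must survive a deploy belongs outside the container is usually handled with a volume. A minimal Compose sketch of that idea, where the image name and paths are invented for illustration and not from the episode:

```yaml
# docker-compose.yml - illustrative only; image and paths are made up
services:
  web:
    image: example/app:latest      # hypothetical application image
    volumes:
      - appdata:/var/lib/app       # data here outlives any one container
volumes:
  appdata: {}                      # named volume managed by Docker
```

Replacing the `web` container on every deploy leaves `appdata` intact, which is the pattern the panel recommends for anything stateful.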

Devchat.tv Master Feed
DevOps 002: Castigated by Containers

Devchat.tv Master Feed

Play Episode Listen Later Aug 6, 2019 47:33


Panel Nell Shamrell-Harrington Lee Waylin Scott Nixon Episode Summary The panelists introduce themselves. Nell is a principal engineer at Chef, Lee runs the DevOps consulting team Fuzzy Logic Systems, and Scott runs Cloud Mechanics consulting. The topic of today’s podcast is containers. They begin by defining what a container is, with each sharing their own definition. Containers aren’t really new; they’re just a hot topic because companies like Docker made them easier to use and brought them into the mainstream. The panelists agree that containers are not for storing data, only for running applications, but that data can be mounted to a container. This is one of the biggest beginner mistakes. They discuss situations that are conducive to the proper use of containers. Containers are best added to mature and large-scale projects. They discuss the pros and cons of Kubernetes. The panel gives advice on how to start using containers, suggesting new users start by containerizing one thing at a time rather than trying to do it all at once. They remind listeners that if you want something to stick around for each deploy, it should not be in the container itself, because containers are temporary. They talk about how to create a container, and about good patterns and anti-patterns found when using containers. They discuss possible security concerns with anti-patterns in containers. They finish by talking about other container orchestrators and how to get traffic to Docker boxes. Links Docker Container LAMP (Linux, Apache, MySQL, PHP) Kubernetes Habitat.io CICD Docker Swarm Apache Mesos Follow DevChat on Facebook and Twitter Picks Nell Shamrell-Harrington: Trader Joe’s Iced Coffee Big Little Lies Lee Waylin: Magic Sandbox Fall, or Dodge in Hell Whisper Room Scott Nixon:  An Absolutely Remarkable Thing The Book of Beautiful Questions

Cloud Engineering – Software Engineering Daily
Peloton: Uber’s Cluster Scheduler with Min Cai and Mayank Bansal

Cloud Engineering – Software Engineering Daily

Play Episode Listen Later Mar 28, 2019 56:54


Upcoming events: A Conversation with Haseeb Qureshi at Cloudflare on April 3, 2019 FindCollabs Hackathon at App Academy on April 6, 2019 Google’s Borg system is a cluster manager that powers the applications running across Google’s massive infrastructure. Borg provided inspiration for open source tools like Apache Mesos and Kubernetes. Over the last decade, some… The post Peloton: Uber’s Cluster Scheduler with Min Cai and Mayank Bansal appeared first on Software Engineering Daily.
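To make the cluster-scheduler idea behind Borg, Mesos, and Peloton concrete, here is a toy first-fit placer in Python. It is a deliberately simplified sketch, not any of those systems' actual algorithms; all names and numbers are invented for illustration.

```python
# Toy first-fit cluster scheduler; illustrative only, not Peloton/Borg/Mesos.
def schedule(tasks, nodes):
    """Assign each (name, cpu, mem) task to the first node with capacity.

    `nodes` maps node name -> {"cpu": free_cpus, "mem": free_mem}.
    Returns {task_name: node_name}; unplaceable tasks map to None.
    """
    placement = {}
    for name, cpu, mem in tasks:
        placement[name] = None
        for node, free in nodes.items():
            if free["cpu"] >= cpu and free["mem"] >= mem:
                free["cpu"] -= cpu       # reserve the resources on that node
                free["mem"] -= mem
                placement[name] = node
                break
    return placement

nodes = {"n1": {"cpu": 2, "mem": 4}, "n2": {"cpu": 4, "mem": 8}}
tasks = [("web", 1, 2), ("db", 2, 4), ("batch", 4, 8)]
print(schedule(tasks, nodes))   # → {'web': 'n1', 'db': 'n2', 'batch': None}
```

Real schedulers layer preemption, priorities, placement constraints, and bin-packing heuristics on top of this basic matching loop.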

Business and Philosophy
Serverless Research with Ion Stoica

Business and Philosophy

Play Episode Listen Later Dec 10, 2018 71:02


The Berkeley AMPLab was a research lab where Apache Spark and Apache Mesos were both created. In the last five years, the Mesos and Spark projects have changed the way infrastructure is managed and improved the tools for data science. Because of its proximity to Silicon Valley, Berkeley has become a university where fundamental research… The post Serverless Research with Ion Stoica appeared first on Software Engineering Daily.

.NET Rocks!
It's a Container World with Ben Hall

.NET Rocks!

Play Episode Listen Later Jul 24, 2018 55:39


Containerize all the things! Carl and Richard talk to Ben Hall about his on-going work with software in containers. Ben talks about Docker being pretty much synonymous with containers now, but when it comes to orchestration, there are a few more choices. Kubernetes seems to be the popular choice in the public cloud space, but Docker Swarm, Apache Mesos and Red Hat OpenShift all can play a role as well. Ben also digs into the role of serverless in a container world, and how these cloud-native architectures make you think about software differently! Support this podcast at https://redcircle.com/net-rocks/donations

Cloud Engineering – Software Engineering Daily
Service Mesh Design with Oliver Gould

Cloud Engineering – Software Engineering Daily

Play Episode Listen Later Jan 19, 2018 59:38


Oliver Gould worked at Twitter from 2010 to 2014. Twitter’s popularity was taking off, and the engineering team was learning how to scale the product. During that time, Twitter adopted Apache Mesos, and began breaking up its monolithic architecture into different services. As more and more services were deployed, engineers at Twitter decided to standardize… The post Service Mesh Design with Oliver Gould appeared first on Software Engineering Daily.
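Standardizing service-to-service communication is the idea that grew into the service mesh: a proxy that applies timeouts, retries, and telemetry uniformly, outside application code. A toy Python sketch of just the retry piece, illustrative only and not how Linkerd is actually implemented:

```python
import time

# Toy "sidecar" behavior: retry a flaky upstream call a bounded number
# of times. Illustrative of what a mesh proxy does for every service.
def with_retries(call, attempts=3, backoff=0.0):
    last_error = None
    for _ in range(attempts):
        try:
            return call()
        except Exception as err:   # a real proxy retries only safe failures
            last_error = err
            time.sleep(backoff)
    raise last_error

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("upstream reset")
    return "ok"

print(with_retries(flaky))   # succeeds on the third attempt
```

In a real mesh the proxy also budgets retries and distinguishes retryable from non-retryable failures, so a bad deploy does not amplify load on a struggling upstream.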

THE ARCHITECHT SHOW
Ep. 30: Ion Stoica on how RISELab is pushing the envelope on real-time data

THE ARCHITECHT SHOW

Play Episode Listen Later Jul 27, 2017 66:22


In this episode of the ARCHITECHT Show, Ion Stoica talks about the promise of real-time data and machine learning he's pursuing with the new RISELab project he directs at UC-Berkeley, along with some other big names in big data. Stoica previously was director of the university's AMPLab, which created and helped to mature technologies such as Apache Spark, Apache Mesos and Alluxio. Stoica is also co-founder and executive chairman of Apache Spark startup Databricks, and he shares some insights into that company's business and the evolution of the big data ecosystem. In the news segment, co-hosts Derrick Harris (ARCHITECHT) and Barb Darrow (Fortune) discuss Microsoft (and possibly AWS) doubling down on Kubernetes, Google's cloudy cloud revenue, GoDaddy getting out of the cloud business, and the possibility of Meg Whitman as Uber CEO.

THE ARCHITECHT SHOW
Episode 25: How Apache Spark's creator is now tackling AI's data problem

THE ARCHITECHT SHOW

Play Episode Listen Later Jun 22, 2017 59:05


In episode 25(!) of the ARCHITECHT Show, Matei Zaharia and Peter Bailis discuss how the DAWN project at Stanford University is trying to bring the power of AI and machine learning to a broader set of users. The project's initial focus is on making it easier to collect, process and analyze enough high-quality data to take advantage of today's leading models and algorithms. Aside from the technologies, Zaharia and Bailis also discuss how they're working with industry on this project and the right path for moving academic research into the real world. It's an area Zaharia knows well, having previously helped create Apache Mesos and Apache Spark, and having co-founded Databricks, where he's still CTO. In the news segment, co-hosts Derrick Harris (ARCHITECHT) and Barb Darrow (Fortune) talk all about Amazon, and how its $14 billion purchase of Whole Foods is having competitive repercussions even in its cloud business.

AWS re:Invent 2016
CON401: Amazon ECR Deep Dive on Image Optimization

AWS re:Invent 2016

Play Episode Listen Later Dec 24, 2016 59:00


“Are you struggling with bulky images or slow push and pull times? In this session we will walk through the anatomy of a Docker image and provide techniques you can use to optimize images for faster pushes and pulls and reduce your overall storage footprint. We will discuss Docker image building (build containers versus runtime containers to remove unnecessary software), Docker image composition (minimizing the number of layers), the Docker Remote API (optimizing how images are pushed and pulled), and CI/CD Integration (automate building, versioning, and deploying images to production). We’ll also examine the tools that ECR provides to make Docker image management easier so that you can focus on building your application. Finally, we'll hear from Pinterest about how they use ECR and Docker, some valuable experiences gained along the way, and best practices for using ECR with Apache Mesos.”
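The build-container-versus-runtime-container technique the session describes later became Docker's built-in multi-stage builds. A hedged sketch, where the base images and paths are illustrative rather than taken from the talk:

```dockerfile
# Stage 1: full toolchain, used only to compile the artifact
FROM golang:1.21 AS build        # hypothetical build image
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# Stage 2: minimal runtime image; ships only the compiled binary,
# so the pushed image has fewer and far smaller layers
FROM gcr.io/distroless/static    # hypothetical minimal base
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```

Only the final stage is pushed to a registry such as ECR, which is what makes pulls fast and the storage footprint small.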

AWS re:Invent 2016
CON303: Introduction to Container Management on AWS

AWS re:Invent 2016

Play Episode Listen Later Dec 24, 2016 46:00


Managing and scaling hundreds of containers is a challenging task. A container management solution takes care of these challenges for you, allowing you to focus on developing your application. In this session, we cover the role and tasks of a container management solution and we analyze how four common container management solutions - Amazon EC2 Container Service, Docker for AWS, Kubernetes, and Apache Mesos - stack up against each other. We also see how you can easily get started with each of these solutions on AWS.

Software Defined Talk
Episode 78: Trump's possible effect on tech, plus, containers

Software Defined Talk

Play Episode Listen Later Nov 11, 2016 81:21


We discuss possible effects that the Trump presidency will have on the tech world. The ideas are more or less known, but the details and whether they'd be enacted are sketchy and unreliable. Before that, of course, we talk about containers. This episode features Brandon Whichard (https://twitter.com/bwhichard), Matt Ray (https://twitter.com/mattray), and Coté (https://twitter.com/cote). Mid-roll Matt: Dec 1st and 2nd - DevOps Days Australia 20% discount code - SDT2016 (https://ti.to/devopsaustralia/2016-sydney/discount/SDT2016). Coté: Nov 16th - Cloud Native Roadshow in Omaha, next week (https://pivotal.io/event/cloud-native-workshop/omaha). Coté: Various dates - Pivotal Cloud Native Roadshows (https://pivotal.io/event/pivotal-cloud-native-roadshow) - Cincinnati - Nov 10 (https://pivotal.io/event/pivotal-cloud-native-roadshow/cincinnati); St. Louis - Nov 14 (https://pivotal.io/event/pivotal-cloud-native-roadshow/stlouis); Hartford - Nov 16 (https://pivotal.io/event/pivotal-cloud-native-roadshow/hartford); Denver - Nov 18 (https://pivotal.io/event/pivotal-cloud-native-roadshow/denver); New York - Nov 22 (https://pivotal.io/event/pivotal-cloud-native-roadshow/newyork); Los Angeles - Nov 28 (https://pivotal.io/event/pivotal-cloud-native-roadshow/losangeles). K8s Operators Stateful applications for K8s, a shot at Mesos (https://coreos.com/blog/introducing-operators.html)? Prometheus & etcd first examples (spark? hadoop?) This begs the broad question: so, what’s CoreOS’s business posture now? Azure Container Service, now with K8s Those Microsoft folks will just put anything that looks tasty in their cloud (https://azure.microsoft.com/en-us/blog/azure-container-service-the-cloud-s-most-open-option-for-containers/) - what a reversal from the Microsoft we grew up with. 
Docker in Production: A History of Failure From this dude’s perspective (https://thehftguy.wordpress.com/2016/11/01/docker-in-production-an-history-of-failure/): a failure of product management and stable releases. Bugs, documentation spotty, cleanup scripts, kernel support (Debian!?), aufs & overlay & overlay2, 7-hour outage with no post-mortem “Docker only moves forward and breaks things” “The docker hype is not only a technological liability any more, it has evolved into a sociological problem as well.” A retort… that mostly agrees (https://patrobinson.github.io/2016/11/05/docker-in-production/) “boring tech is what makes money” shiny tech makes resumes? Mesosphere Jay Lyman on the momentum (https://451research.com/report-short?entityId=88226): “Mesosphere does not disclose its number of paying clients, but says it has dozens of large enterprise customers, its primary target. The company says its experience supporting software deployments in production is among its key differentiators, helped by the use of Apache Mesos by companies such as Twitter, Netflix, Airbnb, PayPal and Yelp, which was featured in a 451 User Deployment Report. Mesosphere says its focus is customer deployments of 500-1,000 nodes per day in production. It also says the bulk of its customers are licensees with professional services accounting for less than 10% of its clients, which tend to move to its subscription software.” TrumpTech, aka, “Putting the 400 lbs hackers on diets.” Turns out there is some marginally clear policy, just not McKinsey title mode versus white papers (http://www.vox.com/2016/11/10/13584390/donald-trump-first-100-days). Jonathan Shieber@Tech Crunch (https://techcrunch.com/2016/11/09/what-does-a-president-elect-trump-mean-for-silicon-valley-nothing-very-good/): "The biggest question facing millions of Americans this Wednesday is: just how much of what Donald Trump said on the campaign does he intend to actually try to make happen." 
(For example, Korea (http://www.vox.com/world/2016/11/10/13585524/donald-trump-phone-call-south-korea-park-geun-hye).) Dave Lee, at the BBC has a good laundry list: “Uncertainty, frustration and an increased fragility for the global home of tech innovation. Mr Trump certainly won't want to go down as the president who destroyed Silicon Valley, but the concern here is that of the few policies that have been explained in detail, some seem directly at odds with each other.” 10% repatriation program (http://blogs.barrons.com/techtraderdaily/2016/11/09/apple-adobe-cisco-citi-focuses-on-big-techs-big-trump-tax-windfall/) - tech companies have tons of cash abroad: Historic rates (http://www.investopedia.com/terms/r/repatriation.asp): “At the highest tax rate, corporations must pay 35% to repatriate capital, minus local taxes charged by countries in which the funds are held.” Hardware: “AAPL (93% of $230bln), CSCO (91% of $64.6B), IBM ($8.2B total cash, undisclosed % of cash held overseas but note 58% of earnings are from non US operations), HPE ($10.0B total cash, undisclosed % of cash held overseas but 65% of earnings are from non US operations), HPQ ($5.6B total cash, undisclosed % of cash held overseas but 65%-70% of earnings are from non US operations), JNPR (94% of $3.2B).” Software: “Specifically, some of the mid and large cap companies that have large cash balances “trapped” offshore are likely to benefit from being able to return a portion of this cash to shareholders. We note companies with high gross cash balances trapped offshore include: ADBE (85% of $4B – from 2015 10-K), ADSK (86% of $2.1B), CA (76% of $2.7B), CTXS (80% of $2.45B), FTNT (38% of $1.2B), ORCL (76% of $56B – pre-N), MSFT (96% of $113B – pre-LNKD purchase), RHT (42% of $2.0B), SYMC (93% of $5.6B – post-BC), VMW (77% of $7.5B), VRSN (68% of $1.9B). 
We believe the chances increase of a larger share repurchase or (lesser chance) dividend from these companies.” Apple & Amazon are not in a good situation (http://www.theregister.co.uk/2016/11/09/tech_trump_silicon_valley/) - they’ll be a good test of WTF happens. Meanwhile, tech stocks dropping a bit (http://www.marketwatch.com/story/tech-stocks-plunge-for-second-straight-day-after-trump-win-2016-11-10). Ovum has a shit ton of quick analysis, all free (https://www.ovum.com/us-presidential-election-2016/): Fear of US public cloud companies, globally (https://www.ovum.com/will-trump-presidency-mean-public-cloud-computing-2-2/). Remember the freak-out from NSA stuff? Same idea. I think the Germans got over it. Outsourcers (https://www.ovum.com/providers-prepare-trump-presidency-potential-impact-global-delivery-2/): “A massive curtailing of H-1B visas, for example, will mean providers will need to make immediate shifts in what they’re able to offer customers locally, unless or until they’re able to compensate with talent.” “For providers, there’s also the unanswered question of the impact on US government spending.” Education (https://www.ovum.com/trumping-expectations-now-us-public-sector-2/) - some proposals for decentralizing, meaning fragmentation of IT spend. Government talent, regulations, and spending (https://www.ovum.com/trumping-expectations-now-us-public-sector-2/) - “If there is a large exodus of high-caliber and skilled staff, how will departments fill the gap? It also raises the question of funding for programs aimed at modernizing tech in the federal government such as F18 and FedRAMP. Trump might reduce the barriers to swapping out tech and push down expenditure that way. 
Certainly, the high cost and length of time needed to get Authority to Operate (ATO) under FedRAMP has been a barrier to uptake.” Telcos (https://www.ovum.com/trumps-victory-will-affect-us-telecoms-market/) - other than him stating he’d stop the AT&T/TimeWarner merger, telco stuff is very unclear. No one’s sure what the traditional Republican +/- Trump equals, or what the formula is. M&A from Brenon@451 (https://451research.com/report-short?entityId=90759&type=mis&alertid=211&contactid=0033200001wgKCKAA2&utm_source=sendgrid&utm_medium=email&utm_campaign=market-insight&utm_content=newsletter&utm_term=90759-In+Trump%2C+an+M%26A+watchdog+with+more+bite): “Chinese buyers probably won't be shopping as freely in the US in the coming years.” They spent $14bn this year, I think. Chinese buyers have recently picked up Ingram Micro, which swings nearly $50bn worth of tech gear and services each year, 25-year-old printer maker Lexmark and even a majority stake in the gay dating app Grindr." Also see shorter blog post with chart of Chinese M&A spend (https://blogs.the451group.com/techdeals/investment-banking/in-trump-a-tech-ma-watchdog-with-more-bite/). Snowden for Head of NSA! (https://twitter.com/realDonaldTrump/status/346998236776640513). Follow-up That’s how you do it! (https://twitter.com/simonmcc/status/794951901720805376) We got actual comments (http://www.softwaredefinedtalk.com/77#disqus_thread)! BONUS LINKS! Not covered in episode Matt wrote up an Amazon ECS thing The blog entry (https://blog.chef.io/2016/11/07/habitat-amazon-elastic-container-service/) Doing Business in Japan Not new, but a good primer (http://www.kalzumeus.com/2014/11/07/doing-business-in-japan/). Recommendations Brandon: New season of The Startup podcast (https://gimletmedia.com/episode/shadowed-qualities-season-4-episode-3/) Matt: TransferWise (https://transferwise.com/u/matthewr9) for transferring money abroad. 
A16Z on TransferWise (https://a16z.com/2016/01/29/a16z-podcast-when-banking-works-like-my-smartphone/). Coté: “Tighten Up.” (https://open.spotify.com/track/2pBgtxhgqevCHEnJ7W5UKI), Archie Bell & The Drells - once you’re done being depressed, get your shit back together. HSAs. Meanwhile, this “pastrami burger” (https://www.instagram.com/p/BMpHcIhDJkk/) at 3 Greens Market (http://3greensmarket.com/) in Chicago is AMAZING.

Roaring Elephant
Episode 22 – Big Data in Small Business

Roaring Elephant

Play Episode Listen Later Aug 16, 2016 92:35


The main subject in this episode features an answer to a listener question we received a couple of months ago: How can big data help small businesses? What ways can small businesses use big data? At the moment all the talk is about big data helping enterprise firms. And we are introducing a new section which we hope you will enjoy! 00:00 Recent events Working with a new team in sunny Cork, getting them up to speed Workshop with a global SI and a European telco about the upcoming phases of their big data journey Workshop with a customer who has been using Hadoop for a very long time, since Hadoop 0.2! Finally looking to migrate into the future Multi-vendor workshop on fraud analytics Object recognition and detection in images. 11:30 Our very own "New and Noteworthy" Dave http://blogs.teradata.com/international/streaming-analytics-story-many-tales/ http://www.datasciencecentral.com/m/blogpost?id=6448529%3ABlogPost%3A453888 http://research.ibm.com/cognitive-computing/ostp/rfi-response.shtml http://dataconomy.com/10-online-big-data-courses-2016/   Jhon Apache Spark 2.0 (July 28, 2016) http://spark.apache.org/releases/spark-release-2-0-0.html Unifying DataFrame and Dataset (RDD): In Scala and Java, DataFrame and Dataset have been unified, i.e. DataFrame is just a type alias for Dataset of Row. SparkSession: new entry point that replaces the old SQLContext and HiveContext for DataFrame and Dataset APIs. MLLib: The DataFrame-based API is now the primary API. The RDD-based API is entering maintenance mode. Spark 2.0 substantially improved SQL functionalities with SQL2003 support. 
Spark SQL can now run all 99 TPC-DS queries Ships the initial experimental release for Structured Streaming, a high level streaming API built on top of Spark SQL Databricks article: https://databricks.com/blog/2016/07/28/continuous-applications-evolving-streaming-in-apache-spark-2-0.html Apache Mesos 1.0 released https://blogs.apache.org/foundation/entry/the_apache_software_foundation_announces97 http://techblog.netflix.com/2016/07/distributed-resource-scheduling-with.html Apache Twill becomes top level project http://twill.apache.org/ https://blogs.apache.org/foundation/entry/apache_software_foundation_announces_apache1 44:40 Big Data for Small Business Define "small business" How can big data help small businesses What ways can small business use big data The problems a small business could face http://www.columnfivemedia.com/100-best-free-data-sources-infographic Our answers to those problems Some conclusions 01:32:35 End   Please use the Contact Form on this blog or our twitter feed to send us your questions, or to suggest future episode topics you would like us to cover.

Software Engineering Radio - The Podcast for Professional Software Developers

Jeff Meyerson talks to Haoyuan Li about Alluxio, a memory-centric distributed storage system. The costs of memory and disk capacity are both decreasing every year, but only the throughput of memory is increasing exponentially. This trend is driving opportunity in the space of big data processing. Alluxio is an open source, memory-centric, distributed, and reliable storage system enabling data sharing across clusters at memory speed. Alluxio was formerly known as Tachyon. Haoyuan is the creator of Alluxio. Haoyuan was a member of the Berkeley AMPLab, which is the same research facility from which Apache Mesos and Apache Spark were born. In this episode, we discuss Alluxio, Spark, Hadoop, and the evolution of the data center software architecture.

Inside the Datacenter - Connected Social Media
Deploy, Scale, and Automate the Data Center with Mesos – Intel Chip Chat – Episode 441

Inside the Datacenter - Connected Social Media

Play Episode Listen Later Mar 8, 2016


In this Intel Chip Chat audio podcast with Allyson Klein: in this livecast from the Structure Conference in San Francisco, Florian Leibert, CEO & Founder of Mesosphere, talks about how he helped to build the next-generation Twitter platform on an Apache Mesos project, creating one of the first open platforms for turning a data […]

Vancouver Tech Podcast
Episode 16: Human API

Vancouver Tech Podcast

Play Episode Listen Later Mar 7, 2016 48:17


After a short mention of the terrible weather, Drew and James talk about Vancouver's tech events from the previous week, and which ones are coming up next. This episode's special guest is Rob Gulewich from Human API.

Intel Chip Chat
Deploy, Scale, and Automate the Data Center with Mesos – Intel® Chip Chat episode 441

Intel Chip Chat

Play Episode Listen Later Mar 4, 2016 8:40


In this livecast from the Structure Conference in San Francisco, Florian Leibert, CEO & Founder of Mesosphere*, talks about how he helped to build the next-generation Twitter* platform on an Apache Mesos project, creating one of the first open platforms for turning a data center into a high-performance, easy-to-use private cloud. He outlines how Mesosphere was started to make Apache Mesos easier to use, allowing users to deploy, manage, and scale services, organizing their entire infrastructure as if it were a single computer. Florian highlights Mesosphere’s new Data Center Operating System (DCOS), which allows users to bring many data center services and applications onto a private or public cloud while avoiding vendor lock-in. Florian also calls out the deep collaboration between Intel and Mesosphere, including the Cloud For All initiative and other work that enhances Mesos and makes it more usable for a wide variety of use cases and workloads. To learn more, follow Mesosphere on Twitter at https://twitter.com/mesosphere or message info@mesosphere.com.
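To make the "deploy, manage and scale services" idea concrete: Marathon, the service scheduler that ships with DCOS, takes a declarative JSON app definition and keeps the requested number of instances running on the Mesos cluster. The sketch below is a hypothetical example; the image name, resource numbers, and paths are invented for illustration:

```json
{
  "id": "/web-frontend",
  "instances": 3,
  "cpus": 0.5,
  "mem": 256,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "example/web-frontend:1.0",
      "network": "BRIDGE",
      "portMappings": [{ "containerPort": 8080, "hostPort": 0 }]
    }
  },
  "healthChecks": [
    { "protocol": "HTTP", "path": "/health", "intervalSeconds": 10 }
  ]
}
```

Posting a definition like this to Marathon's `/v2/apps` endpoint asks the cluster to keep three healthy instances running, with Mesos deciding which machines actually host them, which is the "data center as a single computer" model described in the episode.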

ZADevChat Podcast
Episode 30 - Segfault February

ZADevChat Podcast

Play Episode Listen Later Feb 23, 2016 49:10


Kenneth and Kevin have the first of our Segfault instalments, a monthly banter about things that we find noteworthy but that might not fill an episode (yet). Here are the links to the (majority of the) topics we covered:
* Rubyfuza 2016 - http://www.rubyfuza.org/
* DevConf ZA 2016, covered on #23 - http://www.devconf.co.za
* Go 1.6 release, specifically transparent HTTP/2 support in net/http - https://golang.org/doc/go1.6#http2
* Rust 1.6 release, specifically Crates.io not allowing wildcards in dependencies in favour of SemVer - http://blog.rust-lang.org/2016/01/21/Rust-1.6.html
* Semantic Versioning - http://semver.org
* Tom's Obvious, Minimal Language - https://github.com/toml-lang/toml
* IntermezzOS is a teaching operating system, especially focused on introducing systems programming concepts to experienced developers from other areas of programming - http://intermezzos.github.io
* Steve Klabnik - https://github.com/steveklabnik, https://twitter.com/steveklabnik, http://www.steveklabnik.com
* MIT Unix xv6 OS - https://github.com/mit-pdos/xv6-public
* A Skeleton Key of Unknown Strength, an article regarding CVE-2015-7547, the glibc resolver bug - http://dankaminsky.com/2016/02/20/skeleton/
* CoreOS is Linux for Massive Server Deployments - https://coreos.com
* Apache Mesos, a distributed systems kernel for your data center - http://mesos.apache.org/
* The Post Amazon Challenge and The New Stack - http://thenewstack.io/post-amazon-challenge-new-stack-model/
* Visual Transistor-level Simulation of the 6502 CPU - http://www.visual6502.org/JSSim/
Thanks for listening! Stay in touch:
* Socialize - https://twitter.com/zadevchat & http://facebook.com/ZADevChat/
* Suggestions and feedback - https://github.com/zadevchat/ping
* Subscribe and rate in iTunes - https://itunes.apple.com/za/podcast/zadevchat-podcast/id1057372777

Open Source – Software Engineering Daily
Mesos and Docker in Practice with Michael Hausenblas

Open Source – Software Engineering Daily

Play Episode Listen Later Dec 29, 2015 59:36


Apache Mesos is an open-source cluster manager that enables resource sharing in a fine-grained manner, improving cluster utilization. Michael Hausenblas is a developer and cloud advocate with Mesosphere, which builds the Datacenter Operating System (DCOS), a distributed OS that uses Apache Mesos as its kernel. Questions: Can you give the historical context for cluster computing? The post Mesos and Docker in Practice with Michael Hausenblas appeared first on Software Engineering Daily.

The Cloudcast
The Cloudcast #211 - Mesosphere DCOS

The Cloudcast

Play Episode Listen Later Aug 29, 2015 25:54


Aaron talks with Ben Hindman (@benh; co-creator of Apache Mesos, founder of @mesosphere) about his time at Twitter, building Mesos, understanding problems at scale, how Mesos compares to Kubernetes, Mesosphere DCOS, and the recent announcements with Microsoft.

Interested in growing your career and networking with professionals in the data center and cloud industry? Attend John Troyer's The Reckoning event in Half Moon Bay, CA on September 13-14. Cloudcast listeners can get a $100 discount by entering promo code CLOUDCAST.

Interested in O'Reilly Velocity NYC? Want a chance at a free pass for VelocityConf NYC? Send us your interesting journey in web-scale operations to show@thecloudcast.net by Friday, July 10th and we'll pick a winner! Want to register for Velocity Conference now? Use promo code 20CLOUD for 20% off. Check out the Velocity schedule. Free eBook from O'Reilly Media for Cloudcast listeners!

Links from the show:
Dave Lester from Twitter/Apache Mesos on #155
Understanding Mesos
Kenneth Hui’s series on Apache Mesos
Microsoft / Mesosphere announcement
Thanks to the MesosCon folks for having us! - MesosCon homepage

Topic 1 - Give us some of your background and how you went from working on Apache Mesos at Twitter to becoming involved with Mesosphere.
Topic 2 - It’s been about a year since we talked about Mesos on the show. We’ve talked about Kubernetes a few times. Can you give us the basics of each of those technologies, because sometimes people confuse them or think they are interchangeable or overlapping?
Topic 3 - Let’s talk about Mesosphere and DCOS (Data Center Operating System). People talk about “durable and declarative” infrastructure for applications. How does DCOS accomplish this?
Topic 4 - Mesosphere includes not only systems and schedulers for the underlying container infrastructure, but also application-level schedulers. What are the differences, and how does a development team vs. an ops team interact with Mesosphere?
Topic 5 - What types of applications are you seeing Mesosphere customers running in this new environment? One thing we heard at VelocityConf was that there is work within Apache Mesos to look at adding support for stateful applications or stateful data - what’s the status of that?

Software Engineering Radio - The Podcast for Professional Software Developers

Ben Hindman talks to Jeff Meyerson about Apache Mesos, a distributed systems kernel. Mesos abstracts away many of the hassles of managing a distributed system. Hindman starts with a high-level explanation of Mesos, explaining the problems he encountered trying to run multiple instances of Hadoop against a single data set. He then discusses how Twitter uses […]

Software Engineering Radio - The Podcast for Professional Software Developers
SE-Radio Episode 235: Ben Hindman on Apache Mesos

Software Engineering Radio - The Podcast for Professional Software Developers

Play Episode Listen Later Aug 17, 2015 49:17


The New Stack Analysts
#48: Mesosphere's Florian Leibert on the New Data Center OS

The New Stack Analysts

Play Episode Listen Later Jun 16, 2015 44:57


As Mesosphere co-founder and CEO Florian Leibert puts it, “The data center is the new form factor, and it needs an operating system.” That's why they are particularly excited that Mesosphere's Datacenter Operating System (DCOS) is now generally available. “We just want to make the life of data center operators and what we call ‘new framework developers' much easier.” Florian joins The New Stack founder Alex Williams and co-host Donnie Berkholz of 451 Research for this edition of The New Stack Analysts podcast, an engaging and informative discussion about the virtues of Apache Mesos, and the concepts and tools involved with architecting big data. Watch on YouTube: https://youtu.be/CMyA9WYMKng Learn more at: https://thenewstack.io/tns-analysts-show-48-mesospheres-florian-leibert-on-the-new-data-center-os/

Les Cast Codeurs Podcast
LCC 115 - Interview de Sam Bessalah sur la data science, Hadoop et Mesos

Les Cast Codeurs Podcast

Play Episode Listen Later Dec 22, 2014 72:20


In this episode, we talk with Sam Bessalah about the "new" profession of data scientist. We also explore the Apache Hadoop and Apache Mesos universes. These places are full of projects with strange names; this interview helps you find your way around the mythology a bit.

Recorded December 16, 2014

Episode download: LesCastCodeurs-Episode-115.mp3

Interview

Your life, your work: @samklr His presentations, more here and there

Data scientist: what is it?! Is it new? We have always had data in our information systems, haven't we?! The sexiest job of the 21st century? Drew Conway's Data Science Venn diagram

Processing data, the platforms: MapR, Hadoop, ... What is it? Is it new? Where does it come from? How does it work? What is it for? Does it integrate with everything? And what about our legacy data sources (my good old mainframe and its EBCDIC)? What happened to my EAI, ETL, and other B2C/B2B integration tools?

EAI
ETL
EBCDIC
BI (Business Intelligence)
Hadoop
MapReduce
Doug Cutting
Apache Lucene - full-text search engine
Apache Hadoop - platform for distributed, scalable processing
HDFS - distributed file system
Apache Hive - data warehouse on top of Hadoop offering SQL-like queries
Teradata
Impala - analytical ("real-time") SQL query database
Apache Tez - directed acyclic graph of tasks
Apache Shark, replaced by Spark SQL
Apache Spark - Spark has an advanced DAG execution engine that supports cyclic data flow and in-memory computing
Apache Storm - scalable, distributed processing of data streams
Data Flow
Machine Learning - learning from the data
GraphLab

And what about the infrastructure in all of this? From our good old servers filling the machine rooms, to the cloud (IaaS, PaaS), by way of virtualization and containers (LXC, Docker, ...). Resources galore is all well and good, but how do you manage them?

YARN
Apache Mesos

Apache Mesos
How to start Mesos
Tutorials
Mesosphere's Data Center OS
Sam's presentation on Mesos at Devoxx
Mesos and Docker containers
Cluster Management and Containerization by Benjamin Hindman
Continuous integration with Mesos at eBay

Docker
Docker
Starting a Spark cluster with Docker
Spark shell in Docker
Docker and Kubernetes in Apache Hadoop YARN
Hadoop cluster on Docker
Docker, Kubernetes and Mesos
cgroups
LXC
Docker vs LXC
Marathon
Chronos
Chronos source code
Aurora
Kubernetes
Kubernetes workshop
Oscar Boykin
Scalding
A Scala + Big Data presentation, and another one
Apache Ambari

How do I get started? How does one become a data scientist? (training, reference books, sources of information, ...)
Mesosphere
Andrew Ng's course on Machine Learning
Introduction to Data Science on Coursera
Kaggle
MLlib
Mahout
R
Scikit-learn (Python)
Machine Learning for Hackers (book)
Scala
Typesafe Activator
iPython Notebooks

Other references
iPython Notebooks
Temporary online notebooks - starts a Docker container on Rackspace for free (for you)
Some notebooks
Parallel Machine Learning with scikit-learn and IPython
View the notebooks online without downloading them
Spark / Scala notebooks for web-based Spark development
http://zeppelin-project.org/
Spark and Scala with an iPython notebook

Contact us
Contact us via Twitter http://twitter.com/lescastcodeurs, on the Google group http://groups.google.com/group/lescastcodeurs, or on the website http://lescastcodeurs.com/
Flattr us (donations) at http://lescastcodeurs.com/
Want to know more about sponsoring? sponsors@lescastcodeurs.com

a16z
a16z Podcast: Why the Datacenter Needs an Operating System

a16z

Play Episode Listen Later Dec 18, 2014 31:01


What does an operating system for today's datacenter look like? Why do we even need one, and how does it function? Mesosphere's Benjamin Hindman, the co-creator of Apache Mesos, joins Steven Sinofsky for an all-OS discussion. The views expressed here are those of the individual AH Capital Management, L.L.C. (“a16z”) personnel quoted and are not the views of a16z or its affiliates. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by a16z. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation. This content is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. You should consult your own advisers as to those matters. References to any securities or digital assets are for illustrative purposes only, and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund and should be read in their entirety.) Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z, and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. 
A list of investments made by funds managed by Andreessen Horowitz (excluding investments and certain publicly traded cryptocurrencies/ digital assets for which the issuer has not provided permission for a16z to disclose publicly) is available at https://a16z.com/investments/. Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Past performance is not indicative of future results. The content speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects, and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Please see https://a16z.com/disclosures for additional important information.

All Things Hadoop
Resource scheduling and task launching with Apache Mesos and Apache Aurora at Twitter

All Things Hadoop

Play Episode Listen Later Oct 26, 2014 28:25


Resource scheduling and task launching with Apache Mesos and Apache Aurora at Twitter

The Cloudcast
The Cloudcast #155 - Scaling Twitter's Distributed Infrastructure with Mesos

The Cloudcast

Play Episode Listen Later Jul 31, 2014 31:42


Aaron and Brian talk with Dave Lester (@davelester, Open Source Advocate at Twitter; friend of @ApacheMesos and @ApacheAurora) about how Twitter manages large-scale infrastructure, an introduction to Apache Mesos, and how projects like Kubernetes, Docker, and Aurora are helping to define the next generation of web-scale infrastructure management. Music Credit: Nine Inch Nails (www.nin.com)

Dave & Gunnar Show
Episode 57: #57: 10¢ Beer Night

Dave & Gunnar Show

Play Episode Listen Later Jul 22, 2014 40:49


This week, Dave and Gunnar talk about: Dave’s Russian agents, VMware’s nightmarish Docker future, and everyone learning what “navigator” is in Greek.
Cпасибо? Dave’s article is mentioned in Russian
D&G bait: Will Docker and VMware Compete? Betteridge’s law of headlines
Libswarm and Apache Mesos will help with management
Red Hat and Google Collaborate on Kubernetes to Manage Docker Containers at Scale. Learn more here
Linux Wants Automakers to Stop Failing at Infotainment by Using Common Software
Joe Brockmeier has a new podcast
Silence is golden: Video Essay Beautifully Shows Martin Scorsese’s Use of Silence in Film
10 Cent Beer Night. Tim Russert of NBC’s Meet the Press was a student at the time: “I went with $2 in my pocket. You do the math.”
Cutting Room Floor
Today in Videoconferencing
Entlistungsfreude: The satisfaction achieved by crossing things off lists
GitBook: no excuses
HT Erich Morisse: DOD’s industry is “Accounting”, per LinkedIn
Ohio news flash: Alice Cooper inducted into the White Castle Hall of Fame
Speaking of Cleveland
You talkin’ to me? Next time you’re in Austin, check out Robert De Niro’s Taxi Cab License Used to Prepare for Taxi Driver
Also at the Ransom Center, a chilling World War I exhibit that Gunnar really enjoyed
We Give Thanks
Erich Morisse for the civics lesson.

GitMinutes
GitMinutes #23: Chris Aniszczyk on Git and Open Source

GitMinutes

Play Episode Listen Later Sep 23, 2013


In this episode we talk to Chris Aniszczyk. He’s head of open source at Twitter, and he’s been heavily involved with the Eclipse Foundation, where he sits on the board of directors. Over the last years he’s been guiding Eclipse’s migration to Git while being very active in the JGit/EGit projects.
Link to mp3
Links:
Chris on Twitter, GitHub, blog/homepage
Chris: 100 Days: Eclipse Foundation Moves to Git
Chris: Eclipse Foundation Migrated to Git
Chris: Apache and Politics Over Code?
Mike Milinkovich: Embracing Social Coding at Eclipse
The Vert.x debacle begins
Vert.x preparing move to Eclipse
Eclipse’s GitHub mirrors
Open source at Twitter on Twitter, homepage, GitHub
Wired article: Return of the Borg: How Twitter Rebuilt Google’s Secret Weapon
Linux Foundation and the automotive industry
Docker
Apache Mesos
Listen to the episode on YouTube