Welcome to the Scale with Strive podcast, the place where you come to listen to some of the world's most influential leaders of the SaaS industry.
This week, we dive into the state of SBOMs, what's going on with Harness, and the ongoing collision of tech and politics. Plus, Coté finds himself a stranger in the Texas he once called home. Watch the YouTube Live Recording of Episode 501 (https://www.youtube.com/live/Gy02kkQjolI?si=TS_H8x4duNuGr8Ph).

Runner-up Titles:
- Who knows what's going to happen on that side of the planet?
- There are no hacks in The Netherlands.
- I know it's not the quality.
- An explosion of Eggnog
- The resident American American
- This topic will be boring
- Thank goodness it's part of my existing vendor relationship
- It's a webhook, knock yourself out
- They unlocked Ayn Rand
- Hacking it on the mainland

Rundown:
- Rust Will Explode, SBOMs Will Be Duds: Open Source Predictions (https://thenewstack.io/rust-will-explode-sboms-will-be-duds-open-source-predictions/)
- Harness CEO Jyoti Bansal on "startups within startups" (https://www.thestack.technology/harness-ceo-jyoti-bansal-the-stack-interview/)
- Marc Andreessen on Trump, the vibe shift, and what's after wokeness (https://youtu.be/l8X8jecivWw?si=fgNzX7OXqupKcbiM)
- A 2-hour interview with Andreessen (https://www.youtube.com/watch?v=sgTeZXw-ytQ)

Relevant to your Interests:
- Penpot unfolds their new open-source business model (https://youtu.be/STNomD9GUJY)
- Apple and Meta go to war over interoperability vs. privacy (https://techcrunch.com/2024/12/19/apple-and-meta-go-to-war-over-interoperability-vs-privacy/)
- 15 predictions for 2025 (https://www.platformer.news/2025-tech-predictions-ai-google-threads-bluesky/)
- Ray-Ban Meta Crosses 1-Million Mark (https://www.counterpointresearch.com/insight/post-insight-research-notes-blogs-rayban-meta-crosses-1million-mark-success-indicates-promising-future-for-lightweight-ar-glasses/)
- Google Slashes 10% Of Managerial Staff In Hunt For 'Googleyness': Report (https://www.ndtv.com/world-news/google-layoffs-google-sundar-pichai-slashes-10-of-managerial-staff-in-hunt-for-googlyness-report-7292782)
- Resilience in Software Foundation (https://bsky.app/profile/resilienceinsoftware.org/post/3ldr56jnuqu2x)
- Amazon Delays RTO Mandate for Thousands of Workers Due to Space (https://www.bloomberg.com/news/articles/2024-12-18/amazon-delays-return-to-office-mandate-for-thousands-of-workers)
- Community plans to fork Puppet, unhappy with Perforce changes to open-source project (https://devclass.com/2024/12/18/community-plans-to-fork-puppet-unhappy-with-perforce-changes-to-open-source-project/?td=rt-3a)
- 5.6 Million Impacted by Ransomware Attack on Healthcare Giant Ascension (https://www.securityweek.com/5-6-million-impacted-by-ransomware-attack-on-healthcare-giant-ascension/)
- Yoast CEO calls for a 'federated' approach to WordPress repository (https://techcrunch.com/2024/12/23/yoast-ceo-calls-for-a-federated-approach-to-wordpress-repository/)
- Netflix sues Broadcom in California federal court (https://www.reuters.com/legal/litigation/netflix-sues-broadcoms-vmware-over-us-virtual-machine-patents-2024-12-23/)
My Greasiest of people, another episode is upon us. Brand new recreational cannabis takes! Serious, hard-to-believe audio bugs inside of Halo: Combat Evolved, Master Chief edition! Porn stance! And let's talk about daylight savings, huh? It's like, why? Who is still holding onto this thing? Is it some poor sap in a basement office, clutching the daylight savings government file in a grey withered hand, like Gollum from LOTR? Who is this person? Who trapped them here? Will we release the hounds of hell if we don't Save Daylight? *Insert corny joke about there being no daylight savings bank* I hope our future generations get rid of this. Perforce has it out for me... Support the show. Write in with your questions and comments: greasysays@gmail.com | TikTok: @greasysays | Instagram: @greasysays | Facebook: @greasysayspodcast | Twitter: @m_cue
Industrial Talk is onsite at the OMG Q1 Meeting, talking to Chris Ganacoplos of Perforce and Tim Schilbach of Penacity about "A connected industrial world requires sound cyber protection and compliance." Scott MacKenzie hosts an industrial podcast featuring Chris Ganacoplos and Tim Schilbach. Chris, from Perforce, discusses DevSecOps and continuous compliance standards, emphasizing the importance of secure infrastructure and policy frameworks like NIST SP 800-171. Tim, from Penacity, highlights the Cybersecurity Maturity Model Certification (CMMC), designed to protect industrial secrets from adversaries. They stress the need for dynamic, adaptive security measures that balance innovation with compliance. Both experts advise businesses to seek professional help, consult authoritative sources, and establish a robust corporate governance program to navigate cybersecurity effectively.

Action Items:
- [ ] Educate yourself on applicable frameworks like NIST SP 800-171.
- [ ] Consult with certified professionals to assess your organization's security gaps and develop a roadmap.
- [ ] Reach out to Chris and Tim on LinkedIn for cybersecurity guidance.

Outline:

Introduction and Meeting Setup
Scott MacKenzie introduces the Industrial Talk podcast, emphasizing its focus on industry professionals and their innovations. The meeting is the Q1 meeting, held at OMG in Reston, Virginia, with a focus on problem solvers. Scott introduces Chris and Tim, who are in the hot seat for the discussion. Chris and Tim share their backgrounds: Chris from Perforce, focusing on DevSecOps and continuous compliance, and Tim from Penacity, specializing in industrial security and critical infrastructure.

Background on DevSecOps and CMMC
Chris explains his role at Perforce, focusing on DevSecOps and continuous compliance standards. Tim provides a detailed background on CMMC (Cybersecurity Maturity Model Certification), its purpose, and its relevance to the defense industrial base. Tim highlights the importance of CMMC in protecting industrial secrets and the implications for national security. The discussion touches on the dynamic nature of cybersecurity standards and the need for continuous compliance.

Challenges in Maintaining Compliance
Chris discusses the importance of securing infrastructure and the role of policies in maintaining compliance. Tim explains the complexity of dynamic environments and the need for continuous documentation and monitoring. The conversation covers the challenges of ensuring compliance in rapidly changing environments and the importance of having a robust change control process. Tim emphasizes the role of technology platforms like Puppet in automating compliance checks and maintaining security baselines.

Creating a Culture of Compliance
Scott and Tim discuss the importance of creating a culture of compliance within organizations. Tim highlights the role of leadership in driving a culture of compliance and the need for effective communication and collaboration. The conversation touches on the importance of automation in reducing costs and improving compliance. Tim shares insights on the role of consultants and technology partners in helping organizations navigate compliance challenges.

Practical Steps for Small Businesses
Scott asks about practical steps for...
Although the trends differ between large studios and small studios, "lack of funding," "collaboration," and "innovation" stand out as the biggest challenges.
Why you shouldn't use AI to write your tests, and the crazy deals new AI companies are getting themselves into to access hardware.
Join us as we explore the insights from the 2024 State of Automotive Software Development Report. Uncover the primary concerns of automotive professionals and the factors they prioritize. In this podcast episode, our guest Jill Britton, Director of Compliance at Perforce, sheds light on a notable shift revealed in the report: from safety apprehensions to a heightened focus on quality standards. Gain valuable insights into the perspectives of key industry players regarding the adoption of AI and open-source technologies in automotive development.
Today we're going to talk about the impact of software testing on the customer experience, and how rising customer expectations mean that brands need to up their testing game, using more agile methods, and AI-based solutions. To help me discuss this topic, I'd like to welcome Stephen Feloney, VP of Products - Continuous Testing at Perforce. Resources Perforce website: https://www.perforce.com Listen to The Agile Brand without the ads. Learn more here: https://bit.ly/3ymf7hd Headed to MAICON 24 - the premier marketing and AI conference? Use our discount code AGILE150 for $150 off your registration code. Register here: http://tinyurl.com/5jpwhycv Don't miss a thing: get the latest episodes, sign up for our newsletter and more: https://www.theagilebrand.show Check out The Agile Brand Guide website with articles, insights, and Martechipedia, the wiki for marketing technology: https://www.agilebrandguide.com The Agile Brand podcast is brought to you by TEKsystems. Learn more here: https://www.teksystems.com/versionnextnow The Agile Brand is produced by Missing Link—a Latina-owned strategy-driven, creatively fueled production co-op. From ideation to creation, they craft human connections through intelligent, engaging and informative content. https://www.missinglink.company
"Docs as code" is a philosophy that says documentation should be created with the same tools and processes as software. In return, we get a range of benefits, such as better collaboration with developers, keeping code and documentation in sync, versioning, automated tests, and a general sense that documentation is a shared responsibility. Does this approach hold up in practice? Or are these just empty promises that can't actually be kept? In this episode, we confront the article "Docs as code is a broken promise" with our own experiences and convictions. Spoiler alert! As ardent supporters of docs as code, we try to show that, despite the challenges it brings, it is an approach that works well in the world of software documentation. The sounds used in this episode come from the collection "107 Free Retro Game Sounds" available at https://dominik-braun.net, released under the Creative Commons license CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/).

Additional information:
- "Docs as code is a broken promise", Sarah Moir: https://thisisimportant.net/posts/docs-as-code-broken-promise/
- "Docs as Code", Write the Docs: https://www.writethedocs.org/guide/docs-as-code/
- "Documentation as Code: why you need it and how to get started", Swimm Team: https://swimm.io/learn/code-documentation/documentation-as-code-why-you-need-it-and-how-to-get-started
- Git: https://git-scm.com/
- Subversion (SVN): https://subversion.apache.org/
- Mercurial: https://www.mercurial-scm.org/
- Perforce: https://www.perforce.com/solutions/version-control
- "What version control systems do you regularly use?", JetBrains: https://www.jetbrains.com/lp/devecosystem-2023/team-tools/#tools_vcs
- "Component content management system (CCMS)", Wikipedia: https://en.wikipedia.org/wiki/Component_content_management_system
- GitLab: https://gitlab.com/
- GitHub: https://github.com/
- The Zen of Python: https://peps.python.org/pep-0020/#the-zen-of-python
- MadCap Flare: https://www.madcapsoftware.com/products/flare/
- Markdown: https://daringfireball.net/projects/markdown/
- AsciiDoc: https://asciidoc.org/
- Visual Studio Code (VS Code): https://code.visualstudio.com/
- Kotlin: https://kotlinlang.org/
- IntelliJ IDEA: https://www.jetbrains.com/idea/
- "Emancipation: Why the heck would a tech writer use enterprise tools?", Paweł Kowaluk: https://meetcontent.github.io/events/krakow/2024/20
- Docusaurus: https://docusaurus.io/
- GitLens: https://marketplace.visualstudio.com/items?itemName=eamodio.gitlens
In Episode 43 we welcome Peter Ton, Online Tech Director at Guerrilla, a PlayStation game studio, and an experienced professional who earned his stripes at leading companies such as Riot Games. With his extensive background in the games and telecommunications industries, Peter brings a wealth of knowledge with him. During our engaging conversation, Peter reveals not only how Guerrilla uses cutting-edge DevOps tools such as Agones and Perforce, but also sheds light on his time at Riot Games, where he was involved in building an in-house orchestration tool. This custom-built tool played a crucial role in streamlining the infrastructure and improving the game-development workflow at Riot Games. With his deep insight into both the games and tech industries, Peter offers a unique perspective on the evolution of infrastructure and DevOps in the gaming world, and on the innovative solutions used to tackle complex challenges and push boundaries in game development. Don't miss this exclusive episode, in which we dig deeper into the fascinating world of game infrastructure and DevOps, with valuable insights from an industry leader who has left his mark on some of the world's most influential game studios.

Links:
- Agones: Dedicated Game Server Hosting and Scaling for Multiplayer Games on Kubernetes
- Guerrilla Games (guerrilla-games.com)
- Game Development Software for Innovative Studios | Perforce
- GitHub - IBM/core-dump-handler: Save core dumps from a Kubernetes Service or RedHat OpenShift to an S3 protocol compatible object store
In this episode, we are deconstructing the "first team approach" with Monica Bajaj, VPE @ Okta. We cover how to apply "first team" across your org, within different team functions (including architecture, quality, security, etc.) and across all levels. She also shares real-life examples from her experience with "first teams" in scenarios like onboarding new teams after M&As, developing new products, and more. Monica provides tactical steps for implementing the first team concept within your org & why it encourages bottoms-up initiatives and self-sufficient teams.

ABOUT MONICA BAJAJ
Monica is currently VP of Engineering at Okta, where she leads the Developer Experience portfolio for Customer Identity Cloud (CIAM). She is responsible for building a frictionless developer experience for Consumer and SaaS Apps, thus securing billions of logins every month. Her expertise spans technology, operations, global expansion, and product launch in areas such as Consumer/Enterprise, Infrastructure, Business Intelligence, DevOps, and Security. She has taken products into the global market by launching localization and globalization programs delivering multi-million dollar growth. Previously she has held senior engineering leadership positions at Workday, Perforce, Network Appliance, and UKG. She holds a Masters in Computer Science from IIT Mumbai. Monica is an active supporter of diversity in STEM, has launched several Women in Technology initiatives, and is now an exec sponsor for Women at Okta. When not obsessing over technology, she can be found spending time with Boy Scouts, enjoying hiking, and supporting the cause of mentorship and uplifting women and young girls.

"The first team concept was launched at my level and then I went through this journey and I realized, 'Oh, this is very powerful.' First, it was confusing that I needed to put my team aside and take my peers as my first priority, but then I became more curious, and I was intrigued by the results: 'Oh, this is so powerful. I need to put this in my own organization.' So I started with my directs: 'Hey, we have studied this.' We did a whole session and walked them through some real examples. That's where it was like, 'Oh, we need to implement this and see it.'" - Monica Bajaj

This episode is brought to you by incident.io. incident.io is trusted by hundreds of tech-led companies across the globe, including Etsy, monday.com, Skyscanner and more, to seamlessly orchestrate incident response from start to finish. Intuitively designed, and with powerful and flexible built-in workflow automation, companies use incident.io to supercharge incident response and up-level the entire organization. Learn more about how you can better identify, learn from, and respond to incidents at incident.io.

Interested in joining an ELC Peer Group? ELC's Peer Groups provide a virtual, curated, and ongoing peer learning opportunity to help you navigate the unknown, uncover solutions and accelerate your learning with a small group of trusted peers. Apply to join a peer group here: sfelc.com/peerGroups

SHOW NOTES:
- Defining the "first team" concept & three characteristics that lead to success (3:22)
- How applying a first team approach impacts relationships (6:05)
- Why adopting these principles improved the quality, trust & maturity of eng teams (8:30)
- What conditions were met to set up the relationship between teams (12:09)
- Nuances of incorporating a first team approach at different levels of your org (13:48)
- How the first team facilitates faster pivoting as new priorities arise (16:31)
- First team frameworks for successfully & quickly onboarding new teams (19:08)
- An example of this concept applied to an architecture context (20:20)
- Why "first teams" support / encourage bottoms-up initiatives (23:47)
- Strategies for leadership to implement first teams @ different levels of their org (27:38)
- Recommendations for regaining cohesiveness as a first team (29:17)
- Rapid fire questions (31:28)

LINKS AND RESOURCES:
- The Habit of Winning: Stories to Inspire, Motivate and Unleash the Winner Within

This episode wouldn't have been possible without the help of our incredible production team:
- Patrick Gallagher - Producer & Co-Host
- Jerry Li - Co-Host
- Noah Olberding - Associate Producer, Audio & Video Editor: https://www.linkedin.com/in/noah-olberding/
- Dan Overheim - Audio Engineer (Dan's also an avid 3D printer): https://www.bnd3d.com/
- Ellie Coggins Angus - Copywriter, check out her other work at https://elliecoggins.com/about/
In this episode of the podcast, Grizz sits down with Cortney Stauffer (Head of UX Practice) & Chuck Danielsson (Head of Practice, Web/UI), both from Adaptive. They talk about UX, UI, FDC3, and why things should just work. Cortney Stauffer: https://www.linkedin.com/in/cortstauffer/ Chuck Danielsson: https://www.linkedin.com/in/chuck-danielsson-2141b058/ NYC November 1 - Open Source in Finance Forum: https://events.linuxfoundation.org/open-source-finance-forum-new-york/ 2022 State of Open Source in Financial Services Download: https://www.finos.org/state-of-open-source-in-financial-services-2022 All Links on Current Newsletter Here: https://www.finos.org/newsletter - more show notes to come. A huge thank you to all our sponsors for Open Source in Finance Forum New York (https://events.linuxfoundation.org/open-source-finance-forum-new-york/), which takes place this November 1st at the New York Marriott Marquis. This event wouldn't be possible without our sponsors. A special thank you to our Leader sponsors: Databricks, where you can unify all your data, analytics, and AI on one platform; and Red Hat - open to change, yesterday, today, and tomorrow. And our Contributor and Community sponsors: Adaptive/Aeron, Connectifi, Discover, Enterprise DB, FinOps Foundation, Fujitsu, instaclustr, Major League Hacking, mend.io, Open Mainframe Project, OpenJS Foundation, OpenLogic by Perforce, Orkes, Percona, Sonatype, StormForge, and Tidelift. If you would like to sponsor or learn more about this event, please send an email to sponsorships@linuxfoundation.org. Grizz's Info | https://www.linkedin.com/in/aarongriswold/ | grizz@finos.org ►► Visit FINOS www.finos.org ►► Get In Touch: info@finos.org
In this episode of the podcast, Grizz sits down with Jon Gottfried, Co-Founder of Major League Hacking. They talk about hackathons in finance, and developer/engineering talent, from both the individual and hiring manager perspectives. Jon Gottfried: https://www.linkedin.com/in/jonmarkgo/ Major League Hacking: https://sponsor.mlh.io/ NYC November 1 - Open Source in Finance Forum: https://events.linuxfoundation.org/open-source-finance-forum-new-york/ 2022 State of Open Source in Financial Services Download: https://www.finos.org/state-of-open-source-in-financial-services-2022 All Links on Current Newsletter Here: https://www.finos.org/newsletter - more show notes to come. A huge thank you to all our sponsors for Open Source in Finance Forum New York (https://events.linuxfoundation.org/open-source-finance-forum-new-york/), which takes place this November 1st at the New York Marriott Marquis. This event wouldn't be possible without our sponsors. A special thank you to our Leader sponsors: Databricks, where you can unify all your data, analytics, and AI on one platform; and Red Hat - open to change, yesterday, today, and tomorrow. And our Contributor and Community sponsors: Adaptive/Aeron, Connectifi, Discover, Enterprise DB, FinOps Foundation, Fujitsu, instaclustr, Major League Hacking, mend.io, Open Mainframe Project, OpenJS Foundation, OpenLogic by Perforce, Orkes, Percona, Sonatype, StormForge, and Tidelift. If you would like to sponsor or learn more about this event, please send an email to sponsorships@linuxfoundation.org. Grizz's Info | https://www.linkedin.com/in/aarongriswold/ | grizz@finos.org ►► Visit FINOS www.finos.org ►► Get In Touch: info@finos.org
In this episode of the podcast, our FINOS COO, Jane Gavronsky, sits down with Adrian Dale of ISLA and David Shone of ISDA to discuss the associations' contribution and backing of the FINOS CDM (Common Domain Model) to the FINOS open source community. CDM: https://cdm.finos.org/ On GitHub: https://github.com/finos/common-domain-model Adrian Dale, Head of Regulation & Markets, ISLA - https://www.linkedin.com/in/adrian-dale-27942314/ David Shone, Director of Product - Data & Digital, ISDA - https://www.linkedin.com/in/david-shone/ Jane Gavronsky, COO, FINOS - https://www.linkedin.com/in/janegavronsky/ NYC November 1 - Open Source in Finance Forum: https://events.linuxfoundation.org/open-source-finance-forum-new-york/ 2022 State of Open Source in Financial Services Download: https://www.finos.org/state-of-open-source-in-financial-services-2022 All Links on Current Newsletter Here: https://www.finos.org/newsletter - more show notes to come. A huge thank you to all our sponsors for Open Source in Finance Forum New York (https://events.linuxfoundation.org/open-source-finance-forum-new-york/), which takes place this November 1st at the New York Marriott Marquis. This event wouldn't be possible without our sponsors. A special thank you to our Leader sponsors: Databricks, where you can unify all your data, analytics, and AI on one platform; and Red Hat - open to change, yesterday, today, and tomorrow. And our Contributor and Community sponsors: Adaptive/Aeron, Connectifi, Discover, Enterprise DB, FinOps Foundation, Fujitsu, instaclustr, Major League Hacking, mend.io, Open Mainframe Project, OpenJS Foundation, OpenLogic by Perforce, Orkes, Percona, Sonatype, StormForge, and Tidelift. If you would like to sponsor or learn more about this event, please send an email to sponsorships@linuxfoundation.org. Grizz's Info | https://www.linkedin.com/in/aarongriswold/ | grizz@finos.org ►► Visit FINOS www.finos.org ►► Get In Touch: info@finos.org
In this episode of the podcast, Grizz sits down with Peter Smulovics, Executive Director at Morgan Stanley, about... well, just about everything. We hit his developer journey, metaverse, XR, spatial computing, Big Boost Mondays, autism hackathons, and painting fences. He is currently Executive Director for Windows and .NET development practices and spatial computing and metaverse development practices at Morgan Stanley, and co-chair for Open Source Readiness (https://osr.finos.org) and Emerging Technologies (https://zenith.finos.org) at The Linux Foundation / FINOS. He will be speaking at the Open Source in Finance Forum on November 1st in New York: https://sched.co/1PzH7 Peter Smulovics LinkedIn: https://www.linkedin.com/in/smulovicspeter/ FSI Hack for Autism - 2023: https://fsi-hack4autism.github.io/ Zenith Emerging Technologies: https://zenith.finos.org/ Open Source Readiness: https://osr.finos.org/ NYC November 1 - Open Source in Finance Forum: https://events.linuxfoundation.org/open-source-finance-forum-new-york/ 2022 State of Open Source in Financial Services Download: https://www.finos.org/state-of-open-source-in-financial-services-2022 All Links on Current Newsletter Here: https://www.finos.org/newsletter - more show notes to come. A huge thank you to all our sponsors for Open Source in Finance Forum New York (https://events.linuxfoundation.org/open-source-finance-forum-new-york/), which takes place this November 1st at the New York Marriott Marquis. This event wouldn't be possible without our sponsors. A special thank you to our Leader sponsors: Databricks, where you can unify all your data, analytics, and AI on one platform; and Red Hat - open to change, yesterday, today, and tomorrow. And our Contributor and Community sponsors: Adaptive/Aeron, Discover, FinOps Foundation, instaclustr, mend.io, Open Mainframe Project, OpenJS Foundation, OpenLogic by Perforce, Orkes, Red Hat, Sonatype, and Tidelift.
If you would like to sponsor or learn more about this event, please send an email to sponsorships@linuxfoundation.org. Grizz's Info | https://www.linkedin.com/in/aarongriswold/ | grizz@finos.org ►► Visit FINOS www.finos.org ►► Get In Touch: info@finos.org
Will dons the game developer hat again this week for a deep dive into how a game gets built. No, not the coding and design and art and all that -- we mean how a game gets literally built into a package that you can run on your PC or console, with some in-depth chat about everything from build servers to source and version control of both code and binary assets, pre-baked lighting, creating automated nightlies of your game, partying in TeamCity, and more. Support the Pod! Contribute to the Tech Pod Patreon and get access to our booming Discord, your name in the credits, and other great benefits! You can support the show at: https://patreon.com/techpod
Open source has always moved fast. Today, it moves faster than ever, driven by both community demand and corporate interest. On this episode, Perforce's Javier Perez and OSI's Stefano Maffulli discuss the impact of recent license changes and the historical push-and-pull between consumers and providers in the world of open source.

Highlights:
- Reflecting on 25 years of OSI and its widening scope
- The historical changes that set the stage for open source
- What's shaping Linux distributions today (CentOS, RHEL restrictions, HashiCorp's switch to BSL, and more)
- The "social contract" between companies and communities
- The pros and cons of single companies driving open-source communities
- The commercialized future of open source

Speakers:
- Javier Perez, Chief Open Source Evangelist and Senior Director of Product Management at Perforce
- Stefano Maffulli, Executive Director at the Open Source Initiative (OSI)

Links:
- Learn about Puppet's commitment to open source projects like Bolt and Open Source Puppet (OSP): https://www.puppet.com/community/open-source
- Find Stefano at https://www.maffulli.net/
- Follow Javier on Twitter at https://twitter.com/jperezp_bos
- OSI's programs (including a new Advocacy and Outreach program): https://opensource.org/programs/
- "Defining an open source AI for the greater good": how OSI is approaching AI: https://opensource.com/article/22/10/defining-open-source-ai
- "Friend or Foe? ChatGPT's Impact on Open Source Software" by Javier Perez for DevOps.com: https://devops.com/friend-or-foe-chatgpts-impact-on-open-source-software/
- Read the episode transcript

Find Us Online: puppet.com | Pulling the Strings on Apple Podcasts | Twitter | LinkedIn
In this episode of the podcast, Grizz sits down with Anna McDonald, Technical Voice of the Customer at Confluent, to talk about her OSFF talk: "Enabling Real Time Regulatory Compliance with Kafka Streams and Morphir". We talk about Kafka Streams, Morphir, Open Regulation, and what it's like to figure out your passion for coding at 5 years old. She will be speaking at the Open Source in Finance Forum on November 1st in New York: https://sched.co/1PzH7 Anna McDonald LinkedIn: https://www.linkedin.com/in/jbfletch/ NYC November 1 - Open Source in Finance Forum: https://events.linuxfoundation.org/open-source-finance-forum-new-york/ 2022 State of Open Source in Financial Services Download: https://www.finos.org/state-of-open-source-in-financial-services-2022 All Links on Current Newsletter Here: https://www.finos.org/newsletter - more show notes to come. A huge thank you to all our sponsors for Open Source in Finance Forum New York (https://events.linuxfoundation.org/open-source-finance-forum-new-york/), which takes place this November 1st at the New York Marriott Marquis. This event wouldn't be possible without our sponsors. A special thank you to our Leader sponsors: Databricks, where you can unify all your data, analytics, and AI on one platform; and Red Hat - open to change, yesterday, today, and tomorrow. And our Contributor and Community sponsors: Adaptive/Aeron, Discover, FinOps Foundation, instaclustr, mend.io, Open Mainframe Project, OpenJS Foundation, OpenLogic by Perforce, Orkes, Red Hat, Sonatype, and Tidelift. If you would like to sponsor or learn more about this event, please send an email to sponsorships@linuxfoundation.org. Grizz's Info | https://www.linkedin.com/in/aarongriswold/ | grizz@finos.org ►► Visit FINOS www.finos.org ►► Get In Touch: info@finos.org
In this episode of the podcast, Grizz sits down with Brian Douglas, CEO of OpenSauced, to talk about his OSFF talk: "Data-Driven Decisions: Uncovering the Key Metrics Shaping Success in OSS". We talk about his developer evangelist journey, open source project analytics, accessing talent, and a little Steph Curry. He will be speaking at the Open Source in Finance Forum on November 1st in New York: https://sched.co/1PzGI LinkedIn: https://www.linkedin.com/in/brianldouglas/ OpenSauced: https://opensauced.pizza/ Podcast & Videos: https://www.youtube.com/@OpenSauced/videos NYC November 1 - Open Source in Finance Forum: https://events.linuxfoundation.org/open-source-finance-forum-new-york/ 2022 State of Open Source in Financial Services Download: https://www.finos.org/state-of-open-source-in-financial-services-2022 All Links on Current Newsletter Here: https://www.finos.org/newsletter - more show notes to come. A huge thank you to all our sponsors for Open Source in Finance Forum New York (https://events.linuxfoundation.org/open-source-finance-forum-new-york/), which takes place this November 1st at the New York Marriott Marquis. This event wouldn't be possible without our sponsors. A special thank you to our Leader sponsors: Databricks, where you can unify all your data, analytics, and AI on one platform; and Red Hat - open to change, yesterday, today, and tomorrow. And our Contributor and Community sponsors: Adaptive/Aeron, Discover, FinOps Foundation, instaclustr, mend.io, Open Mainframe Project, OpenJS Foundation, OpenLogic by Perforce, Orkes, Red Hat, Sonatype, and Tidelift. If you would like to sponsor or learn more about this event, please send an email to sponsorships@linuxfoundation.org. Grizz's Info | https://www.linkedin.com/in/aarongriswold/ | grizz@finos.org ►► Visit FINOS www.finos.org ►► Get In Touch: info@finos.org
In this episode of the podcast, Grizz sits down with Varsha Sundar, VP of Cloud FinOps at Chubb Insurance, to talk about her OSFF talk: "Cloud Financial Management Strategy". We talk about her journey, what FinOps is, and why it's important. She will be speaking at the Open Source in Finance Forum on November 1st in New York: https://sched.co/1Q2n3 LinkedIn: https://www.linkedin.com/in/varsha-sundar-b751b326/ FinOps Foundation: https://www.finops.org/ All Links on Current Newsletter Here: https://www.finos.org/newsletter - more show notes to come NYC November 1 - Open Source in Finance Forum: https://events.linuxfoundation.org/open-source-finance-forum-new-york/ 2022 State of Open Source in Financial Services Download: https://www.finos.org/state-of-open-source-in-financial-services-2022
In this episode of the podcast, we break down the newly released schedule from the Open Source in Finance Forum (OSFF). Plus - we return to our FINOS Debrief episodes that wrap up the past month in the FINOS Ecosystem - and look forward to the next month and beyond. All Links on Current Newsletter Here: https://www.finos.org/newsletter - more show notes to come NYC November 1 - Open Source in Finance Forum: https://events.linuxfoundation.org/open-source-finance-forum-new-york/ 2023 State of Open Source in Financial Services Survey: https://www.research.net/r/NX3VVXM 2022 State of Open Source in Financial Services Download: https://www.finos.org/state-of-open-source-in-financial-services-2022
In this episode of the podcast, we discuss the formation of a new major project in FINOS around common cloud controls for financial services. Get involved now here: https://www.finos.org/common-cloud-controls-project Read the Press Release here: https://www.finos.org/press/finos-announces-formation-of-common-cloud-controls US Dept of Treasury Cloud Report: https://home.treasury.gov/system/files/136/Treasury-Cloud-Report.pdf UK HMT Critical 3rd Party Finance Sector Policy Statement: https://www.gov.uk/government/publications/critical-third-parties-to-the-finance-sector-policy-statement European Council DORA: https://www.consilium.europa.eu/en/press/press-releases/2022/11/28/digital-finance-council-adopts-digital-operational-resilience-act/ Monetary Authority of Singapore Cloud Advisory: https://www.mas.gov.sg/-/media/MAS/Regulations-and-Financial-Stability/Regulatory-and-Supervisory-Framework/Risk-Management/Cloud-Advisory.pdf All Links on Current Newsletter Here: https://www.finos.org/newsletter - more show notes to come NYC November 1 - Open Source in Finance Forum: https://events.linuxfoundation.org/open-source-finance-forum-new-york/ 2023 State of Open Source in Financial Services Survey: https://www.research.net/r/NX3VVXM 2022 State of Open Source in Financial Services Download: https://www.finos.org/state-of-open-source-in-financial-services-2022 Registration is now open and early bird pricing is available till August 18th. Join us in NYC!
#218: Continuous testing has become an integral part of modern software development and delivery. It enables organizations to maintain high quality and agility in the face of rapid software iterations. But how can we harness the power of artificial intelligence to enhance and optimize the continuous testing process? In this episode, we speak with Bharath Vantari, Principal Presales at Perforce for BlazeMeter, about how we can start adding AI to our continuous testing process. Bharath's contact information: Twitter: https://twitter.com/BharathVantari LinkedIn: https://www.linkedin.com/in/bharathvantari/ YouTube channel: https://youtube.com/devopsparadox/ Books and Courses: Catalog, Patterns, And Blueprints https://www.devopstoolkitseries.com/posts/catalog/ Review the podcast on Apple Podcasts: https://www.devopsparadox.com/review-podcast/ Slack: https://www.devopsparadox.com/slack/ Connect with us at: https://www.devopsparadox.com/contact/
In this bonus podcast episode, SD Times Editor-in-Chief David Rubinstein talks about what DevOps engineers need to know today. His guest is David Sandilands, Principal Solutions Architect at Puppet by Perforce.
“It doesn't matter how small the contribution is – I think everyone benefits from the different environment, the different culture, of open source communities.” From ‘free software' to the Mars Rover, the scope of open source is expansive, growing, and offering new challenges to organizations and practitioners alike. Now, Perforce Director of Product Management Javier Perez is excited to share the latest findings from the 2023 State of Open Source Report with the context of his 26+ years in the software industry. Look into the past and peer into the future with this exciting discussion between two open source evangelists. Join Javier and host Ben Ford, Developer Relations Director at Puppet by Perforce, as they discuss the history of open source, examples of open source software dating back to the 1950s, the communities that have formed around open source, where open source software is headed, and highlights from the highly anticipated 2023 State of Open Source Report, which asked ~900 global respondents about their use of open source software.
Highlights:
- AI and machine learning took the number one spot as the technology that most survey respondents were interested in.
- The number-one reason organizations use open source software is access to innovation, not cost savings.
- How intra-organization methodologies like InnerSource are breaking down silos within companies.
- Why AI and Web3 are trends worth watching in the open source space.
Speakers: Ben Ford, Developer Relations Director at Puppet by Perforce; Javier Perez, Director of Product Management at Perforce
Links: Get the 2023 State of Open Source Report | Follow Javier on Twitter | Read the episode transcript
Find Us Online: puppet.com | Apple Podcasts | Twitter | LinkedIn
In this episode, we are joined by Rod Cope, Chief Technology Officer at Perforce Software, a company that has made 11 acquisitions in the past six years, including his own. Rod shares his unique insights on navigating the challenges and uncertainties that come with mergers and acquisitions (M&A) and provides valuable advice on eliminating friction and ensuring successful integration between companies. Rod's extensive experience in M&A has taught him that it typically takes at least two years for companies to fully integrate and become a cohesive unit. He shares his top tips for tech leaders on both sides of an M&A to support a positive experience: Host a Town Hall early on with an AMA (Ask Me Anything) section to build trust and open communication among the entire team. Set clear expectations through transparent and constant contact, ensuring everyone is on the same page throughout the process. Swap out the brands early on, including building signs, email signatures, and more, as these small details can make a significant difference in establishing a unified identity. Ensure employees don't experience any friction with calendars, emails, and communication, allowing them to feel like part of the greater team. Rod also discusses how tech leaders can better prepare for acquiring or merging with another organization and the importance of understanding the time it takes for a successful integration. As companies worldwide rely on Perforce to build complex digital products faster and with higher quality, Rod's insights and experiences offer a valuable perspective for tech leaders navigating the M&A landscape. Tune in to this episode to learn from Rod Cope's extensive expertise in M&A and discover how to successfully eliminate friction and foster a positive experience during the integration process.
Did you know that over 3 billion people play video games? That's nearly 50% of the world's population! And if you've ever played a video game (which, statistically, you have), you've probably played something that was built using Perforce, which is *the* source code versioning software that nearly all the major game developers use. With the increase of online gaming and global distribution of game studios, the cloud is becoming a critical component in game development and delivery. As a result, cloud service providers are offering "as a service" products for applications like Perforce, Unreal Engine and more. Ravi Prakash joins us this week to discuss Perforce and how NetApp ONTAP is enhancing the cloud experience for game developers on-prem and in the cloud.
Today we're going to talk about building infrastructure at scale while keeping in mind the people, processes, and platforms involved, and how this translates to an improvement in the customer experience. To help me discuss this topic, I'd like to welcome Deepak Giridharagopal, CTO at Puppet by Perforce. RESOURCES Puppet's 2023 State of DevOps Report: Platform Engineering Edition: https://www.puppet.com/resources/state-of-platform-engineering The Agile Brand podcast website: https://www.gregkihlstrom.com/theagilebrandpodcast Sign up for The Agile Brand newsletter here: https://www.gregkihlstrom.com Get the latest news and updates on LinkedIn here: https://www.linkedin.com/company/the-agile-brand/ For consulting on marketing technology, customer experience, and more visit GK5A: https://www.gk5a.com The Agile Brand podcast is brought to you by TEKsystems. Learn more here: https://www.teksystems.com/versionnextnow The Agile Brand is produced by Missing Link—a Latina-owned strategy-driven, creatively fueled production co-op. From ideation to creation, they craft human connections through intelligent, engaging and informative content. https://www.missinglink.company If you are struggling with projects, sign up for Basecamp. Their pricing is simple and they give you ALL their features in a single plan. No upsells. No upgrades. Go to basecamp.com/agile and try Basecamp for free. No credit card required and cancel anytime. Thank you, Basecamp for sponsoring this episode!
In this podcast episode, we discuss the driving forces behind organizations increasing their open-source software usage and the biggest open-source trends today. Our guest is Javier Perez, Chief Open Source Evangelist and Senior Director of Product Management at Perforce Software, who talks about the findings from the 2023 State of Open Source report by OpenLogic by Perforce and the Open Source Initiative.
Growing as a woman technology leader | Monica Bajaj | #TGV308
"Leadership is about making others better as a result of your presence and making sure that impact lasts in your absence." ~ Sheryl Sandberg
Tune into #TGV308 to get clarity on the above topic. Here are the timestamp-based pointers from Monica's conversation with Naveen Samala on The Guiding Voice:
0:00:00 INTRODUCTION AND CONTEXT SETTING
0:03:15 Monica's PROFESSIONAL JOURNEY AND THE TOP 3 THINGS THAT HELPED IN HER SUCCESS
0:05:45 Challenges that Monica faced in her career as a woman leader
0:08:18 What is leadership in her opinion? What has she learned not to do from leaders?
0:09:00 Who inspired her to take up leadership roles?
0:12:00 Her experience with technologies and how tech evolved over her career journey
0:14:00 How does she keep herself up to date on technologies?
0:16:00 Her advice to women aspiring to leadership roles, and how to balance work and life
0:18:30 WITTY ANSWERS TO THE RAPID-FIRE QUESTIONS
0:21:00 ONE PIECE OF ADVICE TO THOSE ASPIRING TO MAKE IT BIG IN THEIR CAREERS
0:23:00 TRIVIA ABOUT WOMEN LEADERS
ABOUT THE GUEST: Monica Bajaj is a seasoned engineering leader with over 2 decades of industry experience building and scaling global, diverse teams across both the consumer and enterprise space. She is currently VP of Engineering at Okta, where she leads the developer experience portfolio for their Customer Identity Cloud for developers and SaaS applications. Prior to Okta, Monica held engineering leadership roles at Workday, UKG, Perforce, and NetApp, primarily in the consumer/enterprise, infrastructure, and security space. Monica earned an M.S. in Computer Science from IIT Mumbai, India. Outside of work, Monica is very active in programs around mentorship, diversity, and uplifting women and young girls.
She has won several awards, including the Mentor of the Year 2021 award, and has been recognized as one of the top 60 women technical leaders by the Girl Geek X community. She has also served on the board as Technology and Chief Compliance Officer for Women in Localization for their GDPR and security initiatives. Outside work, Monica enjoys the outdoors, hiking, painting, and the cuisines of the world.
Connect with Monica: https://www.linkedin.com/in/mobajaj/
CONNECT WITH THE HOST ON LINKEDIN: Naveen Samala: https://www.linkedin.com/in/naveensamala | http://www.naveensamala.com
If you wish to become a productivity monk, enroll for this course: https://www.udemy.com/course/productivitymonk/
TGV Inspiring Lives Volume 1 is available on Amazon for pre-order. Kindle: https://amzn.eu/d/cKTKtyC Paperback: https://amzn.eu/d/4Y1HAXj
FOLLOW ON TWITTER: @guidingvoice @naveensamala
Hosted on Acast. See acast.com/privacy for more information.
In this episode we talk with Gideon Pridor, Chief Marketing Officer at Workvivo. Gideon Pridor is Workvivo's interim CMO and chief storyteller. Gideon is a veteran marketing leader with a track record of growing disruptive startups into global leaders that redefine their markets. Before Workvivo (and a well-deserved sabbatical), Gideon was VP Marketing at TravelPerk, one of the world's fastest-growing SaaS companies, at Perfecto (acquired by Perforce), and others. Gideon is a frequent speaker at industry events, an HRtech enthusiast, and a startup mentor helping companies build their story and tell it to the world. We hope you enjoy it.
EP#66 Willis Nana: Data Engineer. Learn more about me: https://espresso-jobs.com/conseils-carriere/les-geeks-du-web-willis-nana/ "Creates simple solutions to complex problems" - My skills: Data Engineering
Benjy Weinberger is the co-founder of Toolchain, a build tool platform. He is one of the creators of the original Pants, an in-house Twitter build system focused on Scala, and was the VP of Infrastructure at Foursquare. Toolchain now focuses on Pants 2, a revamped build system. Apple Podcasts | Spotify | Google Podcasts In this episode, we go back to the basics, and discuss the technical details of scalable build systems, like Pants, Bazel and Buck. A common challenge with these build systems is that it is extremely hard to migrate to them, and have them interoperate with open source tools that are built differently. Benjy's team redesigned Pants with an initial hyper-focus on Python to fix these shortcomings, in an attempt to create a third generation of build tools: one that easily interoperates with differently built packages, but is still fast and scalable. Machine-generated Transcript [0:00] Hey, welcome to another episode of the Software at Scale podcast. Joining me here today is Benjy Weinberger, previously a software engineer at Google and Twitter, VP of Infrastructure at Foursquare, and now the founder and CEO of Toolchain. Thank you for joining us. Thanks for having me. It's great to be here. Yes. Right from the beginning, I saw that you worked at Google in 2002, which is forever ago, like 20 years ago at this point. What was that experience like? What kind of change did you see as you worked there for a few years? [0:37] As you can imagine, it was absolutely fascinating. And I should mention that while I was at Google from 2002, that was not my first job. I have been a software engineer for over 25 years. And so there were five years before that where I worked at a couple of companies. One was, and I was living in Israel at the time, so my first job out of college was at Check Point, which was a big successful network security company. And then I worked for a small startup. And then I moved to California and started working at Google.
And so I had the experience that I think many people had in those days, and many people still do: the work you're doing is fascinating, but the tools you're given to do it with as a software engineer are not great. By then I'd had five years of experience of sort of struggling with builds being slow, builds being flaky, with everything requiring a lot of effort. There was almost a hazing-ritual quality to it. Like, this is what makes you a great software engineer: struggling through the mud and through the quicksand with this awful, substandard tooling. And we are not users, we are not people for whom products are meant, right? We make products for other people. Then I got to Google. [2:03] And Google, when I joined, was actually struggling with a very massive, very slow makefile that took forever to parse, let alone run. But the difference, which I had not seen anywhere else, was that Google paid a lot of attention to this problem and Google devoted a lot of resources to solving it. And Google was the first place I'd worked, and I think in many ways still the gold standard, where developers are first-class participants in the business and deserve the best products and the best tools, and if there's nothing out there for them to use, we will build it in house and we will put a lot of energy into that. And so for me, specifically as an engineer. [2:53] A big part of watching that growth in the sort of early to late 2000s was
the growth of engineering process and best practices and the tools to enforce them. The thing I personally am passionate about is building CI, but I'm also talking about code review tools and all the tooling around source code management and revision control, and just everything to do with engineering process. It really was an object lesson, very, very fascinating, and it really inspired a big chunk of the rest of my career. I've heard all sorts of things, like Python scripts that had to generate makefiles, and finally they moved to the first version of Blaze. So it's a fascinating history. [3:48] Maybe can you tell us one example of something that was paradigm changing that you saw, something that created an order of magnitude difference in your experience there, and maybe your first aha moment on how good developer tools can be? [4:09] Sure. I think I had been used to using make basically up till that point. And Google again was, as you mentioned, using make and really squeezing everything it was possible to squeeze out of that lemon and then some. [4:25] But with the very early versions of what became Blaze, which was that big internal build system which inspired Bazel, the open source variant of that today, one thing that really struck me was the integration with the revision control system, which was, and I think still is, Perforce. I imagine many listeners are very familiar with Git. Perforce is very different. I can only partly remember all of the intricacies of it, because it's been so long since I've used it. But one interesting aspect of it was you could do partial checkouts. It really was designed for giant code bases. There was this concept of partial checkouts where you could check out just the bits of the code that you needed. But of course, then the question is, how do you know what those bits are? But of course the build system knows, because the build system knows about dependencies.
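The partial-checkout idea described here boils down to a transitive dependency closure: given the targets you want to work on, the build system can walk its dependency graph and hand the version control client exactly the set of sources to sync. A minimal sketch of that closure (the target names and graph are invented for illustration, not taken from Blaze or Perforce):

```python
from collections import deque

def transitive_deps(graph, roots):
    """Return the roots plus everything they transitively depend on,
    via a breadth-first walk of the dependency graph."""
    needed = set(roots)
    queue = deque(roots)
    while queue:
        node = queue.popleft()
        for dep in graph.get(node, ()):
            if dep not in needed:
                needed.add(dep)
                queue.append(dep)
    return needed

# Hypothetical targets: each maps to its direct dependencies.
graph = {
    "app": ["auth", "storage"],
    "auth": ["crypto"],
    "storage": ["crypto"],
    "crypto": [],
    "analytics": ["storage"],  # unrelated to "app"
}

# A partial checkout for "app" needs only 4 of the 5 targets.
print(sorted(transitive_deps(graph, ["app"])))  # → ['app', 'auth', 'crypto', 'storage']
```

Everything outside that closure, like `analytics` above, can stay on the server, which is what makes the scheme work for giant codebases.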
And so there was this integration, this back and forth between the, um. [5:32] Perforce client and the build system, that was very creative and very effective. And it allowed you to only have locally on your machine the code that you actually needed to work on the piece of the codebase you're working on, basically the files you cared about and all of their transitive dependencies. And that to me was a very creative solution to a problem that involved some lateral thinking about how seemingly completely unrelated parts of the toolchain could interact. And that's kind of what made me realize, oh, there's a lot of creative thought at work here and I love it. [6:17] Yeah, no, I think that makes sense. Like I interned there way back in 2016. And I was just fascinated by, I remember by mistake, I ran a grep across the code base and it just took forever. And that's when I realized, you know, none of this stuff is local. First of all, half the source code is not even checked out to my machine. And my poor grep command is trying to check that out. But also how seamlessly it would work most of the time behind the scenes. Did you have any experience or did you start working on developer tools then? Or is that just what inspired you towards thinking about developer tools? I did not work on the developer tools at Google. I worked on ads and search and sort of Google products, but I was a big user of the developer tools. With one exception, which was that I made some contributions to the. [7:21] Protocol buffer compiler, which I think many people may be familiar with, and that is a very deep part of the toolchain that is very integrated into everything there. And so that gave me some experience with what it's like to hack on a tool that everyone, every engineer, is using, and that is a very deep part of their workflow. But it wasn't until after Google, when I went to Twitter. [7:56] That I noticed that in my time at Google, the rest of the industry had not caught up.
Suddenly I was sort of transported ten years into the past, and was back to using very slow, very clunky, flaky tools that were not designed for the tasks we were trying to use them for. And so that made me realize, wait a minute, I spent eight years using these great tools. They don't exist outside of these giant companies. I mean, I sort of assumed that maybe, you know, Microsoft and Amazon and some other giants probably have similar internal tools, but there's nothing out there for everyone else. And so that's when I started hacking on that problem more directly, at Twitter, together with John, who is now my co-founder at Toolchain, who was actually ahead of me and ahead of the game at Twitter and had already begun working on some solutions, and I joined him in that. Could you maybe describe some of the problems you ran into? Like, were the builds just taking forever or was there something else? [9:09] So there were... [9:13] A big part of the problem was that the codebase at Twitter at the time, the codebase I was interested in and that John was interested in, was using Scala. Scala is a fascinating, very rich language. [9:30] Its compiler is very slow. And we were in a situation where, you know, you'd make some small change to a file and then builds would take 10 minutes, 20 minutes, 40 minutes. The iteration time on your desktop was incredibly slow. And then CI times, where there was CI in place, were also incredibly slow because of this huge amount of repetitive or near-repetitive work. And this is because the build tools, etc.
were pretty naive about understanding what work actually needs to be done given a set of changes. There's been a ton of work specifically on SBT since then. [10:22] It has incremental compilation and things like that, but nonetheless, that still doesn't really scale well to large corporate codebases that are what people often refer to as monorepos. If you don't want to fragment your codebase, with all of the immense problems that that brings, you end up needing tooling that can handle that situation. Some of the biggest challenges are: how do I do less than recompile the entire codebase every time? How can tooling help me be smart about what is the correct minimal amount of work to do? [11:05] To make compiling and testing as fast as it can be? [11:12] And I should mention that I dabbled in this problem at Twitter with John. It was when I went to Foursquare that I really got into it, because Foursquare similarly had this big Scala codebase with a very similar problem of incredibly slow builds. [11:29] The interim solution there was to just upgrade everybody's laptops with more RAM and try and brute-force the problem. It was very obvious to everyone there, and Foursquare then as now had lots of very, very smart engineers, that this was not a permanent solution, and we were casting around for... [11:54] You know, what can be smart about Scala builds? And I remembered this thing that I had hacked on at Twitter, and I reached out to Twitter and asked them to open source it so we could use it and collaborate on it. It wasn't obviously some secret sauce. And that is how the very first version of the Pants open source build system came to be. It was very much designed around Scala, but did eventually support other languages. And we hacked on it a lot at Foursquare to get it to... [12:32] To get the codebase into a state where we could build it sensibly.
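Computing "the correct minimal amount of work" falls out of the same dependency graph, walked in the other direction: invert the edges, then take the closure of the changed targets to find everything a change invalidates; anything outside that set can be served from cache. A sketch with invented target names, not any specific tool's algorithm:

```python
from collections import deque

def affected_targets(graph, changed):
    """Everything that must be rebuilt: the changed targets plus all of
    their transitive dependents (the dependency edges walked in reverse)."""
    # Invert target -> deps into dep -> dependents.
    dependents = {}
    for target, deps in graph.items():
        for dep in deps:
            dependents.setdefault(dep, set()).add(target)
    out = set(changed)
    queue = deque(changed)
    while queue:
        node = queue.popleft()
        for parent in dependents.get(node, ()):
            if parent not in out:
                out.add(parent)
                queue.append(parent)
    return out

graph = {
    "service": ["lib_api", "lib_db"],
    "cli": ["lib_api"],
    "lib_api": ["lib_core"],
    "lib_db": ["lib_core"],
    "lib_core": [],
}

# A change to lib_db invalidates only lib_db and service; a change to
# lib_core would invalidate everything.
print(sorted(affected_targets(graph, {"lib_db"})))  # → ['lib_db', 'service']
```

Note the flip side mentioned later in the conversation: the more tangled the graph, the larger this affected set gets for any given change.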
So the one big challenge is build speed, build performance. The other big one is managing dependencies, keeping your codebase sane as it scales: everything to do with how can I audit internal dependencies. It is very, very easy to accidentally create all sorts of dependency tangles and cycles, and create a code base whose dependency structure is unintelligible, really hard to work with, and actually impacts performance negatively, right? If you have a big tangle of dependencies, you're more likely to invalidate a large chunk of your code base with a small change. And so tooling that allows you to reason about the dependencies in your code base and. [13:24] Make it more tractable was the other big problem that we were trying to solve. Mm-hmm. No, I think that makes sense. I'm guessing you already have a good understanding of other build systems like Bazel and Buck. Maybe could you walk us through the differences for Pants V1? What are the major design differences? And even maybe before that, how was Pants designed? Is it something similar to creating a dependency graph, where you need to explicitly include your dependencies? Is there something else that's going on? [14:07] Maybe just a primer. Yeah. Absolutely. So I should mention, I was careful to say Pants V1. The version of Pants that we use today and base our entire technology stack around is what we very unimaginatively call Pants V2, which we launched two years ago almost to the day. That is radically different from Pants V1, from Buck, from Bazel. It is quite a departure in ways that we can talk about later. One thing that I would say Pants V1 and Buck and Bazel have in common is that they were designed around the use cases of a single organization. Bazel is an. [14:56] Open source variant of, or inspired by, Blaze; its design was very much inspired by
"here's how Google does engineering," and Buck similarly for Facebook, and Pants V1, frankly, very similarly for. [15:11] Twitter. And because Foursquare also contributed a lot to it, we sort of nudged it in that direction quite a bit. But it's still very much: if you did engineering in this one company's specific image, then this might be a good tool for you. But you had to be very much in that lane. But what these systems all look like, and the way they are different from much earlier systems, is. [15:46] They're designed to work in large scalable code bases that have many moving parts and share a lot of code, and that build a lot of different deployables: different, say, binaries or Docker images or AWS lambdas or cloud functions or whatever it is you're deploying, Python distributions, JAR files, whatever it is you're building. Typically you have many of them in this code base. Could be lots of microservices, could be just lots of different things that you're deploying. And they live in the same repo because you want that unity. You want to be able to share code easily. You don't want to introduce dependency hell problems in your own code. It's bad enough that we have dependency hell problems with third-party code. [16:34] And so these systems are all, if you squint at them from thirty thousand feet, today all very similar in that they make the problem of. Managing and building and testing and packaging in a code base like that much more tractable. And the way they do this is by applying information about the dependencies in your code base. So the important ingredient there is that these systems understand the relatively fine-grained dependencies in your code base. And they can use that information to reason about work that needs to happen.
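One way to picture how that dependency information turns into an execution plan: partition the targets into "waves," skipping anything already cached; everything within a wave is mutually independent and can run concurrently. This is a toy model for intuition, not how Pants, Bazel, or Buck is actually implemented:

```python
def build_waves(graph, cached):
    """Schedule targets into waves: a target lands in a wave only once all
    its dependencies are done, so jobs within a wave can run in parallel.
    Cached targets count as already done and are never scheduled."""
    done = set(cached)
    pending = {t for t in graph if t not in done}
    waves = []
    while pending:
        ready = {t for t in pending if all(d in done for d in graph[t])}
        if not ready:
            raise ValueError(f"dependency cycle among {sorted(pending)}")
        waves.append(sorted(ready))
        done |= ready
        pending -= ready
    return waves

# Hypothetical targets mapped to their direct dependencies.
graph = {
    "lib_core": [],
    "lib_api": ["lib_core"],
    "lib_db": ["lib_core"],
    "service": ["lib_api", "lib_db"],
    "cli": ["lib_api"],
}

# With lib_core already cached, two waves suffice.
print(build_waves(graph, cached={"lib_core"}))
# → [['lib_api', 'lib_db'], ['cli', 'service']]
```

Real tools schedule at per-job granularity with remote caching rather than in lockstep waves, but the ingredient is the same: the graph tells you what is cached, what is stale, and what can run at the same time.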
So with a naive build system, you'd say: run all the tests in the repo, or in this part of the repo. A naive system would literally just do that, and first it would compile all the code. [17:23] But a scalable build system like these would say: well, you've asked me to run these tests, but some of them have already been cached, and these others haven't. So I need to look at the ones I actually need to run, and see what needs to be done before I can run them. These source files need to be compiled, but some of those are already in cache, and these other ones I need to compile. And I can apply concurrency, because there are multiple cores on this machine, and I know through dependency analysis which compile jobs can run concurrently and which cannot. Then when it actually comes time to run the tests, again, I can apply that sort of concurrency logic. [18:03] So what these systems have in common is that they use dependency information to make your building, testing, and packaging more tractable in a large codebase. They allow you to avoid the thing that unfortunately many organizations find themselves doing, which is fragmenting the codebase into lots of different bits and saying: every team or sub-team works in its own codebase, and they consume each other's code as if it were third-party dependencies, in which case you are introducing a dependency-versioning hell problem.

Yeah. And I think that's also what I've seen makes the migration to a tool like this hard. Because if you have an existing codebase that doesn't lay out dependencies explicitly, [18:56] that migration becomes challenging. If you already have an import cycle, for example, [19:01] Bazel is not going to work with you. You need to clean that up, or you need to create one large target, at which point the benefits of using a tool like Bazel just go away. And I think that's a key bit, which is so fascinating, because it's been the same story for several years.
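The cache-then-schedule logic just described can be sketched in a few lines. This is a toy illustration, not any real build tool's internals: the graph, sources, and cache are all made up. The cache key for a node covers its transitive inputs, so a small upstream change invalidates exactly its dependents, and `graphlib` hands back batches of nodes that could safely run concurrently.

```python
import hashlib
from graphlib import TopologicalSorter

# Toy dependency graph: each node maps to the nodes it depends on.
DEPS = {"lib": set(), "api": {"lib"}, "cli": {"lib"}, "tests": {"api", "cli"}}
SOURCES = {"lib": "def f(): ...", "api": "import lib",
           "cli": "import lib", "tests": "import api, cli"}

CACHE = {}  # cache key -> stored result

def cache_key(node):
    # Key covers this node's source plus its dependencies' keys,
    # so any upstream change invalidates downstream results.
    h = hashlib.sha256(SOURCES[node].encode())
    for dep in sorted(DEPS[node]):
        h.update(cache_key(dep).encode())
    return h.hexdigest()

def build(node):
    key = cache_key(node)
    if key in CACHE:
        return CACHE[key], "cached"
    CACHE[key] = f"built:{node}"
    return CACHE[key], "ran"

# TopologicalSorter yields batches whose dependencies are all done;
# every node within a batch is independent and could run in parallel.
ts = TopologicalSorter(DEPS)
ts.prepare()
batches = []
while ts.is_active():
    ready = sorted(ts.get_ready())  # independent: safe to run concurrently
    for node in ready:
        build(node)
    batches.append(ready)
    ts.done(*ready)
```

After one pass, `batches` shows the concurrency structure (`lib` first, then `api` and `cli` together, then `tests`), and re-requesting any node is a cache hit until a source or one of its dependencies changes.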
And I'm hoping, it sounds like, newer languages like Go at least force you to not have circular dependencies; they force you to keep your codebase clean so that it's easy to migrate to a scalable build system.

[19:33] Yes, exactly. It's funny, that is the exact observation that led us to Pants V2. As I said, Pants V1, like Bazel and like Buck, was very much inspired by and developed for the needs of a single company, with other companies using it a little bit. And it suffered from many of the problems you just mentioned. With Pants V2, by which time I had left Foursquare and started Toolchain, with the exact mission that every company, every team of any size, should have this kind of tooling, this revolutionary ability to make the codebase fast and tractable at any scale, that made me realize: we have to design for that. We have to design not for what a single company's codebase looks like, but to support thousands of codebases of all sorts of different challenges and sizes and shapes and languages and frameworks. We actually had to sit down and figure out what it means to make a tool like this, a system like this, adoptable over and over again, thousands of times. You mentioned, [20:48] correctly, that it is very hard to adopt one of those earlier tools, because you first have to make your codebase conform to whatever that tool expects, and then you have to write huge amounts of manual metadata to describe the structure and dependencies of your codebase in these so-called build files. If anyone ever sees this written down, it's usually BUILD in all capital letters, like it's yelling at you. Those files typically are huge and contain a huge amount of information describing your codebase to the tool. [21:27] With Pants V2 we took a very different approach. First of all, we said this needs to handle codebases as they are. So if
you have circular dependencies, it should handle them gracefully and automatically. And if you have multiple conflicting external dependencies in different parts of your codebase, which is pretty common, right, you need this version of whatever, Hadoop or NumPy or whatever it is, in this part of the codebase, and you have a different, conflicting version in this other part of the codebase, it should be able to handle that. If you have all sorts of dependency tangles and criss-crossing, things that are unpleasant and better not to have, but you have them, the tool should handle that. It should help you remove them if you want to, but it should not let them get in the way of adopting it. It needs to handle real-world codebases. The second thing is that it should not require you to write this crazy amount of metadata. And so with Pants V2, we leaned in very hard on dependency inference, which means you don't write these crazy BUILD files. You write very tiny ones that just say: here is some code in this language for the build tool to pay attention to. [22:44] But you don't have to add dependencies to them and edit them every time you change dependencies. Instead, the system infers dependencies by static analysis, and it does this at runtime. Almost all of the time, 99% of the time, the dependencies are obvious from import statements. [23:05] There are occasional exceptions, and you can customize this, because sometimes there are runtime dependencies that have to be inferred from, say, a string in a JSON file or whatever it is. So there are various ways to customize it, and of course you can always override it manually if you have to. Generally speaking, ninety-seven percent of the boilerplate that used to go into build files in those old systems, including Pants V1,
and I'm not claiming we didn't make the same mistake there, goes away with Pants V2, for exactly the reason that you mentioned. These tools, because they were designed to be adopted once, by a captive audience that has no choice in the matter, and designed around how that adopting codebase already is, are very hard to adopt outside of that organization. Adoption becomes a massive, sometimes multi-year project. We wanted to build something that you could adopt in days to weeks, that would be very easy to customize to your codebase, and that would not require these massive wholesale changes or huge amounts of metadata. And I think we've achieved that.

Yeah, I've always wondered: why couldn't constructing the build file be a part of the build? I know it's expensive to do that every time, so just like [24:28] other parts of the build that are expensive, you cache it and then you redo it when things change. And it sounds like you've done exactly that with Pants V2.

[24:37] We have done exactly that, and the results are cached. So the very first time you run something, dependency inference can take some time, and we are looking at ways to speed that up. I mean, no software system is ever done, right? It's extremely rare to declare something finished, so we are always looking at ways to speed things up. But yeah, we have done exactly what you mentioned. I should mention, we don't generate the dependencies into BUILD files; we don't edit BUILD files that you then check in. We do that a little bit.
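The import-based inference described above is easy to picture: parse each source file, collect its import statements, and map the imported modules back to the files in the repo that provide them. Here is a minimal sketch of that idea using the stdlib `ast` module. It is only illustrative of the technique, not how Pants actually implements it, and the module-to-file map is a hypothetical repo layout:

```python
import ast

# Hypothetical first-party repo layout: module name -> providing file.
MODULE_TO_FILE = {
    "app.db": "src/app/db.py",
    "app.models": "src/app/models.py",
}

def infer_deps(source: str) -> set[str]:
    """Infer first-party file dependencies from import statements."""
    deps = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        else:
            continue
        for name in names:
            if name in MODULE_TO_FILE:
                deps.add(MODULE_TO_FILE[name])
    return deps

deps = infer_deps("import app.db\nfrom app.models import User\n")
# deps == {"src/app/db.py", "src/app/models.py"}
```

Because the tool can recover this for itself, the checked-in metadata shrinks to the tiny marker-style BUILD files described above rather than hand-maintained dependency lists.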
So as I mentioned, with Pants V2 you do still need these little tiny BUILD files that just say: here is some code. They can typically be one line, almost like a marker file just to say, here is some code for you to pay attention to. We're even working on getting rid of those. We do have a little script that generates them one time, just to help you onboard. But [25:41] the dependencies really are just computed at runtime, on demand, as needed. So we don't have the problem of trying to automatically add to, or edit, an otherwise human-authored file that is then checked in. Generating and checking in files is problematic in many ways, especially when those files also have to accept human-written edits. So we just do away with all of that. The dependency inference happens at runtime, on demand, as needed, sort of lazily, and the information is cached: both in memory, since Pants V2 has a daemon that runs and caches a huge amount of state in memory, and on disk, so the results of dependency inference survive a daemon restart, et cetera.

I think that makes sense to me. My next question is going to be around why I would want to use Pants V2 for a smaller codebase. Usually with a smaller codebase, I'm not running into a ton of problems around the build. [26:55] Do you notice inflection points that people run into, where it's like, okay, my current build setup is not enough? What's the smallest codebase that you've seen that you think could benefit? Or is it any codebase in the world, and I should start with a better build system rather than just Python setup.py or whatever?

I think the dividing line is: will this codebase ever be used for more than one thing? [27:24] Take the Python example: if literally all this codebase will ever do is build this one distribution, then a top-level setup.py is all I need.
And you sometimes see this with open-source projects: the codebase is going to remain relatively small, say it's only ever going to be a few thousand lines, and even if I run the tests from scratch every single time, it takes under five minutes. Then you're probably fine. But I think there are two things to look at. The first is: am I going to be building multiple things in this codebase in the future, or am I doing so already? That is much more common with corporate codebases. You have to ask yourself: okay, my team is growing, more and more people are cooperating on this codebase. I want to be able to deploy multiple microservices, multiple cloud functions, multiple distributions or third-party artifacts, [28:41] multiple data-science jobs, whatever it is you're building. If you ever think you might have more than one, now's the time to think about how to structure the codebase and what tooling allows you to do that effectively. The other thing to look at is build times.
If you're using compiled languages, then obviously compilation, and in all cases testing: if you can already see that tests are taking five minutes, ten minutes, fifteen, twenty, surely you want some technology that lets you speed that up through caching, through concurrency, through fine-grained invalidation, namely, don't even attempt work that isn't necessary for the result that was asked for. Then it's probably time to start thinking about tools like this, because the earlier you adopt one, the easier it is to adopt. Don't wait until you've got a tangle of multiple setup.py files in the repo, and it's unclear how you manage them and how you keep their dependencies synchronized so there aren't version conflicts across these different projects. Specifically with Python, this is an interesting problem. With other languages, because of the compilation step in JVM languages or Go, you [30:10] encounter the need for a build system of some kind much, much earlier, and then you ask yourself what kind. With Python, you can get by for a while just running your linter and pytest directly, with everything all together in a single virtualenv. [30:52] But the Python tooling, as mighty as it is, is mostly not designed for larger codebases that deploy multiple things and have multiple different sets of [30:52] internal and external dependencies. The tooling generally implicitly assumes one top-level setup.py or pyproject.toml configuring everything. So if you're using Python, say for Django or Flask apps or for data science, and your codebase is growing, and you've hired a bunch of data scientists, and there's more and more code going in there, with Python you need to start thinking about what tooling allows you to scale the codebase.

No, I think I mostly resonate with that.
The first question that comes to my mind is, let's talk specifically about the deployment problem. If you're deploying to multiple AWS Lambdas or cloud functions or whatever, the first thought that comes to mind is that I can use separate Docker images, which let me easily produce a container image I can ship independently. Would you say that's not enough? I totally get that for the build-time problem a Docker image is not going to solve anything. But how about the deployment step?

[32:02] So again, with deployments, I think there are two ways a tool like this can really speed things up. One is: only build the things that actually need to be redeployed. Because the tool understands dependencies and can do change analysis, it can figure that out. One of the things that Pants V2 does is integrate with Git, so it natively understands how to work with Git diffs. You can say something like: show me all the, whatever, Lambdas, let's say, that are affected by changes between these two branches. [32:46] And it understands: these files changed, I know the transitive dependencies of those files, so I can see what actually needs to be redeployed. In many cases, many things will not need to be redeployed because they haven't changed. The other thing is a lot of performance improvements and process improvements around building those images.
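The change analysis just described, "which deployables are affected by this diff?", boils down to a reverse transitive closure over the dependency graph: start from the changed files and walk backwards through everything that depends on them. A toy sketch, with an entirely made-up graph (Pants exposes this kind of query through options like `--changed-since`, per its documentation):

```python
from collections import defaultdict

# Toy graph: file -> the files it depends on.
DEPS = {
    "lambda_a.py": {"shared.py"},
    "lambda_b.py": {"util.py"},
    "util.py": {"shared.py"},
    "shared.py": set(),
}

def reverse_graph(deps):
    """Invert the graph: file -> the files that depend on it."""
    rdeps = defaultdict(set)
    for src, targets in deps.items():
        for t in targets:
            rdeps[t].add(src)
    return rdeps

def affected(changed_files, deps):
    """Everything that transitively depends on any changed file."""
    rdeps = reverse_graph(deps)
    seen = set(changed_files)
    stack = list(changed_files)
    while stack:
        f = stack.pop()
        for dependent in rdeps[f]:
            if dependent not in seen:
                seen.add(dependent)
                stack.append(dependent)
    return seen
```

With this graph, a change to `util.py` only forces `lambda_b.py` to be redeployed, while a change to `shared.py` ripples out to everything, which is exactly the "tangled dependencies invalidate more" point from earlier in the conversation.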
So, for example, for Python specifically, we have an executable format called PEX, which stands for Python EXecutable: a single file that embeds all of the Python code that is needed for your deployable, and all of its transitive external requirements, bundled up into a single self-executing file. This allows you to do things like: if you have to deploy 50 of these, you can basically have a single base Docker image, [33:52] and then on top of that you add one layer for each of the fifty, where the only difference in that layer is the presence of the PEX file. Whereas without this, typically you would have fifty Docker images, in each of which you have to build a virtualenv, which means running [34:15] pip as part of building the image, and that gets slow and repetitive, and you have to do it fifty times. So even if you are deploying fifty different Docker images, we have ways of speeding that up quite dramatically, again because of things like dependency analysis, the PEX format, and the ability to build incrementally.

Yeah, I remember that at Dropbox we came up with our own format, "par", to bundle up a Python binary. I think par stood for Python Archive; I'm not entirely sure. But it did something remarkably similar to solve exactly this problem. It just takes so long, especially if you have a large Python codebase. I think that makes sense to me. The other thing one might ask is: with Python, you don't really have too long of a build time, is what you would guess, because there's nothing to compile. Maybe mypy takes some time to do static analysis, and of course your tests can take forever and you don't want to rerun them. But there isn't that much of a build time that you have to think about.
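As an aside on the single-file idea mentioned above (PEX, Dropbox's par): the stdlib's `zipapp` module shows the basic shape of it, a zip archive with a `__main__.py` entry point that the interpreter can execute directly. This is only a feel-for-it sketch; PEX goes much further (embedding resolved third-party requirements, interpreter selection, and so on), and all the paths here are hypothetical:

```python
import pathlib
import subprocess
import sys
import tempfile
import zipapp

with tempfile.TemporaryDirectory() as tmp:
    src = pathlib.Path(tmp) / "app"
    src.mkdir()
    # A tiny "deployable": __main__.py is what the archive runs.
    (src / "__main__.py").write_text("print('hello from a single file')\n")
    target = pathlib.Path(tmp) / "app.pyz"
    zipapp.create_archive(src, target)  # bundle the directory into one file
    out = subprocess.run([sys.executable, str(target)],
                         capture_output=True, text=True)

print(out.stdout.strip())
```

The Docker-layer point follows from the single-file property: fifty deployables can share one base image, with each image adding only one small layer containing its archive, instead of fifty separately built virtualenvs.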
Would you say that you agree with that, or are there issues that end up happening on real-world codebases?

[35:37] Well, that's a good question. The word "build" means different things to different people, and we've recently taken to saying "CI" more, because I think it's clearer what that means. But when I say build, or CI, I mean it in the extended sense: everything you do to go from human-written source code to a verified, tested, deployable artifact. It's true that for Python there's no compilation step, although arguably running mypy is really important, and now that I'm in the habit of using mypy, I will probably never not use it on Python code again. So there are [36:28] build-ish steps for Python, such as type checking, running code generators like Thrift or Protobuf, and, a big one, resolving third-party dependencies, running pip or Poetry or whatever it is you're using. Those are all build steps. But with Python, really the big, big thing is testing and packaging, and primarily testing. With Python you have to be even more rigorous about unit testing than with other languages, because you don't have a compiler catching whole classes of bugs. And again, mypy and type checking really help with that. So "build" to me, build in the large sense, includes running tests, includes packaging, and includes all the quality control that you run, typically in CI or on your desktop, in order to say: well, I've made some edits, and here's the proof that these edits are good and I can merge or deploy them.

[37:35] I think that makes sense to me. And I certainly saw it: with the limited amount of type checking you can do with Python, and mypy is definitely improving on this,
you just need to unit test a lot to get the same amount of confidence in your own code, and unit tests are not cheap. The biggest question that comes to my mind is: is Pants V2 focused on Python? Because I have a TypeScript codebase at my workplace, and I would love to replace the TypeScript compiler with something slightly smarter that could tell me: you know what, you don't need to run every unit test on every change.

[38:16] Great question. So when we launched Pants V2, which was two years ago, we focused on Python; that was the initial language we launched with, because you have to start somewhere. And in the ten years between the very Scala-centric work we were doing on Pants V1 and the launch of Pants V2, something really major happened in the industry: Python skyrocketed in popularity. Python went from being mostly a little scripting language around the edges of your quote-unquote real code, using Python like fancy Bash, to people building massive, multi-billion-dollar businesses entirely on Python codebases. A few things drove this. The biggest, probably, was that Python became the language of choice for data science, and we have strong support for those use cases. Another was that Django and Flask became very popular for writing web apps. And there were more and more intricate DevOps use cases, and Python is very popular for DevOps, for various good reasons. [39:28] So Python became super popular, and that was the first thing we supported in Pants V2. But we've since added support for Go, Java, Scala, Kotlin, and Shell. What we definitely don't have yet is JavaScript and TypeScript.
We are looking at that very closely right now, because it is the very obvious next thing we want to add. Actually, if any listeners have strong opinions about what that should look like, we would love to hear from them, or from you, on our Slack channels or in our GitHub discussions, where we are having some lively discussions about exactly this. Because the JavaScript [40:09] and TypeScript ecosystem is already very rich with tools, and we want to provide only value-add. We don't want to say: here's another paradigm you have to adopt; you've just finished replacing NPM with Yarn, and now you have to do this other thing. We don't want to be another flavor of the month. We only want to do work that leverages the existing ecosystem and adds value on top of it. This is what we do with Python, and it's one of the reasons our Python support is very, very strong, much stronger than any comparable tool out there: [40:49] a lot of leaning on the existing Python tool ecosystem, but orchestrating those tools in a way that brings rigor and speed to your builds.

I've used the word "we" a lot, and I want to clarify who "we" is here. There is Toolchain, the company, where we're working on SaaS and commercial solutions around Pants, which we can talk about in a bit. But there is a very robust open-source community around Pants that is not tightly held by Toolchain, in the way some other companies' open-source projects are. We have a lot of contributors and maintainers on Pants V2 who do not work at Toolchain but use Pants in their own companies and organizations. And so we have a very wide range of use cases and opinions that are brought to bear.
And this is very important because, as I mentioned earlier, we are not trying to design a system for one company's or one team's use case. We are working on a system we want [42:05] adopted over and over and over again at a wide variety of companies. So it's very important for us to have contributions and input from a wide variety of teams and companies and people, and it's very fortunate that we now do.

On that note, the thing that comes to my mind is that another benefit of a scalable build system like Pants or Bazel or Buck is that you don't have to learn various different commands when you are spelunking through a codebase, whether it's a Go codebase or a Java codebase or a TypeScript codebase. You just run pants build X, Y, Z, and it constructs the appropriate artifacts for you. At least that was my experience with Bazel. Is that something you're interested in? Does Pants V2 act as a meta layer over various other build systems, or is it much more specific and knowledgeable about the languages itself?

[43:09] I think your intuition is correct. The idea is that you should be able to do something like pants test, give it a path to a directory, and it understands what that means: oh, this directory contains Python code, therefore I should run pytest in this way; oh, it also contains some JavaScript code, so I should run the JavaScript tests in this way. It provides a conceptual layer above all the individual tools that gives you uniformity across frameworks and across languages. One way to think about it is: [43:52] the tools are all very imperative. You have to run each one with a whole set of flags and inputs, and you have to know how to use each one separately. It's like having just the blades of a Swiss Army knife with no actual Swiss Army knife.
A tool like Pants says: okay, we will encapsulate all of that complexity into a much simpler command-line interface. So you can run, like I said, pants test or pants lint or pants fmt, and it understands. You asked me to format your code; I see that you have Black and isort configured as formatters, so I will run them, and I happen to know that because formatting can change the source files, I have to run them sequentially. But when you ask for lint, nothing is changing the source files, so I know I can run multiple linters concurrently. That sort of logic. Different tools have different ways of being configured, of telling them what you want, but [44:58] Pants V2 encapsulates all of that away from you. You get this uniform, simple command-line interface that abstracts away a lot of the specifics of these tools and lets you run simple commands. And the reason this is important is that this extra layer of indirection is partly what allows Pants to apply things like caching [45:25] and invalidation and concurrency. Because the way to think about it is not "I am telling Pants to run tests"; it is "I am telling Pants that I want the results of the tests," which is a subtle difference. Pants then has the ability to say: well, I don't actually need to run pytest on all these tests, because I already have cached results for some of them, so I will return those from cache. That layer of indirection not only simplifies the UI, it provides the point where you can apply things like caching and concurrency.

Yeah, I think every programmer wants to work with declarative tools. I think SQL is one of those things where you don't have to know how the database works; if SQL were somewhat easier, that dream would be fulfilled.
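The "goal above the tools" layer described here can be pictured as a tiny dispatcher: the user asks for a goal, and per-file-type rules decide which underlying tools to run, plus whether those tools may run concurrently (read-only goals like lint) or must run sequentially (mutating goals like fmt). Everything below is illustrative stub data, not Pants' actual API:

```python
# goal -> file extension -> the underlying tool (stub names).
RULES = {
    "test": {".py": "pytest", ".js": "jest"},
    "lint": {".py": "flake8", ".js": "eslint"},
    "fmt":  {".py": "black"},
}
# Goals that rewrite source files must not run their tools in parallel.
MUTATING_GOALS = {"fmt"}

def plan(goal: str, files: list[str]) -> tuple[list[str], bool]:
    """Pick the right tools for these files; report if parallel-safe."""
    tools = []
    for f in files:
        ext = "." + f.rsplit(".", 1)[-1]
        tool = RULES[goal].get(ext)
        if tool and tool not in tools:
            tools.append(tool)   # dedupe, keep first-seen order
    return tools, goal not in MUTATING_GOALS

plan("test", ["src/a.py", "web/b.js", "src/c.py"])
# (['pytest', 'jest'], True): two runners, safe to run concurrently
```

The indirection is the point: because the user asked for a goal rather than a specific tool invocation, the layer in between is free to consult caches, schedule concurrency, or skip work entirely.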
But I think we're all getting there. I guess the next question I have is: what benefit do I get from using the Toolchain SaaS product versus Pants V2? When I think about build systems, I think about local development and I think about CI. [46:29] Why would I want to use the SaaS product?

That's a great question. Pants does a huge amount of heavy lifting, but in the end it is restricted to the resources of the machine on which it's running. When I talk about cache, I'm talking about the local cache on that machine. When I talk about concurrency, I'm talking about the cores on your machine. Maybe your CI machine has four cores and your laptop has eight cores, so that's the amount of concurrency you get, which is not nothing at all, which is great. [47:04] But as I mentioned, I worked at Google for many years, and then at other companies built around distributed systems, so I come from a distributed-systems background. And if the problem is a piece of work taking a long time because of single-machine resource constraints, the obvious answer is to distribute the work across a distributed system. That's what Toolchain offers, essentially. [47:30] You configure Pants to point to the Toolchain system, which is currently SaaS, and we will have some news soon about some on-prem solutions. And now the cache I mentioned is not just "did this test run with these exact inputs before, on my machine, by me, while I was iterating," but "has anyone in my organization, or any CI run, run this test before with these exact inputs?" So imagine a very common situation: you come in in the morning and pull all the changes that have happened since you last pulled. Those changes presumably passed CI, right? And CI populated the cache. So now when I run tests, I can get cache hits from the CI machine.

[48:29] Pretty much, yeah.
And then with concurrency: let's say, post-cache, there are still 200 tests that need to be run. I could run them eight at a time on my machine, or the CI machine could run them, say, four at a time on four cores, or I could run 50 or 100 at a time on a cluster of machines. That's where, as your codebase gets bigger and bigger, some massive, massive speedups come in. I should mention that the remote execution I just described is something we're about to launch; it is not available today. The remote caching is. The other aspect is observability. When you run builds on your laptop or in CI, they're ephemeral. The output gets lost in the scrollback, just a wall of text that disappears. [49:39] With Toolchain, all of that information is captured and stored in structured form, so you have the ability to see past builds and build behavior over time, to search builds and drill down into individual builds, and to ask: how often does this test fail? When did this get slow? All that kind of information. So you get enterprise-level observability into a very core piece of developer productivity, which is iteration time. The time it takes to run tests, build deployables, and pass all the quality-control checks so that you can merge and deploy code directly relates to time to release. It directly relates to some of the core metrics of developer productivity: how long is it going to take to get this thing out the door? So having the ability both to speed that up dramatically by distributing the work, and to have observability into what work is going on, that is what Toolchain provides [51:01] on top of the already, if I may say, pretty robust open-source offering.

[51:07] Pants on its own gives you a lot of advantages, but it runs standalone.
Plugging it into a larger distributed system really unleashes the full power of Pants as a client to that system.

[51:21] No, I think what I'm seeing is this interesting convergence. There are several companies trying to do this for Bazel, like BuildBuddy and EngFlow. It really sounds like the build system of the future. [51:36] Ten years from now, no one will really be developing on their local machines anymore. There's GitHub Codespaces on one side, where you're doing all your development remotely. [51:46] I've always found it somewhat odd that the development that happens locally, and whatever scripts you need to run to provision your CI machine to run the same set of tests, are so different that sometimes you can never tell why something's passing locally and failing in CI, or vice versa. There really should just be one execution layer that can say: I'm going to build or run at a certain commit, and that's shared between the local user and the CI user. And your CI script becomes something as simple as pants build //... and it builds the whole codebase for you. So yeah, I certainly feel like the industry is moving in that direction. I'm curious whether you think the same. Do you have an even stronger vision of how folks will be developing ten years from now? What do you think it's going to look like?

Oh no, I think you're absolutely right. If anything, you're underselling it. I think this is how all development should be, and will be, in the future, for multiple reasons. One is performance. [52:51] Two is the problem of different platforms. A big thorny problem today is: I'm developing on my MacBook, so when I run tests locally, when I run anything locally, it's running on my MacBook, but that's not our deployable, right? Typically your deployment platform is some flavor of Linux.
[53:17] With the distributed-system approach, you can run the work in containers that exactly match your production environment. You don't even have to care about "will my tests pass on macOS?", or need CI that runs on macOS just so developers can pass tests on macOS, which is somehow correlated with success in the production environment. You can cut away a whole suite of those problems. Today, frankly, as I mentioned earlier, you can get cache hits on your desktop from CI populating the cache, but that is hampered by differences in platform, and by other differences in local setup that we are working to mitigate. But imagine a world in which build logic is not actually running on your MacBook, or if it is, it's running in a container that exactly matches the container you're targeting. It cuts away a whole suite of problems around platform differences and allows you to focus on just the platform you're actually going to deploy to.

[54:42] And then there's the speed and the performance of being able to work and deploy, and the visibility it gives you into the productivity and the operational work of your development team. I really think this absolutely is the future. There is something very strange about how, in the last 15 years or so, so many business functions have had the distributed-systems treatment applied to them. There are now these massive, valuable companies providing systems that support sales, systems that support marketing, systems that support HR, operations, product management, systems that support every business function, and there need to be more of these that support engineering as a business function. [55:48] And so I absolutely think the idea that I need a really powerful laptop so that running my tests can take thirty minutes instead of forty minutes, when in reality it should take three minutes.
That's not the future right the future is to as it has been for so many other systems to the web the laptop is that i can take anywhere is.Particularly in these work from home times, is a work from anywhere times, is just a portal into the system that is doing the actual work.[56:27] Yeah. And there's all these improvements across the stack, right? When I see companies like Versel, they're like, what if you use Next.js, we provide the best developer platform forthat and we want to provide caching. Then there's like the lower level systems with build systems, of course, like bands and Bazel and all that. And at each layer, we're kindof trying to abstract the problem out. So to me, it still feels like there is a lot of innovation to be done. And I'm also going to be really curious to know, you know, there'sgoing to be like a few winners of this space, or if it's going to be pretty broken up. And like everyone's using different tools. It's going to be fascinating, like either way.Yeah, that's really hard to know. I think one thing you mentioned that I think is really important is you said your CI should be as simple as just pants build colon, colon, or whatever.That's our syntax would be sort of pants test lint or whatever.I think that's really important. So.[57:30] Today one of the big problems with see i. Which is still growing right now home market is still growing is more more teams realize the value and importance of automated.Very aggressive automated quality control. But configuring CI is really, really complicated. 
Every CI provider have their own configuration language,and you have to reason about caching, and you have to manually construct cache keys to the extent,that caching is even possible or useful.There's just a lot of figuring out how to configure and set up CI, And even then it's just doing the naive thing.[58:18] So there are a couple of interesting companies, Dagger and Earthly, or interesting technologies around simplifying that, but again, you still have to manually,so they are providing a, I would say, better config and more uniform config language that allows you to, for example, run build steps in containers.And that's not nothing at all.[58:43] Um, but you still manually creating a lot of, uh, configuration to run these very coarse grained large scale, long running build steps, you know, I thinkthe future is something like my entire CI config post cloning the repo is basically pants build colon, colon, because the system does the configuration for you.[59:09] It figures out what that means in a very fast, very fine grained way and does not require you to manually decide on workflows and steps and jobs and how they all fit together.And if I want to speed this thing up, then I have to manually partition the work somehow and write extra config to implement that partitioning.That is the future, I think, is rather than there's the CI layer, say, which would be the CI providers proprietary config or theodagger and then underneath that there is the buildtool, which would be Bazel or Pants V2 or whatever it is you're using, could still be we make for many companies today or Maven or Gradle or whatever, I really think the future is the integration of those two layers.In the same way that I referenced much, much earlier in our conversation, how one thing that stood out to me at Google was that they had the insight to integrate the version control layer and the build tool to provide really effective functionality there.I think the build tool being the thing that knows 
about your dependencies.[1:00:29] Can take over many of the jobs of the c i configuration layer in a really smart really fast. Where is the future where essentially more and more of how do i set up and configure and run c i is delegated to the thing that knows about your dependencies and knows about cashing and knows about concurrency and is able,to make smarter decisions than you can in a YAML config file.[1:01:02] Yeah, I'm excited for the time that me as a platform engineer has to spend less than 5% of my time thinking about CI and CD and I can focus on other things like improving our data models rather than mucking with the YAML and Terraform configs. Well, yeah.Yeah. Yeah. Today you have to, we're still a little bit in that state because we are engineers and because we, the tools that we use are themselves made out of software. There's,a strong impulse to tinker and there's a strong impulse sake. Well, I want to solve this problem myself or I want to hack on it or I should be able to hack on it. And that's, you should be able to hack on it for sure. But we do deserve more tooling that requires less hacking,and more things and paradigms that have tested and have survived a lot of tire kicking.[1:02:00] Will we always need to hack on them a little bit? Yes, absolutely, because of the nature of what we do. I think there's a lot of interesting things still to happen in this space.Yeah, I think we should end on that happy note as we go back to our day jobs mucking with YAML. Well, thanks so much for being a guest. I think this was a great conversation and I hope to have you again for the show sometime.Would love that. Thanks for having me. It was fascinating. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.softwareatscale.dev
In this week's show, I first clear up some goofs from last week. Then we talk about ID tags on trucks, Hurricane Ian HOS exemptions and tips on hauling FEMA loads, a cool video explaining deregulation, what makes a professional driver, and finally we discuss electric vehicle adoption and I finally learned how California got so much clout when it comes to emissions standards. All that, plus more news stories, and of course, listener feedback involving Mario Kart, trucker spy cameras, HOS elimination, and podcast reviews. This episode of Trucker Dump is sponsored by: Porter Freight Funding - So many services to offer, including Factoring, Dispatching, Freight Brokering, Fuel Cards, Insurance, and Compliance. Call 205-397-0934 to learn more. News Links: What are the Autonomy Levels for Autonomous Vehicles? from Perforce.com International A26 engine-related recall: Potential connecting-rod issues in 7,000 trucks from OverdriveOnline.com Nikola recalls battery-electric trucks for seat belt issue from OverdriveOnline.com FMCSA issues HOS waiver for Hurricane Ian relief haulers in eight states from CDLLife.com FreightWaves CEO Fuller offers advice to truckers hauling FEMA loads from FreightWaves.com Canada ends border vaccine mandate, OOIDA calls on Biden to follow suit from OverdriveOnline.com Federal Trade Commission joining independent contractor fray from FreightWaves.com FMCSA considering 5-year ‘special' waiver for propane haulers from FreightWaves.com The FMCSA eyes plan to require electronic identification technology for commercial vehicles from CDLLife.com Submit your comments here Federal agency seeks input on changes to hazmat registration fees from LandLine.media Arizona lane filtering law in effect from LandLine.media Love's opens new truck stop in Iowa with nearly 70 truck parking spaces from CDLLife.com New York to Ban Sale of Gas Cars by 2035 from ttnews.com (Transport Topics) Why the Clean Air Act's Special Treatment of California Is Permissible Even in 
Light of the Equal-Sovereignty Notion Invoked in Shelby County from Justia.com Toyota's CEO Says EV Adoption Will Take Longer Than Expected from ttnews.com (Transport Topics) Eye-opening video breaks down ‘The Decision That Broke American Trucking' from CDLLife.com Time to sign up for Truck to Success course from LandLine.media Want to improve retention in trucking? Define 'professionalism' to set the standard, build value from OverdriveOnline.com Listener Feedback: Andy Wadel (@Peterbilt Andy) joins the Trucker Dump Slack group. @Goose shares some insight on how speeders are caught in Australia. Shannon left a review on Apple Podcasts and I'm still trying to decide if it's good or not.
While we usually cover startups that are just at the beginning of their journey, today we talk about what happens to a startup when someone buys it. In the studio is Hannes Linno, head of Perforce Estonia, who now oversees, among others, the former Estonian startup ZeroTurnaround. Perforce's business is precisely to buy up startups, put the teams to work, and sell developer-focused SaaS products through a proper sales machine. The hosts are Henrik Roonemaa and Taavi Kotka. Theme music by Paul Oja. Restart is supported by Katana: the manufacturers' best helper.
TalkingHeadz is an interview format podcast featuring the movers and shakers of enterprise communications - we also have great guests. In this episode Dave and Evan discuss employee engagement with Gideon Pridor, CMO and Chief Storyteller of Workvivo. Gideon drives marketing strategy and execution for technology start-ups with a penchant for international business development. Before Workvivo (and a well-deserved sabbatical) Gideon was VP Marketing at TravelPerk, one of the world's fastest-growing SaaS companies, Perfecto (acquired by Perforce), and others. Gideon is a frequent speaker at industry events, an HRtech enthusiast, and a startup mentor helping companies build their story and tell it to the world.
About Tom: Tom enjoys being a bridge between people and technology. When he's not thinking about ways to make enterprise demos less boring, Tom enjoys spending time with his wife and dogs, reading, and gaming with friends. Links Referenced: LaunchDarkly: https://launchdarkly.com Heidi Waterhouse Twitter: https://twitter.com/wiredferret Transcript: Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Corey: Couchbase Capella Database-as-a-Service is flexible, full-featured and fully managed with built-in access via key-value, SQL, and full-text search. Flexible JSON documents aligned to your applications and workloads. Build faster with blazing fast in-memory performance and automated replication and scaling while reducing cost. Capella has the best price performance of any fully managed document database. Visit couchbase.com/screaminginthecloud to try Capella today for free and be up and running in three minutes with no credit card required. Couchbase Capella: make your data sing. Corey: This episode is sponsored by our friends at Revelo. Revelo is the Spanish word of the day, and it's spelled R-E-V-E-L-O. It means “I reveal.” Now, have you tried to hire an engineer lately? I assure you it is significantly harder than it sounds. One of the things that Revelo has recognized is something I've been talking about for a while, specifically that while talent is evenly distributed, opportunity is absolutely not. They're exposing a new talent pool to, basically, those of us without a presence in Latin America via their platform.
It's the largest tech talent marketplace in Latin America with over a million engineers in their network, which includes—but isn't limited to—talent in Mexico, Costa Rica, Brazil, and Argentina. Now, not only do they wind up screening all of their talent on English ability, as well as, you know, their engineering skills, but they go significantly beyond that. Some of the folks on their platform are hands down the most talented engineers that I've ever spoken to. Let's also not forget that Latin America has high time zone overlap with what we have here in the United States, so you can hire full-time remote engineers who share most of the workday with your team. It's an end-to-end talent service, so you can find and hire engineers in Central and South America without having to worry about, frankly, the colossal pain of cross-border payroll and benefits and compliance, because Revelo handles all of it. If you're hiring engineers, check out revelo.io/screaming to get 20% off your first three months. That's R-E-V-E-L-O dot I-O slash screaming. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Today's promoted episode is brought to us by our friends at LaunchDarkly. And it's always interesting when there's a promoted guest episode because they generally tend to send someone who has a story to tell in different ways. Sometimes they send me customers of theirs. Other times they send me executives. And for this episode, they have sent me Tom Totenberg, who's a senior solutions engineer at LaunchDarkly. Tom, thank you for drawing the short straw. It's appreciated. Tom: [laugh]. Anytime. Thank you so much for having me, Corey. Corey: So, you're a senior solutions engineer, which in many different companies is interpreted differently, but one of the recurring themes that tends to pop up is that it's often a different way of saying sales engineer, because if you say sales, everyone hisses and recoils when you enter the conversation.
Is that your experience or do you see your role radically differently? Tom: Well, I used to be one of those people who did recoil when I heard the word sales. I was raised in a family where you didn't talk about finances, you know? That's considered to be faux pas, and when you hear the word sales, you immediately think of a car lot. But what I came to realize is that, especially when we talk about cloud software, or any sort of community where you start to run into the same people at conferences over and over and over again, it turns out the good salespeople are the ones who actually try to form relationships and try to solve problems. And I realized that, oh, I like to work with those people. It's pretty exciting. It's nice to be aspirational about what people can do and bring in the technical chops to see if you can actually make it happen. So, that's where I fit in. Corey: The way that I've always approached it has been rather different. Because before I got into tech, I worked in sales a bunch of times, coming up from, I guess, clawing your way up ("telesales" was a polite way of describing it): back in the days before there were strong regulations against it, calling people at dinner to sell them credit cards. And what's worse is I was surprisingly effective at it for a kid who, like, grew up in a family where we didn't talk about money. And it's easy to judge an industry by its worst examples. Another one of these would be recruiting, for example. Everyone talks about how terrible third-party recruiters are because they're referring to the ridiculous spray-and-pray model of just blasting out emails to everything that holds still long enough and meets a keyword. And yeah, I've also met some recruiters that are transformative as far as the conversations you have with them go. But it's the same with sales.
It's, “Oh, well, you can't be any fun to talk to because I had a really bad experience buying a used car once and my credit was in the toilet.”Tom: Yeah, exactly. And you know, I have a similar experience with recruiters coming to LaunchDarkly. So, not even talking about the product; I was a skeptic, I was happy where I was, but then as I started talking to more and more people here, I'm assuming you've read the book Accelerate; you probably had a hand in influencing part of it.Corey: I can neither confirm nor deny because stealing glory is something I only do very intentionally.Tom: Oh okay, excellent. Well, I will intentionally let you have some of that glory for you then. But as I was reading that book, it reminded me again of part of why I joined LaunchDarkly. I was a skeptic, and they convinced me through everyone that I talked to just what a nice place it is, and the great culture, it's safe to fail, it's safe to try stuff and build stuff. And then if it fails, that's okay. This is the place where that can happen, and we want to be able to continue to grow and try something new.That's again, getting back to the solutions engineer, sales engineer part of it, how can we effectively convey this message and teach people about what it is that we do—LaunchDarkly or not—in a way that makes them excited to see the possibilities of it? So yeah, it's really great when you get to work with those type of people, and it absolutely shouldn't be influenced by the worst of them. Sometimes you need to find the right ones to give you a chance and get in the door to start having those conversations so you can make good decisions on your own, not just try to buy whatever someone's—whatever their initiative is or whatever their priority is, right?Corey: Once upon a time when I first discovered LaunchDarkly, it was pretty easy to describe what you folks did. Feature flags. 
For longtime listeners of the show, and I mean very longtime listeners of the show, your colleague Heidi Waterhouse was guest number one. So, I've been talking to you folks about a variety of different things in a variety of different ways. But yeah, “LaunchDarkly. Oh, you do feature flags.” And over time that message has changed somewhat into something I have a little bit of difficulty, to be perfectly honest with you, in pinning down. At the moment we're recording this, if I pull up launchdarkly.com, it says, “Fundamentally change how you deliver software. Innovate faster, deploy fearlessly, and make each release a masterpiece.” And I look at the last release I pushed out, which wound up basically fixing a couple of typos there, and it's like, “Well, shit. Is it going to make me sign my work? Because I'm kind of embarrassed by a lot of it.” So, it's aspirational, I get it, but it also somehow [occludes 00:05:32] a little bit of meaning. What is it you'd say it is you do here? Tom: Oh, Office Space. Wonderful. Good reference. And also, to take about 30 seconds back, Heidi Waterhouse, what a wonderful human. wiredferret on Twitter. Please, please go look her up. She's got just always such wonderful things to say. So—Corey: If you don't like Heidi Waterhouse, it is a near certainty it is because you've not yet met her. She's wonderful. Tom: Exactly. Yes, she is. So, what is it we'd say we do here? Well, when people think about feature flags—or at this point now, ‘feature management,' which is a broader scope—that's the term that we're using now, it's really talking about that last bit of software delivery, the last mile, the last leg, whatever—you know, when you're pushing the button, and it's going to production.
So, you know, a feature flag, if you ask someone five or ten years ago, they might say, oh, it's a fancy if statement controlled by a config file or controlled by a database.But with a sort of modern architecture, with global delivery, instant response time or fraction of a second response time, it's a lot more fundamental than that. That's why the word fundamental is there: Because it comes down to psychological safety. It comes down to feeling good about your life every day. So, whether it is that you're fixing a couple typos, or if you're radically changing some backend functionality, and trying out some new sort of search algorithm, a new API route that you're not sure if it's going to work at scale, honestly, you shouldn't have to stay up at night, you shouldn't have to think about deploying on a weekend because you should be able to deploy half-baked code to production safely, you should be able to do all of that. And that's honestly what we're all about.Now, there's some extra elements to it: Feedback loops, experimentation, metrics to make sure that your releases are doing well and doing what you anticipated that they would do, but really, that's what it comes down to is just feeling good about your work and making sure that if there is a fire, it's a small fire, and the entire audience isn't going to get part of the splash zone, right? We're making it just a little safer. Does that answer your question? Is that what you're getting at? Or am I still just speaking in the lingo?Corey: That gets it a lot closer. One of the breakthrough moments—of course I picked it up from one of Heidi's talks—is feature flag seems like a front end developer thing, yadda, yadda, yadda. And she said historically, yeah, in some ways, in some cases, that's how it started. But think about it this way. Think about separating out configuration from your deploy process. And what would that mean? 
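Tom's "fancy if statement" description is easy to make concrete. Here is a minimal sketch of the idea; the flag name, the in-memory flag store, and the percentage-rollout rule are all illustrative assumptions, not LaunchDarkly's actual SDK or API:

```python
import hashlib

# Illustrative in-memory flag store; a real system would fetch this from
# a flag service at runtime rather than a hard-coded dict.
FLAGS = {
    "new-search-algorithm": {"enabled": True, "rollout_percent": 10},
}

def flag_enabled(flag_key: str, user_id: str, default: bool = False) -> bool:
    """The 'fancy if statement': decide per flag, per user, at runtime."""
    flag = FLAGS.get(flag_key)
    if flag is None or not flag["enabled"]:
        return default
    # Stable percentage rollout: hash flag+user into a 0-99 bucket so the
    # same user always gets the same answer for the same flag.
    digest = hashlib.sha256(f"{flag_key}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < flag["rollout_percent"]

# The call site really is just an if statement, which is the whole point:
def search(query: str, user_id: str) -> str:
    if flag_enabled("new-search-algorithm", user_id):
        return f"new-engine:{query}"  # half-baked code stays dark for most users
    return f"old-engine:{query}"
```

Because the decision is made at runtime from configuration, shipping the code and releasing the feature become separate events, which is what makes deploying half-baked code to production safe.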
What would that entail?And I look at my current things that I have put out there, and there is no staging environment, my feature branches main, and what would that change? In my case, basically nothing. But that's okay. Because I'm an irresponsible lunatic who should not be allowed near anything expensive, which is why I'm better at stateless things because I know better than to take my aura near things like databases.Tom: Yeah. So, I don't know how old you are Corey. But back—Corey: I'm in my mid-30s, which—Tom: Hey—Corey: —enrages my spouse who's slightly older. Because I'm turning 40 in July, but it's like, during the pandemic, as it has for many of us, the middle has expanded.Tom: There you go. Right. Exa—[laugh] exactly. Can neither confirm nor deny. You can only see me from about the mid-torso up, so, you know, you're not going to see whether I've expanded.But when we were in school doing group projects, we didn't have Google Docs. We couldn't see what other people were working on. You'd say, “Hey, we've got to write this paper. Corey, you take the first section, I'll take the second section, and we'll go and write and we'll try to squish it back together afterward.” And it's always a huge pain in the ass, right? It's terrible. Nobody likes group projects.And so the old method of Gitflow, where we're creating these feature branches and trying to squish them back later, and you work on that, and you work on this thing, and we can't see what each other are doing, it all comes down to context switching. It is time away from work that you care about, time away from exciting or productive work that you actually get to see what you're doing and put it into production, try it out. Nobody wants to deal with all the extra administrative overhead. And so yeah, for you, when you've got your own trunk-based development—you know, it's all just main—that's okay. 
When we're talking about teams of 40, 50, 100, 1000, it suddenly becomes a really big deal if you were to start to split off and get away from trunk-based development, because there's so much extra work that goes into trying to squish all that work back together, right? So, nobody wants to do all the extra stuff that surrounds getting software out there. Corey: It's toil. It feels consistently like it is never standardized, so you always have to wind up rolling your own CI/CD thing for whatever it is. And forget between jobs; between different repositories and building things out, it's, “Oh, great. I get to reinvent the wheel some more.” It's frustrating. Tom: [laugh]. It's either that or find somebody else's wheel that they put together and see if you can figure out where all those spokes lead off to. “Is this secure? I don't know.”
That one's deprecated.” Well, let's see if it works first, and then we'll worry about that later.Corey: Exactly. Security problems? Whatever. It's a Lambda function. What do I care?Tom: Yeah, it's fine. [laugh]. Exactly. Yeah. So, a lot of this is hypothetical for someone in my position, too, because I didn't ever get formal training as a software developer. I can copy and paste from Stack Overflow with the best of them and there's all sorts of resources out there, but really the people that we're talking to are the ones who actually live that day in, day out.And so I try to step into their shoes and try to feel that pain. But it's tough. Like, you have to be able to speak both languages and try to relate to people to see what are they actually running into, and is that something that we can help with? I don't know.Corey: The way that I tend to think about these things—and maybe it's accurate, and maybe it's not—it's just, no one shows up hoping to do a terrible job at work today, but we are constrained by a whole bunch of things that are imposed upon us. In some of the more mature environments, some of that is processes there for damn good reasons. “Well, why can't I just push everything I come up with to production?” “It's because we're a bank, genius. How about you think a little bit before you open your mouth?”Other times, it's because well, I have to go and fight with the CI/CD system, and I'm just going to go ahead and patch this one-line change into production. Better processes, better structure have made that a lot more… they've made it a lot easier to be able to do things the right way. But I would say we're nowhere near its final form, yet. There's so much yak-shaving that has to go into building out anything that it's frustrating, on some level, just all of the stuff you have to do, just to get the scaffolding in place to write nonsense. 
I mean, back when they announced Lambda functions it was, “In the future, the only code you'll write is business logic.” Yeah, well, I use a crap-ton of Lambda here and it feels like most of the code I write is gluing all of the weird formats and interchanges together in different APIs. Not a lot of business logic in that; an awful lot of JSON finickiness. Tom: Yeah, I'm with you. And especially at scale, I still have a hard time wrapping my mind around how all of that extra translation is possibly going to give the same sort of performance and same sort of long-term usability, as opposed to something that just natively speaks the same language end-to-end. So yeah, I agree, there's still some evolution, some standardization that still needs to happen, because otherwise we're going to end up with a lot of cruft at various points in the code to, just like you said, translate and make sure we're speaking the same language. Getting back to process, though: I spent a good chunk of my career working with companies that are, I would say, a little more conservative, talking to things like automotive companies or medical device manufacturers. Very security-conscious, compliant places. And so agile is a four-letter word for them, right, [laugh] where going faster automatically means we're being dangerous, because what would the change control board say? And so there's absolutely a mental shift that needs to happen on the business side. And developers are fighting this cultural battle, just to try to say, hey, it's better if we can make small iterative changes, there is less risk if we can make small, more iterative changes. And convincing people who have never been exposed to software, or who don't know the ins and outs of what it takes to get something from my laptop to the cloud or production or, you know, wherever: that's a battle that needs to be fought before you can even start thinking about the tooling.
Living in the Midwest, there's still a lot of people having that conversation. Corey: So, you are clearly deep in the weeds of building and deploying things into production. You're clearly deep into the world of explaining various solutions to different folks, and clearly you have the obvious background for this. You majored in music. Specifically, you got a master's in it. So, other than the obvious parallel of you continuing to sing for your supper, how do you get from there to here? Tom: Luck and [laugh] natural curiosity. Corey, right now you are sitting on the desk that is also housing my PC gaming computer, right? I've been building computers just to play video games since I was a teenager. And that natural curiosity really came in handy because when I—like many people—realized that oh, no, the career choice that I made when I was 18 ended up being not the career choice that I wanted to pursue for the rest of my life, you have to be able to make a pivot, right, and start to apply some of the knowledge that you got towards some other industries. So, like many folks who are now solutions engineers—there's no degree for solutions engineering, you can't go to school for it—everyone comes from somewhere else. And so in my case, that just happened to be music theory, which was all pedagogy and teaching and breaking down big, complex pieces of music one note at a time, doing analysis, figuring out what's going on underneath the hood. And all of those are transferable skills that carry over to software, right? You open up some giant wall of spaghetti code and you have to start following the path and breaking it down, because every piece is easy one note at a time, every bit of code—in theory—is easy one line at a time, or one function at a time, one variable at a time.
You can continue to break it down further and further, right? So, it's all just taking the transferable skills that you may not see how they get transferred, but then bringing them over to share your unique perspective, because of your background, to wherever it is you're going. In my case, it was tech support, then training, and then solutions engineering. Corey: There's a lot to be said for blending different disciplines. I think that in the aughts at least, and possibly into the teens, there was a bias for hiring people who look alike. And no, I'm not referring to the folks who are the white dudes you and I clearly present as, but the people with a similar background of, “Oh, you went to these specific schools”—as long as they're Stanford—“And you majored in a narrow list of things”—as long as they're all computer science. And then you wind up going into the following type of role because this is the pedigree we expect, and everything, soup to nuts, is aligned around that background and experience. Where you would find people who had been working in the industry for ten years, and they would bomb the interview, because it turns out that most of us don't spend our days implementing quicksort on whiteboards or doing other algorithmic-based problems. We're mostly pushing pixels around a screen hoping to make ourselves slightly happier than we were. Here we are. And that becomes a strange world; it becomes a really, really weird moment, and I don't know what the answer is for fixing any of that. Tom: Yeah, well, if you're not already familiar with a quote, you should be, which is that—and I'm going to paraphrase here—“Diverse backgrounds lead to diversity in thought,” right? And that presents additional opportunities, additional angles to solve whatever problems you're encountering. And so you're right, you know, we shouldn't be looking for people who have the specific background that we are looking for. How is it described in Accelerate?
Can you tell that I read it recently? We should be looking for capabilities, right? Are you capable? Do you have the capacity to do the problem-solving, the logic? And of course, some education or experience to prove that, but are you the sort of person who will be able to tackle this challenge? It doesn't matter, right, if you've handled that specific thing before, because if you've handled that specific thing before, you're probably going to implement it the same way again, even if that's not the appropriate solution this time. So, scrap that and say, let's find the right people, let's find people who can come up with creative solutions to the problems that we're facing. Think about ways to approach it that haven't been done before. Of course, don't throw out everything with the—you know, the bathwater out with the baby, or whatever that is—but come in with some fresh perspectives and get it done. Corey: I really wish that there was more of an acceptance for that. I think we're getting there. I really do, but it takes time. And it does pay dividends. I mean, that's something I want to talk to you about. I love the sound of my own voice. I wouldn't have two podcasts if I didn't. The counterargument, though, is that there's an awful lot of things that get, you know, challenging, especially when, unlike in a conference setting, most people consider it rude to get up and walk out halfway through. When we're talking and presenting information to people during a pandemic situation, well, that changes a lot. What do you do to retain people's interest? Tom: Sure. So, Covid really did a number on anyone who needs to present information or teach. I mean, just ask the millions of elementary, middle school, and high schoolers out there, even the college kids. Everyone who's still getting their education suddenly had to switch to remote learning. Same thing in the professional world.
If you are doing trainings, if you're doing implementation, if you're doing demos, if you're trying to convey information to a new audience, it is so easy to get distracted at the computer. I know this firsthand. I'm one of those people where if I'm sitting in an airport lobby and there's a TV on, my eyes are glued to that screen. That's me. I have a hard time looking away. And the same thing happens to anyone who's on the receiving end of any sort of information sharing, right? You've got Slack blowing you up, you've got email that's pinging you, and that's bound to be more interesting than whatever the person on the screen is saying. And so I felt that very acutely in my job. And there's a couple of good strategies around it, right, which is, we need to be able to make things interactive. We shouldn't be monologuing like I am doing to you right now, Corey. We shouldn't be [laugh] just going off on tangents that are completely irrelevant to whoever's listening. And there's ways to make it more interactive. I don't know if you are familiar, or how much you've watched Twitch, but in my mind, the same sorts of techniques, the same sorts of interactivity that Twitch streamers are doing, we should absolutely be bringing that to the business world. If they can keep the attention of 12-year-olds for hours at a time, why can we not capture the attention of business professionals for an hour-long meeting, right? There's all sorts of techniques and learnings that we can do there.

Corey: The problem I keep running into is, if you go stumbling down that pathway into the Twitch streaming model, I found it awkward the few experiments I've made with it, because unless I have a whole presentation ready to go and I'm monologuing the whole time, the interactive part, with the delay built in and a lot of ‘um' and ‘ah' and waiting and not really knowing how it's going to play out and going seat of the pants, gets a little challenging in some respects.

Tom: Yeah, that's fair.
Sometimes it can be challenging. It's risky, but it's also higher reward. Because if you are monologuing the entire time, who's to say that halfway through, the content that you are presenting is content that they want to actually hear, right? Obviously, we need to start from some sort of fundamental place and set the stage, say this is the agenda, but at some point, we need to get feedback—similar to software development—we need to know if the direction that we're going is the direction they also want to go. Otherwise, we start diverging at minute 10, and by minute 60, we have presented nothing at all that they actually want to see or want to learn about. So, it's so critical to get that sort of feedback and be able to incorporate it in some way, right? Whether that way is something that you're prepared to directly address, or if it's something that says, “Hey, we're not on the same page. Let's make sure this is actually a good use of time instead of [laugh] me pretending and listening to myself talk and not taking you into account.” That's critical, right? And that is just as important, even if it feels worse in the moment.

Corey: This episode is sponsored in part by our friends at ChaosSearch. You could run Elasticsearch or Elastic Cloud—or OpenSearch, as they're calling it now—or a self-hosted ELK stack. But why? ChaosSearch gives you the same API you've come to know and tolerate, along with unlimited data retention and no data movement. Just throw your data into S3 and proceed from there as you would expect. This is great for IT operations folks, for app performance monitoring, cybersecurity. If you're using Elasticsearch, consider not running Elasticsearch. They're also available now in the AWS marketplace if you'd prefer not to go direct and have half of whatever you pay them count towards your EDP commitment. Discover what companies like Equifax, Armor Security, and Blackboard already have.
To learn more, visit chaossearch.io and tell them I sent you, just so you can see them facepalm, yet again.

Corey: From where I sit, one of the many, many, many problems confronting us is that there's this belief that everyone is like we are. I think that's something fundamental, where we all learn in different ways. I have never been, for example—this sounds heretical sitting here saying it, but why not—I'm not a big podcast person; I don't listen to them very often, just because it's such a different way of consuming information. I think there are strong accessibility reasons for there to be transcripts of podcasts. That's why, of the 300-and-however-many-odd episodes that this one winds up being in the sequence of, every single one of them has a transcript attached to it, done by a human. And there's a reason for that. Not just the accessibility wins, which are obvious, but the fact that I can absorb that information way more quickly if I need to review something, or consume that. And as much as I assume other people are like me, they're not. Other people prefer to listen to things than to read them, or to watch a video instead of listening, or to build something themselves, or to go through a formal curriculum in order to learn something. I mean, I'm sitting here with an eighth-grade education, myself. I take a different view to how I go about learning things. And it works for me, but assuming that other people learn the same way that I do will be awesome for a small minority of people and disastrous for everyone else. So, maybe—just a thought here—we shouldn't pattern society after what works for me.

Tom: Absolutely. There is a multiple intelligences theory out there, something they teach you when you're going to be a teacher, which is that people learn in different ways. You don't judge a fish by its ability to climb a tree.
We all learn in different ways, and getting back to what we were talking about, presenting effectively, there need to be multiple approaches to how those people can consume information. I know we're not recording video, but for everyone listening to this, I am waving my hands all over the place because I am a highly visual learner, but you must be able to accept that other people are relying more on the auditory experience, other people need to be able to read it—like you said with the accessibility—or even get their hands on it and interact with it in some way. Whether that is Ctrl-F-ing your way through the transcript—or Command-F, I'm sorry, Mac users [laugh]; I am also on a Mac—but we need to make sure that the information is ready to be consumed in some way that allows people to be successful. It's ridiculous to think that everyone is wired to be able to sit in front of a computer or in a little cubicle for eight hours a day, five days a week, and be able to retain concentration and productivity that entire time. Absolutely not. We should be recording everything, allowing people to come back and consume it in small chunks, consume it in different formats, consume it in the way that is most effective to them. And the onus for that is on the person presenting; it is not on the consumer.

Corey: I make it a point to make what I am doing accessible to the people I am trying to reach, not to me. And sometimes I'm slacking. For example, we're not recording video today, so whenever it looks like I'm not paying attention to you and staring off to the side, like, oh, God, he's boring—no. I have the same thing mirrored on both of my screens; I just prefer to look at the thing that is large and easy to read, rather than the teleprompter, which is a nine-inch screen that is about four feet in front of my face.
It's one of those easier-for-me type of things. On video, it looks completely off, so I don't do it, but here I'm, oh good, I get to take the luxury of not having to be presentable on camera in quite the same way. But when I'm doing a video scenario, I absolutely make it a point to not do that, because it is off-putting to the people I'm trying to reach. In this case, I'm not trying to reach you; I already have. This is a promoted guest episode; you're trying to reach the audience, and I believe, from what I can tell, you're succeeding, so please keep at it.

Tom: Oh, you bet. Well, thank you. You know this already, but this is the very first podcast I've ever been a guest on. So, thank you also for making it such a welcoming place. For what it's worth, I was not offended and didn't think you weren't listening. Obviously, we're having a great time here. But yeah, it's something that, especially in the software space, people need to be aware of, because everyone's job is—[laugh]. Whether you like it or not, here's a controversial statement: everyone's job is sales. Are you selling your good ideas for your product, to your boss, to your product manager? Are you able to communicate with marketing to effectively say, “Hey, this is what, in tech support, I'm seeing. This is what people are coming to me with. This is what they care about.” You are always selling your own performance to your boss, to your customers, to other departments where you work, to your spouse, to everybody you interact with. We're all selling ourselves all the time. And all of that is really just communication. It's really just making sure you're able to meet people where they are and, effectively, bridge your point of view with theirs to make sure that we're on the same page and, you know, we're able to communicate well. That's especially important now that we're all remote.

Corey: Just so you don't think this is too friendly of a place, let's go ahead and finish out the episode with a personal attack.
Before you wound up working at LaunchDarkly, you were at Perforce. What's up with that? I mean, that seems like an awfully big company to cater to its single customer, who is of course J. Paul Reed.

Tom: [laugh]. Yeah. Well, Perforce is a wonderful place. I have nothing but love for Perforce, but it is a very different landscape than LaunchDarkly, certainly. When I joined Perforce, I was supporting a product called Helix ALM. They're still headquartered—Perforce is headquartered here in Minneapolis. I just saw some Perforce folks last week. It truly is a great place, and it is the place that introduced me to so many DevOps concepts. But that's a fair statement. Perforce has been around for a while. It has grown by acquisition over the past several years, and they are putting together new offerings by mixing old offerings together in a way that satisfies more modern needs, things like virtual production, and game development, and trying to package this up in a way that you can then have a game development environment in a box, right? So, there's a lot of things to be said for that, but it very much is a different landscape than a smaller cloud-native company. Which is its own learning curve, let me tell you, but truly, yeah, to your point, with Perforce there's a lot more complexity to the products themselves because they've been around for a little bit longer. Solid, solid products, but there's a lot going on there. And it's a lot harder to learn them right upfront. As opposed to something like LaunchDarkly, which seems simple on the surface, and you can get started with some of the easy concepts and implementation in, like, an hour, but then as you start digging deeper, whoof, suddenly there's a lot more complexity hidden underneath the surface, in terms of how this is set up and some of those edge cases.

Corey: I have to say, for the backstory, for those who are unfamiliar: I live about four miles away from J.
Paul Reed, who is a known entity in reliability engineering, in the DevOps space, and has been for a long time. So, to meet him, of course I had to fly to Israel. He was keynoting DevOpsDays Tel Aviv. I had not encountered him before, and it was, this is awesome; I loved his talk, it was fun. And then I gave a talk a little while later called “Terrible Ideas in Git.” And he's sitting there just glaring at me, holding his water bottle that is a branded Perforce thing, and it's like, “Do you work there?” He's like, “No. I just love Perforce.” It's like, “Congratulations. Having used it, I think you might be the only one.” I kid. I kid. It was great, in a lot of different ways. It was not quite what I needed when I needed it, but that's okay. It's gotten better, and everyone else is not me, as we've discussed; people have different use cases. And that started a very long-running joke that J. Paul Reed is the entirety of the Perforce customer base.

Tom: [laugh]. Yeah. And to your point, there's definitely use cases—you're talking about Perforce Version Control, or Helix Core.

Corey: Back in those days, I don't believe it was differentiated.

Tom: It was just called Perforce. Exactly right. But yeah, as Perforce has gotten bigger, now there's different product lines, you name it. But yeah, some of those modern, scalable problems: being able to handle giant binary files, being able to do automatic edge replication for globally distributed teams, so that when your team in APAC comes online, they're not having to spend the first two hours of their day just getting the most recent changes from the team in the Americas and Europe. Those are problems that Perforce is absolutely solving that are out there, but it's not problems that everybody faces, and, you know, just like everybody else, we're navigating the landscape and trying to find out where the product actually fits and how it needs to evolve.

Corey: And I really do wish you well on it.
I think there's going to be an awful lot of—

Tom: Mm-hm.

Corey: —future stories where there is this integration. And you'd say, “Oh, well, what are you wishing me well for? I don't work there anymore.” But yeah, isn't that kind of what we're talking about, on some level: building out things that are easy, that are more streamlined, that are opinionated in the right ways, I suppose. And honestly, that's the thing that I found so compelling about LaunchDarkly. I have a hard time imagining I would build anything for production use that didn't feature it these days, if I were, you know, better at computers.

Tom: Sure. Yeah. [laugh]. Well, we do have our opinions on how some things should work, right? Where the data is exposed. Because with any feature flagging system or feature management—LaunchDarkly included—you've got a set of rules, i.e., who should see this? Where is it turned on? Where is it turned off? Who in your audience or user base should be able to see these features? That's the rules engine side of it. And on the other side, you've got the context to decide: well, you know, I'm Corey, I'm logging in, I'm in my mid-30s. And I know all this information about Corey, and those rules need to then be able to determine whether something should be on or off, or which experience Corey gets. So, we are very opinionated over the architecture, right, and where that evaluation actually happens and how that data is exposed, or where that's exposed. Because those two halves need to meet, and both halves have the potential to be extremely sensitive. If I'm targeting based off of a list of 10,000 of my premium users' email addresses, I should not be exposing that list of 10,000 email addresses to a web browser or a mobile phone. That's highly insecure. And inefficient; that's a large amount of text to send, over 10,000 email addresses.
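The two halves Tom describes, a server-side rules engine (which may reference sensitive targeting lists) and a user context evaluated against it, can be sketched roughly as follows. This is a hypothetical illustration of the general pattern, not the actual LaunchDarkly SDK or its API; the flag structure, field names, and helper are all invented for the example:

```python
# Hypothetical sketch of server-side flag evaluation. The rules (including
# any sensitive targeting lists) stay on the server; a client would only
# ever receive the resulting variation, never the rule data itself.
def evaluate_flag(flag, context):
    """Return the variation this user context should see."""
    for rule in flag["rules"]:
        attr = rule["attribute"]
        # A rule matches when the context's attribute is in the target set.
        if context.get(attr) in rule["targets"]:
            return rule["variation"]
    return flag["fallback"]

# Invented example flag: target one premium user by email address.
premium_flag = {
    "rules": [
        {"attribute": "email",
         "targets": {"vip@example.com"},
         "variation": "on"},
    ],
    "fallback": "off",
}

print(evaluate_flag(premium_flag, {"email": "vip@example.com"}))   # on
print(evaluate_flag(premium_flag, {"email": "user@example.com"}))  # off
```

The point of the architecture discussion is where this function runs: if `premium_flag` held 10,000 email addresses, evaluating it server-side (or in a trusted edge worker) means a browser pressing F12 only ever sees the string `"on"` or `"off"`.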
And so when we're thinking about things like page load times, and people being able to push F12 to inspect the page: absolutely not, we shouldn't be exposing that there. At the same time, it's a scary prospect to say, “Hey, I'm going to send personal information about Corey over to some third-party service, some edge worker that's going to decide whether Corey should see a feature or not.” So, there's definitely architectural considerations for different use cases, but that's something that we think through all the time and make sure is secure. There's a reason—I'm going to put on my sales engineer hat here—which is to say that there is a reason that the Center for Medicare and Medicaid Services is our sponsor for FedRAMP Moderate certification, in process right now, expected to be completed mid-2022. I don't know. But anybody who is unfamiliar with that, if you've ever had to go through HITRUST certification, you know, any of these compliances to make your regulators happy, you know that FedRAMP is so incredibly stringent. And that comes down to evaluating: where are we exposing the data? Who gets to see that? Is security built in and innate to the architecture? Is that something that's been thought through? I have gone so far afield from the original point that you made, but I agree, right? We've got to be opinionated about some things while still providing the freedom to use it in a way that is actually useful to you, and [laugh] we're not, you know, putting up guardrails that mean you've got such a narrow set of use cases.

Corey: I'd like to hope—maybe I'm wrong on this—that it gets easier the more that we wind up doing these things, because I don't think that it necessarily has been easy enough for an awful lot of us.

Tom: When you say ‘it,' what do you mean?

Corey: All of it. That's the best part, I suppose: the easy parts of working on computers, which I guess might be typing, if you learn it early enough.

Tom: Sure. [laugh] yeah.
Mario Teaches Typing, or StarCraft, taught me how to type quickly. You can't type slowly, or else your expansion is going to get destroyed. No, so for someone who got their formal education in music, or for someone with an eighth-grade education, I agree there need to be resources out there. And there are. Not every single Stack Overflow post with a question that's been asked has the response, “That's a dumb question.” There are some out there. There's definitely a community, or a group of folks, who think that there is a correct way to do things and that if you're asking a question, it's a dumb question. It really isn't. It's getting back to the diverse backgrounds and diverse schools of thought that are coming in. We don't know where someone is coming from that led them to that question without the context, and so we need to continue providing resources to folks to make it easy to self-enable, and continue abstracting away the machine-code parts of it in friendlier and friendlier ways. I love that there are services like Squarespace out there now, right, that allow anybody to make a website. You don't have to have a degree in computer science to spin something up and share it with the world on the web. We're going to continue to see that type of abstraction, that type of on-ramp for folks, and I'm excited to be part of it.

Corey: I really look forward to it. I'm curious to see what happens next for you, especially as you continue—‘you' being the corporate ‘you' here; that's not the understood ‘you' or the royal ‘you,' this is the corporate ‘you'—continue to refine the story of what it is LaunchDarkly does, where you start, where you stop, and how that winds up playing out.

Tom: Yeah, you bet. Well, in the meantime, I'm going to continue to play with things like GitHub Copilot, see how much I can autofill, and see which paths that takes me down.

Corey: Oh, I've been using it for a while. It's great. Just tab-complete my entire life. It's amazing.

Tom: Oh, yeah.
Absolutely.

Corey: [unintelligible 00:36:08] other people's secrets start working, great, that makes my AWS bill way lower when I use someone else's keys. But that's neither here nor there.

Tom: Yeah, exactly. That's the next step of doing that npm install or, you know, bringing in somebody else's [laugh] tools that they've already made. Yeah, just a couple weeks ago, I was playing around with it, and I typed in two lines: I imported the LaunchDarkly SDK and the configuration for the LaunchDarkly SDK, and then I just let it autofill whatever it wanted. It came out with about 100 lines of something or other. [laugh]. And not all of it made sense, but hey, I saw where the thought process was. It was pretty cool to see.

Corey: I really want to thank you for spending as much time and energy as you have talking about how you see the world and where you folks are going. If people want to learn more, where's the best place to find you?

Tom: At launchdarkly.com. Of course, any of various different booths: DevOpsDays, we're at re:Invent, we're at QCon right now. We're at all sorts of places, so come stop by, say hi, get a demo. Maybe we'll talk.

Corey: Excellent. We will be tossing links to that into the [show notes 00:37:09]. Thanks so much for your time. I really appreciate it.

Tom: Corey, thank you.

Corey: Tom Totenberg, senior solutions engineer at LaunchDarkly. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry and insulting comment, and then I'll sing it to you.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS.
We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
Take a Network Break! This week we cover several acquisitions including Barracuda Networks being bought by KKR, SailPoint being picked up by Thoma Bravo, Perforce buying Puppet, and Intel snapping up Ananki to get into the private 5G market. Plus more tech news. The post Network Break 378: Barracuda Gets A New Owner; Intel Buys Ananki For Private 5G appeared first on Packet Pushers.
Foundry — Cloud computing study 2022 (https://foundryco.com/tools-for-marketers/research-cloud-computing/)
Puppet acquired (by Perforce) (https://puppet.com/blog/an-open-letter-from-the-ceo-of-puppet)
Backup frustration brought this CTO to forefront of ransomware protection (https://www.theregister.com/2022/04/12/nasuni-ransomware-without-backup/)
Microsoft Customers Decry Cloud Contracts That Sideline Rivals (https://www.bloomberg.com/news/articles/2022-04-12/microsoft-customers-decry-cloud-contracts-that-sideline-rivals)
Atlassian cloud outage could take days to resolve (https://www.techtarget.com/searchitoperations/news/252515706/Atlassian-cloud-outage-could-take-days-to-resolve) (https://twitter.com/hondanhon/status/1513727021087543300) (https://twitter.com/atlassian/status/1513913041540333569)
Recommendations
Dominic — Event: MongoDB World, 7th-9th June in NYC, discount code DOMINICWELLINGTON25 for 25% off! (https://www.mongodb.com/world-2022)
Mike — WeCrashed (https://tv.apple.com/us/show/wecrashed/umc.cmc.6qw605uv2rwbzutk2p2fsgvq9)
Follow the show on Twitter @Roll4Enterprise or on our LinkedIn page. Theme music by Renato Podestà. Please send us suggestions for topics and/or guests for future episodes!
Perforce Software has been building developer tools since 1995, a long time in the tech world. The company was acquired by Clearlake Capital in 2018, and over the last several years has been modernizing and expanding its reach through acquisition.
On this episode, John Simmons - VP, North America Sales @ Perforce - joins us to share his story! Remember, #yourintentionmatters! Why? Because that's the result you'll tend to get. www.soarondemand.com
Today we're going to talk about how shift-left testing is able to speed up digital experience and digital transformation initiatives, while also improving quality of software and the end-user experience. To help me discuss this topic, I'd like to welcome Eran Kinsbruner, Chief Evangelist, Perforce.
In this episode, we're joined by Eran Kinsbruner, Senior Director and Chief Evangelist at Perforce Software to discuss test automation and the future of test environments.
“We're All In Sales” My guest on the pod today is John Simmons. John is not a product manager. But he represents one of our most important “customers” – I'm using air quotes – he's VP of North America Sales for Methodics, at Perforce. The sales team is our #1 most important ally in a…
How do Google developers create and popularize internal tools? In this episode of the Sourcegraph Podcast, Han-Wen Nienhuys, creator of the open-source code search engine Zoekt, joins Beyang Liu, co-founder and CTO of Sourcegraph, to discuss the agonizing experience with Perforce that drove Han-Wen to build his first dev tool, explain the value of coding on trains and planes, and share the story of how building code search nearly inspired a street named after him in Sweden. Along the way, Han-Wen offers an inside look at the history behind some of Google's most famous dev tools, such as Blaze, Code Search, and Piper.
Show notes & transcript: https://about.sourcegraph.com/podcast/han-wen-nienhuys/
Sourcegraph: about.sourcegraph.com
Derrick Stolee is a Principal Software Engineer at GitHub, where he focuses on the client experience of large Git repositories.

Subscribers might be aware that I’ve done some work on client-side Git in the past, so I was pretty excited for this episode. We discuss the Microsoft Windows and Office repositories’ migrations to Git, recent performance improvements to Git for large monorepos, and more.

Highlights (lightly edited)

[06:00] Utsav: How and why did you transition from academia to software engineering?

Derrick Stolee: I was teaching and doing research at a high level and working with really great people. And I found myself not finding the time to do the work I was doing as a graduate student. I wasn't finding time to do the programming and do these really deep projects. I found that the only time I could find to do that was in the evenings and weekends, because that's when other people weren't working, who could collaborate with me on their projects and move those projects forward. And then, I had a child, and suddenly my evenings and weekends aren't available for that anymore. And so the individual things I was doing just for myself and, you know, that were more programming oriented, fell by the wayside. I found myself a lot less happy with that career. And so I decided, you know what, there are two approaches I could take here. One is I could spend the next year or two winding down my collaborations and spinning up more of this time to be working on my own during regular work hours. Or I could find another job, and I was going to set out. And I lucked out that Microsoft has an office here in Raleigh, North Carolina, where we now live. This is where Azure DevOps was being built, and they needed someone to help solve some graph problems. So it was really nice that it happened to work out that way. I know for a fact that they took a chance on me because of their particular need.
I didn't have significant professional experience in the industry.

[21:00] Utsav: What drove the decision to migrate Windows to Git?

The Windows repository moving to Git was a big project driven by Brian Harry, who was the CVP of Azure DevOps at the time. Previously, Windows used a source control system called Source Depot, which was a fork of Perforce. No one knew how to use this version control system until they got there and learned on the job, and that caused some friction in terms of onboarding people. But also, if you have people working in the Windows code base for a long time, they only learn this version control system. They don't know Git and they don't know what everyone else is using. And so they're feeling like they're falling behind, and they're not speaking the same language when they talk to somebody else who's working with commonly used version control tools. So they saw this as a way to not only update the way their source control works to a more modern tool, but specifically allow this more free exchange of ideas and understanding. The Windows Git repository is going to be big and have some little tweaks here and there, but at the end of the day, you're just running Git commands, and you can go look at Stack Overflow to solve questions, as opposed to needing to talk to specific people within the Windows organization about how to use this version control tool.

Transcript

Utsav Shah: Welcome to another episode of the Software at Scale Podcast. Joining me today is Derrick Stolee, who is a principal software engineer at GitHub. Previously, he was a principal software engineer at Microsoft, and he has a Ph.D. in Mathematics and Computer Science from the University of Nebraska. Welcome.

Derrick Stolee: Thanks, happy to be here.

Utsav Shah: So a lot of the work that you do on Git, from my understanding, is similar to the work you did in your Ph.D. around graph theory and stuff.
So maybe you can just walk through, initially, what got you interested in graphs and math in general?

Derrick Stolee: My love of graph theory came from my first algorithms class in college, my sophomore year, just doing simple things like path-finding algorithms. And I got so excited about it, I started clicking around Wikipedia constantly; I just read every single article I could find on graph theory. So I learned about the four-color theorem, and I learned about different things like cliques, and all sorts of different graphs, the Petersen graph, and I just kept on discovering more. I thought, this is interesting to me, it works well with the way my brain works, and I could just model these things while [unclear 01:32]. And as I kept on doing more, for instance, graph theory and combinatorics my junior year for my math major, it was like, I want to pursue this. Instead of going into the software career I had planned with my undergraduate degree, I decided to pursue a Ph.D. in, first, math, then I split over to the joint math and CS program, and just worked on very theoretical math problems, but I would also always pair it with the fact that I had this programming background and algorithmic background. So I was solving pure math problems using programming, creating these computational experiments (the thing I called it was computational combinatorics), because I would write these algorithms to help me solve these problems that were hard to reason about, because the cases just became too complicated to hold in your head.
But if you can quickly write a program that, over the course of a day of computation, discovers lots of small examples, it can either answer the question for you or just give you a more intuitive understanding of the problem you're trying to solve. That was my specialty as I was working in academia.

Utsav Shah: You hear a lot about proofs that are computer-assisted today. Could you walk us through that? I'm guessing listeners are not math experts, so why is that becoming a thing? And walk through your thesis in super layman terms: what did you do?

Derek Stolee: There are two very different things you can mean when you say you have an automated proof. There are systems like Coq, which do completely automated formal-logic proofs: you specify all the axioms and the things you know to be true and the statement you want to prove, and it constructs the sequence of proof steps. What I focused on was taking a combinatorial problem, for instance, do graphs with certain substructures exist, and trying to discover those examples using an algorithm finely tuned to find them. One problem was called uniquely K_r-saturated graphs. A K_r is essentially a set of r vertices where every pair is adjacent, and to be saturated means I don't have one inside my graph, but if I add any missing edge, I'll get one. The uniquely part means I'll get exactly one. Now we're at this fine line: do these things even exist, and can I find interesting examples? You can just [unclear 04:03] generate every graph of a certain size, but that blows up: you can maybe get to 12 vertices or so, enumerating and testing every graph. To get beyond that and find the interesting examples, you have to zoom in on the search space and focus on the examples you're looking for.
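The uniquely K_r-saturated condition Derek describes can be written compactly (my rendering of the standard definition, not a formula from the episode):

```latex
% G is uniquely K_r-saturated when it contains no K_r,
% yet every missing edge completes exactly one copy:
K_r \not\subseteq G,
\qquad
\forall\, e \in \binom{V(G)}{2} \setminus E(G):\;
\#\bigl\{\text{copies of } K_r \text{ in } G + e\bigr\} = 1 .
```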
So I built an algorithm that said: well, I know I'm not going to have every edge, so let's fix one pair and say this isn't an edge. Then we find r minus two other vertices, put all the other edges in, and that's the one unique completion of that missing edge. Then we continue building that way, constructing all the possible ways you can create those substructures, because they need to exist, as opposed to just generating random little bits. That focused the search space enough that we could get to 20 or 21 vertices and see these interesting shapes show up. From those examples, we found some infinite families and then used regular old-school math to prove that those families were infinite, once we had the small examples to start from.

Utsav Shah: That makes a lot of sense, and that tells me a little bit about it. How might someone use this in a computer science way? When would I need this in, let's say, not my day job, but what computer science problems would I solve with something like that?

Derek Stolee: It's always fun asking a mathematician what the applications of their theoretical work are. But I find that whenever you're dealing with a finite problem, and you want to know in what different ways this data can appear, or whether something is possible under some constraints, a lot of the things I was running into were similar to problems like integer programming. Trying to find solutions to an integer program is a very general thing, and having those types of tools in your back pocket is extremely beneficial. It's also worth knowing that integer programming is still NP-hard: if you have the wrong data shape, it will take an exponential amount of time to solve, even though there are a lot of tools that handle most cases, when your data isn't shaped to trigger that exponential blow-up.
So knowing where those data shapes can arise, and how to take a different approach, can be beneficial.

Utsav Shah: And you've had a fairly diverse career after this. I'm curious, what was the transition from doing this stuff to Git, or developer tools in general? How did that end up happening?

Derek Stolee: I was lucky enough that after my Ph.D. was complete, I landed a tenure-track job in a math and computer science department, where I was teaching and doing research at a high level and working with great people. I had the best possible academic workgroup I could ask for, doing interesting stuff, working with graduate students. But I found myself not finding the time to do the work I had done as a graduate student; I wasn't finding time to do the programming and the deep projects I wanted. I had a lot of interesting math projects, I was collaborating with a lot of people, I was doing a lot of teaching, but the only time I could find for that work was evenings and weekends, because that's when the people collaborating with me on their projects weren't working. Then I had a child, and suddenly my evenings and weekends weren't available for that anymore. So the individual things I was doing just for myself, the more programming-oriented ones, fell by the wayside, and I found myself a lot less happy with that career. So I decided there were two approaches I could take: I could spend the next year or two winding down my collaborations and freeing up more time to work on my own projects during regular work hours, or I could find another job. And as it happened, my spouse is also an academic, and she had an opportunity to move to a new institution soon after I made this decision.
So I said, great, let's not do the two-body problem anymore: you take this job, we move right between semesters during the Christmas break, and I will go find a programming job and hope someone is interested. And I lucked out: Microsoft has an office here in Raleigh, North Carolina, where we now live, and it happened to be the place where what is now known as Azure DevOps was being built. They needed someone to help solve some graph theory problems in the Git space. So it was nice that it worked out that way, and I know for a fact that they took a chance on me because of that particular need. I didn't have significant professional experience in the industry; all I could say was, I did academics, so I'm smart, and I did programming as part of my job, but it was always just for myself. So I came in with a lot of humility, saying, I know I'm going to have to learn to work with a team in a professional setting. I did teamwork as an undergrad, but it had been a while. So I came in trying to learn as much as I could, as quickly as I could, and to contribute in the very specific area they wanted me in. It turned out that area was revamping the way Azure Repos computed Git commit history, which is a graph theory problem. The interesting thing was that the previous solution did everything in SQL: when you created a new commit, it would say, what is your parent, let me take its commit history out of SQL, add this new commit, and put that back into SQL. It took essentially a SQL table of commit IDs and squashed it into a varbinary(max) column of that table, which ended up growing quadratically. And also, if you had a merge commit, it would have to take both parents and merge their histories, in a way that never matched what git log was saying.
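A rough sketch in Python (not the actual Azure DevOps code) of why squashing each commit's full ancestor list into a column grows quadratically, while storing only parent pointers stays linear, shown for a straight-line history of n commits:

```python
# Toy comparison for a linear history of n commits.

def full_history_cells(n: int) -> int:
    """Total IDs stored if commit i keeps its entire ancestor list.

    Commit i stores i ancestor IDs (i = 0, 1, ..., n-1),
    so the total is n * (n - 1) / 2, which grows quadratically.
    """
    return sum(range(n))

def parent_pointer_cells(n: int) -> int:
    """Total IDs stored if each commit keeps only a parent pointer."""
    return n - 1  # one pointer per non-root commit

print(full_history_cells(1000))   # 499500 -> quadratic growth
print(parent_pointer_cells(1000)) # 999    -> linear growth
```

The commit-graph file described next keeps only the parent relationships (plus some derived data) and reconstructs history by walking, which is what avoids this quadratic storage.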
So it was technically interesting that they were able to do this in SQL at all before I came along. But we needed the graph data structure to be available; we needed to compute things dynamically by walking commits and finding out how they relate. That led to creating a serialized commit-graph, which has the topological relationships encoded in concise data: a data file that can be read into memory so we can operate on it very quickly, do things in topological order, and do interesting file-history operations on it instead of in the database. By deleting those database entries that were growing quadratically, we saved something like 83 gigabytes, just on the one server that was hosting the Azure DevOps code. So it was great to see that come to fruition.

Utsav Shah: First of all, that's such an inspiring story, that you could get into this and they gave you a chance as well. Did you reach out to a manager? Did you apply online? I'm just curious how that ended up working.

Derek Stolee: I do need to say I had a lot of luck and privilege going into this, because I applied and waited a month and didn't hear anything. I had applied to this same group, sent my cover letter, and heard nothing. But I have a friend from undergrad who was one of the first people I knew to work at Microsoft. I knew he worked on the Visual Studio client editor, and this thing that's now Azure DevOps was called Visual Studio Online at the time, so I asked: do you know anybody from this Visual Studio Online group? I've applied there, haven't heard anything, and I'd love it if you could get my resume to the top of the list. It turns out he had worked with somebody who had done the Git integration in Visual Studio, who happened to be located at this office, who then got my name to the top of the pile.
That got me to the point where I was having a conversation with the person who would be my skip-level manager, who honestly talked with me to try to suss out: am I going to be a good team player? There's not a great history of Ph.D.s working well with engineers, probably because they just want to do their academic work in their own space. I remember one particular question: sometimes before we ship software, we all get together and everyone spends an entire day trying to find bugs, and then we spend a couple of weeks trying to fix them; they call it a bug bash. Is that something you're interested in doing? I'm 100% wanting to be a good citizen, a good team member; I'm up for that. If that's what it takes to be a good software engineer, I will do it. I could sense the hesitation and trepidation about looking at me more closely, but overall, once I got into the interview (they were still doing whiteboard interviews at that time), it felt almost unfair, because my phone-screen question was a problem I had assigned my C programming students as homework. So it's like, sure, you want to ask me this? I have a little bit of experience with problems like this. So I was eager to show up and prove myself. I know I made some very junior mistakes at the beginning, just learning what it's like to work on a team. What's it like to check in a change, merge that pull request at 5 p.m., then get in your car and go home, and realize while you're out there that you had a problem and caused the build to go red? Oh no, don't do that. I made those mistakes, but I only needed to learn them once.

Utsav Shah: That's amazing. Going to your second point around [inaudible 14:17] Git commit history and storing all of that in SQL: we had to deal with an extremely similar problem, because we maintain a custom CI server and tried doing Git [inaudible 14:26] and implementing that on our own, and that did not turn out well.
So maybe you can walk listeners through: why is it so tricky to say, is this commit before another commit, is it after another commit, what's the parent of this commit? What's going on, I guess?

Derek Stolee: Yes. The thing to keep in mind is that each commit has a list of one parent, or multiple parents in the case of a merge, and that just tells you what happened immediately before it. But if you have to go back weeks or months, you're going to be traversing hundreds or thousands of commits, and these merge commits are branching. So not only are we going deep in time, where the first-parent history alone is all the pull requests that merged in that period, but you're also traversing all of the commits that were in the topic branches of those merges. So you go both deep and wide when you're doing this search. And by default, Git stores all of these commits as plain text objects in its object database: you look one up by its commit SHA, find its location in a pack file, decompress it, and parse the text to find out its author date, committer date, and parents, and then go find those again, and keep iterating. That's a very expensive operation across that many commits, and especially when the answer is "no, it's not reachable," you have to walk every single reachable commit before you can say no. Both of those things cause significant delays in answering these questions, which was part of the reason for the commit-graph file.
It started when I was doing Azure DevOps server work, but it's now a core Git client feature. First, it avoids going to the pack file and loading a plain text document you have to decompress and parse, by storing well-structured information that tells me where in the commit-graph file the next commit is. I don't have to store the whole object ID; I just have a little four-byte integer saying my parent is this entry in this table of data, and you can jump quickly between entries. The other benefit is that we can store extra data that isn't native to the commit object itself, specifically what's called the generation number. The generation number says: if I don't have any parents, my generation number is one, so I'm at level one. If I have parents, my number is one larger than the maximum of my parents': if my parent is one, I'm two, and then three; if I merge commits at four and five, I'm six. What that allows me to do is that if I see two commits, one at generation number 10 and one at 11, the one at 10 can't reach the one at 11, because that would mean an edge going in the wrong direction. It also means that if I'm looking for the one at 11 and I started at 20, I can stop when I hit commits at generation number 10. This gives us extra ways of visiting fewer commits to answer these questions.

Utsav Shah: So maybe a basic question: why does the system care about what the parents of a commit are? Why does that end up mattering so much?

Derek Stolee: It matters for a lot of reasons. One is if you just want to go through the history of the changes that have happened to the repository, specifically file history: the way to get them in order is not to say, give me all the commits that changed the file and sort them by date, because the commit date can be completely manufactured.
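The generation-number pruning just described can be sketched in Python like this (a toy commit graph with invented helper names, not Git's internals):

```python
# gen(c) = 1 for a root commit, else 1 + max over its parents.
# Edges point from child to parent, so ancestors always have a
# strictly smaller generation number than their descendants.

parents = {  # toy DAG: commit -> list of parents
    "A": [], "B": ["A"], "C": ["B"], "D": ["B"], "E": ["C", "D"],
}

def generation(c):
    return 1 + max((generation(p) for p in parents[c]), default=0)

def can_reach(start, target):
    """Is `target` an ancestor of `start`? Prune with generation numbers."""
    gen_target = generation(target)
    stack, seen = [start], set()
    while stack:
        c = stack.pop()
        if c == target:
            return True
        if c in seen or generation(c) < gen_target:
            continue  # ancestors of c all have lower gen, so skip c
        seen.add(c)
        stack.extend(parents[c])
    return False

print(generation("E"))      # 4
print(can_reach("E", "A"))  # True
print(can_reach("C", "D"))  # False: pruned without a full walk
```

In real Git the generation numbers are precomputed and stored in the commit-graph file rather than recomputed on the fly as here.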
Maybe something that was committed later merged earlier, or something else. By understanding the parent relationships, you can realize: this thing was committed earlier but landed in the default branch later, and I can see that by the way the commits are structured through these parent relationships. A lot of the problems we see with people asking "where did my change go" or "what happened here" are because somebody did a weird merge. And you can only find that out by doing some interesting things with git log to say, this merge caused the problem, it caused your file history to get mixed up; somebody resolved the merge incorrectly and someone's change got erased, and you need these parent relationships to discover that.

Utsav Shah: Should everybody just be using rebase versus merge? What's your opinion?

Derek Stolee: My opinion is that you should use rebase to make sure that the commits you're asking your coworkers to review are as clear as possible. Present a story: make your commits good, tell me in the messages why you're making each small change, and how the sequence of commits creates a beautiful story of how I get from point A to point B. Then you merge it into your branch with everyone else's, and those commits are locked; you can't change them anymore. You do not rebase them, you do not edit them; now they're locked in. The benefit of doing that is that I present the best story not only for the people reviewing it in the moment, but also for when I go back in history and ask, why did I change it that way? You've got all the reasoning right there. And then you can also do things like git log --first-parent to show me just the pull requests that were merged into this branch, and that's it; I don't see people's individual commits.
I see this one was merged, this one was merged, this one was merged, and I can see the sequence of those events, and that's the most valuable thing to see.

Utsav Shah: Interesting. A lot of GitHub workflows just squash all of your commits into one, which I think is the default, or at least a lot of people use it. Any opinions on that? Because the Git workflow for development does the whole separate-commits-then-merge approach. Do you have an opinion on that?

Derek Stolee: Squash merges can be beneficial. The thing to keep in mind is that they're typically beneficial for people who don't know how to do an interactive rebase, so their topic branch looks like a lot of random commits that don't make a lot of sense: I tried this, then it broke, so I fixed a bug and kept going, now I'm responding to feedback, and that's what it looks like. If those commits aren't going to be helpful to you in the future to diagnose what's going on, and you'd rather say this pull request is the unit of change, a squash merge is fine; it's fine to do that. The thing I find problematic is that new users often don't realize they need to rebase their branch onto that squash merge before they continue working. Otherwise, they'll bring in those commits again and their next pull request will look very strange. So there are some unnatural bits to using squash merges that require people to say, let me just start over from the main branch again for my next piece of work. If you don't remember to do that, it's confusing.

Utsav Shah: Yes, that makes a lot of sense. So going back to your story: you started working on improving Git interactions in Azure DevOps.
When did the whole idea of moving the Windows repository to Git begin, and how did that evolve?

Derek Stolee: Well, the biggest thing is that moving the Windows repository to Git was decided before I came; it was a big project driven by Brian Harry, who was the CVP of Azure DevOps at the time. Windows was using a source control system called Source Depot, which was a literal fork of Perforce. And no one knew how to use it until they got there and learned it on the job, which caused some friction: onboarding people was difficult. But also, if you have people working in the Windows codebase for a long time, they learn only this version control system. They don't know what everyone else is using, so they feel like they're falling behind, and they're not speaking the same language as somebody who's working in the version control system most people use these days. So they saw this as a way to not only update their source control to a more modern tool, but specifically Git, because it allowed a freer exchange of ideas and understanding. It's going to be a monorepo, it's going to be big, it's going to have some little tweaks here and there, but at the end of the day, you're just running Git commands, and you can go look at Stack Overflow for how to solve your Git questions, as opposed to needing to ask specific people within the Windows organization how to use this tool. That, as far as I understand, was a big part of the motivation. When I joined the team, we were in the swing of "let's make sure our Git implementation scales," and the thing that's special about Azure DevOps is that it doesn't use the core Git codebase; it has a complete reimplementation of the server side of Git in C#.
So it rebuilt a lot of things to be able to do the core features, but in a way that worked in its deployment environment, and it had done a pretty good job of handling scale. But the issue was that the Linux repo was still a challenge to host. At that time, it had half a million commits, maybe 700,000, even though its number of files is rather small, and we were struggling, especially with the commit history being so deep. But even the [inaudible 24:24] DevOps repo, with maybe 200 or 300 engineers working on it in their daily work, was moving at a pace that was difficult to keep up with. So those scale targets were things we were dealing with and working to improve daily, and we could see that improvement in our daily lives as we moved forward.

Utsav Shah: So how do you tackle the problem? You're on this team now and you know you want to improve the scale of this, because 2,000 developers are going to be using this repository; we have 200 or 300 people now and it's already not perfect. My first impression is you sit down, start profiling code, and understand what's going wrong. What did you all do?

Derek Stolee: You're right about the profiler. We had a tool, I forget what it's called, but on every tenth request, selected at random, it would run a .NET profiler and save those traces somewhere we could download them. So we could say, you know what, Git commit history is slow, and now that we've written it in C# as opposed to SQL, it's the C#'s fault; let's go see what's going on there, identify the hotspots, pull a few of those traces down and see what they show. And a lot of it was chasing that: I made this change.
Let's make sure the timings are an improvement; I see some outliers over here that are still problematic; find those traces and identify the core parts to change. Some of the changes were more philosophical: we needed to change data structures, we needed to introduce things like generation numbers, we needed to introduce things like Bloom filters for file history, in order to speed that up, because we were spending too much time parsing commits and trees. Once we were that far, it was time to assess whether or not we could handle the Windows repo. I think it would have been January, February 2017; my team was tasked with doing scale testing in production. They had the full Azure DevOps server ready to go with the Windows source code in it. It didn't have developers using it, but it was a copy of the Windows source code, and they were already using that same server for work item tracking; they had transitioned their work item tracking to Azure Boards. And they said, go and see if you can make this fall over in production; that's the only way to tell if it's going to work or not. So a few of us got together and created a bunch of things to drive the REST API. We were pretty confident the Git operations were going to work, because we had a caching layer in front of the server that was going to absorb that. So we went with: let's go through the REST API, make a few changes, create a pull request, merge it, and go through that cycle. We started by measuring how often developers do that in Azure DevOps itself, then scaled it up to see where it would go, and we crashed the job agents, because we found a bottleneck. It turns out we were using libgit2 to do merges, and that required going into native code, because it's a C library, and we couldn't have too many of those running, because they each took a gig of memory.
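The changed-path Bloom filters mentioned above can be sketched roughly as follows (a toy filter with invented sizes and hashing, not Git's actual format): each commit carries a small bitset over the paths it changed, so a file-history walk can skip parsing a commit's trees whenever the filter says the path is definitely absent.

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter; toy stand-in for Git's changed-path filters."""
    def __init__(self, bits=64, hashes=3):
        self.bits, self.hashes, self.value = bits, hashes, 0

    def _positions(self, item):
        # Derive `hashes` bit positions from salted SHA-256 digests.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.bits

    def add(self, item):
        for pos in self._positions(item):
            self.value |= 1 << pos

    def might_contain(self, item):
        # False means "definitely not present"; True may be a false positive.
        return all(self.value >> pos & 1 for pos in self._positions(item))

# One filter per commit, over the paths that commit changed:
commit_filter = BloomFilter()
for path in ["src/main.c", "docs/README.md"]:
    commit_filter.add(path)

print(commit_filter.might_contain("src/main.c"))  # True
# A path the commit did not touch is almost always reported absent,
# letting the history walk skip this commit's trees entirely:
print(commit_filter.might_contain("src/other.c"))
```

The win is that a definite "no" costs a few bit tests instead of decompressing and diffing trees; occasional false positives just mean a little wasted tree parsing, never a wrong answer.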
Once that native code ran out of memory, things crashed, so we ended up having to put a limit on how many of those could run at once. But that was the only fallout, and we could then say, we're ready to bring people on and start transitioning them over. When users are in the product and they find certain things rough or difficult, we can address that, but right now they're not going to cause a server problem, so let's bring them on. I think it was a few months later that they started bringing developers over from Source Depot into Git.

Utsav Shah: So it sounds like there was some server work to make sure the server doesn't crash, but the majority of the work you had to focus on was on the Git side. Does that sound accurate?

Derek Stolee: Before my time, and in parallel with my time, was the creation of what's now called VFS for Git. It was GVFS at the time; we realized, don't let engineers name things, they won't do it well, so we renamed it to VFS for Git, the virtual file system for Git. A lot of [inaudible 28:44] because the Source Depot version that Windows was using had a virtualized file system in it, to allow people to only download the portion of the working tree that they needed. They could build whatever part they were in, and it would dynamically discover what files you need to run that build. So we did the same thing on the Git side: let's make the Git client, modified in some slight ways using our fork of Git, think that all the files are there. Then when a file is [inaudible 29:26], we hook it through a file system event; it communicates to the .NET process, which says, you want that file, goes and downloads it from the Git server, puts it on disk, and tells you what its contents are, so you can open it. So it's dynamically downloading objects.
This required a new protocol that we call the GVFS protocol, which is essentially an early version of what's now called Git partial clone: you can go get the commits and trees, which is what you need to do most of your work, but when you need the file contents, the blob of a file, we can download that as necessary and populate it on disk. The different thing is the virtualized piece, the idea that if you just run ls in the root directory, it looks like all the files are there. That causes some problems if you're not used to it. For instance, if you open VS Code in the root of your Windows source code, it will populate everything, because VS Code starts crawling, trying to do searching and indexing, wanting to find out what's there. The Windows developers were used to this; they'd already had this problem, so they used tools that didn't do that. But we found this out when we started saying, VFS for Git is this thing Windows is using, maybe you could use it too: well, this was working great, then I opened VS Code, or I ran grep, or some other tool came in and decided to scan everything, and now I'm slow again, because I have absolutely every file in my monorepo in my working directory for real. So that led to some concern that this wasn't necessarily the best way to go. But specifically with that GVFS protocol, it solved a lot of the scale issues, because we could stick in another layer of servers located close to the developers. For instance, take a lab of build machines and put one of these cache servers in there; the build machines all fetch from it, and you get high throughput and small latency.
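The on-demand download at the heart of the GVFS protocol and partial clone can be sketched like this (invented names, with a dict standing in for the remote server; no real network I/O):

```python
# Commits and trees are assumed local; a file's contents (its blob) are
# fetched only on first access, then cached on disk for all later reads.

REMOTE = {  # stands in for the origin server or a nearby cache server
    "blob1": b"int main(void) { return 0; }\n",
    "blob2": b"# build notes\n",
}

class LazyObjectStore:
    def __init__(self, remote):
        self.remote = remote
        self.local = {}     # objects already downloaded to disk
        self.downloads = 0  # round trips actually made

    def read_blob(self, oid):
        if oid not in self.local:  # fault the object in on first access
            self.downloads += 1
            self.local[oid] = self.remote[oid]
        return self.local[oid]

store = LazyObjectStore(REMOTE)
store.read_blob("blob1")
store.read_blob("blob1")  # second read hits the local cache
print(store.downloads)    # 1
```

The consistency requirement Derek mentions shows up here too: once an object lands in `local`, it must stay there, which is exactly what the Dropbox-style "evict and refetch" model couldn't guarantee.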
And they don't have to bug the origin server for anything but the refs. You do the same thing near the developers, and that solved a lot of our scale problems, because you don't have these thundering herds of machines coming in and asking for all the data at once.

Utsav Shah: We had a super similar concept of repository mirrors that would listen to a change stream, and every time anything changed in a region, it would run git fetch on all the servers. So it's remarkable how similar the problems we're thinking about are. One thing I was wondering: VFS for Git makes sense, but what's the origin of the FS monitor story? For listeners, FS monitor is the file system monitor in Git that decides whether files have changed or not without running [inaudible 32:08] that lists every single file. How did that come about?

Derek Stolee: There are two sides to the story. One is that as we were building all these features custom for VFS for Git, we were doing it inside the microsoft/git fork on GitHub, working in the open, so you can see all the changes we're making; it's all GPL. But we were making changes in ways that let us go fast, and we weren't contributing them upstream to core Git. Because of the way VFS for Git works, we have this process that's always running, watching the file system and getting all of its events, so it made sense to say: we can speed up certain Git operations, because we don't need to go looking for things. We don't want to run a bunch of lstat calls, because that would trigger the download of objects. So let's ask that process to tell us what files have been updated, what's new, and I created the idea of what's now called FS monitor. And the people who had built that tool for VFS for Git contributed a version of it upstream that used Facebook's Watchman tool through a hook.
So it created this hook called the fsmonitor hook: Git would say, tell me what's been updated since the last time I checked, and Watchman or whatever tool is on the other side would reply with the small list of files that have been modified. You don't have to go walking all of the hundreds of thousands of files, because you just changed these [inaudible 0:33:34], and the Git command can store that and be fast at things like git status and git add. So that was contributed mostly out of the goodness of their hearts: we had this idea, it worked well in VFS for Git, we think it can work well for other people in regular Git, so let's contribute it and get it in. It became much more important to us in particular when we started supporting the Office monorepo, because they had a similar situation: they were moving from their version of Source Depot into Git, and they thought VFS for Git was just going to work. The issue is that Office also builds tools for iOS and macOS, so they have developers on macOS, and the team had started by building a similar file system virtualization for macOS using kernel extensions. They were very far along in the process when Apple said, we're deprecating kernel extensions, you can't do that anymore; if you're someone like Dropbox, go use this thing, or use this other thing. We tried both of those things, and neither of them works in this scenario; they're either too slow or not consistent enough. For instance, in the model where you populate files dynamically as people ask for them, the way Dropbox or OneDrive does it now, the operating system may decide, I'm going to delete this content because the disk is getting too big; you don't need it, because you can just get it from the remote again. That inconsistency was something we couldn't handle, because we needed to know that content, once downloaded, was there.
So we were at a crossroads, not knowing where to go, but then we decided to try an alternative approach: let's look at how the Office monorepo is different from the Windows monorepo. It turns out they had a very componentized build system: if you wanted to build Word, you knew what you needed to build Word; you didn't need the Excel code, you didn't need the PowerPoint code, you needed the Word code and some common bits for all the Office clients. And this was ingrained in their project system. So if you know that in advance, could you just tell Git: these are the files I need to do my work and my build? That's what they were doing in their version of Source Depot; they weren't using a virtualized file system, they were just enlisting in the projects they cared about. So when some of them moved to Git with VFS for Git, they were confused: why do I see so many directories? I don't need them. So we decided to make a new thing, taking all the good bits from VFS for Git, like the GVFS protocol that allowed the reduced downloads, but instead of a virtualized file system, using sparse checkout, which is a Git feature: you can tell Git, only give me the files within these directories, and ignore everything outside. That gives us the same benefit of working in a smaller working directory without needing the virtualized file system. But now we need that file system monitor hook we added earlier, because if I still have 200,000 files on my disk and I edit a dozen, I don't want to walk all 200,000 to find that dozen. So the file system monitor became top of mind for us, particularly because we want to support Windows developers, and Windows process creation is expensive, especially compared to Linux; Linux process creation is super fast.
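The sparse-checkout filtering described here can be sketched as a simple path filter (a toy illustration, not Git's implementation): given the directories a component's build needs, materialize only the paths inside them, plus files at the repository root, and leave everything else out of the working tree.

```python
def sparse_paths(all_paths, cone_dirs):
    """Keep root-level files plus anything under a requested directory."""
    keep = []
    for path in all_paths:
        if "/" not in path:  # files at the repo root stay present
            keep.append(path)
        elif any(path.startswith(d + "/") for d in cone_dirs):
            keep.append(path)
    return keep

# Hypothetical Office-like layout: building Word needs word/ and shared/.
repo = ["README.md", "word/app.c", "word/ui.c",
        "excel/calc.c", "shared/util.c"]
print(sparse_paths(repo, ["word", "shared"]))
# ['README.md', 'word/app.c', 'word/ui.c', 'shared/util.c']
```

In Git itself this is roughly what cone-mode `git sparse-checkout set word shared` gives you: the Excel and PowerPoint trees never hit the working directory at all.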
So having a hook run, which then does some shell script work to communicate with another process and come back: just that process spawn, even if it doesn't have to do anything, was expensive enough to say we should remove the hook from the equation. And also there are some things Watchman does that we don't like, or that aren't specific enough to Git, so let's make a version of the file system monitor that is built into Git. That's what my colleague Jeff Hostetler is working on right now. It's getting reviewed for the core Git client, and it's available in Git for Windows if you want to try it, because the Git for Windows maintainer is also on my team, so we could get an early version in there. But we want to make sure this is available to all Git users: there's an implementation for Windows and macOS, and it's possible to build one for Linux, we just haven't included it in this first version. The target is to remove that overhead. I know that you at Dropbox had a blog post where you had a huge speedup just by replacing the Perl script hook with a Rust hook, is that correct?

Utsav Shah: With a Go hook first, not Rust, yes, but eventually we replaced it with the Rust one.

Derek Stolee: Excellent. And you also did some contributions to help make this hook system a little better and have fewer bugs.

Utsav Shah: I think yes, one or two bugs, and it took me a few months of digging to figure out what exactly was going wrong. It turned out there's this one environment variable, which you or somebody else added, to skip process creation and keep Git's untracked cache on. We just forced that environment variable to be true to make sure we cache every time you run git status, so subsequent git statuses are not slow, and things worked out great. We ended up shipping a wrapper that turned on that environment variable, and things worked amazingly well. That was so long ago.
How long does this process creation take on Windows? I guess that's one question I've had for you for a while: why did we skip writing that cache? Do you know what was slow about creating processes on Windows?

Derek Stolee: Well, I know that there are a bunch of permission things that Windows does; it has many checks about whether you can create a process of this kind and exactly what elevation privileges you have. A lot of things like that have built up, because Windows is very much about maintaining backward compatibility with a lot of these security mechanisms. So I don't know all the details, but I do know it's something on the order of 100 milliseconds. That's not something to scoff at, and it's also something Git for Windows in particular has difficulty with, because it has a bunch of translation layers to take this tool that was built for a Unix environment, with dependencies on things like shell and Python and Perl, and make it work in that environment. That's an extra cost Git for Windows has to pay on top of even a normal Windows process.

Utsav Shah: Yes, that makes a lot of sense. And maybe some numbers, I don't know how much you can share: how big were the Windows and Office monorepos when you decided to move from Source Depot to Git? What are we talking about here?

Derek Stolee: The biggest numbers we think about are: how many files do I have if I do nothing but check out the default branch, how many files are there? I believe the Windows repository was somewhere around 3 million, and the uncompressed data was something like 300 gigabytes for those 3 million files. I don't know the full size for Office, but it is 2 million files at the head.
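To make the "process creation is expensive" point concrete, here is a small measurement sketch; the helper name and iteration count are my own, and absolute numbers will vary by platform (the ~100 ms figure above is for Windows with its translation layers, while Linux spawns are typically well under 10 ms).

```python
import subprocess
import sys
import time

def spawn_cost(n=20):
    """Average wall-clock cost of spawning one trivial child process."""
    start = time.perf_counter()
    for _ in range(n):
        # A child that does nothing: all time measured is spawn overhead.
        subprocess.run([sys.executable, "-c", "pass"], check=True)
    return (time.perf_counter() - start) / n

avg = spawn_cost()
print(f"average process spawn: {avg * 1000:.1f} ms")
```

Multiply that per-spawn cost by every git status invocation (shell, Perl, the hook's target process) and the motivation for a built-in monitor is clear.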
So definitely a large project. They did their homework in terms of removing large binaries from the repository, so it's not big because of that; Git LFS isn't going to be the solution for them. They have mostly source code and small files, which are not the reason for their growth. The reason for their growth is that they have so many files and so many developers moving that code around, adding commits and collaborating, that it's just going to get big no matter what you do. At one point, the Windows monorepo had 110 million Git objects, and I think over 12 million of those were commits, partly because they had some build machinery that would commit 40 times during its build. So they reined that in, and we decided to do a history cut and start from scratch. Now it's not growing nearly as quickly, but it's still a very similar size, so they've got more runway.

Utsav Shah: Yes, maybe just for comparison for listeners: from the numbers I remember in 2018, the biggest open-source repository that had people contributing to Git performance was Chromium, and I remember Chromium being roughly 300,000 files, with a couple of Chromium engineers working on Git performance. So this is one order of magnitude bigger than that, 3 million files. I don't think there are many people moving such a large repository around, especially with that kind of history, 12 million commit objects, it's just a lot. What was the reaction, I guess, of the open-source community, the maintainers of Git, when you decided to help out? Did you have a conversation to start with? Were they just super excited when you reached out on the mailing list? What happened?

Derek Stolee: So for full context, I switched over to working on the client side and contributing to upstream Git after all of VFS for Git was announced and released as open-source software.
And so I can only gauge what I saw from people afterward, and from people I've come to know since then, but the general reaction was: yes, it's great that you can do this, but if you had contributed it to Git, everyone would benefit. Part of it was that the initial plan was never to open source it; the goal was to make this work for Windows, and if that was the only group that ever used it, that was a success. And it turned out we could say: because we can host the Windows source code, we can handle your source code, which was kind of a marketing point for Azure Repos, and that was a big push to put this out in the world. But it also needed this custom thing that was only on Azure Repos, and we created it with our own opinions that wouldn't be up to snuff with the Git project. So things like FS Monitor and partial clone are direct contributions from Microsoft engineers of that era, saying: here's a way to contribute the ideas that made VFS for Git work back into Git. That was an ongoing effort to bring it back, but it started after the fact: hey, we are going to contribute these ideas, but first we needed to ship something. So we shipped something without working with the community. But over the last few years, especially with the way we've shifted our strategy to do sparse checkout work for the Office monorepo, we've been much better able to align the things we want to build: we build them for upstream Git first, and then we benefit from them, and we don't have to build anything twice. And we don't have to do something special that's only for our internal teams, which, once they learn it, is different from what everyone else is doing, and we'd have that same problem again.
So right now, the things Office depends on are sparse checkout and, yes, the GVFS protocol, but to them you can just call it partial clone and it's going to be the same from their perspective. In fact, the way we've integrated it for them is that we've gone underneath the partial clone machinery from upstream Git and just taught it to speak the GVFS protocol. So we're much more aligned: because we know things are working for Office, upstream Git is much better suited to handle this kind of scale.

Utsav Shah: That makes a ton of sense, and given that, it seems like the community wanted you to contribute these features back. That's just so refreshing. I don't know if you've heard those stories where people were trying to contribute to Git; Facebook has this famous story of trying to contribute to Git a long time ago, not succeeding, and choosing to go with Mercurial. I'm happy to see that finally we could add all of these nice things to Git.

Derek Stolee: And I should give credit to the maintainer, Junio Hamano, and people who are now my colleagues at GitHub, like Jeff King (Peff), and also other Git contributors at companies like Google, who took time out of their day to help us learn what it's like to be a Git contributor, and not just open source generally, because merging pull requests on GitHub is a completely different thing from working on the Git mailing list and contributing patch sets via email. So learning how to do that, and also, the level of quality expected is so high; how could we navigate that space as new contributors who have a lot of ideas and are motivated to do good work? We needed to get over a hump of: let's get into this community and establish ourselves as good citizens trying to do the right thing.

Utsav Shah: And maybe one more selfish question from my side.
One thing that I think Git could use is some kind of data-purge system, because today, if somebody checks PII into the main branch of a repository, from my understanding it's extremely hard to get rid of it without doing a full rewrite. Are there plugins for companies where they can rewrite or hide things on the server? Does GitHub have something like that?

Derek Stolee: I'm not aware of anything on the GitHub or Microsoft side for that. We generally try to avoid it with pre-receive hooks: when you push, we reject it for some reason if we can; otherwise, it's on you to clean up the data. Part of that is because we want to make sure we are maintaining repositories that are still valid, that are not going to be missing objects. I know that Google's source control tool Gerrit has a way to obliterate objects, and I'm not exactly sure how it works, because when Git clients are fetching and cloning and the server says it doesn't have an object, the client will complain; I don't know how they get around that. And with the distributed nature of Git, it's hard to say the Git project should take on something like that, because it centralizes things to such a degree: you'd have to say, you didn't send me all the objects you said you would, but I'll trust you anyway. That trust boundary is something Git is cautious about violating.

Utsav Shah: Yes, that makes sense. And now to the non-selfish questions: maybe you can walk listeners through why Git needs Bloom filters internally?

Derek Stolee: Sure. So let's think about commit history specifically. Say you're in a Java repo, a repo that uses the Java programming language, and your directory structure mimics your namespace, so to get to your code you go down five directories before you find your code file.
Now, in Git, that's represented as: I have my commit, then I have my root tree, which describes the root of my working directory, and then going down, for each of those directories I have another tree object, then another tree object, and then finally my file. So when we want to do a history query, say, which commits changed this file, I go to my first commit and compare it to its parent. I look at the root trees: well, they're different, okay. Let me open them up, find which tree object each has at the first portion of the path, and see if those are different. They're different, so let me keep going. You go all the way down those five levels, and you've opened up ten trees in this diff to parse, and if those trees are big, that's expensive. At the end, you might find out: wait a minute, the blobs are identical way down here, but I had to do all that work to find out. Now multiply that by a million: to find the file that was changed ten times in a history of a million commits, you have to do a ton of work parsing all of those trees. So the Bloom filters come in as a way to say: can we guarantee, sometimes, and in most cases, that a commit did not change that path? We expect that most commits did not change the path you're looking for. So what we did is embed it in the commit-graph file, because that gives us a quick way to index it: I'm at a commit at a position in the commit-graph file, so I can find where its Bloom filter data is. The Bloom filter stores which paths were changed by that commit, and a Bloom filter is what's called a probabilistic data structure. It doesn't list those paths, which would be expensive: if I actually listed every single path changed at every commit, I'd have that sort of quadratic growth again, and the data would be in the gigabytes even for a small repo. But with the Bloom filter, I only need about ten bits per path, so it's compact.
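The idea above can be sketched in a few lines. This is a toy, not Git's actual implementation: Git's changed-path filters use double hashing of murmur3 values at fixed settings, while the class name, SHA-256 probes, and parameters here are illustrative only. The one property the sketch does share with the real thing is the asymmetry: "no" is definite, "yes" is only probable.

```python
import hashlib

class PathBloomFilter:
    """Toy changed-path Bloom filter: ~10 bits per path, k hash probes."""

    def __init__(self, num_paths, bits_per_entry=10, num_hashes=7):
        self.size = max(1, num_paths * bits_per_entry)  # total bits
        self.k = num_hashes
        self.bits = bytearray((self.size + 7) // 8)

    def _probes(self, path):
        # Derive k bit positions from one hash via double hashing.
        h = hashlib.sha256(path.encode()).digest()
        a = int.from_bytes(h[:8], "big")
        b = int.from_bytes(h[8:16], "big")
        for i in range(self.k):
            yield (a + i * b) % self.size

    def add(self, path):
        for p in self._probes(path):
            self.bits[p // 8] |= 1 << (p % 8)

    def maybe_contains(self, path):
        # False means "definitely not changed"; True means "probably changed".
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._probes(path))
```

A history query would call maybe_contains on each commit's filter and only open the commit's trees when it answers True, which is exactly the tree-parsing work being skipped.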
The thing we sacrifice is that sometimes it says yes for a path where the answer is actually no, but the critical thing is that if it says no, you can be sure the answer is no. Its false-positive rate is about 2% at the compression settings we're using. So thinking about my history of a million commits: for 98% of them, the Bloom filter will say no, this commit didn't change that path, so I can immediately move on to the next parent without parsing any trees at all. For the other 2%, I still have to go and parse the trees, and for the ten commits that did change the file, it will say yes, so I'll parse them and get the right answer. But we've significantly reduced the amount of work needed to answer that query. And it's important when you're in these big monorepos, because you have so many commits that didn't touch the file, and you need to be able to skip past them.

Utsav Shah: At what number of files does this start to matter? Because for file size, as you mentioned, you can just use LFS; that should solve a lot of those problems. The number of files is the real issue. At what number of files do I have to start thinking, okay, I want to use these Git features like sparse checkout and the commit graphs? Have you noticed a tipping point like that?

Derek Stolee: Yes, there are some tipping points, but it's all about whether you can take advantage of the different features. To start, I can tell you that if you have a recent version of Git, say from the last year, you can go to whatever repository you want and run git maintenance start. Just do that in every [inaudible 52:48] of moderate size, and that's going to enable background maintenance. It turns off auto GC, because it runs maintenance on a regular schedule; it does things like fetch for you in the background, so that when you run git fetch it just updates the refs and is really fast; and it also keeps your commit-graph up to date.
Now, by default it doesn't compute the Bloom filters, because the Bloom filters are an extra data cost, and most clients don't need them: you're not doing the deep queries that you need at web scale, like the GitHub server does. The GitHub server does generate those Bloom filters, so when you do a file history query on GitHub, it's fast. But maintenance does give you the commit-graph, so you can run things like git log --graph quickly. The topological sorting it has to do for that can use the generation numbers to be fast; before, it would take six seconds just to show ten commits, because it had to walk all of them, and now you get that essentially for free. So whatever size your repo is, you can just run that command and you're good to go; it's the only time you have to think about it. Run it once, and your posture is going to be good for a long time. The next level, I would say, is: can I reduce the amount of data I download during my clones and fetches? That's partial clone, and the kind I prefer is blobless clones: you run git clone --filter=blob:none. I know it's complicated, but it's what we have, and it just says: filter out all the blobs and give me only the commits and trees that are reachable from the refs. Then, when I do a checkout or a history query, I'll download the blobs I need on demand. So don't get on a plane and try to do checkouts and expect it to work; that's the one thing you have to understand. But as long as you have a network connection relatively frequently, you can operate as if it's a normal Git repo, and that can make your fetch and clone times fast and your disk usage a lot smaller.
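The two commands just described look like this in practice; the repository URL below is a placeholder, and the flags assume a recent Git (maintenance landed in 2.30, partial clone earlier).

```shell
# One-time setup: enable scheduled background maintenance for this repo
# (prefetch, commit-graph updates, etc., replacing auto GC).
git maintenance start

# Blobless partial clone: download commits and trees now, blobs on demand.
git clone --filter=blob:none https://example.com/big-repo.git
```

After a blobless clone, history queries and checkouts fetch the missing blobs lazily, which is why they need a network connection to work.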
So that's the next level of boosting your scale, and it works a lot like LFS: LFS says, I'm only going to pull down these big LFS objects when you do a checkout, but it uses a different mechanism; here you've still got your regular Git blobs. Then the next level is: okay, I'm only getting the blobs I need, but can I use even fewer? This is the idea of using sparse checkout to scope your working directory down. And I like to say that beyond 100,000 files is where you can start thinking about using it; I start seeing Git chug along when you get to 100,000 to 200,000 files. So if you can max out at that level, preferably less, that would be great, and sparse checkout is a way to do that. The issue right now is that you need a connection between your build system and sparse checkout, to say: hey, I work in this part of the code, what files do I need? If that's relatively stable, and you can identify, you know what, all the web services are in this directory, that's all I care about, and all the client code is over there, I don't need it, then a static sparse checkout will work: you just run git sparse-checkout set with whatever directories you need, and you're good to go. The issue is if you want to be precise and say, I'm only going to get this one project I need, but it depends on these other directories, and those dependencies might change, and their dependencies might change; that's when you need to build that connection. Office has a tool they call Scooper that connects their project dependency system to sparse checkout and does that automatically, but if your dependencies are relatively stable, you can manually run git sparse-checkout.
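A static setup of the kind described is just two commands; the directory names here are made up for illustration.

```shell
# Enable cone mode: selection is by whole directories, which keeps
# matching fast and predictable.
git sparse-checkout init --cone

# Materialize only the parts of the tree you work in.
git sparse-checkout set services/web common

# Inspect or adjust the selection later:
git sparse-checkout list
```

Everything outside the chosen directories stays out of the working tree, so Git has far fewer files to check out, scan, and diff.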
And that's going to greatly reduce the size of your working directory, which means Git does less when it runs checkout, and that helps.

Utsav Shah: That's a great incentive for developers to keep their code clean and modular, so you're not checking out the world, and eventually it's going to help you in all these different ways. And maybe a final question here: what are you working on right now? What should we be excited about in the next few versions of Git?

Derek Stolee: I'm working on a project this whole calendar year, and I won't be done with it before the calendar year is done, called the sparse index. It's related to sparse checkout, but it's about dealing with the index file. If you go into your Git repository and look at .git/index: that file is a copy of what Git thinks should be at HEAD and what it thinks is in your working directory, so when you run git status, it has walked all those files and recorded when each was last modified, or when it expected it to be modified. For any difference between the index and what's actually in your working tree, Git needs to do some work to sync them up. Normally this is fast, because the index isn't that big, but when you have millions of files, every single file at HEAD has an entry in the index. Even worse, if you have a sparse checkout: even if only 100,000 of those 2 million files are in your working directory, the index itself still has 2 million entries; most of them are just marked with what's called the skip-worktree bit, which says, don't write this file to disk. So for the Office monorepo, this file is 180 megabytes, which means every single git status needs to read 180 megabytes from disk, and with FS Monitor going on, it has to rewrite it with the latest token from FS Monitor, so it has to write it back to disk too.
So this takes five seconds to run a git status, even though it doesn't have much to say; you just have to load this thing up and write it back down. The sparse index says: because we're using sparse checkout in a specific way, called cone mode, which is directory-based rather than file-based, you can say, once I get to a certain directory, I know that none of the files inside it matter. So let's store that directory and its tree object in the index instead, as a kind of placeholder saying: I could recover all the files that would be in this directory by parsing trees, but I don't want them in my index. There's no reason for them: I'm not manipulating those files when I run git add, I'm not manipulating them when I run git commit. Even when I do a git checkout, I don't care; I just want to replace that tree with whatever I'm checking out, whatever it thinks the tree should be. It doesn't matter for the work I'm doing, and for a typical developer in the Office monorepo, this reduces the index size to 10 megabytes. So it's a huge shrinking of the size, and it's unlocking so much potential in terms of performance: our git status times are now 300 milliseconds on Windows, and on Linux and macOS, which we also support for the Office monorepo, it's even faster. That's what I'm working on. The issue is that there are a lot of things in Git that care about the index, and they iterate over it as a flat array of entries, always expecting those entries to be file names. So all these places in the Git codebase need to be updated to say: what happens if I have a directory here? What should I do?
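For a repo already using cone-mode sparse checkout, opting in to the sparse index is a config flag in recent Git versions; the commands below are a sketch and assume a Git new enough to support them.

```shell
# Enable cone mode and the sparse index together:
git sparse-checkout init --cone --sparse-index

# Or toggle the index format directly:
git config index.sparse true

# Index entries excluded from the working tree carry the skip-worktree
# bit, shown with the tag "S" (vs. "H" for normal cached entries):
git ls-files -v | head
```

With the sparse index, those excluded regions collapse into single tree entries instead of millions of skip-worktree file entries, which is where the 180 MB to 10 MB reduction comes from.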
And so all of the pieces of the sparse index format have already been released across two versions of Git, along with some protections: if I have a sparse index on disk, but I'm in a command that hasn't been integrated with it, let me parse those trees to expand it to a full index before I continue, and then at the end I'll write a sparse index back instead of a full one. What we've been going through is integrating the other commands: things like status, add, commit, and checkout are all integrated, and we've got more on the way, like merge, cherry-pick, and rebase. These all need different special care to make them work, but it's unlocking this idea that when you're in the Office monorepo, after this is done, and you're working on a small slice of the repo, it's going to feel like a small repo. And that is going to feel awesome. I'm just so excited for developers to be able to experience that. We have a few more integrations we want to get in there, so that we can release it and feel confident that users are going to be happy. The issue is that expanding to a full index is more expensive than just reading the 180 megabytes from disk: if I already have the full index in that format, reading it is faster than parsing trees to reconstruct it. So we want to make sure we have enough integrations that most scenarios users hit are a lot faster, and only a few they use occasionally get a little slower. Once we have that, we can be very confident that developers are going to be excited about the experience.

Utsav Shah: That sounds amazing. The index already has so many features, like the split index and the shared index. I still remember trying to open a Git index in Vim to understand it, and it just shows you it's a binary format. This is great.
And do you think at some point, if you had all the time and a team of 100 people, you'd want to rewrite Git in a way that was aware of all of these different features, layered so that the different commands didn't have to think about these operations individually, since Git could present a view of the index rather than having each command deal with all of these things itself?

Derek Stolee: I think the index, because it's a sorted list of files, and people want to do things like replace a few entries or scan them in a certain order, would benefit from being replaced by some sort of database; even just SQLite would be enough. People have brought that idea up, but this idea of a flat array of in-memory entries is so ingrained in the Git codebase that it's just not possible. Doing the work to layer an API on top that bridges the flat array and something like SQL is just not feasible: it would disrupt users, it would probably never get done, and it would just cause bugs. So I don't think that's realistic, but if we were redesigning it from scratch, and we weren't in a rush to get something out fast, we would be able to take that approach. For instance, with the sparse index, if I update one file, we still rewrite the whole index; that's something I still have to do, it's just smaller now. If I had something like a database, we could just replace that one entry, and that would be a better operation, but the index just isn't built for that right now.

Utsav Shah: Okay. And if you had one thing that you would change
Hoje recebemos Matheus Ferrareze para falar de treinamento com eletroestimulação de corpo inteiro, também conhecido como EMS Training O Matheus é cofundador e diretor técnico da Perforce, mestre em Ciências do Movimento Humano e doutorando em Eletroestimulação Neuromuscular de Corpo Inteiro pela UFRGS. Ele é especialista no assunto e ministra cursos para todo o Brasil. Inclusive esse EP é praticamente um curso completo para vocês que tem interesse em iniciar nessa área. Para entrar em contato acesse a www.perforce.com.br @perforcebrasil ---------------------------------------------- Siga a basquete lab no @basquete.lab Adquira nossos programas de treino através do www.basquetelab.com.br
I welcome Norman Morse, Triage Engineer at Perforce Software. We begin by discussing COVID-19’s impact, and his current role helping game developers. Hear thoughts on the importance of git and Perforce, how a previous job led to his current one, and the challenge of layoffs. Listen to his origin story around doing kernel development, building a team, and getting absorbed into EA. Learn his theory for success, going door-to-door for a job, teaching himself languages, and the importance of Linux and enthusiasm. We then talk about the importance of GDC and social skills, Toastmasters, studying NLP, along with learning technical skills. Hear about working in Unity back in 2008, supporting P4, and how GitHub is an engineer’s resume. Learn about favorite games he worked on, thoughts on the future, concerns about how studios fill roles, ageism, sharing a game engine, and crunching. We then wrap up talking about triple-A dev concerns, missing co-workers during COVID-19, commuting, a bad air conditioner, connecting with people, engineering tips, and how the industry is finally evolving. Bio: Norman Morse is an Engineer and 25+ year veteran of the game industry who’s worked both within game studios and tool companies. EA, Sega, Crystal Dynamics, and Stormfront Studios are some of the bigger game companies he’s been at along with many others, plus ten years as a Triage Engineer at Perforce Software. He’s also a life-long learner and self-taught engineer, Master Practitioner in Neurolinguistic Programming, Toastmasters advocate, and personal coach. Show Links: Perforce Swarm git WorldsAway Stormfront Studios Quake GitHub No Man’s Sky Surviving Mars Perforce Software JetBrains Connect Links: Norman on Twitter Game Dev Advice: *New: Game Dev Advice Patreon - please help support if you find this useful *Game Dev Advice Twitter *Game Dev Advice email *Game Dev Advice website *Level Ex website - we’re hiring *Game Dev Advice Hotline: (224) 484-7733, give a call! 
*Subscribe and go to the website full show notes with links Learn more about your ad choices. Visit podcastchoices.com/adchoices
Everyone wants the freshest data, and why not? Yesterday's information can still provide value, and last year's as well, but there's nothing so powerful as knowing what's happening right now. That's why streams of data are gaining tremendous traction. Apache Kafka is everywhere these days, as are orchestration platforms that leverage it. And Kafka isn't the only game in town. How can your company stay up to speed? Check out this episode of DM Radio to find out! Host @eric_kavanagh will interview several industry visionaries, including Neil Barton, CleanData; Shawn Rogers, TIBCO; and Justin Reock of Perforce.
Narrated by: Felpata_Lupin Advisories: Violence Summary: He will not think of things that don't end, nor things that could have been. He will do what must be done. https://fanfictalk.com/archive/viewstory.php?sid=2749
Espresso Talks welcomes Chief Evangelist of Perfecto.io, Eran Kinsbruner. Author of the recently published Accelerating Software Quality: Machine Learning and Artificial Intelligence (https://www.amazon.co.uk/dp/B08FKW8B9B/ref=cm_sw_r_tw_dp_x_736UFbEJ6VMVC).In this episode, Eran explores the future of mobile working with next-generation industry trends and insights.If you want to stay ahead of the curve, then do not miss this episode, featuring the Digital Mobile Index (Mobile & Web Test Coverage Index | Perfecto by Perforce).Make sure you register now for the upcoming Virtual Community Days 2020 from December 1st to 4th which Eran will be the opening keynote on Friday! Massive thanks to the team at Perforce.com who will be gold sponsors for this event!
Perforce is an adverb that is used to express necessity. Our word of the day comes directly from Middle English, meaning ‘by force.’ When we do something perforce, we do it because we’re forced to do so out of social or practical necessity. For example: When I asked Eric how things were going, I was doing so perforce, not because I wanted an answer. But Eric gave me a forty-five minute update on exactly how things were going.
"Will the unfavorable regulatory environment permit telehealth to flourish? Perforce we’re beginning to see a relaxation of restrictions that have hitherto obstructed progress. Recently, federal officials approved interstate licensing, thereby prompting greater telehealth conversion, utilization, and expansion. Medicare’s 1135 Waiver is also encouraging, and, in as much as it serves the same ends, the Drug Enforcement Administration’s leave to prescribe via telemedicine without a prior in-person meeting is a similarly promising development. In light of circumstances, anything that might reduce cost, improve delivery, and wrest control from bloated, dysfunctional health care systems is viable." David Hanekom is an internal medicine physician. He shares his story and discusses his KevinMD article, "Telehealth is the future but it is obscured by a dismal present." (https://www.kevinmd.com/blog/2020/07/telehealth-is-the-future-but-it-is-obscured-by-a-dismal-present.html)
Brad Hart is the Chief Technology Officer at Perforce & Johan Karlsson is a Senior Consultant there and we discussed a survey that Perforce recently conducted about the State of the industry. Check out the full survey here ● https://www.perforce.com/resources/vcs/game-development-report ► Check out the video version: https://youtu.be/_eR1x5GhqXs ···················································································· http://gameschoolonline.com/ ···················································································· ● Join our Discord: https://discord.com/invite/sJtGmpV ● Website: http://gamedevunchained.com ●Twitter: https://twitter.com/BLUchamps ● Patreon: https://www.patreon.com/bluchamps Every week Brandon talk to game developers on how to be successful.
Working in game development with a background as a researcher allows one to compare two very different environments: a large particle physics laboratory and a small game development studio that creates games leaning towards artistic expression. Looking at the parallels and differences between these two environments lets us cover topics like automation, open source, and tool reusability. ► Check out Perforce: https://bit.ly/2N1spVc ► Check out Unity: https://bit.ly/3e5J2eA ► Join the Character Art Jam and get FREE lessons! http://gameschoolonline.com/front-line-defense ► Check out Game School Online: https://beta.gameschoolonline.com/bundles/ApprenticeClub ► Check out the video version: https://www.youtube.com/watch?v=ZT5eoklsuyY ···················································································· https://github.com/jlingema ···················································································· Watch live 11 AM PST Tuesdays on https://www.twitch.tv/blu_champs ♥ Subscribe: http://www.youtube.com/bluchamps?sub... ● Join our Discord: https://discord.com/invite/sJtGmpV ● Website: http://gamedevunchained.com ● Twitter: https://twitter.com/BLUchamps https://twitter.com/Wadarass ● Patreon: https://www.patreon.com/bluchamps Every week Brandon talks to game developers about how to be successful. Give us a rating on iTunes: apple.co/2IKxTmU
This session will be a moderated conversation about the ever-shifting landscape of visual art in the entertainment industry (games included)! Through the lens of experienced professionals, the hope is to show aspiring artists how to avoid pitfalls, reveal the common struggles of surviving as an artist in entertainment, and offer some perspective on staying relevant for future creative projects. ► Check out Perforce: https://bit.ly/2N1spVc ► Check out Unity: https://bit.ly/3e5J2eA ► Join the Character Art Jam and get FREE lessons! https://app.rupie.io/event/a27Rqw6mNB/overview ► Check out Game School Online: https://beta.gameschoolonline.com/bundles/ApprenticeClub ► Check out the video version: https://www.youtube.com/watch?v=M3imrvnuoNg ···················································································· https://www.artstation.com/kincept http://kincept.co/ ···················································································· Watch live 11 AM PST Tuesdays on https://www.twitch.tv/blu_champs ♥ Subscribe: http://www.youtube.com/bluchamps?sub... ● Join our Discord: https://discord.com/invite/sJtGmpV ● Website: http://gamedevunchained.com ● Twitter: https://twitter.com/BLUchamps https://twitter.com/Wadarass ● Patreon: https://www.patreon.com/bluchamps Every week Brandon talks to game developers about how to be successful. Give us a rating on iTunes: apple.co/2IKxTmU
This talk will be a personal and technical roadmap of a game developer in Eastern Europe, where resources and career opportunities in game development are scarce. ► Check out Perforce: https://bit.ly/2N1spVc ► Check out Unity: https://bit.ly/3e5J2eA ► Check out Game School Online: https://beta.gameschoolonline.com/bundles/ApprenticeClub ► Check out the video version: https://www.youtube.com/watch?v=lt-GvPO10gE ···················································································· https://www.artstation.com/igorpuskaric https://www.youtube.com/channel/UCyRH... ···················································································· Watch live 11 AM PST Tuesdays on https://www.twitch.tv/blu_champs ♥ Subscribe: http://www.youtube.com/bluchamps?sub... ● Join our Discord: https://discord.com/invite/sJtGmpV ● Website: http://gamedevunchained.com ● Twitter: https://twitter.com/BLUchamps https://twitter.com/Wadarass ● Patreon: https://www.patreon.com/bluchamps Every week Brandon talks to game developers about how to be successful. Give us a rating on iTunes: apple.co/2IKxTmU
This talk will share information about a world outside “traditional” game development. Medical gaming, simulation, training, mental health, and numerous other excellent opportunities are available and growing. Learn about these options, the pros and cons, and whether any are right for your game development career. ► Check out Perforce: https://bit.ly/2N1spVc ► Check out Unity: https://bit.ly/3e5J2eA ► Check out the video version: https://youtu.be/eVlDn_zpuhk ···················································································· https://www.instagram.com/collinfogel/ http://collinfogel.com/ ···················································································· Watch live 11 AM PST Tuesdays on https://www.twitch.tv/blu_champs ♥ Subscribe: http://www.youtube.com/bluchamps?sub... ● Join our Discord: https://discord.com/invite/sJtGmpV ● Website: http://gamedevunchained.com ● Twitter: https://twitter.com/BLUchamps https://twitter.com/Wadarass ● Patreon: https://www.patreon.com/bluchamps Every week Brandon talks to game developers about how to be successful. Give us a rating on iTunes: apple.co/2IKxTmU
Welcome to Episode 83. Main Topic: Interview with Chuck Gehman @charlesgehman. He’s got this book: https://www.manning.com/books/aws-cloudformation-in-action Let’s talk about the contents of the book. It’s not done yet, so let’s talk about MEAP, the Manning Early Access Program. Scheduled for completion “early 2021”. We’ve got a discount code! Podironsys20 We also have preview codes, good for 2 months of access. Perforce, version control: http://www.perforce.com Infrastructure as Code: let’s talk about what that is exactly, what sort of tools are usually leveraged here, and what about CloudFormation? Automation pipeline. Announcements: Patreon update: S0l3mn, Erwin, Trooper_Ish, LinuXsys666, gimpyb, Ryan, tuxpreacher, Mark, DeMentor at PowerShellOnLinux.com, Jon, Marc, Julius, Andi J, Charles, 22532. Reviews. Chat [unclemarc]: It’s the Steam Summer Sale, so I’ll probably buy some more games I won’t play. I did buy a HOTAS, which makes Elite Dangerous much more fun... and playing in VR is sick! Also, Deep Rock Galactic gets 5 out of 5 drunken violent dwarves in space. It’s a must-buy if you like co-op shooters. Or drunken space dwarves. Charles fell off his bike… Nate’s jeep is still broken. News: https://9to5mac.com/2020/06/24/former-intel-engineer-says-skylake-problems-were-turning-point-for-apples-arm-mac-transition/ https://en.wikipedia.org/wiki/Mac_transition_to_Intel_processors https://www.wcjb.com/2020/06/23/google-introduces-fact-checking-to-image-search/ https://www.microsoft.com/security/blog/2020/06/23/microsoft-continues-to-extend-security-for-all-with-mobile-protection-for-android/ https://analyticsindiamag.com/what-is-aws-snowcone/ https://www.zdnet.com/article/arm-and-linux-take-supercomputer-top500-crown/ https://www.cnn.com/2020/06/23/tech/ios-14-features-apple-android/index.html I should have dropped this from the news. Watch us live on the 2nd and 4th Thursday of every month! Subscribe and hit the bell!
https://www.youtube.com/IronSysadminPodcast Slack workspace: https://www.ironsysadmin.com/slack Find us on Twitter and Facebook! https://www.facebook.com/ironsysadmin https://www.twitter.com/ironsysadmin Subscribe wherever you find podcasts! And don't forget about our Patreon! https://patreon.com/ironsysadmin Intro and outro music credit: Tri Tachyon, Digital MK 2, http://freemusicarchive.org/music/Tri-Tachyon/
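Episode 83 above digs into Infrastructure as Code and AWS CloudFormation. As a minimal sketch of the idea, here is a CloudFormation template body assembled in Python; the resource and bucket names are illustrative assumptions, not anything from the episode:

```python
import json

# A minimal CloudFormation template declaring a single S3 bucket.
# "PodcastAssetsBucket" and the bucket name are hypothetical examples.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal Infrastructure-as-Code example: one S3 bucket.",
    "Resources": {
        "PodcastAssetsBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "my-example-podcast-assets"},
        }
    },
    "Outputs": {
        # Expose the bucket ARN so other stacks/tools can reference it.
        "BucketArn": {"Value": {"Fn::GetAtt": ["PodcastAssetsBucket", "Arn"]}}
    },
}

# This JSON string is what you would hand to CloudFormation
# (e.g. via the AWS CLI or boto3) to create or update the stack.
template_body = json.dumps(template, indent=2)
print(template_body)
```

The point of Infrastructure as Code is that this declaration, not a sequence of console clicks, is the source of truth: it can be versioned (in Perforce or Git), diffed, and replayed to recreate the environment.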
This talk will share information about a world outside “traditional” game development. Medical gaming, simulation, training, mental health, and numerous other excellent opportunities are available and growing. Learn about these options, the pros and cons, and whether any are right for your game development career. ► Check out Parsec for Team: https://bit.ly/3h7BEkQ ► Check out Perforce: https://bit.ly/2N1spVc ► Check out Unity: https://bit.ly/3e5J2eA ► Check out the video version: https://youtu.be/ib_MegZm5-4 ···················································································· https://www.gamedevadvice.com/ https://www.linkedin.com/in/jpodlasek/ ···················································································· Watch live 11 AM PST Tuesdays on https://www.twitch.tv/blu_champs ♥ Subscribe: http://www.youtube.com/bluchamps?sub... ● Join our Discord: https://discord.com/invite/sJtGmpV ● Website: http://gamedevunchained.com ● Twitter: https://twitter.com/BLUchamps https://twitter.com/Wadarass ● Patreon: https://www.patreon.com/bluchamps Every week Brandon talks to game developers about how to be successful. Give us a rating on iTunes: apple.co/2IKxTmU
The Byte - A Byte-sized podcast about Containers, Cloud, and Tech
Episode 66. **DISCOUNTS** 40% discount at Manning Publications, use code Podbyte20. Charles Gehman has been building applications on AWS since 2012. He has been an architect, CTO, technical blogger, and developer for many years. He holds the AWS Certified Developer and AWS Certified Solutions Architect certifications. Manning book: https://www.manning.com/books/aws-cloudformation-in-action Website: https://www.chuckgehman.com/ Twitter: https://twitter.com/charlesgehman Based in Minneapolis, originally from New York. Works at Perforce: https://www.perforce.com/ Perforce is one of the original version control systems and became a platform for DevOps tools. Old motto: “Version everything.” Perforce is the main version control system used in the video game industry, and 8 out of 10 semiconductor manufacturers use Perforce. The one tech guy in marketing. Cloud consultant. MEAP program: manning.com/meap-program
Brad Hart is the CTO of Perforce, the leading version control software for game companies big and small. He helps organizations, especially now, keep teams productive and connected. We cover how the pandemic is changing the industry and forcing some permanent changes. For the video version, please visit https://youtu.be/iNpShJ0HUZM This episode is sponsored by CORE. Enter the Retro Game Jam with Core and win a 2070 RTX NVIDIA card. https://www.perforce.com/ Support us on Patreon and get the exclusive weekly episodes of Life Unchained, a behind-the-scenes look at building a startup within game development. To watch future GDU episodes live, go to twitch.tv/blu_champs every Tuesday at 11 AM PST. Grab some merch! Game with me every Thursday @ 8PM PST on the BLU Champs Discord channel. Give us a rating on iTunes: apple.co/2IKxTmU
Today I'm joined by Eran Kinsbruner—Chief Evangelist, Author, Public Speaker and Blogger at Perfecto—for an engaging discussion on the latest industry trends for 2020 and how to test next-generation devices.
A big part of being a game developer is being in between big projects. Sometimes the in-between phase isn't just a few days or months, and it can feel like you're losing steam if you're not kicking off the next big game. Regardless of the timing, there are always small things you can do to improve yourself and your work if you need or want to use that time, without instantly embarking on another years-long development cycle. This talk will explore some ways Nina Freeman, an indie game designer, has dealt with these challenges personally. She'll share some ways she found to make small improvements to her career, workflow, and creative momentum, without actually starting a new game project (until she was really ready!). For the video version, please visit https://youtu.be/F1tmVZJ9Pks www.Perforce.com http://ninasays.so/beachdate/ https://www.twitch.tv/hentaiphd Follow Nina on Twitter: https://twitter.com/hentaiphd Support us on Patreon and get the exclusive weekly episodes of Life Unchained, a behind-the-scenes look at building a startup within the game industry. To watch future GDU episodes live, go to twitch.tv/blu_champs every Tuesday at 11 AM PST. Grab some merch! Game with me every Thursday @ 8PM PST on the BLU Champs Discord channel. Give us a rating on iTunes: apple.co/2IKxTmU
In this episode we're joined by Jordi Mon Companys (Senior Product Manager at Códice Software) to tell us about Plastic SCM, a version control system that has been holding its own against Git since 2005. They build the full stack (command line, graphical tools, merge, diff, etc.) and have customers in more than 20 countries, from indie video game teams to corporations with more than 3,000 Plastic licenses. Jordi talks about his role as PM managing Plastic SCM, SemanticMerge, and gmaster (short for Git Master), what differentiates them from other version control systems like Git or Subversion, his process for gathering feedback from developers, why the support team is fundamental to the product, how they compete on price with products like Perforce, and why they decided to integrate with Unity even while competing with core Unity functionality. He also mentions his work as co-organizer of Product Hunt Madrid and why they are so demanding with speakers, to keep the focus on product lessons. An episode full of interesting advice for anyone at a SaaS company. We recommend: Jordi's social profiles: Twitter: https://twitter.com/mordodemaru LinkedIn: https://www.linkedin.com/in/jordimoncompanys/ Links: The history of Códice Software, who they sell to, and where they're headed: https://www.xataka.com/empresas-y-economia/codice-software-empresa-espanola-que-gracias-a-plastic-scm-compite-omnipresente-git Plastic SCM: https://www.plasticscm.com/ Documentation for the Unity integration with Plastic SCM: https://docs.unity3d.com/es/530/Manual/plasticSCMIntegration.html Product Hunt Madrid: https://www.eventbrite.es/o/product-hunt-madrid-20230066080 How to organize a Product Hunt event in your city.
Lessons from the lead organizer of PH Toronto: https://medium.com/@darynakulya/how-to-organize-a-product-hunt-meetup-in-your-city-45e0b979afdb SurveyMonkey: https://es.surveymonkey.com/ The state of startups in Spain: https://youtu.be/0Xt9FFo74Pk
Brad Hart is the Chief Technology Officer for Perforce. Before getting into the tech industry, Brad had a completely different career in mechanical engineering in aviation, and his itch for working with computers turned from a hobby into selling his first tech company. This episode is brought to you by Quixel. Go to Quixel.com and enter code GDU10 for 10% off the first year! Support us on Patreon! Grab some merch! Give us a rating on iTunes: apple.co/2IKxTmU
Guest Stefano "Kunos" Casillo! Part two: we talk about the move to Unreal Engine, ray tracing, and shortcuts! Full episode notes: https://gameloop.it/2018/10/12/gameloop-podcast-gl16b-kunos-e-assetto-corsa-parte-2-aspetti-tecnici/
It was announced yesterday (https://www.prnewswire.com/news-releases/clearlake-capital-backed-perforce-software-to-acquire-perfecto-mobile-300726603.html) that Perforce has acquired Perfecto Mobile. Perfecto is one of the leading players in the mobile app testing space. Perforce was originally a version control solution, but with five recent acquisitions it is branching out into a broader DevOps company. I sat down with the CEOs of Perfecto and Perforce to discuss the reasoning behind the merger, as well as the broader DevOps market trends at play here. Great conversation, have a listen!
"Just as a mountaineer has their safety line, a game team has its revision control system. If code gets messed up or something goes FUBAR, you have an RCS like GitHub or Perforce that will help you recover, not if accidents happen but when they happen, especially as you start to scale up in development." Blair Leggett leads a small indie studio with his wife called One More Story Games. With over 20 years in the game industry, they have built their own game engine specifically for authors to create narrative games, and have published 7 games in the last 3 years while building the engine. Their work includes Danielle’s Inferno, Skycarver, and the upcoming Shakespeare’s Landlord. Connect: Website https://onemorestorygames.com/ Game Engine Website http://storystylus.com/ Game Shakespeare's Landlord http://www.lilybard.com/ Twitter @1MoreStoryGames Youtube https://www.youtube.com/user/OneMoreStoryGames Resources: Unsung Heroes of the Games Industry: Tools Programmers; GitHub; Perforce https://www.perforce.com/ Subscribe to the podcast: iTunes | Pocket Casts | CastBox | Other Give me a Rating & Review Thank you for listening!
Introduction: Yannick and Benjamin welcome Xavier Gouchet, author of AutoMergeTool, to talk about Git and conflicts during merges. Direct download. Show notes: 1:23″ – Workwell: https://www.workwell.io/ 2:30″ – Deezer: https://www.deezer.com/fr/ 2:50″ – Dropbox: https://www.dropbox.com/ 3:43″ – CVS: https://fr.wikipedia.org/wiki/Concurrent_versions_system 3:44″ – SourceSafe: https://fr.wikipedia.org/wiki/Microsoft_Visual_SourceSafe 3:47″ – Mercurial: https://fr.wikipedia.org/wiki/Mercurial 3:52″ – Perforce: https://www.perforce.com/ 3:53″ – SVN […]
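The episode above is about Git merge conflicts and Xavier Gouchet's AutoMergeTool, which automates away the easy ones. As a rough, hypothetical illustration of that idea (not AutoMergeTool's actual algorithm), this sketch collapses only the genuinely trivial case where both sides of a conflict made the identical change:

```python
import re

# Matches one Git conflict block: <<<<<<< ours ... ======= ... >>>>>>> theirs
CONFLICT = re.compile(
    r"<<<<<<<[^\n]*\n(.*?)=======\n(.*?)>>>>>>>[^\n]*\n",
    re.DOTALL,
)

def resolve_trivial(text):
    """Collapse conflict blocks where 'ours' and 'theirs' are identical.

    Non-trivial conflicts (where the two sides differ) are left
    untouched for a human to resolve.
    """
    def fix(match):
        ours, theirs = match.group(1), match.group(2)
        return ours if ours == theirs else match.group(0)
    return CONFLICT.sub(fix, text)

conflicted = (
    "line 1\n"
    "<<<<<<< HEAD\n"
    "same change\n"
    "=======\n"
    "same change\n"
    ">>>>>>> feature\n"
    "line 3\n"
)
print(resolve_trivial(conflicted))  # the markers disappear
```

Real merge tools work on the three-way diff (base, ours, theirs) rather than on the rendered conflict markers, but the spirit is the same: script the boring resolutions, escalate the interesting ones.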
On this episode, Ben, Mo, Daniel, and Jesse talk about what to expect when you're expecting to code. Listen in as they identify and discuss many of the essential, non-programming aspects of developer jobs. Topics include communication, source control, working with non-technical people, continual improvement, and more! If you are currently coding, wanting to code, or striving to be better at coding, this episode is for you. Things Mentioned: Source Control: Team Foundation Server (TFS), Perforce, Git; The Pragmatic Programmer; Gource; AutoHotkey; Practical Object-Oriented Design in Ruby; It Depends, 010: Should I Learn This New Technology?; It Depends, 006: Imposter Syndrome; Python; Swift Playgrounds. For more information, check out our website at clearfunction.com. Follow us on Twitter at @clearfunction.
I talked with Naoki Hiroshima about 4chan, the FBI, Google, changing jobs, Heroes Reborn, and more. Show Notes: News - 4chan; FULL CIRCLE by hiroyuki; Christopher Poole (moot); @N hijack; Google Is 2 Billion Lines of Code; Google Brotli: a new compression algorithm for the internet; Perforce; Joining Fastly; Heroes Reborn
A brief podcast on how to get started with this extremely popular source code manager. What is Git? Git is an open source distributed version control system. It is fast and small, making it an extremely popular SCM (source code manager). Some of its competitors are Subversion (SVN), CVS, Perforce, and ClearCase. […]
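The episode describes Git as a fast, small, distributed SCM. One concrete piece of how it achieves that: every file ("blob") is stored content-addressed, keyed by the SHA-1 of a short header plus the file's bytes. A minimal sketch reproducing the documented `git hash-object` behavior:

```python
import hashlib

def git_blob_hash(content: bytes) -> str:
    """Compute the object ID Git would assign to a file's contents.

    Git hashes the header b"blob <size>\\0" followed by the raw bytes;
    because the ID depends only on content, identical files dedupe to
    a single stored object in every clone.
    """
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# The classic check: `echo "hello world" | git hash-object --stdin`
print(git_blob_hash(b"hello world\n"))
# -> 3b18e512dba79e4c8300dd08aeb37f8e728b8dad
```

The same content-addressing scheme extends to trees and commits, which is what lets every clone hold the full history cheaply and verify it by hash.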
Beshrew that heart that makes my heart to groan
For that deep wound it gives my friend and me;
Is’t not enough to torture me alone,
But slave to slavery my sweet’st friend must be?
Me from my self thy cruel eye hath taken,
And my next self thou harder hast engrossed;
Of him, my self, and thee I am forsaken,
A torment thrice threefold thus to be crossed.
Prison my heart in thy steel bosom’s ward,
But then my friend’s heart let my poor heart bail;
Whoe’er keeps me, let my heart be his guard;
Thou canst not then use rigour in my jail.
And yet thou wilt, for I being pent in thee,
Perforce am thine, and all that is in me.
William Shakespeare. Presenters: Mark Chatterley, Thierry Heles. The post Sonnet 133: Beshrew that heart that makes my heart to groan appeared first on In Ear Entertainment.
Creating an RSS feed that lists certain defects, features, tasks, or incidents from your database is easy to set up. Just activate a couple of settings in the customer portal and you'll be rolling! OnTime allows you to integrate bugs and features with your Source Control Management solution, such as SourceSafe or Perforce. This week we walk through an example of setting up your SCM tab to work with the Subversion SCM tool.
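The OnTime episode above describes exposing tracked defects as an RSS feed. As a generic sketch of what such a feed looks like on the wire (the defect records, titles, and example.com URLs here are invented for illustration; this is not OnTime's actual output):

```python
import xml.etree.ElementTree as ET

# Hypothetical defect records, standing in for rows from a tracker database.
defects = [
    {"id": 101, "title": "Crash on save", "link": "https://example.com/defects/101"},
    {"id": 102, "title": "Slow startup", "link": "https://example.com/defects/102"},
]

def build_rss(items):
    """Render defect records as a minimal RSS 2.0 feed."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Open Defects"
    ET.SubElement(channel, "link").text = "https://example.com/defects"
    ET.SubElement(channel, "description").text = "Defects filtered from the tracker"
    for d in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = f"#{d['id']}: {d['title']}"
        ET.SubElement(item, "link").text = d["link"]
    return ET.tostring(rss, encoding="unicode")

feed = build_rss(defects)
print(feed)
```

Any feed reader that polls this URL then sees new defects appear as items, which is the appeal: notifications ride on plain, standard XML rather than a proprietary client.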