ASCII: American character encoding standard
A quick episode this week, which includes attacking VS Code with ASCII control characters, as well as a referrer leak and SCIM hunting.

Links and vulnerability summaries for this episode are available at: https://dayzerosec.com/podcast/282.html

[00:00:00] Introduction
[00:00:57] Attacking Hypervisors - Training Update
[00:06:20] Drag and Pwnd: Leverage ASCII characters to exploit VS Code
[00:12:12] Full Referer URL leak through img tag
[00:17:52] SCIM Hunting - Beyond SSO
[00:25:17] Breaking the Sound Barrier Part I: Fuzzing CoreAudio with Mach Messages

Podcast episodes are available on the usual podcast platforms:
-- Apple Podcasts: https://podcasts.apple.com/us/podcast/id1484046063
-- Spotify: https://open.spotify.com/show/4NKCxk8aPEuEFuHsEQ9Tdt
-- Google Podcasts: https://www.google.com/podcasts?feed=aHR0cHM6Ly9hbmNob3IuZm0vcy9hMTIxYTI0L3BvZGNhc3QvcnNz
-- Other audio platforms can be found at https://anchor.fm/dayzerosec

You can also join our discord: https://discord.gg/daTxTK9
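For background on why ASCII control characters make useful attack primitives (a generic sketch of the general idea, not the specific VS Code technique covered in the episode): control characters change how text is *rendered*, not what is *stored*, so what a reviewer sees can differ from what a program receives.

```python
# A carriage return (CR, ASCII 0x0D) moves the cursor back to column 0
# in many terminals, letting later text overwrite what was shown first.
payload = "harmless.txt\r" + "evil"
print(repr(payload))  # repr() makes the hidden \r visible: 'harmless.txt\revil'

# The escape character (ESC, ASCII 0x1B) starts ANSI sequences; SGR code 8
# ("conceal") hides the enclosed text in terminals that support it.
concealed = "ls \x1b[8m&& rm -rf ~\x1b[0m"
print(repr(concealed))

# Defensive habit: flag untrusted strings containing control characters
# other than ordinary tabs and newlines.
has_control = any(ord(c) < 32 and c not in "\t\n" for c in payload)
print(has_control)  # True
```

The takeaway is that any tool that displays attacker-controlled text verbatim (a terminal, an editor pane, a log viewer) inherits these rendering tricks unless it escapes control characters first.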
BROOKLYN WRITER WINS GRAND PRIZE AT HOLLYWOOD AWARDS EVENT

Featured in New Release

HOLLYWOOD - Brooklyn, New York writer Randyn Bartholomew is the Grand Prize Winner in the L. Ron Hubbard Writers of the Future Contest, earning him the Golden Pen Award trophy and a $5,000 cash prize. His winning story, "Ascii," is published in the international bestselling anthology L. Ron Hubbard Presents Writers of the Future Volume 41, which will be officially released on April 22nd. Mr. Bartholomew was honored along with the other winners of the Writers and Illustrators of the Future Contests on April 10th at the Taglyan Complex in Hollywood, California.

Born in New York state on Pi Day, Randyn grew up in the nearby New Jersey towns of Maplewood and Summit. Although he majored in math at Cornell, he has since switched gears to become a Brooklyn-based freelance writer of science journalism, ghostwriting, copywriting, and, whenever possible, fiction. His articles have appeared in Scientific American, Salon, and The Washington Post Magazine, among others. He enjoys running in Prospect Park, reading old books and new, and finding free lectures to attend. While he reads eclectically, his main love is science fiction. When people frown at this preference (or, much worse, smile politely) he calls in the cavalry and reminds himself of the Ray Bradbury quote, "I have never listened to anyone who criticized my taste in space travel, sideshows or gorillas. When this occurs, I pack up my dinosaurs and leave the room." He's been using a flip phone for the last four years.

The Contest, one of the most prestigious writing and illustrating competitions in the world, is currently in its 42nd year and is judged by some of the premier names in speculative fiction.

WASHINGTON ARTIST HONORED AT HOLLYWOOD AWARDS GALA

Featured in New Release

HOLLYWOOD - Washington, Utah artist Ms. Tremani Sutcliffe is a winner in the L. Ron Hubbard Illustrators of the Future Contest and was honored along with ten other artists and twelve writers at the Taglyan Complex in Hollywood, California on April 10th. Her art is published alongside the other winners' stories and art in the international bestselling anthology L. Ron Hubbard Presents Writers of the Future Volume 41, which will be officially released on April 22nd, 2025.

Tremani Sutcliffe, born in 1990 in Provo, Utah, spent her early years exploring the rugged landscapes of middle-of-nowhere Arizona, where hiking in desert mountains and catching rattlesnakes ignited her adventurous spirit. Her passion for art stemmed from her love of books and the fantastical covers that inspired her imagination. In true bookworm fashion, her artistic journey began at the local library, where she immersed herself in art instruction books, laying the foundation for what was to come. Through daily practice, the relentless pursuit of new skills, and mentorship from established artists, her commitment to learning new methods has continuously expanded her artistic repertoire. Tremani views art as a fusion of technique and creativity that brings beauty and meaning to life. After spending most of her young life drawing and painting with watercolors, she expanded her skill set to include oils. Although she also began working with acrylics, she quickly decided they must have been invented by an angry dude with horns and a pitchfork for the sole purpose of making her life miserable... and decided to develop her digital painting skills instead.

The Illustrators of the Future Contest judges include Bob Eggleton (11 Chesley Awards and 9 Hugo Awards), Larry Elmore (Dungeons & Dragons book covers), Echo Chernik (graphic designs for major corporations, including Celestial Seasonings tea packaging), Rob Prior (art for Spawn, Heavy Metal comics, and Buffy the Vampire Slayer), and Ciruelo (Eragon Coloring Book).
Recently, some tech magazines ran headlines claiming that ChatGPT embeds "invisible characters" in the text it generates. What is that about? Are there really invisible characters? In episode 44 of Informatik für die moderne Hausfrau, we look at how computers manage to store, interpret, and correctly display characters (letters, digits, and so on). To that end, we take a closer look at the principle of character encoding and learn the difference between a character encoding and a character set. We examine two of the best-known character encodings, ASCII and Unicode, and explain what the so-called control characters are needed for.

You can read the article on ChatGPT and the supposedly invisible characters here: https://t3n.de/news/openai-zeichen-chatgpt-texte-1683993/
The article is based on this report by the AI company Rumi: https://www.rumidocs.com/newsroom/new-chatgpt-models-seem-to-leave-watermarks-on-text
All information about Unicode can be found here: https://home.unicode.org/
More about space characters:
- https://unicode.org/charts/collation/chart_Whitespace.html
- https://www.compart.com/de/unicode/category/Zs
- https://de.wikipedia.org/wiki/Leerzeichen
Incidentally, there is also a programming language called Whitespace, which is based on the use of different kinds of spaces and (as the name suggests) whitespace: https://de.wikipedia.org/wiki/Whitespace_(Programmiersprache)
An online tool that shows you the Unicode control characters in your text can be tried out here: https://www.soscisurvey.de/tools/view-chars.php

This episode refers to five other episodes:
- Episode 21 - Secure data transmission and how authoritarian states (can) subvert it - interview with Alexandra Dirksen
- Episode 26 - More data than allowed: How buffer overflows can influence (election) systems
- Episode 31 - Back to the past, or how computers represent time
- Episode 32 - Adversarial attacks: How AI systems can be tricked
- Episode 37 - Steganography: Hidden messages that can cause harm

At the beginning of the episode I mentioned Stephanie Reiner's study on AI transformation - you can take part in the online survey here: https://sreineruni.limesurvey.net/271658?lang=de

All information about the podcast can be found on its website, https://www.informatik-hausfrau.de. To get in touch, feel free to email me at mail@informatik-hausfrau.de or reach out via social media. On Instagram and Bluesky, the podcast can be found under the handle @informatikfrau (or @informatikfrau.bsky.social). If you like this podcast, please subscribe and leave a positive rating or a short review to help it gain more visibility. You can write reviews on Apple Podcasts, for example, or on panoptikum.social. If you would like to listen to the podcast ad-free or support its production financially, you can do so via the platform Steady; more information is available here: https://steadyhq.com/de/informatikfrau If you would like to 'throw something in the hat' another way, you can do so (without registration) via Ko-fi: https://ko-fi.com/leaschoenberger

This podcast is supported by the Kulturbüro of the City of Dortmund.
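To make the episode's point concrete: the "invisible characters" under discussion are ordinary Unicode code points that most renderers draw with zero or unusual width. A small sketch of my own (not from the episode; the function name is made up) surfaces them by their official Unicode names:

```python
import unicodedata

def find_invisible(text):
    """Return (index, code point, Unicode name) for every character
    outside the printable ASCII range 0x20-0x7E."""
    return [
        (i, f"U+{ord(ch):04X}", unicodedata.name(ch, "<unnamed>"))
        for i, ch in enumerate(text)
        if not 0x20 <= ord(ch) <= 0x7E
    ]

# A zero-width space and a narrow no-break space hide between plain words:
sample = "Hello\u200bworld\u202f!"
for pos, code, name in find_invisible(sample):
    print(pos, code, name)
# 5 U+200B ZERO WIDTH SPACE
# 11 U+202F NARROW NO-BREAK SPACE
```

Tools like the view-chars page linked above do essentially this: they walk the string code point by code point and report anything that has no visible glyph of its own.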
Bytes and Strings

April 18, 2025, Jochen

In this episode, we take a look at the next chapter of "Fluent Python", on "Bytes and Strings". Johannes explains the most important concepts and why UTF-8 is almost always the right choice.
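The core distinction the chapter draws can be shown in a few lines (my own minimal sketch, not code from the book): a `str` is a sequence of Unicode code points, while `bytes` is the raw encoded form, and UTF-8 is the bridge between them.

```python
text = "café"

# Encoding turns text into bytes; UTF-8 represents "é" with two bytes.
data = text.encode("utf-8")
print(len(text))  # 4 code points
print(len(data))  # 5 bytes
print(data)       # b'caf\xc3\xa9'

# Decoding with the wrong encoding silently produces mojibake ...
print(data.decode("latin-1"))  # 'cafÃ©'

# ... while round-tripping with UTF-8 restores the original text.
assert data.decode("utf-8") == text
```

This is why "UTF-8 is almost always the right choice": it round-trips all of Unicode, is ASCII-compatible for the first 128 code points, and the failure mode above only appears when encode and decode disagree.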
SAMPLER & SANS REPROCHES (Radio broadcast)
Playlist N° 1358 - Monday, March 24, 2025 - 20:00-22:00
EBM - SYNTHWAVE - INDUSTRIAL & RELATED MUSIC
GALAXIE RADIO 95.3FM --- www.galaxieradio.fr

[ S&SR pick of the week... SYNTHETISCHE LEBENSFORM "Current Profile" ]

SYNTHETISCHE LEBENSFORM "Mystic Voids" DIG Album: Current Profile (Autoproduction)
ENCEPHALON "Synthskin6d" DIG Album: Automaton All Along (Art Of Fact Records / Looters)
THE WHEELS OF RITUAL "There Was A Time" DIG Compilation: Refusal (Freedom Club / Enfant Terrible)
CRUSH OF SOULS "Psss" DIG Album: Lézire (AVANT! Records)
K.I.F.O.T.H. "Victim (Brain Leisure Cinematic edit)" DIG Compilation: Hydra (Aliens Productions)
ELEVENTH FEAR "Aeterna" DIG Album: Fire Flakes (Neris Records)
PUNX SOUNDCHECK & ASCII.DISKO "Gasoline (Hard Ton Remix)" DIG Single: Gasoline (The Remixes) (Icon Series)
NIGHT RITUALZ "Take Me 2 The Crib" DIG Album: Night Ritualz (Metropolis Records)
ATTRITION "The Alibi (BELLHEAD Remix)" DIG EP: A Permanent View? (Two Gods)
JEROME CHASSAGNARD "Le Tourbillon De La Vie" DIG EP: Asian Breaks (Hymen)
ELEKTROTERAPI "Zlamane Serce (The Witch Said No Remix)" DIG Album: Dialogue (Remixes) (ScentAir Records)
STRONG PRODUCT "Лодка" DIG Album: IV (Autoproduction)
MACHINE COMMAND "Blindspot" DIG Album: Affected (Autoproduction)
GJ106 "Strange VST (Oberst Panizza Remix)" DIG Single: Strange VST (Autoproduction)
BLACK DOT "Insatiable" DIG EP: Love At Glance (Italo Moderni)
ROHN LEDERMAN "Steal The Light (Stefan Netschio Remix)" DIG EP: Steal The Light (Les Disques De La Pantoufle)
SYNTHETISCHE LEBENSFORM "State Changes" DIG Album: Current Profile (Autoproduction)
SOLO ANSAMBLIS "Riedlente (Charlie Remix)" DIG EP: SArmxRD (Solo Ansamblis / Damn Good)
MIKROMETRIK "Corridor" DIG Album: My Hell (Acid Bath Records)
ETHAN FAWKES "Dancefloor EBM" DIG Single: Dancefloor EBM With Remixes (Still Distant Records)
MINUIT MACHINE "Party People" DIG EP: Party People (Synth Religion)
SONO "Fifteen Minutes" DIG Album: Lost Lovers Motel (Sono Music)
ARNAUD REBOTINI "December In G" DIG Album: Winter Sequences (Zend Avesta)

THX TO: ART OF FACT RECORDS / LOOTERS (Sarah), FREEDOM CLUB / ENFANT TERRIBLE (Martijn), AVANT! RECORDS (Andrea), ALIENS PRODUCTIONS (Peter Ryby), ELEVENTH FEAR (Ludovic), METROPOLIS RECORDS (Gary), UTM MUSIC GROUP (Michel), HYMEN RECORDS (Stefan Alt), SCENTAIR RECORDS (Vladimir)

PODCAST:
YOUTUBE: https://www.youtube.com/@SamplerEtSansReproches (YouTube channel - non-stop music, mixes only, plus live & interview reports)
ITUNES: https://podcasts.apple.com/fr/podcast/sampler-sans-reproches/id1511413205
MIXCLOUD: https://www.mixcloud.com/SetSRradio/
PODCLOUD: https://podcloud.fr/studio/podcasts/sampler-et-sans-reproches
DEEZER: https://www.deezer.com/fr/show/1181282
GALAXIE RADIO: http://galaxieradio.fr/ (go to the Sampler & Sans Reproches replay)
AMAZON MUSIC: https://music.amazon.fr/podcasts/9718c2fe-d841-4339-a3e5-82c31d018ed7/SAMPLER-SANS-REPROCHES
HEARTHIS: https://hearthis.at/sampler-sans-reproches/
ARCHIVE.ORG: https://archive.org/download/1358-24032025-192/1358-24032025-192.mp3
In this episode, we dive into the depths of NFT history with a special guest: Archivist (aka @punk3606), a true blockchain detective and specialist in what is now called crypto archaeology. Two years after his first appearance, he returns with new, even deeper discoveries about the origins of NFTs and digital ownership. He offers a complete rereading of NFT history, well beyond the classic narratives that begin with CryptoPunks or CryptoKitties.

What we learn in this episode:
* Namecoin (2011), the second blockchain after Bitcoin, is at the heart of the earliest experiments. That is where the very first non-fungible assets are found, well before Ethereum.
* The first non-fungible asset in history may be D/Bitcoin, registered on Namecoin in April 2011. It was a .bit domain.
* ASCII art minted as early as May 2011 (including the famous "boobies") testifies to a primitive form of crypto art.
* In January 2012, the F**K Yea meme was encoded entirely in a Namecoin name, making it the first on-chain image that could be owned, transferred, and reconstructed.
* The duo of Trent & Masha McConaghy, in collaboration with Vitalik Buterin, laid the groundwork for a protocol to certify digital artworks on the blockchain as early as 2013, well before Ethereum.
* The debate over the "first NFT" is endless, because it depends on the definition chosen: technical, artistic, experimental, commercial...
Question: Name at least two items that must be submitted when adding a “Sequence Listing” after the application filing date. Answer: Adding a “Sequence Listing” after the application filing date involves the submission of: (1) a “Sequence Listing” either as a PDF image file, on physical sheets of paper, or as an ASCII plain text […] The post MPEP Q & A 324: Name items that must be submitted when adding a “Sequence Listing” after the application filing date. appeared first on Patent Education Series.
Steve Yegge's latest rant about the future of "coding", Ethan McCue shares some life altering Postgres patterns, Hillel Wayne makes the case for Verification-First Development, Gerd Zellweger experienced lots of pain setting up GitHub Actions & Cascii is a web-based ASCII diagram builder.
Luke realizes that his relationship has changed significantly over the years. But is he talking about his relationship with sports or his relationship with his girlfriend? Becca will have to listen to find out.
On this week's 'sode, Rockstar is prepared to take on Fortnite, an ex-Bungie dev is a huge creep, NetEase fires everyone, and a bunch of Riblets. And then JD flexes on all of us in our Sound Bits segment. Peterson reviews Dojo Masters.
Whether you say "Happy Honda Days!" or "Merry Chrysler!" you surely know that gift-giving is the reason for the season. That's why we're wrapping up 2024 with our annual peripheral gift exchange! This year we have teamed up with the folks at Dolly Parton HQ to help spread the cheer to every child in America! (*Please note, Dolly Parton is in no way involved with Debate This! and all opinions expressed within this podcast are not endorsed by real life Dolly Parton.*) Kyle is circling back to our first episode of the season. Andrew is optimizing work flows for the children. Matt is committing crime. We're running a Listener Survey for the first time ever! We'd love your input. If you have 5 minutes, check it out here: https://forms.gle/AQBnkg1C83hkkap28 The title of this week's episode was selected by our Patrons in our Discord Community! If you want to help us choose the next one, join our discord, and/or get some bonus content, become part of #ButtThwompNation at patreon.com/debatethiscast! Have you seen our Threads? threads.net/debatethiscast Have you seen our Instagram? instagram.com/debatethiscast Want to send us an email? debatethiscast@gmail.com Hey do you like our Twitter content? Would it bum you out if we left the platform? Let us know because we're thinking about it. MERCH! We have that! Right now you can go on the internet and order things that say Debate This! On them! All you need to do is head to MerchThis.net and give us your money! Ever wanted socks with the DT! logo on them? Well now you can get em! One more time that website is MerchThis.net! Properties we talked about this week: Rez, Steel Battalion, Buzz Trivia, Dolly Parton, Fortnite, Roblox, Mech Assault, Armored Core, Buzz Trivia Buzzer, Mega Jockey 9000, ASCII Trance Vibrator Music for Debate This! is provided by composer Ozzed under a creative commons license. Check out more of their 8-bit bops at www.ozzed.net!
This week, we cover OpenCost's big incubation milestone, CNCF's graduation rules, and a flurry of tech acquisitions. Plus, some thoughts on teaching kids about passwords.

Watch the YouTube Live Recording of Episode 493 (https://www.youtube.com/watch?v=nWPR3HLPjfI)

Runner-up Titles
- Yes, No, Maybe
- Infinite Password Loop
- Bring your kids to work day: passwords.
- Password Talk
- Escaping characters
- Stone Cold Steve Austin
- Don't hire people with pets
- Eats AWS stuff natively.
- I compete on my ASCII character set.
- Stay in the sandbox
- Enron for cloud purchasing

Rundown
- OpenCost Advances to CNCF Incubation (https://www.opencost.io/blog/cncf-incubation)
- Episode 492: Aran Khanna on Cloud Insurance (https://www.softwaredefinedtalk.com/492)
- VMware Reflections from Explore Barcelona and the Challenges of Modern App Delivery (https://news.broadcom.com/app-dev/reflections-from-explore-barcelona-and-the-challenges-of-modern-app-delivery)
- New SMB subscription may not end VMware migrations (https://arstechnica.com/information-technology/2024/11/new-smb-friendly-subscription-tier-may-be-too-late-to-stop-vmware-migrations/)

M&A
- Apple to Acquire Pixelmator, Maker of Popular Photo-Editing Apps (https://www.bloomberg.com/news/articles/2024-11-01/apple-to-acquire-pixelmator-maker-of-popular-photo-editing-apps?utm_medium=email&utm_source=author_alert&utm_term=241101&utm_campaign=author_19842959)
- Red Hat acquires AI optimization startup Neural Magic (https://techcrunch.com/2024/11/12/red-hat-acquires-ai-optimization-startup-neural-magic/)
- IBM's Red Hat Acquisition Will Pay For Itself By Early Next Year (https://www.nextplatform.com/2024/10/24/ibms-red-hat-acquisition-will-pay-for-itself-by-early-next-year/)
- Snyk Acquires Developer-First DAST Provider Probely (https://www.globenewswire.com/news-release/2024/11/12/2979082/0/en/Snyk-Acquires-Developer-First-DAST-Provider-Probely.html)
- IBM's Red Hat Acquisition Will Pay For Itself By Early Next Year
(https://www.nextplatform.com/2024/10/24/ibms-red-hat-acquisition-will-pay-for-itself-by-early-next-year/)
- VMware Reflections from Explore Barcelona and the Challenges of Modern App Delivery (https://news.broadcom.com/app-dev/reflections-from-explore-barcelona-and-the-challenges-of-modern-app-delivery)
- New SMB subscription may not end VMware migrations (https://arstechnica.com/information-technology/2024/11/new-smb-friendly-subscription-tier-may-be-too-late-to-stop-vmware-migrations/)
- Coté's take on Explore, in last week's Cloud Foundry Weekly (https://www.youtube.com/watch?v=Wkgwl9mKL2Y)

RTO
- Amazon employees are a flight risk after the new return-to-office mandate, research reveals (https://finance.yahoo.com/news/amazon-exec-says-9-10-103742343.html)
- Remote work reduces child penalties by roughly half (https://x.com/arpitrage/status/1849530101035160031)
- Read the letter sent to AWS CEO Matt Garman, signed by 500 employees (https://www.businessinsider.com/amazon-employees-open-letter-aws-ceo-office-return-rto-2024-10)
- Amazon CEO Andy Jassy denies that 5-day office mandate is a 'backdoor layoff' (https://www.cnbc.com/2024/11/05/amazon-ceo-andy-jassy-5-day-office-mandate-isnt-a-backdoor-layoff.html)
- Washington Post Employees Ordered Back to Office 5 Days a Week (https://www.nytimes.com/2024/11/07/business/media/washington-post-return-to-office.html?smid=nytcore-ios-share&referringSource=articleShare)
- Everyone agrees: A shorter workweek is great! (https://thehustle.co/news/everyone-agrees-a-shorter-workweek-is-great)
- Return-to-office mandates are more than "backdoor layoffs" (https://overcast.fm/+AAQLdtAb8Tc)

Relevant to your Interests
- Google CEO says over 25% of new Google code is generated by AI (https://arstechnica.com/ai/2024/10/google-ceo-says-over-25-of-new-google-code-is-generated-by-ai/)
- Threads has 275M Monthly Users (https://www.threads.net/@alexheath/post/DBw02uLSE53?xmt=AQGzqxkKe87WI9ToiqUrcEIU6mxhBohSO8BNX4ve1zqRHQ)
- Dropbox is laying off 20% of its global workforce (https://www.threads.net/@cnbc/post/DBwYF88uYSr?xmt=AQGz-t_BCEcQFjjZwD05xps9bJGHO7FL25RD1h6JIauuOQ)
- From IaC to Cloud Management: Pulumi's Evolution Story (https://thenewstack.io/from-iac-to-cloud-management-pulumis-evolution-story/)
- For Jeff Bezos and his businesses, Washington has become more important (https://www.washingtonpost.com/nation/2024/10/30/bezos-business-federal-government/)
- Russian court fines Google $2 decillion (https://www.theregister.com/2024/10/29/russian_court_fines_google/)
- GitHub Next | GitHub Spark (https://githubnext.com/projects/github-spark)
- The MacBook Air gets a surprise upgrade to 16GB of RAM (https://www.theverge.com/2024/10/30/24282981/apple-macbook-air-m2-m3-16gb-ram-minimum-price-unchanged)
- Meta says open sourcing Llama models will be a money-saver (https://www.theregister.com/2024/10/31/meta_q3_2024/)
- Google employees pressure costumed execs at all-hands meeting for clarity on cost cuts (https://www.cnbc.com/2024/11/01/google-employees-pressure-execs-at-all-hands-for-clarity-on-cost-cuts.html)
- Intel's future laptops will have memory sticks again (https://www.theverge.com/2024/11/1/24285513/intel-ceo-lunar-lake-one-off-memory-package-discrete-gpu)
- Against Incident Severities and in Favor of Incident Types (https://www.honeycomb.io/blog/against-incident-severities-favor-incident-types)
- Nintendo Just Launched a Music Streaming App, and It's Surprisingly Good (https://gizmodo.com/nintendo-just-launched-a-music-streaming-app-and-its-surprisingly-good-2000518802)
- Why The US Military Chose Silicon-Graphene Batteries (https://www.youtube.com/watch?v=l60hjFvj64s)
- Warren Buffett's GEICO repatriates work from the cloud (https://www.thestack.technology/warren-buffetts-geico-repatriates-work-from-the-cloud-continues-ambitious-infrastructure-overhaul/)
- Google Confirms Jarvis AI Is Real by Accidentally Leaking It (https://gizmodo.com/google-confirms-jarvis-ai-is-real-by-accidentally-leaking-it-2000521089)
- Curbside charging is coming to Michigan. (https://www.theverge.com/2024/11/6/24289516/curbside-charging-is-coming-to-michigan)
- Nintendo says the Switch successor will be compatible with Switch games (https://www.theverge.com/2024/11/5/24284745/switch-2-backward-compatibility-nintendo-online-preservation)
- Platform vs. DevEx teams: What's the difference? (https://newsletter.getdx.com/p/platform-vs-devex-teams)
- Why Strava Is a Privacy Risk for the President (and You Too) (https://lifehacker.com/health/stravas-heatmap-privacy-problem)
- Thunderbolt 5: Only Necessary for the Most Demanding Uses (https://tidbits.com/2024/11/06/thunderbolt-5-only-necessary-for-the-most-demanding-uses/)
- Guide to Selling Your Company (https://www.onlycfo.io/p/guide-to-selling-your-company)
- The mystery of Masayoshi Son, SoftBank's great disrupter (https://on.ft.com/3ADujb9)
- IronCalc (https://www.ironcalc.com/?utm_source=changelog-news)
- Neptyne is shutting down (https://www.neptyne.com/blog/neptyne-is-shutting-down)
- OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI (https://www.bloomberg.com/news/articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai)
- Matt Mullenweg says Automattic is 'very short-staffed' amid WordPress vs.
WP Engine drama (https://techcrunch.com/2024/10/30/matt-mullenweg-says-automattic-is-very-short-staffed-amid-wordpress-vs-wp-engine-drama/)
- Automattic offered employees another chance to quit — this time with nine months' severance (https://techcrunch.com/2024/10/17/automattic-offered-employees-another-chance-to-quit-this-time-with-nine-months-severance/)
- Automattic's new site tracks how many websites left WP Engine following feud (https://techcrunch.com/2024/11/07/automattics-new-site-tracks-how-many-websites-left-wp-engine-following-feud-with-matt-mullenweg/)
- Cloudflare Blocks Automattic's WP Engine Tracker For Phishing (https://www.searchenginejournal.com/cloudflare-blocks-automattics-wp-engine-tracker-for-phishing/532244/)
- We're leaving Kubernetes - Blog (https://www.gitpod.io/blog/we-are-leaving-kubernetes)

Nonsense
- 'Infinite monkey theorem' challenged by Australian mathematicians (https://www.bbc.com/news/articles/c748kmvwyv9o)

Listener Feedback
- Anova Precision™ Oven 2.0 (https://anovaculinary.com/products/anova-precision-oven?adnet=g&gad_source=1&gbraid=0AAAAADhfRrCJj9bTdq3Z1e0hmcx0uuIQ5&gclid=Cj0KCQiAlsy5BhDeARIsABRc6Zsk_vcmd7dVaCIchSV2jLrJZSMXP3XPo34xTxNMGiCB3cxtJHwzFzIaAob8EALw_wcB)

Conferences
- SREday Amsterdam (https://sreday.com/2024-amsterdam/), Nov 21, 2024, Coté speaking (https://sreday.com/2024-amsterdam/Michael_Cote_VMwarePivotal_We_Fear_Change), 20% off with code SRE20DAY
- CfgMgmtCamp (https://cfgmgmtcamp.org/ghent2025/), February 2nd to 5th.
- DevOpsDayLA (https://www.socallinuxexpo.org/scale/22x/events/devopsday-la) at SCALE22x (https://www.socallinuxexpo.org/scale/22x), March 6-9, 2025, discount code DEVOP

SDT News & Community
- Join our Slack community (https://softwaredefinedtalk.slack.com/join/shared_invite/zt-1hn55iv5d-UTfN7mVX1D9D5ExRt3ZJYQ#/shared-invite/email)
- Email the show: questions@softwaredefinedtalk.com (mailto:questions@softwaredefinedtalk.com)
- Free stickers: Email your address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com)
- Follow us on social media: Twitter (https://twitter.com/softwaredeftalk), Threads (https://www.threads.net/@softwaredefinedtalk), Mastodon (https://hachyderm.io/@softwaredefinedtalk), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com)
- Watch us on: Twitch (https://www.twitch.tv/sdtpodcast), YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured), Instagram (https://www.instagram.com/softwaredefinedtalk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk)
- Book offer: Use code SDT for $20 off "Digital WTF" by Coté (https://leanpub.com/digitalwtf/c/sdt)
- Sponsor the show (https://www.softwaredefinedtalk.com/ads): ads@softwaredefinedtalk.com (mailto:ads@softwaredefinedtalk.com)

Recommendations
- Brandon: Overcast (https://overcast.fm) features: Queue (https://www.reddit.com/r/OvercastFm/comments/1ehwixl/add_tomove_to_whats_the_difference/) and Uploads (https://thesweetsetup.com/upload-mp3-files-overcast/); Pixelmator Pro (https://www.pixelmator.com/pro/)
- Matt: Hardcore History: Wrath of the Khans (https://www.dancarlin.com/product/hardcore-history-wrath-of-the-khans-series/) podcast; Wiz Ugly Sweaters Giveaway (https://www.linkedin.com/posts/wizsecurity_you-can-get-one-of-our-exclusive-2025-activity-7262464003807887362-fzNY?utm_source=share&utm_medium=member_desktop)
- Coté: Political Wire (https://politicalwire.com)

Photo Credits
- Header
(https://unsplash.com/photos/switched-on-iphone-dk4en2rFOIE) Artwork (https://unsplash.com/photos/person-holding-black-academic-hat-oTglG1D4hRA)
When the dog Ascii saw an unfamiliar dog, he would lunge into the leash and bark. On direct contact, he would knock the other dog over, pin it to the ground, growl, and injure it. And then, suddenly, Ascii was the dog supporting other dogs in training as a stand-in. Wild! How and why that happened is what you'll learn in our new podcast episode. In this episode, Ulli from Dog It Right gives fascinating insights into training dog encounters with her dog Ascii. You get a deep look into the different phases of the training, from on-leash encounters all the way to direct dog contact. You'll hear what progress Ascii has made, which challenges the two of them have mastered, and how you can transfer this knowledge to your own dog training. You'll also learn how Ascii reacted on the leash and in direct contact with other dogs, and how the training was structured to make dog encounters more relaxed. Ulli also reveals which management measures were necessary and how she managed, step by step, to make direct contact with unfamiliar dogs possible for Ascii again. In episode 112 you will learn:
- How targeted training with marker signals helped change aggressive behavior.
- Which training techniques and alternative behaviors Ulli used to support Ascii during dog encounters.
- How management measures in everyday life helped minimize unwanted behavior.
- How long the training process took and what progress Ascii has made.
Video Episode: https://youtu.be/7et_7YkwAHs In today’s episode, we dive into the alarming rise of malware delivery through fake job applications targeting HR professionals, specifically focusing on the More_eggs backdoor. We also discuss critical gaming performance issues in Windows 11 24H2 and the vulnerabilities in DrayTek routers that expose over 700,000 devices to potential hacking. Lastly, we address the urgent exploitation of a remote code execution flaw in Zimbra email servers, emphasizing the need for immediate updates to safeguard against evolving threats. Links to articles: 1. https://thehackernews.com/2024/10/fake-job-applications-deliver-dangerous.html 2. https://www.bleepingcomputer.com/news/microsoft/microsoft-warns-of-windows-11-24h2-gaming-performance-issues/ 3. https://thehackernews.com/2024/10/alert-over-700000-draytek-routers.html 4. https://www.bleepingcomputer.com/news/security/critical-zimbra-rce-flaw-exploited-to-backdoor-servers-using-emails/ Timestamps 00:00 – Introduction 01:14 – Zimbra RCE Vulnerability 02:17 – 700k DrayTek Routers Vulnerable 04:36 – Recruiters Targeted with Malware 06:14 – Microsoft blocks updates for gamers 1. What are today’s top cybersecurity news stories? 2. How is More_eggs malware targeting HR professionals? 3. What vulnerabilities exist in DrayTek routers? 4. Why did Microsoft block Windows 11 24H2 upgrades? 5. What is the impact of the Zimbra RCE flaw? 6. How do fake job applications spread malware? 7. What security measures can protect against More_eggs malware? 8. What are the latest gaming issues with Windows 11? 9. How can DrayTek router vulnerabilities be mitigated? 10. What are the latest tactics used by cybercriminals in email attacks? 
# Intro

HR professionals are under siege as a spear-phishing campaign disguised as fake job applications delivers the lethal More_eggs malware, leading to potentially devastating credential theft. Powered by the notorious Golden Chickens group, this malware-as-a-service targets recruiters with chilling precision. **How are recruitment officers unknowingly downloading malicious files, and what methods are threat actors using to bypass security measures?**

Microsoft is blocking Windows 11 24H2 upgrades on some systems due to critical gaming performance issues like Asphalt 8 crashes and Easy Anti-Cheat blue screens. The company is scrambling to resolve these problems, which uniquely impact devices with Intel Alder Lake+ processors. How can gamers with affected systems work around these issues until Microsoft releases a fix?

Over 700,000 DrayTek routers are currently vulnerable to 14 newly discovered security flaws, with some critical exploits that could be used to take full control of the devices and infiltrate enterprise networks. Despite patches being released, many routers remain exposed, creating a lucrative target for cyber attackers. How can these vulnerabilities impact businesses that rely on DrayTek routers for network security?

Hackers are leveraging a critical Zimbra RCE vulnerability to backdoor servers through specially crafted emails that execute malicious commands, revealing widespread exploitation just days after a proof-of-concept was published. Notable security experts warn of attackers embedding harmful code in the email's CC field, which the Zimbra server inadvertently executes. How are attackers camouflaging their malicious emails to slip through security measures unnoticed?

# Stories

Welcome back to our podcast.
Today, we’re talking about a new cyber threat targeting HR professionals. Researchers at Trend Micro have uncovered a spear-phishing campaign in which fake job applications deliver a JavaScript backdoor called More_eggs to recruiters. This malware, sold as malware-as-a-service by a group known as Golden Chickens, can steal credentials for online banking, email accounts, and IT admin accounts. What’s unique this time is that attackers are using spear-phishing emails to build trust, as observed in a case targeting a talent search lead in engineering. The attack sequence involves downloading a ZIP file from a deceptive URL, leading to execution of the More_eggs backdoor. The malware probes the host system, connects to a command-and-control server, and can download additional malicious payloads. Trend Micro’s findings highlight the persistent and evolving nature of these attacks, which are difficult to attribute because multiple threat actors can use the same toolkits. The latest insights also connect these activities to known cybercrime groups like FIN6. Stay vigilant, especially if you work in HR or recruitment.

1. **Spear-Phishing**
   – **Definition**: A targeted phishing attack aimed at specific individuals or companies, typically using information about the victim to make fraudulent messages more convincing.
   – **Importance**: This method is especially dangerous because it can trick even tech-savvy users by exploiting personalized details, leading to significant security breaches such as credential theft.
2. **More_eggs**
   – **Definition**: A JavaScript backdoor sold as malware-as-a-service (MaaS) with capabilities to siphon credentials and provide unauthorized access to infected systems.
   – **Importance**: Because it can covertly steal sensitive information and is widely used by various e-crime groups, More_eggs represents a significant threat to corporate cybersecurity.
3. **Malware-as-a-Service (MaaS)**
   – **Definition**: A business model in which malicious software is developed and sold to cybercriminals, who then use it to conduct attacks.
   – **Importance**: This model lowers the barrier to entry for cybercriminals, allowing even those with limited technical skills to launch sophisticated attacks using pre-made malware.
4. **Golden Chickens**
   – **Definition**: A cybercriminal group (also known as Venom Spider) credited with developing and distributing the More_eggs malware.
   – **Importance**: Understanding threat actors like Golden Chickens helps cybersecurity professionals anticipate and defend against their specific tactics.
5. **Command-and-Control (C2) Server**
   – **Definition**: A server threat actors use to maintain communications with compromised systems inside a target network, issuing commands and controlling malware.
   – **Importance**: Disrupting C2 servers is crucial because it cuts off the attacker’s control over their malware, mitigating the threat.
6. **LNK File**
   – **Definition**: A Windows shortcut file that points to another file or executable.
   – **Importance**: Abuse of LNK files in phishing campaigns can lead to automated execution of malicious payloads, making them an effective vector for malware distribution.
7. **PowerShell**
   – **Definition**: A task automation framework from Microsoft consisting of a command-line shell and scripting language.
   – **Importance**: Attackers often use PowerShell to execute and conceal malicious scripts because of its powerful capabilities and deep integration with Windows.
8. **Tactics, Techniques, and Procedures (TTPs)**
   – **Definition**: The behavior patterns or methodologies cyber threat actors use to achieve their goals.
   – **Importance**: Identifying TTPs helps security professionals understand, detect, and mitigate specific attack strategies.
9. **Obfuscation**
   – **Definition**: The process of deliberately making code or data difficult to understand or interpret.
   – **Importance**: Malware developers commonly use obfuscation to conceal malicious activity and bypass security mechanisms.
10. **Cryptocurrency Miner**
    – **Definition**: Software that performs the computational work required to validate and add transactions to a blockchain ledger in exchange for cryptocurrency rewards.
    – **Importance**: Unauthorized cryptocurrency mining (cryptojacking) hijacks system resources for financial gain, causing performance degradation and security exposure.

—

On today’s tech update: Microsoft has blocked upgrades to Windows 11 version 24H2 on certain systems due to gaming performance issues. Players of Asphalt 8 may encounter game crashes, while some systems running Easy Anti-Cheat might experience blue screens. These problems mainly affect devices with Intel Alder Lake+ processors. Until Microsoft resolves these issues, affected users are advised not to upgrade manually using tools like the Media Creation Tool. Microsoft is working on fixes and will include them in upcoming updates.

1. **Windows 11 24H2**: A version of Microsoft’s Windows 11 operating system released in the second half (H2) of 2024. It represents Microsoft’s ongoing update cycle aimed at improving performance and user experience, while also highlighting the challenges of software compatibility and stability.
2. **Asphalt 8 (Airborne)**: A popular racing game often used to showcase a device’s graphical and processing capabilities. Its relevance lies in exposing software and hardware compatibility issues when new operating systems ship.
3. **Easy Anti-Cheat**: A software tool designed to detect and prevent cheating in multiplayer games. It is crucial for maintaining fair play and integrity in online gaming but can pose compatibility challenges with system updates.
4. **Blue Screen of Death (BSoD)**: The error screen Windows displays after a system crash. It signals serious software or hardware problems that can affect system stability and data integrity.
5. **Intel Alder Lake+ processors**: A generation of Intel microprocessors known for their hybrid architecture. Knowing these chips helps identify which systems are more susceptible to the reported compatibility issues.
6. **vPro platform**: A set of Intel technologies aimed at enhancing business security and manageability. It matters to security professionals because it enables hardware-level security and management features, though compatibility with OS updates can be problematic.
7. **MEMORY_MANAGEMENT error**: An error indicating problems with system memory management, often leading to crashes. It affects the stability and reliability of a system.
8. **Compatibility holds (safeguard IDs)**: Mechanisms Microsoft uses to block system upgrades when known issues are detected. They protect users from potential system failures and help ensure a stable computing environment.
9. **Media Creation Tool**: A Microsoft utility for installing or upgrading Windows. It lets IT professionals deploy Windows updates manually, though doing so bypasses the automatic update safeguards.
10. **KB5043145 (preview update)**: A Windows preview update known to cause issues such as reboot loops and connection failures. Tracking such updates is crucial for maintaining system stability and ensuring deployed systems are free from bugs.
—

In a recent cybersecurity alert, over 700,000 DrayTek routers have been identified as vulnerable to hacking due to 14 newly discovered security flaws. These vulnerabilities, found in both residential and enterprise routers, include two rated critical, with one receiving the maximum CVSS score of 10.0. That critical flaw is a buffer overflow in the Web UI that could allow remote code execution. Another significant vulnerability is OS command injection via communication binaries. The report highlights how widely these routers’ web interfaces are exposed online, creating a tempting target for attackers, particularly in the U.S. DrayTek has released patches and urges users to apply updates, disable unnecessary remote access, and use security measures such as ACLs and two-factor authentication. The news coincides with international cybersecurity agencies publishing guidance on securing critical infrastructure, emphasizing safety, protecting valuable OT data, secure supply chains, and the role of people in cybersecurity.

1. **Vulnerability**: A weakness in a system or software that attackers can exploit.
   – **Importance**: Identifying vulnerabilities is crucial because it helps protect systems from attack.
2. **Router**: A device that routes data from one network to another, directing traffic on the internet.
   – **Importance**: Routers are essential for internet connectivity, and their security is vital to prevent unauthorized access to networks.
3. **Buffer Overflow**: A coding error in which a program writes more data to a buffer than it can hold, potentially causing crashes or unauthorized code execution.
   – **Importance**: Buffer overflows are common vulnerabilities that can be exploited to take control of a system.
4. **Remote Code Execution (RCE)**: A vulnerability that lets an attacker execute code on a remote system without authorization.
   – **Importance**: RCE vulnerabilities are highly critical because they enable attackers to take over affected systems.
5. **Cross-Site Scripting (XSS)**: A web security vulnerability that allows attackers to inject malicious scripts into content from otherwise trusted websites.
   – **Importance**: XSS can be used to steal information, deface websites, and spread malware.
6. **Adversary-in-the-Middle (AitM) Attack**: An attack in which the attacker secretly intercepts, and possibly alters, communication between two parties who believe they are talking directly to each other.
   – **Importance**: AitM attacks can lead to data theft, proxying of traffic, and unauthorized access to sensitive information.
7. **Denial-of-Service (DoS)**: An attack intended to shut down a machine or network, making it inaccessible to its intended users.
   – **Importance**: DoS attacks disrupt service availability and can cause significant downtime and financial loss.
8. **Access Control List (ACL)**: A list of permissions attached to an object specifying which users or system processes can access it and what operations they may perform.
   – **Importance**: ACLs are crucial for enforcing security policies that control access to resources.
9. **Two-Factor Authentication (2FA)**: A security process in which the user provides two different authentication factors to verify their identity.
   – **Importance**: 2FA adds a second layer of verification, making it much harder for attackers to gain unauthorized access.
10. **Operational Technology (OT)**: Hardware and software that detects or causes changes through direct monitoring and control of physical devices, processes, and events in an enterprise.
    – **Importance**: OT security is critical to the functioning and safety of critical infrastructure such as manufacturing, power generation, and transportation.
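To make the 2FA definition above concrete, here is a minimal sketch of a time-based one-time password (TOTP), the second factor most authenticator apps implement, following RFC 6238 with only the Python standard library. The secret used is the RFC’s published test vector; a real deployment should rely on a vetted library rather than this sketch.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, when=None, step=30, digits=6):
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time step."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if when is None else when) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)


# RFC 6238 test-vector secret ("12345678901234567890" in base32); at t=59s
# the 6-digit SHA-1 code is "287082".
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", when=59))  # → 287082
```

Because both the server and the device derive the code from a shared secret plus the current time step, a stolen password alone is not enough to log in, which is exactly why vendors such as DrayTek recommend enabling 2FA on management interfaces.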
—

Today, we’re discussing a critical remote code execution (RCE) vulnerability in Zimbra email servers, tracked as CVE-2024-45519, which hackers are actively exploiting. The flaw allows attackers to trigger malicious commands simply by sending specially crafted emails that are processed by Zimbra’s postjournal service. First flagged by Ivan Kwiatkowski of HarfangLab and confirmed by Proofpoint, the exploit uses spoofed emails with commands hidden in the “CC” field. Once processed, these emails deliver a webshell to the server, giving attackers full access for data theft or further network infiltration. A proof-of-concept exploit released by Project Discovery on September 27 prompted immediate malicious activity. Administrators are urged to apply the security updates in Zimbra’s latest versions (9.0.0 Patch 41 and later), or to disable the vulnerable postjournal service and ensure secure network configurations. Stay vigilant and update your Zimbra servers immediately to protect against this critical vulnerability.

1. **Remote Code Execution (RCE)**
   – **Definition**: A security vulnerability that enables attackers to run arbitrary code on a targeted server or computer.
   – **Importance**: Exploiting such a flaw can give full control over the affected machine, leading to data theft, unauthorized access, and further network penetration.
2. **Zimbra**
   – **Definition**: An open-source email, calendaring, and collaboration platform.
   – **Importance**: Popular with organizations for its integrated communication tools, which makes it a significant target for cyberattacks given the sensitive data it handles.
3. **SMTP (Simple Mail Transfer Protocol)**
   – **Definition**: The protocol used to send and route email across networks.
   – **Importance**: Integral to email services; its exploitation can deliver malicious content to servers and users, forming a vector for cyberattacks.
4. **Postjournal Service**
   – **Definition**: A Zimbra service that parses incoming email received over SMTP.
   – **Importance**: Its vulnerability can be leveraged to execute arbitrary commands, making it a crucial attack point for hackers.
5. **Proof-of-Concept (PoC)**
   – **Definition**: A demonstration exploit showing that a vulnerability can actually be abused.
   – **Importance**: PoC exploits prove that theoretical vulnerabilities are practical and dangerous, necessitating urgent security responses.
6. **Base64 Encoding**
   – **Definition**: A method of encoding binary data as an ASCII string.
   – **Importance**: Often used to encode commands inside emails or other data streams to evade basic security detection.
7. **Webshell**
   – **Definition**: A malicious script that gives attackers remote access to a compromised server.
   – **Importance**: Webshells afford attackers sustained control over a server, enabling ongoing data theft, disruption, and further exploitation.
8. **CVE (Common Vulnerabilities and Exposures)**
   – **Definition**: A catalog of publicly known cybersecurity vulnerabilities, each identified by a unique CVE ID.
   – **Importance**: Standardizes the tracking of security issues, making it easier to communicate about and manage vulnerabilities across the community.
9. **Patch**
   – **Definition**: A software update that fixes security vulnerabilities or bugs.
   – **Importance**: Patching is critical for protecting systems from attacks that exploit known flaws.
10. **execvp Function**
    – **Definition**: A Unix function that executes a program with an explicit argument vector rather than through a shell.
    – **Importance**: By replacing shell-based calls like `popen`, `execvp` prevents shell metacharacters in untrusted input from being interpreted as commands, enhancing system security.

—
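The popen-versus-execvp distinction behind the Zimbra fix can be sketched in a few lines of Python; this is an analogy, not Zimbra’s actual code, and the CC-style value is hypothetical. Passing untrusted input through a shell lets metacharacters execute as commands, while an explicit argument vector (the `execvp` style) treats the same bytes as inert data.

```python
import subprocess

# Hypothetical attacker-controlled CC-field value with an injected command.
untrusted = '"J. Doe" <jdoe@example.com>; touch /tmp/pwned'

# UNSAFE (popen-style): handing the string to a shell would run
# "touch /tmp/pwned" as a second command. Deliberately left commented out.
#   subprocess.run(f"echo {untrusted}", shell=True)

# SAFER (execvp-style): an explicit argument vector is executed directly;
# no shell ever parses the value, so the metacharacters stay literal text.
result = subprocess.run(["echo", untrusted], capture_output=True, text=True)
print(result.stdout.strip())  # the full string is echoed; nothing is executed
```

The same principle applies in any language: build an argument vector and never interpolate untrusted input into a shell command line.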
In this episode of the Security Swarm Podcast, host Andy Syrewicze and guest Michael Posey discuss the new password guidelines and recommendations released by NIST (National Institute of Standards and Technology). They cover a range of topics related to password security, including the importance of password length over complexity, the move away from composition rules and periodic password changes, the risks associated with knowledge-based authentication, the concept of password entropy, and more. Throughout the conversation, Andy and Michael draw on their extensive experience in the cybersecurity field to offer practical advice and perspectives on the changing landscape of password security. Do you want to join the conversation? Join us in our Security Lab LinkedIn Group!

Key Takeaways:
- NIST recommends a minimum password length of 8 characters, with a suggested length of 15 characters or more.
- NIST has recommended removing the requirement for password composition rules, such as mandatory special characters, numbers, and uppercase letters.
- NIST states that verifiers SHALL NOT require periodic password changes unless there is evidence of a breach, as this leads users to create predictable password patterns.
- The use of all ASCII and Unicode characters is now encouraged, allowing for more diverse and random password options.
- Password entropy (randomness) matters more than password complexity, as modern computing power can quickly crack simple but complex-looking passwords.
- For mission-critical systems, organizations may still choose to implement more rigorous password policies, even if they deviate from the NIST recommendations.
- The industry is exploring new hashing methods and technologies, such as passkeys, to address the challenges posed by GPU-based brute-force attacks.
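The length-versus-complexity point can be made concrete with a back-of-the-envelope entropy estimate: roughly length × log2(pool size) bits. This is an illustrative model, not NIST’s formal definition, and the character-pool sizes below are assumptions; but it shows why a long lowercase passphrase beats a short “complex” password.

```python
import math


def charset_size(password: str) -> int:
    """Estimate the character pool a password draws from (assumed pool sizes)."""
    size = 0
    if any(c.islower() for c in password):
        size += 26
    if any(c.isupper() for c in password):
        size += 26
    if any(c.isdigit() for c in password):
        size += 10
    if any(not c.isalnum() for c in password):
        size += 33  # printable ASCII punctuation and symbols
    return size


def entropy_bits(password: str) -> float:
    """Back-of-the-envelope entropy: length * log2(pool size)."""
    pool = charset_size(password)
    return len(password) * math.log2(pool) if pool else 0.0


# A long all-lowercase passphrase far outscores a short "complex" password:
print(round(entropy_bits("Tr0ub4d&r"), 1))                   # → 59.1
print(round(entropy_bits("correcthorsebatterystaple"), 1))   # → 117.5
```

This is the arithmetic behind the takeaway above: each extra character multiplies the search space, while swapping a letter for a symbol only enlarges the pool once.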
Timestamps:
(07:40) Credential Service Provider (CSP) Requirements and Recommendations
(10:02) Removing Password Composition Rules
(14:21) Ending Periodic Password Changes
(19:48) The Importance of Password Entropy and Length
(28:30) Phasing Out Knowledge-Based Authentication
(30:30) The Impact of Password Length on Cracking Time

Episode Resources: NIST Special Publication 800-63B

-- To enhance your organization's security posture, consider implementing Hornetsecurity's Advanced Threat Protection. This solution provides AI-powered defense against sophisticated attacks, ensuring your emails and data remain secure. By adopting best practices in password management and utilizing advanced security features, you can significantly reduce the risk of breaches. Protect your business today and stay one step ahead of cyber threats. Learn more about Advanced Threat Protection here.
Question: List two requirements for adding a “Sequence Listing” after the application filing date? Answer: Adding a “Sequence Listing” after the application filing date involves the submission of: a “Sequence Listing” either as a PDF image file, on physical sheets of paper, or as an ASCII plain text file submitted via the USPTO patent electronic […] The post MPEP Q & A 309: List two requirements for adding a “Sequence Listing” after the application filing date? appeared first on Patent Education Series.
Chase Carter, games journalist and co-founder of Rascal News, deploys his incredible language skills to make "Flappy Bird: Shifted" with the prompts: ASCII, Supernatural Investigation, Flappy-bird-like I hope you're ready for a hard-hitting story about a search for a lost loved one, undertaken entirely by pressing a button to fight gravity. Shift your perspective to find your way through dimensions of text and punctuation in this moving ASCII-art game. This may potentially be the most competent game concept creation to date on this show! Read more of Chase's work at Rascal.news! Check out the Multitude Discord here, and check out Tab for a Cause at TabforaCause.org/bgh. Visit the DFTBA Big Game Hunger merch shop at bit.ly/jennamerch. Support this show, and submit your OWN random prompts, by subscribing at Patreon.com/TheJenna. Email the show at BigGameHungerPod@gmail.com. Big Game Hunger is part of the Multitude Collective of podcasts. Created and hosted by Jenna Stoeber. Big Game Hunger is a weekly video game podcast where Jenna Stoeber and a guest get three random prompts and have to make the big next game based on them.
Analysing MQTT data, getting domains unblocked from Cloudflare DNS, making ASCII animations, and why Joe is drawn to Linux Mint. Plus why we don't talk about Vivaldi even though it's quite good, why Félim was wrong about right click in PuTTY, and Will doesn't seem to understand Lemmy. Discoveries MQTT decode Cloudflare DNS was... Read More
Episode 455 with Thierry W. and Xavier. Hosted by Sébastien S. The press review: • A for ASCII (1:44:00): An original world map. MapSCII: a world map you can explore from a terminal. (Sources: korben.info and github.com) • C for Cat (8:49:00): When choosing for your cat gets complicated. The tricky dilemma of picking a good self-cleaning electronic litter box. (Sources: thesprucepets.com and francoischarron.com) • D for Drones (15:35:00): When drone air traffic is managed autonomously. An autonomous model manages the air traffic of 5,000 drones. (Source: interestingengineering.com) • E for E-books (24:29:00): When pirating books gets risky. The fight against "Z-Library" intensifies in France. (Source: lematin.ch) • F for Fintech (32:14:00): A new payment method soon to be tested at Carrefour. Carrefour wants to let its customers pay with their hand. (Source: clubic.com) • N for Browser (39:58:00): When Google takes us all for a ride, all things considered. A bombshell in online advertising: Google gives up on ending third-party cookies. (Source: clubic.com) • S for Start-up (49:56:00): Earning money by playing video games. French startup Bogs lets you earn money playing video games. (Sources: lefigaro.fr and bogsgaming.com) • S for Switzerland (57:59:00): When Switzerland leads by example through open source. Switzerland now requires open-source software. (Sources: zdnet.fr, datenrecht.ch and admin.ch) Find all our information, links, and podcast versions on our site: LesTechnos.be
It's return guest season here at Latent Space! We last talked to Kanjun in October and Jonathan in May (and December, post Databricks acquisition): Imbue and Databricks are back for a rare treat: a double-header interview talking about DBRX from Databricks and Imbue 70B, a new internal LLM that "outperforms GPT-4o" zero-shot on a range of reasoning and coding-related benchmarks and datasets, while using 7x less data than Llama 3 70B.

While Imbue, being an agents company rather than a model provider, are not releasing their models today, they are releasing almost everything else:

* Cleaned-up and extended versions of 11 of the most popular NLP reasoning benchmarks
* An entirely new code-focused reasoning benchmark
* A fine-tuned 70B model, built with Meta Llama 3, to identify ambiguity
* A new dataset of 450,000 human judgments about ambiguity
* Infrastructure scripts for bringing a cluster from bare metal to robust, high-performance training
* Their cost-aware hyperparameter optimizer, CARBS, which automatically and systematically tunes all hyperparameters to derive optimum performance for models of any size

As well as EXTREMELY detailed posts on the infrastructure needs, hyperparameter search, and clean versions of the sorry state of industry-standard benchmarks. This means that for the FIRST TIME (perhaps since Meta's OPT-175B in 2022?) you have this level of educational detail into the hardware and ML nitty-gritty of training extremely large LLMs, and if you are in fact training LLMs of this scale you now have evals, optimizers, scripts, and human data/benchmarks you can use to move the industry forward together with Imbue.

We are busy running the sold-out AI Engineer World's Fair today, and so are unable to do our usual quality writeup. However, please enjoy our show notes and the excellent conversation!
Thanks also to Kanjun, Ashley, Tom and the rest of team Imbue for setting up this interview behind the scenes.

Video pod

Timestamps
* [00:00:00] Introduction and catch up with guests
* [00:01:55] Databricks' text to image model release
* [00:03:46] Details about the DBRX model
* [00:05:26] Imbue's infrastructure, evaluation, and hyperparameter optimizer releases
* [00:09:18] Challenges of training foundation models and getting infrastructure to work
* [00:12:03] Details of Imbue's cluster setup
* [00:18:53] Process of bringing machines online and common failures
* [00:22:52] Health checks and monitoring for the cluster
* [00:25:06] Typical timelines and team composition for setting up a cluster
* [00:27:24] Monitoring GPU utilization and performance
* [00:29:39] Open source tools and libraries used
* [00:32:33] Reproducibility and portability of cluster setup
* [00:35:57] Infrastructure changes needed for different model architectures
* [00:40:49] Imbue's focus on text-only models for coding and reasoning
* [00:42:26] CARBS hyperparameter tuner and cost-aware optimization
* [00:51:01] Emergence and CARBS
* [00:53:18] Evaluation datasets and reproducing them with high quality
* [00:58:40] Challenges of evaluating on more realistic tasks
* [01:06:01] Abstract reasoning benchmarks like ARC
* [01:10:13] Long context evaluation and needle-in-a-haystack tasks
* [01:13:50] Function calling and tool use evaluation
* [01:19:19] Imbue's future plans for coding and reasoning applications
* [01:20:14] Databricks' future plans for useful applications and upcoming blog posts

Transcript

SWYX [00:00:00]: Welcome to the Latent Space Podcast, another super special edition. Today, we have sort of like a two-header. Jonathan Frankle from Mosaic Databricks, or Databricks Mosaic, and Josh Albrecht from Imbue. Welcome.JOSH [00:00:12]: Hey, glad to be here.SWYX [00:00:14]: Thank you for having us. Hey, so both of you are kind of past guests.
Jonathan, you were actually one of the most popular episodes from last year talking about MPT7B. Remember the days when we trained large models and there was 7B?JONATHAN [00:00:30]: Yeah, back when reproducing LLAMA1-7B was considered a huge accomplishment for the field. Those are the good old days. I miss that.SWYX [00:00:38]: As the things have accelerated a lot. Actually, let's do a quick catch up and Josh, you can chime on in as well. So Databricks got acquired. I talked to you at New York.JONATHAN [00:00:45]: Mosaic got acquired, although sometimes it feels like Mosaic acquired Databricks because, you know, we're having a lot of fun being here. But, you know, yeah.SWYX [00:00:52]: Yeah. I mean, you are chief scientist now of Databricks.JONATHAN [00:00:55]: Chief AI scientist. Careful with the title. As much as I would love to understand how Spark works, I'm going to have to defer that to much smarter people than me.SWYX [00:01:03]: Got it. And I don't know about like what you would highlight so far as a post-acquisition, but the most recent news is that you guys released DBRX. Is that the thing that most people should be aware of?JONATHAN [00:01:13]: Actually, that's no longer the most recent news. Honestly, the most recent news, we announced this, but it was at our Data and AI Summit last week. So it was announced among like 100,000 other things, is that we finally released our text to image model, which has been a year in the making through a collaboration directly with Shutterstock. There was a lot of work put into finding a dataset that we were comfortable with working on and trying to build a model that honestly, I felt like I could trust and that others might be able to trust to put out in the world. So that model was released last week. It's unfortunately just available via API due to the fact that the data is quite sensitive and quite valuable. 
It's Shutterstock's entire business in a lot of ways, but I'm still really excited that there's now a model that is trained on a dataset where the provenance of every single image is known, and it's a damn good model. So I'm really proud of the team on that.SWYX [00:01:55]: Yeah, amazing. Josh, do you have any thoughts on image model questions?JOSH [00:01:59]: That is not my area of expertise, but I was excited to see the release of it last week as well, and very happy that you guys did a nice job on the data side of everything there. So that was cool to see.SWYX [00:02:09]: I think what's unusual is like, I think Shutterstock's doing multiple deals in multiple labs. So what is the Shutterstock model? Like, I guess, is this the house model for Shutterstock? Is this Databricks' version of the Shutterstock model? Like, what is this?JONATHAN [00:02:22]: The way that I would think about it is that Shutterstock is doing an amazing business in AI across the board. Their dataset is kind of widely known to be the best stock photos dataset in the world, the most comprehensive, the biggest. When you think about like, what dataset am I going to train a multimodal model on? You call Shutterstock. And I, at least I've heard in the news, like OpenAI, Google, Meta, Apple have all called Shutterstock and made those deals. So a lot of models have had Shutterstock data incorporated into them. But this is the only model I know of so far where it was, you know, exclusively and specifically trained just on the vanilla Shutterstock data. There was nothing else mixed in. We didn't go and scrape the web and find other data or combined datasets or anything like that. And so this is, in some sense, the house blend. But the other piece is that it's just a dataset where the provenance of every image is known in public. Where did the data come from? It is the Shutterstock collection. That's it. You know, nothing less, nothing more. 
And certainly being at Databricks, if I've learned one thing, I've learned about enterprise customers and what they want out of AI. And one of the things they ask for most is just, what can you tell me about the data the model was trained on? And here, especially for text to image models, where images are just tricky subject matter, there's been a lot of kind of legal conversation about images, especially. It's nice to just have something where I can point to it and say, you know, if you want to know where the images came from, these are what they are and this is how they got there.SWYX [00:03:36]: I will talk a little bit about Databricks because it's relevant to the rest of today's episode. So Databricks, sorry, I keep misspeaking. It's DBRX.JONATHAN [00:03:46]: DBRX, actually, there's been a pronunciation update. It is now D-B-Rex. So we have decided to add a dinosaur mascot because what model doesn't like a mascot? So literally, I wish I could pull it up. There is a little plush dinosaur that we had made. It's like the world's cutest dinosaur, but it is the official mascot of D-B-Rex. And there's a little dinosaur logo that, you know, you'll probably see around a little bit more because DBRX is a mouthful, but D-B-Rex, like, you know, it's just kind of...SWYX [00:04:13]: Rolls off the tongue. I love mascots. Like every company should have a mascot. And I think Hugging Face got it right. You need an emoji mascot because that's the minimal viable image.JONATHAN [00:04:21]: I probably shouldn't talk at all about, you know, Velociraptor, but, you know, that's a, maybe that's something we can talk about later in the summer. I'll just leave it at that.SWYX [00:04:28]: Okay. That's a hint to names. I feel like your names leak a lot of alpha. 
So just to quickly cover the headline details, DBRX, a Mixture-of-Experts model, that's fairly big, 132 billion total parameters, so 36 billion active on any input, pre-trained on 12 trillion tokens of text and code, and did really well on evals to the point where you had to dye your hair blue. That's my high level conclusion.JONATHAN [00:04:53]: Never make a bet with your team two weeks out from model launch, even when, you know, human eval is looking quite bad. Because if you set some bar, even if it's arbitrary and you think there's no way in hell they're going to hit it, apparently money doesn't motivate people anymore. Humiliating their boss motivates people. So Josh, you should really take a hint from this. You know, you cannot pay someone enough money to make up for you dyeing your hair blue.JOSH [00:05:15]: I'll keep that in mind for our next model.SWYX [00:05:17]: It works. So speaking of Imbue's next model, perhaps Josh, you want to actually just say hi to the general sort of latent space audience and talk about what we're releasing today. Yeah.JOSH [00:05:26]: I'm Josh, CTO of Imbue, and we're not releasing the model. We're not releasing the weights, but we are releasing a bunch of different things that should make it easier for other people to make their own models. So I think right now, training foundation models from scratch is like a very difficult, time-consuming, expensive, kind of risky endeavor, especially for smaller companies. And the things that we're releasing hopefully make that at least a little bit easier. So the things that we're releasing fall into kind of three different buckets. One is infrastructure and scripts for dealing with the kind of hardware and hardware failures and understanding how well is the actually lowest level of thing actually working so that you can actually do your training at all and at a reasonable speed without having to constantly restart, etc. So infrastructure and training scripts. 
A second set of things is around the evaluation. So after you've trained it, like how well is this actually working and how do you know how well it's working? We're releasing a whole bunch of different data there, a new benchmark about code, reasoning, understanding, as well as our own private versions of 11 different open source benchmarks. So things like BoolQ or ANLI, where we've gone through and kind of cleaned up the data as much as possible by looking at all the ones that models get wrong or that are flagged for ambiguity and also our own kind of private reproductions of those where we've done like a kind of clean room black box, like, okay, this is what the data set is supposed to be. Here are some examples. Let's make our own version of this to make sure that there is no data contamination, etc. To make sure that we're actually, you know, not testing on train. And then I think a final thing that we're releasing there is around 450,000 human judgments about ambiguity and question quality, which we used in the process of cleaning these evaluations and we also hope will be helpful for other people training kind of similar models. And then the third thing is CARBS, our cost-aware hyperparameter optimizer, which was especially helpful for being able to experiment at much smaller scales and then scale those experiments up to the much larger scale kind of on the first try without having to retry it. You don't want to be training, you know, 10, 20 different 70B models. You really want to get these larger modelsSWYX [00:07:30]: right on the first try.JOSH [00:07:30]: And so the ability to kind of tune things very precisely and learn scaling laws, not just for, you know, the like data and flops, but also for learning rate and all the other hyperparameters and see like how should you scale these things up was extremely valuable to us as we were training the larger models. Yeah, that's a lot of stuff.SWYX [00:07:49]: Yeah, exactly.
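The "tune small, extrapolate big" idea behind a cost-aware hyperparameter optimizer can be sketched as a log-log fit. This is an illustration only: the power-law form, the sweep numbers, and every name below are assumptions for demonstration, not the actual CARBS algorithm.

```python
import numpy as np

# Toy illustration: suppose sweeps at small scales found these best learning
# rates (all numbers invented for demonstration).
params = np.array([1e8, 3e8, 1e9, 3e9])            # model sizes (parameters)
best_lr = np.array([6e-4, 4.2e-4, 3e-4, 2.1e-4])   # best LR found per size

# Fit lr = a * params^b by a linear fit in log space.
b, log_a = np.polyfit(np.log(params), np.log(best_lr), 1)

# Extrapolate the trend to a 70B-parameter model instead of sweeping at 70B.
lr_70b = np.exp(log_a) * (70e9 ** b)
print(f"exponent b = {b:.3f}, predicted LR at 70B = {lr_70b:.2e}")
```

The payoff Josh describes is exactly this shape: fit the trend where experiments are cheap, then land the one expensive run near the optimum on the first try.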
So there's a bunch of stuffJOSH [00:07:50]: we'll have to go through all of it.JONATHAN [00:07:52]: Yeah, I just want to throw in how excited I am about this. This is the stuff that nobody ever talks about. That is the difference between success and failure in this stuff. Like, can you get your cluster to run? Can you get software on your cluster? Can you figure out what broke? Because fault tolerance is still not really built into any of the fundamental primitives of training models. And so if something breaks, you have to go figure out what broke, your job stops, you have to restart your job. It is a nightmare just to get to the point where anything can train on the cluster. A basic MPI hello world that has the GPUs talk to each other is hard enough, let alone actually training a model, let alone getting good performance out of the GPUs, let alone actually getting a model that converges to anything interesting. There's so many levels of things you have to accomplish. This is the kind of stuff that matters. I think to a point that Josh made earlier, before we got on here, there are plenty of weights out there. Nobody's released this.JOSH [00:08:46]: Yeah, that was part of the motivation actually is that there are lots of other things that are complementary, but I have not seen nearly as much discussion about some of these other things that we think are pretty important. I mean, in some sense,SWYX [00:08:56]: I'm very excited to have Jonathan on because this is a little bit your bread and butter with Mosaic. And I think you've released some of this with Composer. And I think it's just really interesting to see like a different take, basically a full stack take that's kind of open source today.JONATHAN [00:09:18]: Yeah, it's really kind of, it's been an ordeal to figure this out. And every time something changes, whether it's a new GPU or even a new driver update, you get new creative errors and new things go wrong.
And, you know, we've dealt with the weirdest things from, you know, our InfiniBand cables getting stolen from the data center twice, like in boxes before they arrived at the data center. Like, you know, Porch Pirate basically had stolen our InfiniBand cables back when those were hard to come by. To like, you know, weird recalls of switches to like the strangest stuff has happened. I have my favorite GPU failures I've seen, like ones where the GPU doesn't fail, it has a correctable memory issue and the memory correction causes the GPU to become a straggler and hold up the whole job. Like weird stuff happens and figuring out how to not just identify all of that, but then eventually productize it, is in some sense, the entire story of Mosaic and now Databricks in terms of our ML offering. Really, the thing we offer is we have gone through this suffering and figured out how to even productize that. It has been a pain in the butt.SWYX [00:10:20]: Yeah, it's a lot of work.JOSH [00:10:20]: I think my favorite failure was GPUs just giving wrong math. Like if they give errors, great, because you can see the errors, but if they just give you the wrong math back, not so fun.SWYX [00:10:30]: When did they give you wrong math?JOSH [00:10:32]: Like literally you could just, you know, add two things, for example, and the numbers come back. They're not the numbers that they're supposed to be.JONATHAN [00:10:40]: I think it's important to say at this stage, just because like it, I think it goes without saying for Josh and I, but it's worth saying here, this isn't to say that like anything is wrong with us. It's not like NVIDIA did a bad job or, you know, Mellanox did a bad job or the like the server builder, the data center operator, the cloud provider, like the million other parties that are involved in building this.
We are running these insane chips that are huge and complicated and built on tiny transistors at insane frequencies with insane heat in data centers that for the most part, were not built remotely for this kind of power or heat and have been retrofitted for this. Like failures happen on a good day with normal CPUs. And this is not a good day and not a normal CPU for the most part. It's fun to joke about all the weird things we see. This is not to say anybody's done anything wrong. This is just kind of part and parcel of working on a massive cluster running at multiple megawatts of power at a time.SWYX [00:11:32]: It's crazy. Yeah.JONATHAN [00:11:33]: So optical cables, like all sorts, like everything.SWYX [00:11:37]: I'll take the opportunity to start going to the sort of infra piece. There's just like a description of the infra just to give people a sense of what we talk about when we talk about massive clusters. So I'm just going to read off the blog post here. This post is about one cluster that has 4,092 H100 GPUs spread across 511 computers. They use Unified Fabric Manager nodes, which manage the InfiniBand network. And you talk a little bit about your networking. Is there anything unusual about this setup that you'll call out to people?JOSH [00:12:03]: Yeah, actually this particular cluster is a little bit non-standard. The normal, like vanilla setup for these large clusters, as vanilla as it can be, is what's normally like a 127 node cluster. So closer to like 1024 GPUs instead of 4,000. Here we have a larger cluster. As you start to get into the larger clusters, the networking becomes a little bit more custom. It's a little bit more, it's a little bit trickier. It's a little bit more difficult to get these things to all be able to talk to each other at the same speed. And so this has, in this particular case, this is a three tier network architecture instead of two tiers, kind of the normal one. So most of the clusters are a little bit smaller.
As you get to even larger scales, then this becomes even much more complicated,SWYX [00:12:43]: much more expensive.JOSH [00:12:43]: So we chose this particular scale, kind of knowing our own workloads and kind of what we wanted to do. This was kind of the right size for us. But yeah, I think it's not exactly vanilla already. It's already getting into kind of the custom territory.SWYX [00:12:54]: So my understanding is that there, and is there any part of this that comes with the Voltage Park deal that you guys had? Is that part of the hardware that you got from the deal with them?JOSH [00:13:04]: Yeah, so we worked really closely with Voltage Park to set up all their clusters and infrastructure and everything and kind of decide even like what to order, how should the networking work? Like we were very involved in kind of the construction and bring up of this. And that's what this post is about, is about that process of like bringing up all these, there's like different clusters in different places of different scales. So in this particular post, we're talking about this one 4,096-GPU cluster, but there are other clusters that they have as well. And we were very closely involved with figuring out the exact architecture and kind of the trade-offs that go along with picking, you know, those exact components. You really don't want to like place the wrong order because it takes months to get it and it's very expensive. So yeah, we were happy to help out with that.JONATHAN [00:13:43]: And then your InfiniBand cables get stolen.SWYX [00:13:44]: Yeah, yeah, exactly.JOSH [00:13:47]: We wanted to make sure that we ended up with compute that would work for us and that would also work for their other customers. And so we kind of helped design something so that we would get exactly what we were looking for.
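To see why roughly 4,000 GPUs pushes the network from two tiers to three, a back-of-the-envelope fat-tree capacity check helps. The 64-port switch radix below is an assumption for illustration; real InfiniBand deployments vary in radix and oversubscription.

```python
# Back-of-the-envelope fat-tree capacity. Assumes 64-port switches and a
# non-blocking topology; these are illustrative assumptions, not the actual
# switch model used in the cluster discussed above.
radix = 64

# Two-tier leaf/spine, non-blocking: each leaf splits its ports half down,
# half up, and each spine can reach at most `radix` leaves.
two_tier_max_hosts = radix * (radix // 2)    # 64 * 32 = 2048

# Classic three-tier k-ary fat tree supports k^3 / 4 hosts.
three_tier_max_hosts = radix ** 3 // 4       # 65536

gpus = 4096
print(gpus > two_tier_max_hosts)  # the cluster overflows two tiers
```

With a 64-port radix, two tiers top out at 2,048 endpoints, so a ~4,096-GPU fabric needs the third tier Josh mentions, and with it the extra cabling and tuning headaches.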
We knew that these kinds of details would be super important and that getting down to the level of the hardware and like having these good scripts and everything was going to be a core part of like actually getting this to work. I'm very glad that we did that. I don't think that most companies kind of take that full stack approach, but for us, it certainly paid off.SWYX [00:14:12]: Yeah, it's basically sort of built to spec. It's interesting that relationship because you usually, for the rest of us who don't operate at your scale, we take whatever we can get from cloud providers, but you are basically co-designing from the single machine up. And you described that a little bit. Do you want to take us through the process that you described here?JOSH [00:14:27]: Yeah, so for the actual, like the blog post and kind of bringing these machines online.SWYX [00:14:32]: Yeah.JOSH [00:14:32]: So yeah, I think the process, as we have it broken down in the blog post, there's kind of a few different layers. First is like getting the individual machines to work at all and then getting the machines to actually be able to talk to each other. So getting the InfiniBand networking to work and then getting to a point where, you know, not just the machines are working and they can talk to each other, but everything is actually working correctly. There's a big gap between like it's working at all to it's working perfectly correctly. And then after you have all this stuff working perfectly correctly, nice and healthy, then now you get into kind of the software data, like training issues. And then after that, you're still not done. Like now, even once you're training at full speed, things are going to fail over time. Things are going to change. There's going to be new, you know, firmware updates. 
Like how do you kind of deal with this change and flux over time without going crazySWYX [00:15:16]: and pulling your hair out,JOSH [00:15:16]: trying to like reproduce things or understand why there were regressions. And so there's a lot of work to kind of automate the infrastructure tooling as well. And kind of the first step, like bringing these things online in the first place, you know, you have hundreds of machines at this point. So you don't necessarily want to be like walking around with like a CD-ROM or a USB drive, like plugging it in with your keyboard, like hitting next, next, next on the OS install. That's not how this works. You do that for one machine. And then you use, we use this thing called Metal as a Service to bring up all the other machines. So it's a kind of server that can kind of install the operating system on these other machines. So most like when you're talking about these machines, like each machine is, you know, on the order of hundreds of thousands of dollars. So they usually come with a kind of out-of-band management interface as well. So they don't, they have their InfiniBand networking. They have their normal 100 gigabit per second Ethernet networking. These are like dual, redundant, et cetera. And then you also have this extra out-of-band management network. So you can log in and you can see like the boot screen or you can see the blue screen of death. You can like get in there and actually see what was wrong, which is pretty fun. And it makes it like possible to automate a lot of this work. So the beginning of that, and the blog post goes into much more detail about like exactly how we set these up and kind of the other errors that we ran into. When you're bringing these online, you'll definitely have failures. Even if they all worked in the factory, they get shipped, some parts come loose, something fails, something goes wrong. So when you're bringing them online, there'll be some that don't quite work for all sorts of reasons. 
As you start to be working with machines at this scale, like if something happens one in a thousand times, you're like pretty likely to see it. And so you can get pretty rare, weird things, especially since we had fairly early builds and fairly early versions of this hardware. Like these are some of the like first machines that were ever produced, some of the first GPUs. So you've got some extra special things there. We definitely worked with Dell, for example, on making fixes in the firmware level to be like, okay, like this thing is wrong. Like we need to update this at the firmware to like actually fix this particular thing. So we worked pretty closely with Dell and Nvidia. Yeah, that's what I'm saying. Like this stuff gets complicated. And the thing is like, you know, taking a step back, the whole reason we're doing this, right, is that we knew that this was going to be complicated. There would be these kinds of failures. And if we're just using, you know, AWS or some other cloud provider, these errors are still gonna be there and you're gonna have no way to know and no way to debug this and no way to diagnose what's going wrong. And so we would much rather be able to like call up Dell and say, hey, this isn't working. And they're like, yep, okay, cool. Let's debug it together. Oh, I see. Yeah, cool. We'll ship a firmware update and actually fix this for you. That was a much better experience than like, great, just magically fails. I guess we restart and hope that that machine goes away. Like that's not a very good place to be. So yeah, that's kind of the first place is getting to a place where like GPU training is working on your single node machines. You can observe stuff. 
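As a flavor of what that single-node observability can look like, here is a toy GPU health check. It parses `nvidia-smi --query-gpu ... --format=csv`-style output; the canned sample, thresholds, and function names are hypothetical, and this is not one of the released health-check scripts.

```python
import csv
import io

# Canned sample standing in for `nvidia-smi` CSV output on one node.
# Columns: gpu index, temperature (C), corrected ECC errors, uncorrected ECC.
SAMPLE = """\
0, 61, 0, 0
1, 64, 0, 0
2, 88, 0, 0
3, 63, 2, 0
"""

def check_gpus(raw, max_temp=85):
    """Flag GPUs that look unhealthy; on a real node, feed in subprocess output."""
    problems = []
    for row in csv.reader(io.StringIO(raw), skipinitialspace=True):
        idx, temp, corrected, uncorrected = (int(x) for x in row)
        if temp > max_temp:
            problems.append((idx, f"hot: {temp}C"))
        if corrected > 0:
            # Correctable ECC errors don't crash the job, but as Jonathan
            # noted, the correction overhead can turn a GPU into a straggler.
            problems.append((idx, f"{corrected} corrected ECC errors"))
        if uncorrected > 0:
            problems.append((idx, f"{uncorrected} uncorrected ECC errors"))
    return problems

print(check_gpus(SAMPLE))
```

The point is less the specific checks than that they run automatically on every node, since nobody is logging into 511 machines to eyeball temperatures.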
We have tons of tooling around like, you know, Prometheus and all sorts of other tools for understanding what's going on in these machines, because you don't want to be like logging into each one and looking at the temperature or something; you really need to have tooling to collect all these metrics, et cetera. Unfortunately, all of the scripts that we have for this are like for this entire cluster and for all this infrastructure are a little bit like special purpose for our particular thing. So it's not that every script that we have, it's not that you can just like take this and plug this in. Even if we did open source all the tooling that we have, you'd still have to do like a lot of work to adapt it. What we are releasing is as many of the things that we can that are going to be useful for other people. You're still going to have to have some way of kind of managing these things, making your own like logging aggregators, et cetera, et cetera. So that's kind of bringing them up to the like, you know, the single nodes that are working. From there, it goes into, I'm happy to keep going if you want. Well, I just want to leave the opportunity for JohnSWYX [00:18:53]: to comment if there's anything that's different from how he runs things.JONATHAN [00:18:57]: Oh, I mean, all I'll say is I'll endorse this and say this s**t is hard. Like this is really, really hard. And, you know, I have special props to, you know, the folks at Imbue because they were building this from the ground up. You know, at Databricks and at Mosaic, we typically work with cloud providers because some of this stuff is just, there's too much to handle. It's complicated. There's a lot to deal with. And this doesn't even get into things like physical security, you know, securing power if you're the data center operator. Like this gets infinitely complicated and you have to abstract somewhere.
Like, you know, and then you get to the folks who are literally building their own custom chips and like, good God.SWYX [00:19:36]: Like, oh my God, that's, you know,JONATHAN [00:19:38]: if you're one of those folks, you're having, you know, pour one out for the infra people at some of the AI chip startups who are having a really, really interesting time right now. But this stuff is really hard. And I don't think we talk about it much because there's so many other things that are hard. But the other hard things, I think everybody's becoming pretty familiar with at this point. This is something that I don't think there's ever really been a comprehensive discussion of, at least not that I've seen.SWYX [00:20:00]: Yeah, so my impression is that you guys, Mosaic, have your own software for sort of spinning up and down machines, just like Imbue had to build. But Imbue probably, it sounds like Imbue, you guys went fuller stack. I don't know how to describe it. Like Mosaic is not working with Dell on like their firmware.JONATHAN [00:20:21]: No, no, we're typically working with like, you know, pick your cloud provider on their Dell firmware or what have you. Like, it's kind of, I think one of the things, I don't know, Josh, you can correct me on this. It's kind of impossible if you're doing training to not go all the way through the entire stack, regardless of what happens. Like somehow I'm still chatting with cloud providers about power contracts, even though the whole point of dealing with the cloud provider is not to have to think about power contracts. Somehow I'm still asking them about which InfiniBand provider they used this time to see if this is part of the bad batch of cables I encountered on that cloud provider or what have you. Or like, we're still talking about a firmware update from pick your provider. You can't not do this. 
It's convenient that they have data center staff who are worrying about what to send back to which provider when, and they have people who can go and wait for the InfiniBand cables so they don't get stolen outside. But, you know, it's kind of, it's impossible not to really go full stack if you're thinking about the infrastructure at all. I don't know, Josh, correct me. No, I think that's right.JOSH [00:21:17]: That's what we expected from the beginning as well, is that we would inevitably have to get into the details here. And I'm glad that we kind of just planned for it. I think it made it a lot easier from our perspective to have direct control over this. Instead of having to go to the cloud provider that goes to the data center, that goes to the supplier, we could just go direct to NVIDIA or DellSWYX [00:21:37]: or the data center,JOSH [00:21:37]: whoever was responsible and be like, hey, this thing needs to change. And they're like, oh, okay. Yeah, that is our responsibility. Great, we can fix that. So it was just a lot easier for us to fix these bugs than if we had to go through an extra layer of email.SWYX [00:21:48]: Something we discussed in the pre-show was that you had a rule of thumb for your cluster of reliability. You say here in the post, by and large, you expect around 3% of your machines to break every week. So you're basically going to churn through all your machines in a year.JOSH [00:22:04]: As it says in the post. So that would be true if it was a uniform failure like that. But as it says in the post, it's usually these kind of problematic nodes. And to be clear, that is the number that we've heard from other people is like they're having about 3%. I don't think we're experiencing failure rates that are that high.
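A quick back-of-the-envelope on that 3%-per-week rule of thumb, under the (unrealistic, as Josh notes) assumption that failures are uniform and independent across machines:

```python
# If 3% of machines break in any given week, and failures were uniform and
# independent across a 52-week year:
weekly_failure_rate = 0.03
weeks_per_year = 52

# Expected failures per machine per year.
expected_failures = weekly_failure_rate * weeks_per_year        # ~1.56

# Probability a given machine gets through the whole year untouched.
survives_year = (1 - weekly_failure_rate) ** weeks_per_year     # ~0.21

print(f"expected failures per machine-year: {expected_failures:.2f}")
print(f"chance a machine survives the year: {survives_year:.1%}")
```

On a 511-machine cluster, that uniform model predicts roughly 15 machines needing attention every week, which is why the distinction between uniform failures and a handful of repeat-offender "problematic nodes" matters so much in practice.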
I think ours is actually quite a bit lower than that, probably because we've taken the time to like dig into a large, maybe larger number than we should have of these failures and get to the root cause of it and be like, oh, okay, like that's exactly what's going wrong.SWYX [00:22:33]: How do we fix this?JOSH [00:22:33]: How do we prevent this from happening? How do we make automated checks for this so that if it does happen, it just goes back to whoever owns that particular part of the process and they can fix it immediately.SWYX [00:22:43]: And that's part of what you're also open sourcing, which is the health checks, right? You got the NIC health checks, GPU health check, disk space health check, Docker, dmesg. I don't know what that is.JOSH [00:22:52]: That one is just a lot of stuff.SWYX [00:22:54]: Yeah.JOSH [00:22:55]: That one is one where we realized that actually like when these machines boot, sometimes they wouldn't actually boot cleanly all the way. Or when they rebooted, they had problems that they didn't have when they were working before, which was kind of frustrating. Like usually if you restart your computer,SWYX [00:23:08]: it gets better.JOSH [00:23:08]: Here you restart. It did not get better.SWYX [00:23:10]: It got worse.JOSH [00:23:10]: That was very frustrating. So this health check looks at every particular line we've ever seen from the boot, like in dmesg, like every single log line that your computer emitsSWYX [00:23:21]: and says like,JOSH [00:23:21]: have we ever seen this before?SWYX [00:23:23]: Is this expected?JOSH [00:23:23]: Is this in the right order? Or is there something out of place? If there's anything out of place, we say, okay, great. Like now it goes into this longer triage list of like, all right, great. Like, is this acceptable?SWYX [00:23:33]: Should we flag this?JOSH [00:23:33]: Like, should someone take a look at this?
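A toy version of the boot-log check Josh is describing, which compares every boot line against a baseline of known-good lines and surfaces anything new for triage. Everything here is a simplified sketch: the masking rule, the sample lines, and the function names are illustrative, not the released health check.

```python
import re

def normalize(line):
    # Mask digits so timestamps, versions, and device numbers don't cause
    # spurious mismatches between boots.
    return re.sub(r"\d+", "#", line)

# Normalized lines observed on known-good boots (tiny illustrative baseline).
KNOWN_GOOD = {
    "[#.#] Linux version #.# (buildd@lcy#) ...",
    "[#.#] NVRM: loading NVIDIA UNIX x#_# Kernel Module",
}

def triage(boot_lines):
    """Return boot log lines we've never seen before, for human review."""
    return [l for l in boot_lines if normalize(l) not in KNOWN_GOOD]

boot = [
    "[12.3] Linux version 5.15 (buildd@lcy02) ...",
    "[13.1] NVRM: loading NVIDIA UNIX x86_64 Kernel Module",
    "[14.9] pcieport 0000:17:00.0: AER: Corrected error received",  # new line
]
print(triage(boot))  # only the never-before-seen PCIe error surfaces
```

The real check also cares about ordering and maintains a much larger vetted baseline, but the shape is the same: anything unfamiliar goes onto a triage list instead of being silently ignored.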
So we're looking down at a very, very granular detail level, what's happening on these computers to make sure that nothing is out of place. And that's critical because without that, if you're running your training, as Jonathan said, and this thing is slow, like what are you supposed to do? Right?SWYX [00:23:49]: Like you really,JOSH [00:23:49]: you really want to be very certain that like all 4,000 of these GPUs are working like they're supposed to.SWYX [00:23:54]: We know that.JOSH [00:23:54]: And so if it's slow, it's because like we messed up the config or something else and not because of this earlier thing that's like really hard to detect in software later.JONATHAN [00:24:01]: Yeah. I think the, I'm just curious to ask,SWYX [00:24:03]: like, you know,JONATHAN [00:24:03]: suppose you were to set up another, let's say another H100 cluster and it were at a different data center. And instead of the vendor being Dell, it was Supermicro or what have you. How much of this would be repeatable? And how much of this would you have to redo? I, you know, I genuinely don't know.SWYX [00:24:18]: A decent amount.JOSH [00:24:19]: I think it would go a lot faster the second time. I think there's lots of learnings that we had. And also the blog post,SWYX [00:24:24]: you know, yes,JOSH [00:24:24]: we are releasing the health checks, releasing some scripts, but a lot of the valuable stuff is also in the blog post itself, in the details and kind of the, you know, the learnings that we've had and the sort of errors that we run into. We tried to as much as possible surface those so other peopleSWYX [00:24:36]: could learn from thoseJOSH [00:24:36]: and avoid the same mistakes or failures as well. But I think it would go a lot faster.SWYX [00:24:41]: Although, yes,JOSH [00:24:41]: there would certainly be some things that'd be a little bit different.
I mean, there'd probably be different CPUsSWYX [00:24:46]: or whatever,JOSH [00:24:46]: but I think a lot of that stuff is less,SWYX [00:24:49]: it's less,JOSH [00:24:49]: that's the like, that's less variable. I think most of it would apply the second time around. Although I'm sure next timeSWYX [00:24:56]: we're building one,JOSH [00:24:56]: it'll probably be, you know, at a scale that's 10x as big with a different chip or something like this.SWYX [00:25:00]: And then who knows?JOSH [00:25:01]: Yeah, with ConnectX-8,JONATHAN [00:25:02]: that will have its own fun behavior and all that good stuff. Yeah.SWYX [00:25:06]: Perhaps there's something that people don't discuss about, and you don't even talk about this in the blog, but I always wonder is what is the timeline that's like kind of reasonable for this amount of work, at least the initial stages? And also what does the team composition look like for setting up a cluster, right? Like what are the mix of skills that you typically would require to get all this going?JOSH [00:25:27]: I'm, I can't really speak to typical. One thing I am very proud of is how much we accomplished with such a ridiculously small team. Like our infrastructure team is like, you know, fluctuates from week to week, depending on like how many things are on fire and how much we need to build. But it's like between like three and six people, like it's small. It's not like some huge team of like tons and tons of engineers. But those people are very, very good at what they do. And so that has allowed us to get a lot of mileage out of these things. I think it's not that we're building everything, right? It's not that three to six people build this whole thing. I definitely want to like, you know, say thanks very much to Dell and H5 and NVIDIA and the other people that have done a lot of the work, like to bring up this cluster, you know, with 4000 GPUs and a three tier networking architecture, you have 12,000 cables.
So that's 24,000 things that need to be plugged in. Like that's just a lot of stuff to plug in, right? And you don't want to mess it up. Like each one needs to be done correctly. If it's a little bit loose, it doesn't really work.SWYX [00:26:23]: If you break it,JOSH [00:26:23]: you need to replace it. Like there's a lot of workSWYX [00:26:26]: that goes into this.JOSH [00:26:27]: Yeah.SWYX [00:26:28]: And then, you know,JOSH [00:26:28]: that's just like that's it. That's if you were to do everything right the first time.SWYX [00:26:32]: And if you didn'tJOSH [00:26:32]: have to fix anything. But inevitably, you know, you will have to replace something, which means like taking all the wires out, pulling the thing out, taking all the GPUs out, going and fixing some cable, putting it all back correctly, putting it back in, doing this every time. So there were a lot of people at Dell, NVIDIA and at H5 that all helped a ton with this stuff. I don't know the exact size of the Dell team. It also fluctuated over time.SWYX [00:26:55]: Yeah, excellent. And then, you know, you so you have all the hardware set up and now you're firing it up for a single node. There's a long description that you guys have about just like monitoring the MFU, right? And what each situation might be indicative of. One of the most interesting things to me that I saw from here is like, you know, if training immediately starts off at 60 to 80% MFU, something's wrong.SWYX [00:27:24]: But like, you know, like what are like, you know, some anecdotes or, you know, notable scenarios here that you might you might call out as maybe counterintuitive or super interesting.JOSH [00:27:36]: There's just so many of them. I mean, one of them, which I think is probably pretty common, like common knowledge by this point. But like we did have a sort of likeSWYX [00:27:46]: which one was this exactly?JOSH [00:27:47]: I think for the MFU, like gradually getting worse over time.
I think that one, when we saw that the first time we were like, what the heck is going on? Like, why does it get just like a little bit worse? This is so strange. Like, what is it getting lazy or tired or something? Like, is it heat? Like what's going on? And in this particular case, it was memory fragmentation. Because you have hundreds of machines, they're doing garbage collection at slightly different times. And then they get slightly further apart and slightly more and more jittered until eventually they're all happening kind of at random times. And just like really messing up each one of your steps. So you just turn off garbage collection and call it a day, basically,SWYX [00:28:20]: to be honest.JOSH [00:28:20]: There's other things you can do if you want to be a little bit more sophisticated about it. But you can also just manuallyJONATHAN [00:28:25]: have it all garbage collect on some interval. Like that's what we've done. We just have a garbage collection callback that just runs. But I've seen the exact same thing.JOSH [00:28:33]: Yeah, yeah, exactly. So I thought that one was kind of funny. And we did trace that one down and look and we did find the actual call. Like, again, this goes to like having good tools. So we had really good tools where we could look at a bunch of like actual traces in C and be like, OK, cool. This is the thing that's taking a lot of time. Or like, you know, this is the thing that doesn't quite line up here. Like, oh, I guess it's garbage collection. OK, cool.SWYX [00:28:52]: Interesting.JOSH [00:28:52]: Yeah, let's just try taking it off.SWYX [00:28:54]: OK, great.JOSH [00:28:54]: That's what it was. Now we can fix it. So for each of them, like basically bugs are not hard if you have good tools. But if you don't have good tools, bugs can be very, very hard. So similarly for like heat, another thing that we saw was like, oh, you know, the CPU is getting throttled.
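The garbage-collection fix both speakers describe, disabling automatic collection and running it on the same fixed schedule on every rank so all the pauses line up, can be sketched in a few lines. The interval and function names here are illustrative, not anyone's production code.

```python
import gc

# Automatic GC fires at unpredictable times on each of the hundreds of ranks,
# so in synchronous training the slowest rank's pause stalls every step.
# Instead: disable it, then collect on the same step number everywhere.
gc.disable()

GC_EVERY_N_STEPS = 100  # illustrative interval

def maybe_collect(step):
    """Collect on a fixed schedule, identical across ranks; True if collected."""
    if step % GC_EVERY_N_STEPS == 0:
        gc.collect()
        return True
    return False

# Stand-in for a training loop; the forward/backward pass would go here.
collected_at = [step for step in range(1, 301) if maybe_collect(step)]
print(collected_at)  # every rank pauses at the same steps

gc.enable()  # restore normal behavior outside the training loop
```

Because every rank now pauses at the same step, the jitter that was smearing out across hundreds of machines collapses into one synchronized, predictable hiccup.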
OK, well, it's easy to see if you're monitoring the CPU throttling or monitoring the heat. If you're not monitoring that, it's really hard to know why it's just suddenly one of them is going slower. I noticed also in the pieceSWYX [00:29:17]: that you mentioned FSDP with ZeRO-3. Actually, we met, I went to ICLR and Guanhua from the DeepSpeed team was there presenting ZeRO++. I was wondering if you want to make any call outs to, you know, particular open source or open library or open whatever implementation teams that were super helpful in your process. I think we ended up actuallyJOSH [00:29:39]: pulling from a whole bunch of different ones to pull things in into our own particular pipeline. So we use things from NVIDIA's, you know, Megatron stuff. We use stuff from probably DeepSpeed. I think we pulled in a bunch of different pieces from a bunch of different places. So it was really nice to see all these working open source like examples. I think I really appreciate all the effort that has gone into actually tuning these things because you can tune them, but it's a lot of work to like tune this stuff and do all this stuff from scratch. It's really nice to have like a working example. I think those are probably the two biggest ones, DeepSpeed and Megatron alone, but there are probably other ones as well.SWYX [00:30:13]: Is there a particular thing in the ecosystem where you would call out as like, you know, there should be something here that is open source, but like it's not really, it's like everyone kind of builds it on their own. I want to say something with the file system because everyone talks about the file system eventually.JOSH [00:30:28]: The file system actually was,SWYX [00:30:30]: I mean, we did somethingJOSH [00:30:31]: kind of dumb there.
Like, we have our own sort of local mirror, a crappy version of S3 that's local, but it's just a pretty simple script, right? We run a little web server that just serves files, and you can upload them and download them. Okay, great. And part of the reason we did that is that our internet connection in the beginning was not the full-speed one that we would eventually have, so we were a little bit more bottlenecked in terms of internet bandwidth. I think we looked at a bunch of services out there like MinIO and some other ones, but a lot of these come with a lot of extra overhead and maintenance. And since we already have so much infrastructure to deal with, we kind of didn't want to bring in a whole other cloud provider and virtualize something. We just wanted something simple. So we went with that, which has been quite helpful. Our tools are usually quite simple. It's Bash and Python and SSH and Docker. We like to keep things simple so it's easier to debug: fewer layers of infrastructure, fewer layers of abstraction make it a lot easier to work with. We don't use Kubernetes, for example; we just directly launch these things. And it's just been much easier to debug this way. One tool that does come to mind that I will call out is Kraken from Uber. That was great. We love that tool. We were a little bit skeptical at first.

SWYX [00:31:44]: What is it? I'm sorry.
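As a rough illustration of the kind of "little web server that just serves files" described above, here is a minimal sketch using only the Python standard library. The store path, handler name, and PUT-for-upload convention are assumptions for illustration, not Imbue's actual script:

```python
import http.server
import pathlib
import tempfile

# Directory that backs the mirror; a temp dir here, a big disk in practice.
STORE = pathlib.Path(tempfile.mkdtemp())

class MirrorHandler(http.server.SimpleHTTPRequestHandler):
    """GET serves files out of STORE; PUT writes the request body there."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, directory=str(STORE), **kwargs)

    def do_PUT(self):
        dest = STORE / self.path.lstrip("/")
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_bytes(self.rfile.read(int(self.headers["Content-Length"])))
        self.send_response(201)
        self.end_headers()

    def log_message(self, *args):  # keep the sketch quiet
        pass

def make_server(port: int = 0) -> http.server.ThreadingHTTPServer:
    # port=0 lets the OS pick a free port; server_address reports it
    return http.server.ThreadingHTTPServer(("127.0.0.1", port), MirrorHandler)
```

A real version would at least sanitize paths and add checksums, but this is the flavor of "Bash and Python and SSH" tooling being described.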
JOSH [00:31:45]: Yeah. So Kraken is a distributed Docker registry, basically, that uses BitTorrent to transfer things between the machines in a nice, optimal way. In the very beginning, the naive way is you have this one Docker registry, which was outside of the cluster. So every time we change an image, there are many gigabytes that each of the 500 machines needs to download, and that just takes a really long time. What this thing does is just one of them downloads it, and then they all broadcast the pieces to each other. It was just a really nice, fast way of getting these images down, and it was very robust. There's a lot going on under the hood, but I think it's a pretty cool tool, and we haven't really had any bugs with it at all.

SWYX [00:32:26]: Amazing. Yeah, I mean, that's all my questions, I guess, for the infra piece. I don't know if, John, you had something that you were sort of burning to ask, or.

JONATHAN [00:32:33]: No, all I can say is just: same, in a lot of places. Seeing all this, plus one. I think the one big difference, perhaps in philosophies, is we've tried to basically standardize on as much commodity stuff as possible. The reason I asked about trying to do this on multiple different pieces of infrastructure is, I think we're running on six or seven different clouds right now. And everybody has done something slightly different. And my gosh, the little differences add up, as you've seen. And so our philosophy has been: whatever the hell we can standardize, please let's standardize it.
Like vanilla, off-the-shelf FSDP. And we wrote our own data loader, but we've tried to make that as much of a standard as we can across our infrastructure and in Databricks, because things just start getting really complicated. Or, we use Kubernetes extensively, because it at least gives us a uniform set of APIs. That's our hardware abstraction layer, to a certain extent, for everything else. So it's just a difference in philosophy there. But otherwise, yeah, this stuff is really, really hard. And I feel like we take for granted how much of this is done for us when you go and just query ChatGPT, for example. Oh my God, everything going on underneath that. It's kind of a miracle that the machines boot up, let alone that you can query a giant language model that's probably doing inference across multiple machines and was trained across thousands of machines. A minor miracle.

SWYX [00:33:54]: Yeah, it is an awesome amount of power that we invoke with a single API call that we take for granted these days. It's absurd. On that point about Kubernetes, I will say, as a former AWS employee, it seems like it would be ideal for Imbue to at some point make it more abstracted or agnostic, because you're going to want to replicate your setup.

JOSH [00:34:19]: We do have our own sort of replacement. It's just a much simpler version of Kubernetes. Kubernetes is really designed for running services, not for running experiments. That's not its main architecture. And so for us, everything is: cool, you're going to run an experiment, so you want it to run to completion, right? OK, great. The primitives are built around a slightly different style.
And that makes it a lot easier, just a lot simpler, to fit the nature of the work: these machines are going to disappear. They will need to be rebooted for infrastructure upgrades. Something will happen to the GPUs. Failure is baked in as a core part of our infrastructure. So it's not that we don't have an abstraction. It's that it's a simpler, more tailored abstraction for the particular work that we're doing.

JONATHAN [00:34:58]: Yeah, I think it all depends on what your goals are. And I think the challenge in a lot of the deep learning stuff right now is that people often build things that are more complicated than necessary to get the job done. And complication is the enemy of everything. Don't use a fancier parallelism strategy than you have to. Don't use a fancier set of libraries than you have to. Don't do anything that you don't have to do, because it's hard enough as it is. Don't overcomplicate your own life. Don't try to bring in more tools or more fancy architecture tweaks if you absolutely don't have to. Get to the minimum necessary to get the job done. And it's really tempting to want to try to use everything. So I totally understand that one.

SWYX [00:35:37]: I think the last piece I'll maybe call out, and I'm just going to weave this in because I see the opportunity to do it: are there any infrastructure shifts that need to arise because of changing architecture? So for example, you're announcing a dense model, a 70B dense model, whereas John just worked on DBRX and the image-to-text model, which presumably have different bottlenecks.

JONATHAN [00:36:10]: That's correct. For us, you know, we train both dense and mixture-of-experts models.
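The run-to-completion primitive described here, where failure is expected and the scheduler's job is simply to restart until the experiment finishes, can be sketched as a toy loop. The function and parameter names are hypothetical; a real version would resume from the latest checkpoint rather than rerun from scratch:

```python
import time

def run_to_completion(experiment, max_restarts: int = 100,
                      backoff_s: float = 0.0):
    """Keep restarting an experiment until it finishes.

    `experiment` is any callable that either returns a result or raises
    when the machine or GPU underneath it fails. Failure is treated as
    routine: catch, wait, and try again, up to max_restarts."""
    for attempt in range(max_restarts + 1):
        try:
            return experiment()
        except Exception:
            if attempt == max_restarts:
                raise  # give up only after exhausting restarts
            time.sleep(backoff_s)  # wait for the node to come back
```

The design point is exactly the one made above: the primitive is "run this to completion through failures," not "keep this service alive," which is why a small purpose-built tool can beat a general orchestrator here.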
The one we happened to get permission to open source was a mixture-of-experts model. And those models are very demanding when it comes to network bandwidth, at least if you're training them in a ZeRO-3 style, where there are just a lot of parameters getting shuffled back and forth. And your ratio of compute to the amount of data that you have to shuffle back and forth becomes a lot worse, because you're now only using a fraction of the parameters for every token instead of all the parameters. And so we had to really push the envelope on getting all the stuff to the right places on time. Actually, the networking part of DBRX was the single hardest thing, I think, of the entire process: just getting MoE training working at scale across a big cluster. We still managed to, I think, do it all with commodity parts, which was very exciting. We were using FSDP, and we eventually used HSDP, a version of FSDP where you have multiple smaller replicas and you're doing data parallel within those replicas. And that helped a lot with the network latency issues that we were running into, just because we were transmitting so much data for every single part of the process. It was instructive, personally, for thinking about how Google designs their hardware and software together. Their training, as far as I understand, uses a ZeRO-3 style and has for a while. They also train mixture-of-experts models. TPUs have a very different network-bandwidth-to-compute ratio. They have a lot more bandwidth, just objectively. And TPUs per chip tend to be a little bit less compute-intensive and have a little bit less memory. It's just a different design choice. So the ratio of flops to bandwidth is very different.
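The compute-to-communication argument here can be made concrete with a little arithmetic: under ZeRO-3-style sharding, every parameter gets gathered over the network whether or not the token uses it, so a mixture-of-experts model does proportionally less compute per byte moved. A rough sketch, with illustrative constants (2 flops per parameter per token for the forward pass, fp16 weights) and roughly DBRX-like sizes used only as an example:

```python
def flops_per_param_byte(total_params: float, active_fraction: float,
                         bytes_per_param: int = 2) -> float:
    """Rough forward-pass flops per byte of parameters gathered.

    All parameters are shuffled over the network under ZeRO-3-style
    sharding, but an MoE model only computes with its active fraction,
    so the flops-to-bytes ratio shrinks by exactly that fraction."""
    flops = 2 * total_params * active_fraction      # only active params compute
    bytes_moved = total_params * bytes_per_param    # but all params are gathered
    return flops / bytes_moved

dense = flops_per_param_byte(70e9, 1.0)                 # 70B dense
moe = flops_per_param_byte(132e9, 36e9 / 132e9)         # ~132B total, ~36B active
```

The dense model comes out to 1.0 flop per parameter-byte, while the MoE figure drops to its active fraction, roughly 0.27, which is the "ratio becomes a lot worse" effect being described.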
And that means that it's much easier for Google to be able to pull off some of this stuff.

JONATHAN [00:37:54]: They also have an interesting Torus-style network architecture. Literal network architecture, not the model, but the network.

SWYX [00:38:02]: Is this the sort of block attention? I forgot what you call it.

JONATHAN [00:38:07]: This is more, not the ring attention, but the ring all-reduces. You have three different dimensions of rings, because they kind of put you in these three-dimensional toruses, from what I understand. And so Google's infrastructure, in some sense, is kind of, I wouldn't say built for this, but maybe the way that Google trains models is built for the slightly different bit of infrastructure they have. And it's kind of neat to think about that. One thing that NVIDIA announced for both the GH200 and the GB200 is this hybrid networking where you'll have blocks of NVLink-networked chips. For the GB200, I think it's groups of 72 GPUs that will all have NVLink to each other, so higher bandwidth. Then you'll have normal networking of some kind, InfiniBand or RoCE or what have you, between these blocks. And that's a change due to the fact that it's hard to build really high-bandwidth networks over very large groups, but it is now a blocked networking. And you have to think about how you architect your model and your parallelism differently. You also have to think about fault tolerance differently, because it now matters where you lose a GPU, whereas it didn't before.
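The ring all-reduce mentioned here can be illustrated with a toy, single-process simulation: each rank's vector is split into one chunk per rank, N-1 reduce-scatter steps leave each rank owning one fully reduced chunk, and N-1 all-gather steps circulate the finished chunks around the ring. This is only a sketch of the communication pattern, not a distributed implementation:

```python
def ring_all_reduce(chunks_per_rank):
    """Simulate a ring all-reduce (sum) over n ranks.

    `chunks_per_rank[r][c]` is rank r's copy of chunk c, a list of
    numbers. Each step moves exactly one chunk per rank to its ring
    neighbor, which is what makes the pattern bandwidth-optimal."""
    n = len(chunks_per_rank)
    data = [[list(chunk) for chunk in rank] for rank in chunks_per_rank]

    # Reduce-scatter: at step s, rank r sends chunk (r - s) mod n to
    # rank r+1, which accumulates it. After n-1 steps, rank r holds
    # the complete sum for chunk (r + 1) mod n.
    for step in range(n - 1):
        for r in range(n):
            c = (r - step) % n
            dst = (r + 1) % n
            data[dst][c] = [a + b for a, b in zip(data[dst][c], data[r][c])]

    # All-gather: circulate each finished chunk around the ring.
    for step in range(n - 1):
        for r in range(n):
            c = (r + 1 - step) % n
            data[(r + 1) % n][c] = list(data[r][c])
    return data
```

A 3D torus simply runs this pattern along three ring dimensions at once, which is why the TPU network shape and the training style fit each other so well.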
So it's just all really interesting and really fun, speaking personally, but it's going to mean new nightmares when we all move to that generation and have to think about new versions of these problems.

JOSH [00:39:20]: As you go up to larger scales, it gets quite different. Right now, if you experience, let's say, a GPU failure every day, that's fine. Just restart. If you make your thing 24 times as big, now it's once an hour. Now it stops being quite as easy to just restart, right? So now you have to bake in a sort of redundancy that you didn't have before. So I think as you go up in scale, you end up running into a lot of really interesting problems that also inform the actual design.

SWYX [00:39:52]: Yeah, I mean, as an orchestration guy, this is why I always emphasize very cheap storage or very fast storage, so you can checkpoint more. But that's probably not the best solution for fast training.

JONATHAN [00:40:05]: Which works fine when you're doing language, and then you move to vision or video. Then you have multi-petabyte datasets, and getting cheap, fast, multi-petabyte storage starts to bite. I've certainly encountered issues where the literal data center where my GPUs were did not have enough object store to fit the datasets that people wanted to bring into that data center, from whichever users were trying to bring them in. And then you get to a whole different world of hurt where you have to keep your data in a different region, because the region is just out of storage. So things get fun really fast.

SWYX [00:40:39]: Speaking of vision, Josh: Imbue is an agents company, but you're announcing a text-only model.
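The failure-rate arithmetic here falls out of a simple model where each machine fails independently, so the fleet-wide failure rate grows linearly with fleet size. A small sketch; the function names, the 12,000-hour MTBF figure, and the Poisson survival assumption are illustrative, not from the episode:

```python
import math

def mean_failures_per_run(machines: int, run_hours: float,
                          mtbf_per_machine_hours: float) -> float:
    """Expected hardware failures during one run: failures arrive
    independently per machine, so the rate scales linearly with
    machine count."""
    return machines * run_hours / mtbf_per_machine_hours

def p_run_survives(machines: int, run_hours: float,
                   mtbf_per_machine_hours: float) -> float:
    """Probability the run sees zero failures, assuming Poisson arrivals."""
    return math.exp(
        -mean_failures_per_run(machines, run_hours, mtbf_per_machine_hours))
```

With 500 machines at an illustrative 12,000-hour per-machine MTBF, a 24-hour run expects one failure; make the fleet 24 times bigger and a single hour expects one, matching the jump from "every day" to "once an hour" described above.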
Where does the vision side come in?

JOSH [00:40:49]: I think we've actually done a lot of work in the past, and people can see our blog posts about self-supervised learning and some other vision-related stuff, so we're very familiar with it. But our main focus right now is on, as we say, coding and reasoning. There's certainly a visual component to some problems, but it's not necessarily required for all problems. And actually, we found that for most of the code-writing and reasoning problems that we care about, the visual part isn't really a hugely important part. Sometimes, if you really need to, you can maybe describe the thing. There are other multimodal models that you can use off the shelf to plug in for those particular pieces that you need, right? If something is driving a browser or whatever, you can sometimes get away with not having that baked into the original model. So our focus: in one sense, we kind of do a lot across the stack. We're working on our own infrastructure and pre-training and RL and fine-tuning and products and everything. But in another sense, we're very narrowly focused on the application side. So all of the stuff across the stack is going toward a very particular purpose. And that particular purpose right now doesn't really need vision. So we think that people are going to make all sorts of really cool image models, like Jonathan, right? And all sorts of interesting multimodal models in the future. We'll let them go do that. That's great. We'll take advantage of it and partner with those people in the future.
And right now we're really focused on the core reasoning and coding capabilities and aspects of the model.

SWYX [00:42:14]: I wanted to go into CARBS, since that's kind of the next layer of the stack. We talked about CARBS in the first episode with Kanjun, because you actually had a blog post about it, like, a couple of years ago. Maybe let's introduce it.

JONATHAN [00:42:26]: Has that been a couple of years now?

JOSH [00:42:28]: No, it must have been at least one year. Hopefully it's not multiple years.

SWYX [00:42:32]: Sorry, I'm counting AI time. Yeah, yeah.

JONATHAN [00:42:35]: Yeah, I was going to say, you're making me feel really old right now.

SWYX [00:42:39]: I count everything before the Generally Intelligent rename as prehistory, and now modernity, right? So I actually thought CARBS was more about hyperparameter optimization, in the sense of hyperparameter search. Whereas when you introduced it, especially in this blog post, it's more about scaling laws and predictability: are we in the right ballpark before we scale things up? Maybe recount the history of CARBS.

JOSH [00:43:10]: Yeah, so it really is a little bit of both. CARBS is, it's maybe a backronym, but it stands for cost-aware Pareto-region Bayesian search. That's technically how it works, but also, you know, we like pastries and stuff. So great, why not? But the point is that it's a cost-aware hyperparameter tuner. With most hyperparameter tuners, you say: OK, here's this objective function, I want you to make this number as big as possible or as small as possible, whichever direction you want to go. So yeah, just go make this number as small as possible.
OK, so it'll try a bunch of different hyperparameters, a bunch of different configurations, to figure out how to tweak your network and architecture, et cetera, to get the best performance it possibly can. That's usually assuming, like, almost all of these hyperparameter configurations will use the same number of GPUs or the same number of nodes, and run for the same amount of time. So you can do that, you can get a number out, and that's great. But what CARBS does is it says: OK, actually, what if we relax that constraint? For each of these different points, we're going to model how expensive it will be to sample this configuration. So what if we train with just one one-hundredth of the data? How well can we do? What if we train with one-tenth of the data? What if we train with all the data? That way, you can understand: as we get more and more data, as we spend more and more compute, as we make a bigger and bigger network, how does performance change with all these things that change, and how expensive is it to even explore each data point? So by doing that, we can see the scaling laws, not just the scaling laws from, like, the Chinchilla paper, but the scaling laws for all parameters. We can see: how does the number of layers change with this? How does the learning rate change? How do the various types of regularization change? So you can see these nice scaling laws. And as you're going across costs, how should these be changing as you scale up your model?
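The "see the scaling law" piece of this can be sketched with a toy power-law fit: sample cheap configurations at small cost, fit loss = a * cost**(-b) in log-log space, then check whether the expensive run lands on the curve. This is only a sketch of that one idea; the real CARBS is a cost-aware Bayesian search over the Pareto frontier, which this does not attempt:

```python
import math

def fit_power_law(costs, losses):
    """Least-squares fit of loss = a * cost**(-b) in log-log space.

    Taking logs turns the power law into a line: log(loss) =
    log(a) - b * log(cost), so an ordinary linear fit recovers a and b."""
    xs = [math.log(c) for c in costs]
    ys = [math.log(l) for l in losses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope  # (a, b)

def predict_loss(a, b, cost):
    """Extrapolate the fitted curve to a bigger run."""
    return a * cost ** (-b)
```

The workflow then mirrors the one described: fit on cheap runs, extrapolate, and check that the big run is "right there" on the curve before committing to the next scale.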
So that, coupled with the metric that we chose, which is a very precise way of measuring performance, allowed us to really hone in on parameters that worked really well, and understand how we want to scale those up, especially as we're changing things about the network. One of the things that we did is we used a custom tokenizer. Changing the tokenizer changes a bunch of other things about the model. So how should we scale up with this entirely new tokenizer? No one has ever made a model this large with this tokenizer before. So how do we want to change all these things? CARBS kind of shows you: look, as you change these parameters, these other ones are dependent on them. These are the relationships between them. So you can better understand: OK, if I'm going to scale this up 10x or 100x, where do I want to be? You can only go so far. And so we did run, I think, one at 14B or something like that to check. We had a bunch at 1B, then I think just one at 14B, and then 70B. So we get to check: oh, is this on the curve? Is this where we expect? It was right there. So then, great, go on to the next one.

SWYX [00:45:56]: Yeah, I mean, that makes a lot of sense. I wonder, so one of the key questions, and correct me if I'm wrong, but usually people do their search or their evals just based on loss. You actually evaluate based on the sort of end-state evals that people might expect, like HellaSwag and LAMBADA, whatever. What is the norm here? Is there a norm?

JOSH [00:46:20]: Yeah, I don't know if there's a hundred percent a norm.

SWYX [00:46:21]: I don't know.
I only see loss on most people's reports.

JOSH [00:46:25]: Loss is very nice because it's very precise. It will tell you very fine-grained differences between really small changes in your hyperparameters or network architecture. Whereas, especially at the smaller scales, if you're looking at accuracy, it's very noisy. It might be zero or a hundred, or fluctuating by 10 or 20 percentage points, which makes it really hard to tell whether a change actually meant anything. So our loss is sort of a combination of these two. Instead of saying, let's just look at perplexity, we say, let's look at perplexity on the tasks that we care about, for multiple-choice questions, effectively. So we're saying: yes, this is formulated as a multiple-choice question, and we're going to look at the loss, the perplexity, for this particular answer token. And that ends up being something that's both targeted to what you actually care about and also very precise. The nice thing about this, though, is that it's independent of the data that you train on. One thing that's annoying about perplexity, about loss, is that as you change your dataset, this is really obnoxious, because it fundamentally changes your loss, right? And so you can't tell: how do I tweak my dataset? But because we have this held-out evaluation dataset, the measurement stays comparable even as the training data changes.
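The evaluation described here, perplexity on the answer tokens of a multiple-choice question rather than raw accuracy, can be sketched as follows. The toy logits and helper names are illustrative assumptions; a real harness would take the logits from the model and the token ids from the tokenizer:

```python
import math

def choice_logprob(logits, answer_ids):
    """Average log-probability assigned to one option's answer tokens.

    `logits[t]` is the logit vector over the vocabulary at the position
    that predicts answer token t; `answer_ids[t]` is that token's id.
    The continuous score is the precise signal the transcript describes."""
    total = 0.0
    for t, tok in enumerate(answer_ids):
        z = logits[t]
        m = max(z)  # stabilize the log-sum-exp
        logsumexp = m + math.log(sum(math.exp(v - m) for v in z))
        total += z[tok] - logsumexp
    return total / len(answer_ids)

def pick_choice(logits_per_choice, answer_ids_per_choice):
    """Score every option and return (argmax index, all scores).

    The argmax gives the noisy accuracy number; the scores themselves
    are the fine-grained loss-style metric."""
    scores = [choice_logprob(l, a)
              for l, a in zip(logits_per_choice, answer_ids_per_choice)]
    return max(range(len(scores)), key=scores.__getitem__), scores
```

Averaging these log-probabilities over a fixed, held-out question set gives a metric that is both targeted at the capability you care about and smooth enough to compare tiny hyperparameter changes.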
It's the final episode of Season 7! Experience time and space condensing into a singularity with host Angel Leon and Producer Brian as they guide you through ASCII's rendition of the Multiverse! We'll revisit some of our favorite episodes and guests from Season 7 and have some laughs along the way. Thank you for tuning in this season, and we'll see you in August for Season 8!
When I invited Roberto Mayer of São Paulo, Brazil to be a guest on Unstoppable Mindset, I did not foresee the scope and far-ranging directions our conversation would go. Let me first tell you a bit about him. Roberto has spent his life in São Paulo. Even at an early age he was teaching and tutoring classmates in math and science. While in college in the late 70s, he learned about microcomputers and helped bring them to South America. While at São Paulo University he also held a full-time job at a bank, computerizing the organization. For the past twenty years he has owned and operated his own consulting organization. He also volunteers for several organizations, and he even finds time to relax playing indoor volleyball. Roberto, as you will see, is a deep thinker and a philosopher. During our time we discuss computers, of course, including the future of AI, religion versus spirituality, and alcohol, drugs, and addiction. I find Roberto to be a humble and thoughtful person. I trust you will find him to be the same and that you will value our time together. About the Guest: Roberto pioneered the introduction of microcomputers in South America as a teenager in the late 70s. After some years as a corporate employee, he started working as an entrepreneur, and has not stopped to this day. In parallel, he developed an academic career in math and computer science at São Paulo University over many years. During his long career, Roberto has always worked as a volunteer across many organizations. His participation in IT trade associations evolved from local to worldwide. Hence, when life presented challenges related to drug addiction in his family, he entered the world of mutual help groups. Roberto's writing skills have turned into several books over time, covering various aspects of his rich career.
Ways to connect with Roberto: Website: https://robertocmayer.com.br LinkedIn: https://www.linkedin.com/in/rocmayer Facebook: https://web.facebook.com/roberto.c.mayer.br Instagram: https://www.instagram.com/roberto.c.mayer.br YouTube: https://www.youtube.com/rocmayer About the Host: Michael Hingson is a New York Times best-selling author, international lecturer, and Chief Vision Officer for accessiBe. Michael, blind since birth, survived the 9/11 attacks with the help of his guide dog Roselle. This story is the subject of his best-selling book, Thunder Dog. Michael gives over 100 presentations around the world each year speaking to influential groups such as Exxon Mobile, AT&T, Federal Express, Scripps College, Rutgers University, Children's Hospital, and the American Red Cross just to name a few. He is Ambassador for the National Braille Literacy Campaign for the National Federation of the Blind and also serves as Ambassador for the American Humane Association's 2012 Hero Dog Awards. https://michaelhingson.com https://www.facebook.com/michael.hingson.author.speaker/ https://twitter.com/mhingson https://www.youtube.com/user/mhingson https://www.linkedin.com/in/michaelhingson/ accessiBe Links https://accessibe.com/ https://www.youtube.com/c/accessiBe https://www.linkedin.com/company/accessibe/mycompany/ https://www.facebook.com/accessibe/ Thanks for listening! Thanks so much for listening to our podcast! If you enjoyed this episode and think that others could benefit from listening, please share it using the social media buttons on this page. Do you have some feedback or questions about this episode? Leave a comment in the section below! Subscribe to the podcast If you would like to get automatic updates of new podcast episodes, you can subscribe to the podcast on Apple Podcasts or Stitcher. You can also subscribe in your favorite podcast app. 
Leave us an Apple Podcasts review. Ratings and reviews from our listeners are extremely valuable to us and greatly appreciated. They help our podcast rank higher on Apple Podcasts, which exposes our show to more awesome listeners like you. If you have a minute, please leave an honest review on Apple Podcasts.

Transcription Notes:

Michael Hingson ** 00:00 Access Cast and accessiBe Initiative presents Unstoppable Mindset, the podcast where inclusion, diversity and the unexpected meet. Hi, I'm Michael Hingson, Chief Vision Officer for accessiBe and the author of the number one New York Times bestselling book, Thunder Dog, the story of a blind man, his guide dog and the triumph of trust. Thanks for joining me on my podcast as we explore our own blinding fears of inclusion, unacceptance and our resistance to change. We will discover the idea that no matter the situation, or the people we encounter, our own fears and prejudices often are our strongest barriers to moving forward. The Unstoppable Mindset podcast is sponsored by accessiBe, that's a c c e s s i capital B e. Visit www.accessibe.com to learn how you can make your website accessible for persons with disabilities, and to help make the internet fully inclusive by the year 2025. Glad you dropped by; we're happy to meet you and to have you here with us.

Michael Hingson ** 01:21 Hi there, I'm your host, Michael Hingson, and welcome to another episode of Unstoppable Mindset. Today we get to interview Roberto Carlos Mayer. Roberto lives in São Paulo, Brazil, and has a really interesting story to tell, I'm sure, in a lot of ways. One of the things I learned from reading his bio is that he brought microcomputers to South America as a teenager in the late 70s. That must be kind of fun. But Roberto has had a long career as an entrepreneur, working in a lot of different kinds of fields, and we'll get to that. He's also a writer, and has been an entrepreneur, as I said, most of his life.
So Roberto, welcome to Unstoppable Mindset. We're really glad you're here.

Roberto Carlos Mayer ** 02:08 Thanks, Michael. I'm very glad for your invitation, and hope to share a little bit of my long story.

Michael Hingson ** 02:17 Well, why don't we start at the beginning of your long story. So why don't you tell us a little bit about you growing up and all that.

Roberto Carlos Mayer ** 02:24 Okay. I started my involvement with computers, as you mentioned, in the late 70s. At that time, I was in college, and the chemistry professor told me that his brother had brought some microcomputers from the United States, and he was gathering people to try to understand what they did, how they could be programmed, and so on. In school, I was always a very good student in math and other scientific subjects, so I accepted that invitation. And from that time on, I started working with computers, up to this day. I never changed my mind.

Michael Hingson ** 03:20 Worked out pretty well. Well, so go back a little bit further. Have you always lived in São Paulo?

Roberto Carlos Mayer ** 03:28 Yes, in fact, I have lived in São Paulo all my life.

Michael Hingson ** 03:35 So what did your parents do? And how did that shape what you do?

Roberto Carlos Mayer ** 03:44 Well, in fact, I have been always independent. I started working very early. I think I was 11 or 12 years old when I started lecturing some colleagues in school in hours after school, and so I developed my independence very, very early in life, and always managed to do many things simultaneously. I think that's my characteristic. Besides my work with computers, I've always managed to bring together studying and social activities and volunteering activities, from very early on.

Michael Hingson ** 04:40 Well, when you were 11 and 12, and you said you were lecturing to some of your classmates, what did you lecture about?
Roberto Carlos Mayer ** 04:49 Well, in fact, I lectured about math, about physics, about chemistry, about English. There were some classmates who had great difficulty in some of the subjects, and the teachers always considered these people to be those that would not be able to learn. But I managed to teach them, and they passed the exams. So their parents were very satisfied with my work, and this was for me a significant income source. It also allowed me to decide what to do with my money, which, even in those times, was not the standard behavior for teenagers.

Michael Hingson ** 05:44 No, it certainly wasn't. So did your parents encourage you to do this?

Roberto Carlos Mayer ** 05:51 In fact, my father was never very involved with me. But my mother encouraged this, because she knew that it was the thing I liked to do.

Michael Hingson ** 06:07 And so she encouraged you to develop your talents. Did she work?

Roberto Carlos Mayer ** 06:13 Yes, she worked as a secretary at a big corporation.

Michael Hingson ** 06:19 And what did your father do?

Roberto Carlos Mayer ** 06:21 My father was an eternal student. He was involved in some very exotic subjects, which I never got to understand a hundred percent. But as far as I know, he didn't hold regular work for long.

Michael Hingson ** 06:45 But you were always interested in math and science and technology, which is kind of cool. And you learned to program these computers that your chemistry professor told you about. So what languages did you program in? What did you learn?

Roberto Carlos Mayer ** 07:02 Well, the first language I learned to program in was BASIC.

Michael Hingson ** 07:10 Yeah, I remember BASIC.

Roberto Carlos Mayer ** 07:11 But then I started studying the organization of the microprocessors, and taught myself to program in assembler also.
So I learned the assembler for the Apple II's chip and for many others I don't remember. Michael Hingson ** 07:38 Well, so you did that in college. And when you graduated, what did you get a degree in? Roberto Carlos Mayer ** 07:51 Well, in fact, the educational system here in Brazil is a little bit different. We get a standard qualification just for completing our studies as teenagers, and then we get into the university afterward. But when I left school, I started working, due to this involvement with computers, first as a freelancer. And then, in a very short time period, I managed to start working for a very huge local bank here in Brazil, where I was responsible for introducing this microcomputer culture. That was at the beginning of the 80s. And so I had the challenge, once again, to manage my university studies simultaneously with this professional work, which obviously was all day. Michael Hingson ** 09:02 What were networks like back then? You talked about using microcomputers, but they had to, in one way or another, communicate with each other, I would assume, right? Roberto Carlos Mayer ** 09:13 Well, in fact, communication was very, very restricted. We had some communication through serial cables. I remember RS-232. Michael Hingson ** 09:25 I know. Roberto Carlos Mayer ** 09:30 And another experiment I was involved in, which is also uncommon: at that time, there were no printers for microcomputers. So we adapted a telex machine to be used as a printer for a microcomputer. But telex machines don't use the ASCII character system. So we had to study how the telex machines code the characters they print, and then develop a routine to do the translation from the computer's ASCII character set to the set used by telex machines, which is called the Baudot code, invented by a Frenchman named Baudot.
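Roberto's original routine is long gone, but the translation he describes can be sketched. Telex machines speak ITA2 (the "Baudot" code), which has only 5 bits per character, so it multiplexes two sets via LTRS/FIGS shift codes; the translator must track the current shift and insert a shift code whenever the character set changes. This is a minimal illustration in modern Python, not his code; the function name is made up, and only letters, digits, and a few control characters are covered:

```python
# ITA2 ("Baudot") shift codes: 5-bit values that switch character sets.
LTRS, FIGS = 0b11111, 0b11011

# Letters-shift code table (standard ITA2 5-bit values).
LETTERS = {
    'A': 3, 'B': 25, 'C': 14, 'D': 9, 'E': 1, 'F': 13, 'G': 26,
    'H': 20, 'I': 6, 'J': 11, 'K': 15, 'L': 18, 'M': 28, 'N': 12,
    'O': 24, 'P': 22, 'Q': 23, 'R': 10, 'S': 5, 'T': 16, 'U': 7,
    'V': 30, 'W': 19, 'X': 29, 'Y': 21, 'Z': 17,
}
# Figures-shift: digits reuse the codes of the QWERTY top-row keys.
FIGURES = {'1': 23, '2': 19, '3': 1, '4': 10, '5': 16,
           '6': 21, '7': 7, '8': 6, '9': 24, '0': 22}
# Space, CR, and LF are valid in either shift state.
BOTH = {' ': 4, '\r': 8, '\n': 2}

def ascii_to_ita2(text):
    """Translate an ASCII string into a list of 5-bit ITA2 codes,
    inserting LTRS/FIGS shift codes as the character set changes."""
    out, shift = [], None  # shift is None until the first character forces one
    for ch in text.upper():  # ITA2 has no lowercase
        if ch in BOTH:
            out.append(BOTH[ch])
        elif ch in LETTERS:
            if shift != LTRS:
                out.append(LTRS)
                shift = LTRS
            out.append(LETTERS[ch])
        elif ch in FIGURES:
            if shift != FIGS:
                out.append(FIGS)
                shift = FIGS
            out.append(FIGURES[ch])
        else:
            raise ValueError(f"no ITA2 equivalent for {ch!r}")
    return out
```

On the 8K machine Roberto mentions, this kind of lookup would have lived in a small table in assembler behind the BASIC interpreter, with the resulting 5-bit codes clocked out the serial port to the telex machine.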
Michael Hingson ** 10:21 So, basically, when you printed something, the process was that the microcomputer, whatever computer you were using, would send the ASCII characters to a translating computer, which would translate and then send it to the printer. Roberto Carlos Mayer ** 10:43 No, it was all running on the same computer. We developed code which was running behind the high-level programming language, and we connected the telex machine to the serial port. So it was all running on a single microcomputer with 8K of RAM memory. Michael Hingson ** 11:13 You didn't even have a parallel cable, huh? Roberto Carlos Mayer ** 11:15 No, we did not. Michael Hingson ** 11:19 Well, when I went to college at the University of California at Irvine, one of the things that I didn't have access to was any kind of a Braille printer. They didn't really have much of any of those things back then. And one of the people in the computer science department, whom I got to know very well, Dick Rubinstein, found a place that had developed a sort of a way of making a Braille printer. It wasn't an IBM Selectric; it was one of the printers with the little print cylinders. And somebody had developed a routine, and a modified version of the cylinder that had some Braille dots on it in certain positions and in certain rows. If I wanted to print something, the printer was actually connected to a PDP-8 computer that did the translation. So I could have my keyboard and my system connected through a modem at 1,200 baud, and then this PDP-8 would actually do the translation, so I could actually get Braille printout. It was a pretty fascinating sort of thing. And it worked. But, you know, that was back in 1971, 1972, and '73 and beyond. But technology has changed a little bit since then, hasn't it?
Roberto Carlos Mayer ** 13:05 It hasn't changed by many orders of magnitude. Michael Hingson ** 13:09 Yeah, being sarcastic. Yeah. So you went to work for a bank? And what did you primarily do for them? Roberto Carlos Mayer ** 13:18 Well, in fact, the bank had bought some microcomputers and didn't know exactly how to apply them in practice. So my first job there was to develop the needed application software in order to make these microcomputers useful. And when this was completed, in a couple of months, they started buying more and more microcomputers, and we needed more and more people. I was at the time 20-something, and I had to manage a huge team and to develop a group of new programmers, which I had to train. I stayed there until 1986, and at the time I left, I was 25 and was managing a team of 40 people. Michael Hingson ** 14:22 Now, when you were working at the bank, were you also doing work at São Paulo University? Roberto Carlos Mayer ** 14:29 Yes, in fact, at that time I was a student. I was studying at São Paulo University, because it was my wish to continue to study something related to math and science and computers. But at that time, at the public university here in São Paulo, the only course available with lectures at night was a course which was intended to train math professors. So that was the only choice I had, and I went after it and decided to take that course. In fact, when I finished that course, that was one year after I left the bank; I had already started working on my own. Thanks to that, I was then able to start doing my master's in science, in computer science and applied math, and that took me another five years at the university. And after a year and a couple of months, I was invited to become a professor at the computer science department. I stayed there for almost 12 years.
Michael Hingson ** 16:00 When you were studying and working at the bank, and then after you left the bank, I think you started your own consulting and went out on your own, right? Yep. Okay, how did you do all of that at the same time? Because being a student is pretty much a full-time job, typically, and working at the bank had to be a full-time job. That was a lot to do at once. Roberto Carlos Mayer ** 16:23 Yes, I think that's one of the abilities I developed over all my life. Managing to balance these very different things requires, in the first place, a lot of discipline. And the other thing is, many of the things I was studying were relatively easy for me. Studying math was never a problem; attending classes was enough for me to be able to pass the exams. The exercises were just tasks professors put on us, but for me they weren't necessary for learning. I remember when I was a very young child, about ten years old, there was a teacher in elementary school who, anyway, didn't want to teach. He wrote a lot of math exercises for the class to solve. And when he ended up writing all his exercises, I had already solved all but the last one. He took my piece of paper and used it to correct the exercises of the others. And I used the time I had free inside class to do my homework for the other subjects. So this was an example of how I was able to manage various things at the same time. Michael Hingson ** 18:07 So you worked at the bank during the day, right? Roberto Carlos Mayer ** 18:11 Yes, from the morning to 6pm. Michael Hingson ** 18:15 So classes were mostly at night for you then? Roberto Carlos Mayer ** 18:19 Classes started about 7pm and went until 10:30 or 11 at night. Yeah. Michael Hingson ** 18:28 Wow. When did you do homework? Roberto Carlos Mayer ** 18:31 Well, the same way I learned in school: inside the class. Michael Hingson ** 18:37 Okay.
Have you ever been able to teach other people to develop those same skills? Have you ever tried to do that? Roberto Carlos Mayer ** 18:48 Well, in fact, that's one of my current projects. I'm involved in structuring this as a methodology, to teach others to be able to do the same and multitask. Michael Hingson ** 19:03 Yeah, and then be efficient. How's that working out? Are you having success teaching other people to do it? Roberto Carlos Mayer ** 19:15 Well, in fact, I am starting on structuring materials. I am not ready to offer this to the public at this moment. I hope to do this over the next 12 or 15 months. Michael Hingson ** 19:30 Well, it sounds like it would be a very fascinating thing to do. And if you can actually develop a program and a process and teach people to do it, that would certainly be a beneficial thing. At the same time, you know, people do need to take some time to relax. Do you ever take time to relax? Roberto Carlos Mayer ** 19:50 Yes, of course. Michael Hingson ** 19:51 Okay, just checking. Roberto Carlos Mayer ** 19:56 In my life, the best way I like to relax is traveling. And this is also an area where I have developed very uncommon experiences through my other work. Another thing about relaxing: I always say relaxing doesn't mean doing nothing; relaxing means doing something different from what you are doing, that is, changing your brain's operation to a completely different area. This can involve something like traveling. I like very much to travel by car, to plan travels, to get to know people and the way they live, and not the way the tourist packages are normally offered.
So to know people, in fact. And another way of relaxing, let's say, which I developed also very early, when I started with this at the time I was at the bank, is doing voluntary work. It involves promoting a cause, and it provides a way to know a lot of other people who are interested in the same cause, who have the same goals, but which is different from the working and studying space. So switching from one environment to the other is a very efficient way to relax. Another arena I've been involved in now for over 10 years is sports. That's another way of relaxing, and I take this very seriously: that time in my schedule is reserved, no matter how much it rains or whatever happens. Michael Hingson ** What kind of sports? Roberto Carlos Mayer ** I've been playing volleyball for 10 years. Michael Hingson ** 22:04 Volleyball? Indoor or outdoor? Roberto Carlos Mayer ** 22:09 Indoor, indoor. Michael Hingson ** 22:11 Yeah, well, then you get away from the rain. Okay, that's how you do that. Okay, I understand. Well, but even so, I hear what you're saying. And you really said something that I have felt for a long time. The problem with a lot of the guided tours and the tours that people buy is that you go somewhere and you're on a very strict schedule, and you don't really get to know people, and you don't really get the same flavor of the environment that gives you a deeper knowledge and understanding. And I'm with you: I'd rather go somewhere and get a chance to meet people and spend some real time. My wife was a travel agent for a few years, back when we first got married, and we would take occasional trips, familiarization trips, and again, they were well organized, but you didn't get to spend a lot of time; it was, as you would say today, very touristy. And so we found that it was a lot more fun when we took our own trips and really got to spend more time and get to know things a lot better than just the organized tours did. Roberto Carlos Mayer ** 23:27 Yes, I fully agree with that.
I always try to do it that way. Obviously, when you have a very short schedule, when you have some meetings, for work or for some organization where I volunteer, and you have to fly out and back in just one or two days, you obviously cannot devote a lot of time to that kind of exploration. But when I have at least a week to be at some place, I always like to reserve some time for these kinds of local incursions. Michael Hingson ** 24:08 One of the things that I also do is try to find local radio stations, for example, that I can listen to, to really get a little bit more of a flavor; of course, for me, only knowing English, it has to be in English. But yeah, I think you're right. And as a speaker, oftentimes I will go somewhere and not be able to spend a lot of time, because it's like one or two days and then I'm off again, or I come home. And so I don't get to know things as well as I would like. But I really enjoy it when I do have the time to spend a few days somewhere, to get to know people and to get to know the country. It is so wonderful to be able to have that opportunity. Roberto Carlos Mayer ** 24:56 Yes, the radio stations you mentioned are a very interesting strategy I also use during my travels. I speak, obviously, Portuguese; I speak English; and I'm fluent in Spanish and German. So this allows me to communicate in many countries. But when I'm in a country where I don't know the language, the first thing I do, if I've rented a car, is listen to the radio, to accustom the ear to the local language. It obviously depends which country you are in. In some cases it will be relatively easy. For example, when I was hearing the radio in the Netherlands: understanding Dutch, if you know English and German, is not that difficult, once you get through the filter of the accent.
On the other side, you have languages which are so complicated in their organization that you can hear radio or even television for hours or days and not be able to know whether you are hearing the news or the transmission of a sports event. That happened to me in Poland. The Polish language is very complicated, because it's a language which has roots in Slavic, in Latin, and in the old Germanic languages, like German and English. So for each word, you have to know from which of these roots the word comes. It's very, very difficult. Michael Hingson ** 26:52 Then you also have languages like Chinese, which are extremely complex and extremely different. The symbols and all aspects of it are significantly different from what we're all used to. Roberto Carlos Mayer ** 27:09 Yes, of course. You have languages like Chinese, or Japanese, or Hebrew, or languages like Armenian, which each have different writing systems and different sentence organization. But in this case, for example, if you look at written Polish, they use the Latin alphabet, but it's not understandable. I spent more than a week in Poland and managed to learn the basics, but it's very, very difficult. At least I was able to enter a restaurant and ask for sparkling water or non-sparkling water correctly. Michael Hingson ** 27:56 Yes, or carbonated water or not carbonated water? Roberto Carlos Mayer ** 28:01 That was too much. Yeah. Michael Hingson ** 28:04 Well, I hear you. But it is fun to go to different places. And I've had the joy of traveling to all 50 states in the United States over the years. And you know, there are different customs in different states; it's fascinating just in this country. And you see some of it, of course, being around different countries in South America, and certainly one of the larger ones.
And, again, the same thing: different customs. And it's fun and fascinating to meet people who observe customs different from what we're used to. Roberto Carlos Mayer ** 28:42 Yes. I consider that a privilege. I think it's something which I got from my volunteering. When I started as an entrepreneur, I started to volunteer in IT trade associations. And due to my ability to speak various languages, in a couple of years I was allocated to international relations. So I started to get involved in international federations in this area. And due to this, I had the opportunity to travel a lot, mainly in the Americas, from Canada to Argentina, and in Europe. In all, I've been to almost 50 countries and have driven cars in 29 of them. Michael Hingson ** 29:39 You've certainly had a wonderful, golden opportunity to experience a lot. I've been to a few countries, not 50, but I've been to a number, and really enjoy the people. And I think that's part of it: we have to recognize that not everybody's exactly the same way we are, and we shouldn't be disappointed if things aren't just the way we are used to here, or in your case, where you are, because people in different civilizations and different cultures are different. And we should respect that. And sometimes I've seen tourists who don't, which is unfortunate. Roberto Carlos Mayer ** 30:22 In fact, the more civilizations and different cultures you know, the better understanding I think you'll have of how human life works. In fact, I think most of humanity's problems come from those people who live in a single culture, maybe due to religious beliefs or due to some autocratic government, who are constrained into a single position. But I think most humans are, in fact, good people, even those involved in autocratic regimes. I won't tell the guy's name.
But for example, I had the opportunity to chat for hours and hours with a guy in Cuba who was part of the official Communist Party. In Cuba, for some years now, you can have private businesses, but the licenses are only given out to members of the party. It was my second time in Cuba, so I knew that I would be allowed to travel alone through the country. I went to visit a national park, which is about 300 kilometers from Havana. And then, in the evening, I got to a very scenic city on the shore. And this guy, who had the license to operate a small hotel and restaurant there, invited me, obviously paying, to have dinner there. We started chatting; I came in when it was still day, and when I left his place it was already after midnight, with another three hours to drive back to Havana, to get back to the hotel, because the conference I was participating in started the next day. But it was a very interesting chat, and after some doses of Cuban rum, he lost any restrictions on his talk. And then he told me about his real life. Michael Hingson ** 33:06 And that's the whole point: to get to know people well enough to really have the opportunity to understand. So it's a lot of fun to do. Well, you continue to this day to do math, and deal obviously with science and so on. But when you left the bank, what did you start to do from a consulting and entrepreneurial standpoint? Although obviously you had an entrepreneurial spirit before then. What did you start to do to earn an income after leaving the bank? Roberto Carlos Mayer ** 33:42 Well, in the first years I worked as a consultant. I did some programming, and I did a lot of teaching other people to program. At the time, the C language was on the market, and here in Brazil there were very few people who were able to teach it to other programmers.
So at that time, I, I started teaching and also writing I published some technical books in the programming arena, the time also was invited to translate some of the of the American authors which were writing about those subjects at that time. So, I, I had a lot of involvement and then when, at the university, I went into the working my thesis then I started to develop a project about the development of user interfaces. Now that was at a time where not even Windows three was on the map. market. And that was the the keystone to set up my my first former business. Yeah. That was 1990. Yeah. Michael Hingson ** 35:18 Yep, Windows was was around. I loved MS DOS. But I also understand the value of windows and graphic interfaces and all the other things that Windows brought. But for a while MS DOS was a much more accessible language or system operating system for me to use then windows that wasn't really something that worked well with screen readers for blind people. And that evolved over time. Roberto Carlos Mayer ** 35:50 Technology always, always evolves. Basically, companies reframe recycling what they do in the you have to reinvent yourself every couple of years to stay on the market. And you have at this time, no, no it product you can buy, which is on the market for more than 10 years. Michael Hingson ** 36:18 If that long, but yeah, and you're right. And and look, there are some things that although the products change, the basic concepts are things that have been around for a while, and it's just that they evolve. I mean, look at integrated circuits, what are they, they're, they're made up in part of a lot of transistors that that came around first, and transistors came from tubes. And although the theory is a little bit different, basically what they do, ultimately is the same thing, but we're getting faster and smaller and more efficient in everything that we do. 
Roberto Carlos Mayer ** 36:56 Yeah, in fact, that happens in the electronic arena, and it happens also on the math side. If you look at the papers written by mathematicians like von Neumann in the 30s and 40s, the structure of current computers still obeys the basic ideas they put on paper. And what we are now seeing being developed, which changes this, is what are called quantum computers. They will change the theoretical background, but they are still very, very limited, and they need to use standard computers as an interface, because they have no interface of their own up to this moment. So maybe in the future they will be just add-ons, very capable processors to do something that standard computers do not. But there is no clear way for them to gain the main market, for us to have these kinds of computers at home or in a standard business. Michael Hingson ** 38:13 Not yet. But it will happen; no doubt that it will happen at some point. Well, so going on that same discussion point: what about artificial intelligence? I actually listened to an interview with someone recently who said that the time is definitely going to come, and maybe not in the too distant future, that computers will be able to truly create on their own and truly have the potential to overwhelm what we do. Do you think that's true? Roberto Carlos Mayer ** 38:56 I don't. Artificial intelligence is a very old subject. I remember, when I was still a student at the university, we were visited by Japanese professors who were coming down here to tell us about what at the time was called the fifth-generation computer project to develop artificial intelligence. That's 40 years ago. And we had a lot of press coverage during the last 12 months due to the kind of generative AI which ChatGPT provides.
And in fact, the algorithms which are at the base of these kinds of products have been known in the computer science arena for decades. The main point is that the computing power available at the time wasn't enough to build big enough models to simulate being human. That is, I think, the main difference nowadays. But this doesn't change the basic conceptual fact that they are just reproducing a combination of facts and knowledge which they collected from other humans. And creativity is very different from neural networks or from other so-called AI algorithms. Michael Hingson ** 40:40 So let's bring quantum computers and so on back into it, which take processing to a whole new level. You don't think that will give computers the opportunity to become creative in their own right and compete with what we do? Roberto Carlos Mayer ** 41:05 I think we won't see this in our generation. If you look at the human brain in detail, science has still not explained how it works: how humans are, in fact, able to connect ideas which have been stored in their brain for decades, like I'm using my brain now in order to answer your question, and what's happening in my brain in order to formulate the words I'm saying to you. That's not yet explained. So it would be very, very difficult to build something simulating something we don't know how it works. Besides that, the number of neurons we have inside the brain of every human is still bigger than any computer ever built. The other point is economic. I think there's another factor which people are not looking at: these very huge AI models need a lot of computing power, so they are restricted to very huge organizations. And in fact, we are seeing that the capacity of the data centers being used by these kinds of models is restricted to what are called the big tech companies. And smaller companies are just left to pay them to use their capacity.
The other point is the amount of electric power, and the impact on the environment this will all have, which could also be a limitation over time for the usage of this kind of computing. The same way, for example, it has been happening with some of these cryptocurrencies, which were also a huge promise of big changes for humanity a couple of decades ago, and it still hasn't happened. Obviously, you have a range of people using this kind of stuff, but it has not gone mainstream. Mainstream is still standard money. Banks continue to exist. International trade is still conducted using standard money. Michael Hingson ** 43:48 Well, and cryptocurrency took some big hits over the last year or two as well. And it is not the panacea that everyone said it was going to be. Roberto Carlos Mayer ** 43:58 Yeah, exactly. That's common in IT, right? We frequently have these kinds of huge promises which then do not deliver. The metaverse, for example, is another example; it was very huge in hype and in marketing a couple of years ago, and it seems also to have been disappearing, left behind by AI. Michael Hingson ** 44:26 Yeah, we're very fickle as a race. We just go by the latest thing, or the thing that people start to publicize, and we forget the other things. And that's a problem. We don't focus very well, especially over the long term. Roberto Carlos Mayer ** 44:47 Yes, that requires the capacity, first, to remember all that has happened, and most people prefer to forget. Michael Hingson ** 45:00 Yes, we do not learn from history nearly as well as we ought to. Roberto Carlos Mayer ** 45:07 And so we are condemned to repeat it. Michael Hingson ** 45:11 Good point. Roberto Carlos Mayer ** 45:15 Well, someone wrote this before me; I'm just repeating it. I don't remember who wrote it. Michael Hingson ** 45:19 No, I know what you're saying, though. I've heard that too.
So, in addition to working and being in school and being an academic: are you still doing things at São Paulo University? Roberto Carlos Mayer ** 45:35 No, I left the university at the end of the 90s. I just continued my involvement in the IT trade associations. Plus, at the time, I had little children, two boys, to care for. So that was too much to synchronize and to manage, all of this, even for me, so I had to step down from the university. The people there didn't want me to leave; it was a battle for almost two years to be able to leave, but in the end I left. Michael Hingson ** 46:17 Children do take time, don't they? Roberto Carlos Mayer ** 46:19 Oh, yes. When they are small, especially. Michael Hingson ** 46:25 Well, but as they grow older, you have other challenges. Roberto Carlos Mayer ** 46:31 You need less time, but resources you will still have to provide. Michael Hingson ** 46:36 Maybe less time, but it's got to be quality time. Now, are you still married? Roberto Carlos Mayer ** 46:44 Yes, but I'm in a second marriage. The first marriage... Michael Hingson ** 46:52 Went a different way. But it's good to have somebody to share with, of course. Now, have you taught her to multitask and be as organized as you? Roberto Carlos Mayer ** 47:05 Similar, maybe. Not to the same level. But I think when we get older, we learn to see value in these kinds of abilities in other people. Michael Hingson ** 47:21 Yeah. Which is great. Why did you start volunteering and doing some of that in the first place? Roberto Carlos Mayer ** 47:31 Well, I had started volunteering when I was still at the bank, to organize user groups to foster the introduction of microcomputers here. And at the time, I was involved with what was called the Microsoft User Group. Michael Hingson ** 47:56 I remember that, yeah.
Roberto Carlos Mayer ** 47:58 And I even had the opportunity to interact in person with Bill Gates, when he was worth just a couple of millions, not billions. Michael Hingson ** 48:12 You mean that guy who said we'd never need any more than, what was it, 640K of memory? Yeah. Okay. Roberto Carlos Mayer ** 48:19 He traveled here to Brazil for the first time in 1987. And at that time, due to my English, I was in charge of helping him out with the lectures he was going to give at our meetings. And I also had a long conversation with him one evening, in fact one night. There was a huge meeting at the house of the guy who at the time was the president of the user group. This guy also had commercial interests in representing Microsoft in Brazil, and he invited many politicians and other businessmen, and they were all around Bill Gates the whole evening. I remember it was almost midnight when the owner of the house called me aside and asked me if I was able to have a bit-and-byte conversation in English. I said, yeah, of course. And then he told me: Bill Gates is already tired of speaking about economics, politics, and business; he's asking for someone to talk about technical subjects. So I had the privilege to sit on a sofa in a room of that big house with Bill Gates for almost two hours, chatting about technical subjects. At that time, Microsoft was developing what was called the Quick family of programming languages, which then became the Visual family, which is still on the market today as Visual Basic, maybe the best known of them. So I think that was a privileged situation. Getting back to what you were asking about the volunteering: due to all these experiences, I also started writing as a volunteer for some magazines and some newspapers, regular columns. And due to this publicity, the people who at the time were leaders of the IT trade associations came after me and invited me to participate.
And in that arena I have had a very, very long trajectory, first on the state level, then on the national level, and then on the international level. So much so that about eight years ago I wrote a book about all these experiences. Michael Hingson ** 51:25 What's it called? Roberto Carlos Mayer ** 51:27 Well, it's written in Portuguese. The title, translated into English, would be something like "Together We Are More." The basic idea is: when you gather together people who are after the same cause, you have a lot of techniques you can apply in order to influence public opinion and governments, and to create relations between the communities you are connecting. Because business is always between people. So when you want to do international trade, for example, you have to develop, in the first place, relations, and in the second place, trust with other people. Otherwise you can travel a lot and spend a lot of money, but you won't be able to sell anything. Michael Hingson ** 52:25 Going back to Bill Gates for just a quick second: would you say that Bill Gates is clearly one of the leading visionaries of our time? Roberto Carlos Mayer ** 52:37 I don't think so at the current time, but he was at that time. He and Steve Jobs set up an infrastructure for change in the IT arena whose consequences we are still experiencing. Michael Hingson ** 52:58 Who would you say are the leading visionaries today in all of that? Roberto Carlos Mayer ** 53:04 Well, I think we don't have someone we could call a very big visionary. Many people are trying to be this person, but it doesn't matter if you look at Elon Musk or the guy from Oracle; they are not presenting anything which will in fact bring us huge changes, as those two guys we were talking about before did. Michael Hingson ** 53:33 My thought is Elon Musk should have stayed with the Tesla vehicles.
He's done more, and could do more, to bring about change regarding vehicles, electric vehicles and so on, than by going into the technology world. Yeah, I think there are some issues there. Roberto Carlos Mayer ** 53:57 Yes, of course. But electric vehicles are not a new invention. In fact, electric vehicles existed before vehicles powered by oil. The first experiments, done in Germany in the late 19th century, were electric vehicles. Then the oil-based motors showed much more power, so they replaced them and went into production. I think this is an evolutionary process. What I have seen is that what are now called the traditional carmakers, like Ford or Honda or the others, have the capacity to produce similar products. There is no invention, no patents, nothing that makes the Tesla production unique. Michael Hingson ** 55:05 I guess what I'm saying, though, is that I think he stood, and stands, a bigger chance of having a greater impact if he had stuck with that, rather than going into some of the computer stuff where he clearly does not. But, you know, everybody makes their own choices. Roberto Carlos Mayer ** 55:28 Yeah, of course. I think if you look at his work at Twitter, the company has basically been destroyed by his policies inside it. The people who created the code have left the company; does changing the name do any good? No. Michael Hingson ** 56:03 That makes no sense and doesn't help anything at all. Well, so you've been writing. What are some of the more recent books that you've written?
Roberto Carlos Mayer ** 56:15 Well, after this book about the IT trade association experience, I started working on another book in a very different arena. I got involved in mutual support groups for people and families dealing with addiction, due to a problem in my own family. And thanks to all the experience I had previously in the other voluntary movements I was telling you about, I was able to understand the significance of this, and also to ask questions which most participants had never asked before. I was led to get in touch with the founders and the leaders, and I decided to research subjects which had not been researched before. Maybe you or the audience have heard about the Serenity Prayer, which came to prominence through Alcoholics Anonymous and is used in most mutual support groups. Most people just repeat it in a very mechanical way and don't think about what it really means. Yeah. There was another very interesting coincidence. One of the founders of the movement I participated in at the time was an American priest who was born in southern Texas, near the Mexican border, and came here to Brazil at the end of the 60s. He lived to 100 years and nine months of age, and during his last five or six years of life I had a lot of interaction with him. He wrote the foreword to the book I wrote about the Serenity Prayer. He even urged me to publish this book in the United States and put me in contact with some Jesuits in America. But then the pandemic came, so this is still on my to-do list. Michael Hingson ** 59:03 I hope it does get published in the United States; I think it would be very beneficial. What got you involved in the whole issue of religion and, you know, spirituality and so on? Roberto Carlos Mayer ** 59:17 Well, in fact, it's not religion.
Religion and spirituality are mixed up by many people, but in my vision they are two very different concepts. I was born in a Jewish family, so I have had this worldview since I was a child, but I've never been Orthodox. So I've always been open to understanding other people, and over time I have even participated in many other cultures. But the main point is this: when you look at religions, they try to explain how you have to behave, or what's expected, in order for you to get some kind of reward, maybe in this world, or maybe in the next world, after we leave our physical existence as humans. Spirituality, in my view, is something very different. Spirituality is, in fact, a set of rules which teach you how to behave and how to act so that you can benefit, and others are not damaged, by the way you are acting. It's about interaction and action. And this is very different from religion. If you look at human history, whether at Western civilization, like the Crusades in the Middle Ages, or at what has happened over the centuries in India, a lot of wars have been fought just over religious differences. So that's a very, very complicated subject, which we could be talking about for hours and hours. Yeah. I even have a whole speech about the subject, telling the history of religions and how spirituality is different. It's a very interesting subject, and it's the subject I touched on in this last book. Michael Hingson ** 1:01:53 What is so unfortunate is that God is God, everywhere. But every religion thinks that it's the only way to get to God, and that God supports only that religion. And neither of those is true. Roberto Carlos Mayer ** 1:02:11 In fact, most religious leaders have tried to use this as a way of, in some sense, gaining power. Yeah. That's what history has shown us. Michael Hingson ** 1:02:28 Yeah, it's not that way at all.
Roberto Carlos Mayer ** 1:02:33 Of course. But I think this process of people understanding this, and acting in a way which is collectively positive for the whole of humanity, is in fact something still in its beginnings. We still have wars for religious reasons. Michael Hingson ** 1:03:01 Well, or we have wars and some people try to say it's for religious reasons, but it's not. I mean, look at what we've experienced for a little while now, the whole issue with Israel and Hamas. And I'm not going to say the Muslim world, because I think it isn't, it doesn't need to be that way, if you deal with the fact that in reality it's the same God. But some people try to use it, again, for their own purposes, rather than really being very spiritual about it at all. Roberto Carlos Mayer ** 1:03:39 Yeah. The moment you fire, whether it's a rifle or a missile or a bomb, you are damaging another human. So at that moment you have stopped having spiritual behavior, because what you are sending in that direction causes damage. Yeah. Michael Hingson ** 1:04:02 You mentioned mutual support groups. Tell me more about that. Roberto Carlos Mayer ** 1:04:08 Well, this is the man I mentioned who wrote the foreword to my book. His name was Harold Rahm, and he brought to Brazil an American movement called Tough Love, to help families of people involved with addiction. It got some adaptation here in Brazil, and the movement is still active; I participated in it also. But I had some problems with it after this book came out, some very difficult problems. I think they were very, very stuck on what they had made up and didn't want to change anything, and I think the main reason is that the contents I set out in this book showed that something more was really needed.
Now, any human invention needs to be adapted over time, because we are not God, we are not perfect; improvements can always be introduced. So, for almost four years now, we have had a new organization called, translating it from Portuguese, something like Conscious Love, which has the same purpose. But we have done a lot of updates to the methodology and have expanded it to cover not only addiction, but also other kinds of very difficult situations people can face in life, like, for example, people who have children with severe disabilities such as autism, or others which are really difficult to handle. Michael Hingson ** 1:06:25 So, have you had any addiction issues in your family? Yeah? So that makes it personal and brings it a little bit closer to home. Roberto Carlos Mayer ** 1:06:35 Yes, of course. Addiction in society is still a kind of taboo, and you know, most people don't know what's happening. Most people don't want to learn about it. And, at least here in Brazil, most people who are not informed about the subject tend to make a moral judgment, while in fact it's a disease. Yeah. Michael Hingson ** 1:07:12 I know. And there are a lot of people who drink a lot of alcohol. I've never liked the taste of alcohol; I can drink wine, and I can occasionally have a drink. But I've seen people drunk, and I just don't ever want to be in that position. It doesn't help. I've seen how people behave, and sometimes it's not been a person who's an alcoholic; they just overindulged once. When I was in college, there was one colleague who just drank to excess one night. He wasn't an alcoholic; he never did it again. But he got really sick from all the drinking, and he never did that again, at least in the time that I knew him. But you know, it's a problem. And we also try to use some of those things to cover up our own fears, and we don't learn to deal with those either.
Roberto Carlos Mayer ** 1:08:13 In fact, most people in the situation you are mentioning get scared; they consider it very, very bad to be in that situation, and don't repeat it. But that's another arena where science is still in debt to humanity. There is a small group of people who go into addiction very easily: the sensations after using alcohol or other substances become so important to them that they turn themselves into a kind of slave, repeating the experience again and again and again. And medicine already knows that when you repeat this process, the amount of alcohol or other substances you need to provoke the same result in your body gets bigger and bigger. That's the reason why people who start to drink regularly then drink more each time, and in general this brings huge health problems when they don't stop. With the other kinds of what are called the heavier drugs, people generally stop earlier, because the consequences come up more rapidly. Michael Hingson ** 1:09:52 Except for the people who don't want to face the consequences, and it's not only a problem for them; it becomes more of a problem for all of us. Roberto Carlos Mayer ** 1:09:58 Yes, and for the people who live with them. That's the point. Yeah. Every single person who is in addiction provokes problems for at least four other people around them. And that's the reason why these support groups exist, because supporting these people is not a standard public policy, up to this moment, in any country in the world that I know of. Governments are into what's called the drug wars, not the process of healing families. Some health organizations around the world help people who are in addiction, but the families around them have very little support. Michael Hingson ** 1:10:51 So they don't know what to do about it.
And, well... Roberto Carlos Mayer ** 1:10:55 It's not only that they don't know what to do; it's that addiction changes people very radically, right? This creates very strong emotions inside us when you live with it, up to the point where you think you are the worst person in the world, that you're having such problems that nobody else has ever passed through this. And this is not true. In fact, everyone who goes through this process sees the same kind of behavior, but since this is taboo, you have no access to that information. Obviously, the first thing we say in support groups when you come in is: you are not alone. There are a lot of people who have gone through the same process. Michael Hingson ** 1:11:49 And that's the real point, and that's the value of support groups: there are people who have been there, they've done that. And if you let them into your lives, you learn a lot more about how to deal with it and how to address it. Well, what kind of activities and initiatives do you have coming up? What's next for you? Roberto Carlos Mayer ** 1:12:11 Well, as I told you at the beginning of our conversation, I am transforming my abilities in time management and discipline into a methodology. It will probably become another book and, obviously, a lot of teaching. Structuring this kind of thing has to be done very carefully, because you are involved directly with people's lives. The idea is helping people to live more significantly, to balance all areas of life. It's customary for people to say, I don't have time to do this and that, but it's just a matter of choices. Every day, every moment, we can choose what we want to do. Michael Hingson ** 1:13:14 It's always a choice. Roberto Carlos Mayer ** 1:13:15 Yeah, exactly, a choice. So this has to be done very carefully.
And I think the many experiences I've been telling you about have put me in a situation where I can understand the impact of this. It's very different when you talk about something like this with people in an American scenario than with people in other cultures. So this has to be respected. But at the same time, although there are differences between humans, we also have similarities which can be explored if we are careful. I believe this can be delivered worldwide, but that is a huge ambition, and I am doing it carefully, so that it really goes through. I'm not in a hurry to produce this publicly, but I have already developed some speeches with parts of it, and I think people are liking it. Michael Hingson ** 1:14:35 Well, I hope it gets translated into English as it gets done; I would love to read it. Roberto Carlos Mayer ** 1:14:42 In fact, in the work we are doing in the movement, we are already developing many things in various languages. And while you were asking me the previous question, I was remembering a phrase from Elizabeth Gilbert. She wrote about her experience traveling in the Middle East and then to the Far East; it was made into a film, maybe you have heard of her. She was also a person with addiction problems. And there's a phrase of hers I remembered when you were talking about religion and spirituality: she says that religions are the ones that promise to save you from hell, and spirituality is for those who have already been in hell. Michael Hingson ** 1:15:42 That's the point. Well, I want to thank you for being with us. We've done well over an hour, and that's fine; that means we've enjoyed it. And I hope everyone listening has enjoyed it. I really appreciate you being here, and I hope that you, listening, found this useful and inspiring and helpful as well. We'd love to hear your thoughts.
So how can people reach out to you, to learn about what you do as a consultant and so on? If they'd like to reach out, how do they do that? Roberto Carlos Mayer ** 1:16:19 Well, the easiest way is my personal website, robertocmayer.com.br. I have a QR code projected here in my background where people can access it directly. Michael Hingson ** 1:16:38 Could you go ahead and spell the website? Roberto Carlos Mayer ** 1:16:42 Yes, the domain name is my name: Roberto, R O B E R T O, then C, which is the initial of my middle name, then Mayer, my surname, M A Y E R, dot com dot br, for Brazil. Michael Hingson ** 1:17:06 Okay. Well, I hope people will reach out. I very much enjoyed this, and I also want to keep in touch; we can certainly explore that. But I want to thank you, and I also want to thank you all for listening. If you'd like to reach out to me, anyone, you're welcome to do that; I'd love to get your thoughts and comments. Feel free to email me at michaelhi@accessibe.com, that's m i c h a e l h i at a c c e s s i b e dot com, or go to our website, www.michaelhingson.com/podcast, and Hingson is spelled h i n g s o n. Wherever you're listening, please give us a five-star rating. We love those ratings; we really value and appreciate them, and all of the comments that you want to make. So please give us a five-star rating and review the podcast, and I hope you'll listen to other episodes. If you just discovered us, welcome; I hope to see you on more of these. And Roberto, one last time, I want to thank you for being with us and spending all this time. Roberto Carlos Mayer ** 1:18:10 Thanks to you, Michael, for your invitation. Michael Hingson ** 1:18:23 You have been listening to the Unstoppable Mindset podcast. Thanks for dropping by. I hope that you'll join us again next week, and in future weeks for upcoming episodes.
To subscribe to our podcast and to learn about upcoming episodes, please visit www.michaelhingson.com/podcast. Michael Hingson is spelled m i c h a e l h i n g s o n. While you're on the site, please use the form there to recommend people who we ought to interview in upcoming editions of the show. And also, we ask you and urge you to invite your friends to join us in the future. If you know of anyone or any organization needing a speaker for an event, please email me at speaker@michaelhingson.com. I appreciate it very much. To learn more about the concept of blinded by fear, please visit www.michaelhingson.com forward slash blinded by fear, and while you're there, feel free to pick up a copy of my free ebook entitled Blinded by Fear. The Unstoppable Mindset podcast is provided by AccessCast, an initiative of accessiBe, and is sponsored by accessiBe. Please visit www.accessibe.com. AccessiBe is spelled a c c e s s i b e. There you can learn all about how you can make your website inclusive for all persons with disabilities, and how you can help make the internet fully inclusive by 2025. Thanks again for listening. Please come back and visit us again next week.
This week we meet Eder (spelled Edér if you're not an ASCII imperialist, like I obviously am). Also we meet problematic fave Durance.
PODCAST: This Week in Amateur Radio Edition #1315 - Full Version Release Date: May 11, 2024 Here is a summary of the news trending This Week in Amateur Radio. This week's edition is anchored by Chris Perrine, KB2FAF, Dave Wilson, WA2HOY, Don Hulick, K2ATJ, Eric Zitell, KD2RJX, Marvin Turner, W0MET, George Bowen, W2XBS, and Jessica Bowen, KC2VWX. Produced and edited by George Bowen, W2XBS. Approximate Running Time: 1:27:25 Podcast Download: https://bit.ly/TWIAR1315 Trending headlines in this week's bulletin service: 1. AMSAT: Chang'e-6 Successfully Launches: China's Historic Lunar Mission Begins 2. AMSAT: NASA Reveals SpaceX's Innovative Plan For Starship Refueling In Orbit 3. AMSAT: Satellite Shorts From All Over 4. ARRL Learning Center Features Two New Emergency Communication Training Courses 5. ARRL: Focus On Public Safety Relationship Building At The 2024 ARRL National Convention 6. ARRL: ARRL Volunteers Obtain Ham Exemption To Pennsylvania Handsfree Law 7. ARRL: YL Summits On The Air Event Queens Of The Mountain Coming Up 8. ARRL: International Museums Weekends 2024 Will Take Place June 15 - 16 and 22 - 23 9. ARRL: Dick Rutan, KB6LQS, Record Setting Pilot, Has Become A Silent Key 10. ARRL: Fair Radio Sales, Electronic Military Surplus Store In Lima, Ohio, Officially Closing On June 28th, 2024 11. GreenCube Satellite Saved By AMSAT Italia 12. Successful Bluetooth Connection With A Satellite Announced 13. HamVention Weekend To Feature TAPR, D-STAR and The Voice of America 14. One Year Anniversary In Orbit For A Pico Balloon 15. Trafficked Woman In India Is Returned Thanks To Amateur Radio 16. Tuskegee Airmen To Be Honored By New York Special Event 17. HamSci To Showcase Eclipse Findings During HamVention 18. Upcoming Radio Sport Contests and Conventions 19. Drone Maker DJI Is Facing A Possible Ban In The United States 20. ARRL: CQ Publisher Dick Ross, K2MGA, Has Become A Silent Key 21.
ARRL: Students To Promote Collegiate Amateur Radio At The 2024 ARRL National Convention At HamVention 22. ARRL: Changes Announced In The ARRL San Joaquin Valley Section 23. ARRL: Amateur Radio Is Ready For The Upcoming Severe Storms And Tornado Season 24. ARRL: FCC Is Seeking To Hire People For The Role Of Electronics Engineer Field Agent In The Los Angeles Area 25. Proposed Distracted Driving Law In Pennsylvania Has Amateur Radio Operators There Concerned 26. KOTA: Kids On The Air - Newly Formed Organization To Make Its National Debut At The Upcoming HamVention 27. ARRL: ARRL Field Day Swag Is Now Available 28. FCC: FCC Volunteer Monitor Program Report Plus these Special Features This Week: * Working Amateur Radio Satellites with Bruce Paige, KK5DO - AMSAT Satellite News * Foundations of Amateur Radio with Onno Benschop, VK6FLAB, who asks if your shack has a place for everything, and everything in its place. * The DX Corner with Bill Salyers, AJ8B, with news on DXpeditions, DX, upcoming contests and more. * Weekly Propagation Forecast from the ARRL * Bill Continelli, W2XOY - The History of Amateur Radio. This week, Bill takes us aboard The Wayback Machine to the late seventies and early 1980s, where we find WARC-79 coming to an end with hams on top, as they keep all of their current bands and receive additional allocations at 10, 18, and 24 megahertz. And the FCC approves ASCII on the amateur bands. ----- Website: https://www.twiar.net X: https://x.com/TWIAR Facebook: https://www.facebook.com/groups/twiari RSS News: https://twiar.net/?feed=rss2 Automated (Full): https://twiar.net/TWIARHAM.mp3 (Static file, updated weekly) Automated (1-hour): https://www.twiar.net/TWIAR1HR.mp3 (Static file, updated weekly) ----- Visit our website at www.twiar.net for program audio, and daily for the latest amateur radio and technology news. You can air This Week in Amateur Radio on your repeater! Built-in identification breaks every 10 minutes or less.
This Week in Amateur Radio is heard on the air on nets and repeaters as a bulletin service all across North America, and all around the world on amateur radio repeater systems, weekends on WA0RCR on 1860 (160 Meters), and more. This Week in Amateur Radio is portable too! The bulletin/news service is available and built for air on local repeaters (check with your local clubs to see if their repeater is carrying the news service) and can be downloaded for air as a weekly podcast to your digital device from just about everywhere. This Week in Amateur Radio is also carried on a number of LPFM stations, so check the low power FM stations in your area. You can also stream the program to your favorite digital device by visiting our web site www.twiar.net. Or, just ask Siri, Alexa, or your Google Nest to play This Week in Amateur Radio! This Week in Amateur Radio is produced by Community Video Associates in upstate New York, and is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. If you would like to volunteer with us as a news anchor or special segment producer please get in touch with our Executive Producer, George, via email at w2xbs77@gmail.com. Also, please feel free to follow us by joining our popular group on Facebook, and follow our feed on X! Thanks to FortifiedNet.net for the server space! Thanks to Archive.org for the audio space.
We're talking everything graphics in games! What's the purpose of a game's graphics, and which do we like most? We analyze the technological leaps, art styles, and gameplay experiences only possible due to certain types of visuals. Guest co-host, our good friend Andy! 0:00 - Our first guest! Welcome Andy, AKA @Dractactics. How Zac and Andy know each other, their early shared games. Halo, Morrowind, Mass Effect. The Bethesda/BioWare Venn diagram. 11:40 - Graphics, the "video" in video game. What's the current value of visuals in games? The earliest graphics: teletype, oscilloscope, ASCII. 19:30 - Drawing a parallel between game graphics and other art forms, like film or fine art. The interplay between realism/representational art and more creative graphical styles. 31:07 - When graphics were the appeal of arcade games, then home console technology outpaced arcades in late 90s/early 00s. PS2 being a sweet spot "Goldilocks" of just the right amount of graphical power. Mister Mosquito, Katamari Damacy. 40:43 - Graphics allowing storytelling and worldbuilding. Closing the gap between creator intent and player experience. Uncharted, Grand Theft Auto, The Last of Us. 48:26 - The uncanny valley: making characters that feel real, or missing the mark. Final Fantasy VII Rebirth, Detroit: Become Human. 54:02 - Graphics in games of various budgets. Console vs. handheld, AAA vs. indie. Wider variety starting in the Xbox 360 era. Similar phenomenon in film/TV. The hand-drawn art style in Cuphead. 1:02:37 - Gaming experiences only available with graphics: Spatial puzzle games like Portal, Viewfinder, Superliminal. Sports games. Virtual "traveling" or exploring physical spaces: Assassin's Creed, BioShock, Subnautica, Uncharted, DOOM & DOOM "VFR." 1:13:00 - Next developments in graphics: raytracing, reflections, lighting. The smaller generational leaps of today compared to yesteryear. Cyberpunk 2077. FFVII Remake vs. Rebirth. Alan Wake 2. 
1:26:34 - When graphical limitations work in favor of game design or innovation. Superman 64 (sarcasm). The fog in Silent Hill. Jumping Flash. 1:35:27 - Lofi art styles. Silent Hill 3. Shadow of the Colossus. Penny's Big Breakaway. Return of the Obra Dinn. 1:41:04 - What do WE value in graphics? Love for PS1 coziness and modern graphics alike. Go hug a game developer! Bryan - @analogdarling on Twitch, Twitter, and Instagram Xander - @xanwithaplan on Twitch and Twitter Zac - @zacaroniandcheez on Twitch, @GaijinWota on Twitter and Instagram Andy - @DracTactics on Twitch Contact and Episode Suggestions - GameDeep.fun Theme Song by Robotprins
Oof, no episode in April, huh? Yeah, we're getting close to Python 3.13 beta 1. PyCon US is also coming up real soon. Let's use this opportunity then to talk about a feature we're teaming up on: a better interactive interpreter! ## Outline (00:00:00) INTRO (00:01:53) PART 1: History of Terminals (00:03:20) /dev/tty (00:04:51) The first cool word (00:05:45) Chrząszcz (00:06:20) Control code characters in ASCII (00:11:54) PART 2: Python REPL Today (00:12:34) There is no REPL (00:15:28) So what is there instead? (00:19:13) readline (00:25:38) Source in the REPL (00:31:13) Implementing a REPL from scratch? Prepare to support arg: 5 (00:36:09) PART 3: PR OF THE WEEK (00:37:09) Introducing: Complaining Pablo (00:38:23) Tests are always green if you skip them (00:39:57) Getting dirty with escape sequences (00:41:28) Typing finds bugs (00:42:29) Shiny new features of the new REPL (00:45:55) Contributing back to PyPy (00:48:10) We still have two weeks, right? (00:49:59) Is Python synthwave enough? (00:51:57) Do we have a bug? (00:55:31) What's lurking in pydoc? (00:59:38) PART 4: WHAT'S HAPPENING IN CPYTHON? (01:02:39) PEP 744: The JIT (01:06:05) Incremental GC is now actually in (01:08:21) Tier 2 interpreter updates (01:10:29) Python supported on iOS with PEP 730 (01:13:11) Better error messages for name shadowing (01:15:17) Queue.shutdown() (01:17:14) ctypes adopts heap types (01:18:26) Free-threading updates (01:20:14) Dataclass creation is faster (01:20:44) OUTRO
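The "(00:06:20) Control code characters in ASCII" segment boils down to a little arithmetic. As a minimal sketch of standard ASCII behaviour (an illustration, not code from the episode): pressing Ctrl plus a key in a terminal sends that key's code point with its top three bits masked off.

```python
# Control codes occupy ASCII 0-31 (plus DEL at 127). A terminal sends
# Ctrl+<key> as the key's code point ANDed with 0x1F.

def ctrl(key: str) -> int:
    """ASCII control code produced by pressing Ctrl plus this key."""
    return ord(key.upper()) & 0x1F

print(ctrl("C"))  # 3: ETX, which the tty driver turns into SIGINT
print(ctrl("D"))  # 4: EOT, read as end-of-file at an empty prompt
print(ctrl("["))  # 27: ESC, the byte that starts terminal escape sequences
```

This is also why Ctrl+[ and the Escape key are indistinguishable to a program reading from a terminal: both arrive as byte 27.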
This week… gpt2-chatbot is a brand new LLM that is VERY good and no one knows where it came from. Is it GPT-5? Probably not but it might be from OpenAI. Then, rumor has it that Apple has new AI silicon on the way and will be dropping it into their new iPads, Vidu is China's new SORA competitor & the Rabbit R1 had a very, very bad week. Plus… AI Town is an installable way to watch AI agents, DemonFlyingFox made an awesome 1950s version of the Simpsons, and memory has come to ChatGPT! FINALLY. Plus, an extensive chat with the one and only Pirate Software! Thor and the guys talk about streaming 12 hours a day, how he sees AI, what it means for our jobs and what it might be able to do for us. And finally, our AI co-host is Tech Drip, the world's first humanoid robot obsessed rapper who for some reason has decided to bring Gavin into his crosshairs. It's an endless cavalcade of ridiculous and informative AI news, AI tools, and AI entertainment cooked up just for you. Follow us for more AI discussions, AI news updates, and AI tool reviews on X @AIForHumansShow Join our vibrant community on TikTok @aiforhumansshow For more info, visit our website at https://www.aiforhumans.show/ /// Show links /// Gavin & Kevin on The Kevin Rose Show https://youtu.be/MwdNfEUr2xY?si=-pzqrmuzLEo8u1bS Mysterious ‘gpt2-chatbot' appears https://arstechnica.com/information-technology/2024/04/rumors-swirl-about-mystery-gpt2-chatbot-that-some-think-is-gpt-5-in-disguise/ Is gpt2-chatbot GPT-2 from 2019 finetuned? https://twitter.com/albfresco/status/1784964830887104999 gpt2-chatbot does better ASCII images https://twitter.com/phill__1/status/1784969111430103494 New Apple AI Silicon in iPads Soon? 
https://www.bloomberg.com/news/newsletters/2024-04-28/apple-rivals-retool-to-compete-with-iphone-and-vision-pro-ios-18-and-ai-details-lvjhucsv China's New ‘Vidu' Video Model https://www.marktechpost.com/2024/04/27/chinas-vidu-challenges-sora-with-high-definition-16-second-ai-video-clips-in-1080p/ Rabbit R1 Has Rough Launch https://www.fastcompany.com/91113926/the-rabbit-r1-is-ais-favorite-toy-so-why-isnt-it-more-fun Marques Brownlee's Rabbit R1 Video Review https://youtu.be/ddTV12hErTc?si=vq0eFeClejpti30S Dave2D's Rabbit R1 Video https://www.youtube.com/watch?v=ZMqhE9r5JuI Treyarch Stirs Up Trouble For Generative AI Job Post https://twitter.com/charlieINTEL/status/1784591242040352849 Catholic Group Defrocks AI Priest After It Gave Strange Answers https://futurism.com/catholics-defrock-ai-priest-hallucinations Six Flags Gets AI Makeover https://www.fastcompany.com/91115050/six-flags-generative-ai-digital-makeover-app-website Figure 01 on 60 Minutes https://x.com/SmokeAwayyy/status/1784778461003034909 Demon Flying Fox on YouTube https://www.youtube.com/@demonflyingfox Memory Comes to ChatGPT (GPT-4) https://twitter.com/OpenAI/status/1784992796669096181 AI Town One-Click Launcher https://x.com/cocktailpeanut/status/1784599385877176405 Pirate Software's YouTube Page https://www.youtube.com/@PirateSoftware Pirate Software on Twitch https://www.twitch.tv/piratesoftware
The Boys are BACK this week with an unusual episode of ARG Presents, as we take a look at Games ENTIRELY Made with Text Graphics like ASCII and PETSCII. Join Amigo Aaron and THE BRENT as we tackle Digiloi for the C64 and Plus 4, and CANDY BOX for your Browser of choice!
Foundations of Amateur Radio The other day I had an interesting exchange with a contest manager, and it's not the first time I've had this dance. As you might know, pretty much every weekend marks at least one on-air amateur radio contest. Following the rules set out by a contest, the aim is to make contact, or a QSO, with stations, taking note of each in a process called logging. Using logging software is one way to keep track of who you talked to; a piece of paper is another. If your station is expecting to make fewer than a dozen contacts per hour, paper is a perfectly valid way of keeping track, but it's likely that most contests expect you to transcribe your scribbles into electronic form. Which electronic form is normally explicitly stated in the rules for that contest. While I mention rules: you should check the rules for each contest you participate in. Rules change regularly, sometimes significantly, often subtly, with little edge cases captured in updated requirements. On the software side, using electronic logging, even transcribing your paper log, can produce unexpected results. I participated in a local contest and logged with a tool I've used before, xlog. Contests often specify that you must submit logs in something like Cabrillo or ADIF. There are contests that provide a web page where you're expected to paste or manually enter your contacts in some specific format. Using xlog, I exported into each of the available formats: Cabrillo, ADIF, Tab Separated Values or TSV, and a format I'd never heard of, EDI. The format, according to a VHF Handbook I read, stands for Electronic Data Interchange; it was recommended by IARU Region 1 during a meeting of the VHF/UHF/Microwave committee in Vienna in 1998 and later endorsed by the Executive Committee. The contest I participated in asked for logs in Excel, Word, ASCII text, or the output of electronic logging programs. Based on that, I opened up the Cabrillo file and noticed that the export was gibberish.
It had entries that bore no relation to the actual contest log entries, so I set about fixing them, one line at a time, to ensure that what I was submitting was actually a true reflection of my log. So, issue number one is that xlog does not appear to export Cabrillo or ADIF properly. The TSV and EDI files appear, at least at first glance, to have the correct information, and the xlog internal file also contains the correct information. Much food for head-scratching. I'm running the latest version, so I'll dig in further when I have a moment. In any case, I received a lovely email from the contest manager, who apologised for not being able to open up my submitted log because they didn't have access to anything that could open a Cabrillo file. We exchanged a few emails and I eventually sent a Comma Separated Values, or CSV, file and my log was accepted. What I discovered was that their computer was "helping" in typical unhelpful "Clippy" style, by refusing to open a Cabrillo file, claiming that it didn't have software installed that could read it. Which brings me to issue number two. All these files, Cabrillo, ADIF, TSV, CSV, EDI, even xlog's internal file, are text files. You can open them up in any text editor, on any platform, even Windows, which for reasons only the developers at Microsoft understand, refuses to open a text file if it has the wrong file extension. This "helpful" aspect of the platform is extended into their email service, "Outlook.com", previously called "Hotmail", which refuses to download "unknown" files, like a Cabrillo file with a ".cbr" extension. With the demise of Windows Notepad, another annoying aspect has been removed, that of line-endings. MacOS, Windows and Linux have different ideas on how to signify that a line of text has come to an end. Windows-land, and DOS before it, uses a Carriage Return followed by a Linefeed.
Unix, including Linux and FreeBSD, uses Linefeed only; OS X also uses Linefeed, but classic Macintosh used Carriage Return. In other words, if you open up a text file and it all runs into one big chunk of text, it's likely that line-endings are the cause. It also means that you, and contest managers, can rename files with data in Cabrillo, ADIF, CSV, TSV, EDI and plenty of other formats like HTML, CSS, JS, JSON, XML and KML to something ending in "TXT" and open them in the nearest text editor. If this makes you giddy, a KMZ file is actually a ZIP file with a KML file inside, which is also true for several other file formats, DOCX to name one. Of course, that doesn't fix the issue of broken exports like the ones xlog appears to be producing, but at least it gets everyone on the same page. A word of caution: in most of these files individual characters matter. Removing an innocuous space or quote might completely corrupt the file for software that is written for that file format. So, tread carefully when you're editing. What other data wrangling issues have you come across? I'm Onno VK6FLAB
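The line-ending differences described above are easy to repair in code. As a minimal sketch (my own illustration, not from the episode; the sample log strings are made up), normalising everything to Unix Linefeeds looks like this in Python:

```python
# Normalise the three line-ending conventions to Unix Linefeed (LF).
# CRLF must be replaced first, otherwise the lone-CR pass would turn
# every Windows line ending into two Linefeeds.
def normalize_line_endings(text: str) -> str:
    return text.replace("\r\n", "\n").replace("\r", "\n")

windows_log = "QSO one\r\nQSO two\r\n"   # Windows / DOS: CR followed by LF
classic_mac_log = "QSO one\rQSO two\r"   # classic Macintosh: CR only
unix_log = "QSO one\nQSO two\n"          # Linux / FreeBSD / OS X: LF only

assert normalize_line_endings(windows_log) == unix_log
assert normalize_line_endings(classic_mac_log) == unix_log
```

When reading a file in Python's text mode, universal newlines perform this translation automatically; the explicit version is handy when a string arrives from somewhere else, such as a pasted web form.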
A tough Friday workout for the little gray cells, thanks to a debut by Jake Bunch with some nice diversions, obfuscation, and plumes of humor. We were delighted to see 40D, Computer acronym since the 1960s, ASCII; 48D, Fancy, WANT (of course!); 33A, Solving puzzles, e.g., PASTIME (still amazed that only has 1 T ...); and 43A, One of the Minecraft protagonists, STEVE (hi, STEVE!). It is Friday, which, as long time listeners can attest, means it's time for Fun Fact Friday, and we have a fine one in store for you. Check it (and everything else that you wanted to know about today's crossword) out in today's episode, hot off the mike (but we promise it'll be cool by the time it gets to your ears).Show note imagery: STEVE, a Minecraft characterContact Info:We love listener mail! Drop us a line, crosswordpodcast@icloud.com.Also, we're on FaceBook, so feel free to drop by there and strike up a conversation!
TL;DR: You can now buy tickets, apply to speak, or join the expo for the biggest AI Engineer event of 2024. We're gathering *everyone* you want to meet - see you this June. In last year's Rise of the AI Engineer we put our money where our mouth was and announced the AI Engineer Summit, which fortunately went well: with ~500 live attendees and over 500k views online, the first iteration of the AI Engineer industry affair seemed to be well received. Competing in an expensive city with 3 other more established AI conferences in the fall calendar, we broke through in terms of in-person experience and online impact. So at the end of Day 2 we announced our second event: the AI Engineer World's Fair. The new website is now live, together with our new presenting sponsor: We were delighted to invite both Ben Dunphy, co-organizer of the conference, and Sam Schillace, the deputy CTO of Microsoft who wrote some of the first Laws of AI Engineering while working with early releases of GPT-4, on the pod to talk about the conference and how Microsoft is all-in on AI Engineering. Rise of the Planet of the AI Engineer: Since the first AI Engineer piece, AI Engineering has exploded, and the title has been adopted across OpenAI, Meta, IBM, and many, many other companies. One year on, it is clear that AI Engineering is not only in full swing, but is an emerging global industry that is successfully bridging the gap: * between research and product, * between general-purpose foundation models and in-context use-cases, * and between the flashy weekend MVP (still great!) and the reliable, rigorously evaluated AI product deployed at massive scale, assisting hundreds of employees and driving millions in profit. The greatly increased scope of the 2024 AI Engineer World's Fair (more stages, more talks, more speakers, more attendees, more expo…) helps us reflect the growth of AI Engineering in three major dimensions: * Global Representation: the 2023 Summit was a mostly-American affair.
This year we plan to have speakers from top AI companies across five continents, and explore the vast diversity of approaches to AI across global contexts.* Topic Coverage: * In 2023, the Summit focused on the initial questions that the community wrestled with - LLM frameworks, RAG and Vector Databases, Code Copilots and AI Agents. Those are evergreen problems that just got deeper.* This year the AI Engineering field has also embraced new core disciplines with more explicit focus on Multimodality, Evals and Ops, Open Source Models and GPU/Inference Hardware providers.* Maturity/Production-readiness: Two new tracks are dedicated to AI in the Enterprise, government, education, finance, and other highly regulated industries, or AI deployed at larger scale: * AI in the Fortune 500, covering at-scale production deployments of AI, and * AI Leadership, a closed-door side event for technical AI leaders to discuss engineering and product leadership challenges as VPs and Heads of AI in their respective orgs. We hope you will join Microsoft and the rest of us as either speaker, exhibitor, or attendee, in San Francisco this June. Contact us with any enquiries that don't fall into the categories mentioned below. Show Notes: * Ben Dunphy* 2023 Summit* GitHub confirmed $100m ARR on stage* History of World's Fairs* Sam Schillace* Writely on Acquired.fm* Early Lessons From GPT-4: The Schillace Laws* Semantic Kernel* Sam on Kevin Scott (Microsoft CTO)'s podcast in 2022* AI Engineer World's Fair (SF, Jun 25-27)* Buy Super Early Bird tickets (Listeners can use LATENTSPACE for $100 off any ticket until April 8, or use GROUP if coming in 4 or more)* Submit talks and workshops for Speaker CFPs (by April 8)* Enquire about Expo Sponsorship (Asap..
selling fast)Timestamps* [00:00:16] Intro* [00:01:04] 2023 AI Engineer Summit* [00:03:11] Vendor Neutral* [00:05:33] 2024 AIE World's Fair* [00:07:34] AIE World's Fair: 9 Tracks* [00:08:58] AIE World's Fair Keynotes* [00:09:33] Introducing Sam* [00:12:17] AI in 2020s vs the Cloud in 2000s* [00:13:46] Syntax vs Semantics* [00:14:22] Bill Gates vs GPT-4* [00:16:28] Semantic Kernel and Schillace's Laws of AI Engineering* [00:17:29] Orchestration: Break it into pieces* [00:19:52] Prompt Engineering: Ask Smart to Get Smart* [00:21:57] Think with the model, Plan with Code* [00:23:12] Metacognition vs Stochasticity* [00:24:43] Generating Synthetic Textbooks* [00:26:24] Trade leverage for precision; use interaction to mitigate* [00:27:18] Code is for syntax and process; models are for semantics and intent.* [00:28:46] Hands on AI Leadership* [00:33:18] Multimodality vs "Text is the universal wire protocol"* [00:35:46] Azure OpenAI vs Microsoft Research vs Microsoft AI Division* [00:39:40] On Satya* [00:40:44] Sam at AI Leadership Track* [00:42:05] Final Plug for Tickets & CFPTranscript[00:00:00] Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO in residence at Decibel Partners, and I'm joined by my co host Swyx, founder of Small[00:00:16] Intro[00:00:16] swyx: AI. Hey, hey, we're back again with a very special episode, this time with two guests and talking about the very in person events rather than online stuff.[00:00:27] swyx: So first I want to welcome Ben Dunphy, who is my co organizer on AI engineer conferences. Hey, hey, how's it going? We have a very special guest. Anyone who's looking at the show notes and the title will preview this later. But I guess we want to set the context. 
We are effectively doing promo for the upcoming AI Engineer World's Fair that's happening in June.[00:00:49] swyx: But maybe something that we haven't actually recapped much on the pod is just the origin of the AI Engineer Summit and why, what happens and what went down. Ben, I don't know if you'd like to start with the raw numbers that people should have in mind.[00:01:04] 2023 AI Engineer Summit[00:01:04] Ben Dunphy: Yeah, perhaps your listeners would like just a quick background on the summit.[00:01:09] Ben Dunphy: I mean, I'm sure many folks have heard of our events. You know, you launched, we launched the AI Engineer Summit last June with your, your article kind of coining the term that was on the tip of everyone's tongue, but curiously had not been actually coined, which is the term AI Engineer, which is now many people's job titles. You know, we're seeing a lot more people come to this event with the job description of AI engineer, with the job title of AI engineer. So, this is an event that you and I really talked about since February of 2023, when we met at a hackathon you organized. We were both excited by this movement and it hadn't really had a name yet.[00:01:48] Ben Dunphy: We decided that an event was warranted and that's why we moved forward with the AI Engineer Summit, which ended up being a great success. You know, we had over 5,000 people apply to attend in person. We had over 9,000 folks attend online, with over 20,000 on the live stream.[00:02:06] Ben Dunphy: In person, we accepted about 400 attendees and had speakers, workshop instructors and sponsors, all congregating in San Francisco over, two days, um, two and a half days with a, with a welcome reception.
So it was quite the event to kick off kind of this movement that's turning into quite an exciting[00:02:24] swyx: industry.[00:02:25] swyx: The overall idea of this is that I kind of view AI engineering, at least in all my work in Latent Space and the other stuff, as starting an industry.[00:02:34] swyx: And I think every industry, every new community, needs a place to congregate. And I definitely think that AI Engineer, at least the conference, is meant to be like the biggest gathering of technical engineering people working with AI. Right. I think we kind of got that spot last year. There was a very competitive conference season, especially in San Francisco.[00:02:54] swyx: But I think as far as I understand, in terms of cultural impact, online impact, and the speakers that people want to see, we, we got them all and it was very important for us to be a vendor neutral type of event. Right. The reason I partnered with Ben is that Ben has a lot of experience, a lot more experience doing vendor neutral stuff.[00:03:11] Vendor Neutral[00:03:11] swyx: I first met you when I was speaking at one of your events, and now we're sort of business partners on that. And yeah, I mean, I don't know if you have any sort of thoughts on make, making things vendor neutral, making things more of a community industry conference rather than like something that's owned by one company.[00:03:25] swyx: Yeah.[00:03:25] Ben Dunphy: I mean events that are owned by a company are great, but this is typically where you have product pitches and this smaller internal community. But if you want the truly open community, if you want a more varied audience and you know, frankly, better content for, especially for a technical audience, you want a vendor neutral event.
And this is because when you have folks that are running the event that are focused on one thing and one thing alone, which is quality, quality of content, quality of speakers, quality of the in person experience, and just of general relevance, it really elevates everything to the next level.[00:04:01] Ben Dunphy: And when you have someone like yourself who's coming to this content curation, the role that you take at this event, and bringing that neutrality along with your experience, that really helps to take it to the next level. And then when you have someone like myself focusing on just the program curation and the in person experience, then both of our forces combined, we can like, really create this epic event. And so, these vendor neutral events, if you've been to a small community event, typically these are vendor neutral, but also if you've been to a really, really popular industry event, many of the top industry events are actually vendor neutral.[00:04:37] Ben Dunphy: And that's because of the fact that they're vendor neutral, not in spite of[00:04:41] swyx: it. Yeah, I've been pretty open about the fact that my dream is to build the KubeCon of AI. So if anyone has been in the Kubernetes world, they'll understand what that means. And then, or, or instead of NeurIPS, a NeurIPS for engineers, where engineers are the stars and engineers are sharing their knowledge.[00:04:57] swyx: Perspectives, because I think AI is definitely moving over from research to engineering and production.
I think one of my favorite parts was just honestly having GitHub and Microsoft support, which we'll cover in a bit, but you know, announcing finally that GitHub's Copilot was such a commercial success, I think, was the first time that was actually confirmed by anyone in public.[00:05:17] swyx: For me, it's also interesting as sort of the conference curator to put Microsoft next to competitors, some of which might be much smaller AI startups, and to see what, where different companies are innovating in different areas.[00:05:27] swyx: Well, they're next to[00:05:27] Ben Dunphy: each other in the arena. So they can be next to each other on stage too.[00:05:33] Why AIE World's Fair[00:05:33] swyx: Okay, so this year World's Fair we are going a lot bigger. What details are we disclosing right now? Yeah,[00:05:39] Ben Dunphy: I guess we should start with the name. Why are we calling it the World's Fair? And I think we need to go back to what inspired this, what actually the original World's Fair was, which was it started in the late 1700s and went to the early 1900s.[00:05:53] Ben Dunphy: And it was intended to showcase the incredible achievements of nation states, corporations, individuals in these grand expos. So you have these miniature cities actually being built for these grand expos. In San Francisco, for example, you had the entire Marina District built up in absolutely new construction to showcase the achievements of industry, architecture, art, and culture.[00:06:16] Ben Dunphy: And many of your listeners will know that in 1893, Nikola Tesla famously provided power to the Chicago World's Fair with his AC power generators. There's lots of great movies and documentaries about this.
That was the first electric World's Fair, thereafter referred to as the White City.[00:06:33] Ben Dunphy: So in today's world we have technological change that's similar to what was experienced during the industrial revolution in how it's, how it's just upending our entire life, how we live, work, and play. And so we have artificial intelligence, which has long been the dream of humanity.[00:06:51] Ben Dunphy: It's, it's finally here. And the pace of technological change is just accelerating. So with this event, as you mentioned, we, we're aiming to create a singular event where the world's foremost experts, builders, and practitioners can come together to exchange and reflect. And we think this is not only good for business, but it's also good for our mental health.[00:07:12] Ben Dunphy: It slows things down a bit from the Twitter news cycle to an in person festival of smiles, handshakes, connections, and in depth conversations that online media and online events can only ever dream of replicating. So this is an expo led event where the world's top companies will mingle with the world's top founders and AI engineers who are building with AI and enhanced by AI.[00:07:34] AIE World's Fair: 9 Tracks[00:07:34] Ben Dunphy: And not to mention, we're featuring over a hundred talks and workshops across[00:07:37] swyx: nine tracks. Yeah, I mean, those nine tracks will be fun. Actually, do we have a little preview of the tracks and the, the speakers?[00:07:43] Ben Dunphy: We do. Folks can actually see them today at our website. We've updated that at ai.[00:07:48] Ben Dunphy: engineer. So we'd encourage them to go there to see that. But for those just listening, we have nine tracks. So we have multimodality. We have retrieval augmented generation.
Featuring LLM frameworks and vector databases, evals and LLM ops, open source models, code gen and dev tools, GPUs and inference, AI agent applications, AI in the Fortune 500, and then we have a special track for AI leadership, which you can access by purchasing the VP pass, which is different from the, the other passes we have.[00:08:20] Ben Dunphy: And I won't go into each of these tracks in depth, unless you want to, Swyx, but there's more details on the website at ai.engineer.[00:08:28] swyx: I mean, I, I, very much looking forward to talking to our special guests for the last track, I think, which is what a lot of, yeah, leaders are thinking about, which is how to inspire innovation in their companies, especially the sort of larger organizations that might not have the in-house talent for that kind of stuff.[00:08:47] swyx: So yeah, we can talk about the expo, but I'm very keen to talk about the presenting sponsor if you want to go slightly out of order from our original plan.[00:08:58] AIE World's Fair Keynotes[00:08:58] Ben Dunphy: Yeah, absolutely. So you know, for the stage of keynotes, we have talks confirmed from Microsoft, OpenAI, AWS, and Google.[00:09:06] Ben Dunphy: And our presenting sponsor is joining the stage with those folks. And so that presenting sponsor this year is a dream sponsor. It's Microsoft. It's the company really helping to lead the charge into this wonderful new era that we're all taking part in. So, yeah,[00:09:20] swyx: you know, a bit of context, like when we first started planning this thing, I was kind of brainstorming, like, who would we like to get as the ideal presenting sponsors, as ideal partners long term, just in terms of encouraging the AI engineering industry, and it was Microsoft.[00:09:33] Introducing Sam[00:09:33] swyx: So Sam, I'm very excited to welcome you onto the podcast. You are CVP and Deputy CTO of Microsoft. Welcome.[00:09:40] Sam Schillace: Nice to be here.
I'm looking forward to, I was looking forward to Alessio saying my last name correctly this time. Oh[00:09:45] swyx: yeah. So I, I studiously avoided saying, saying your last name, but apparently it's an Italian last name.[00:09:50] swyx: Ski Lache. Ski[00:09:51] Alessio: Lache. Yeah. No, that, that's great, Sean. That's great as a musical person.[00:09:54] swyx: And it, it's also, yeah, I pay attention to like the, the, the lilt. So it's ski lache and the, the slow slowing of the law is, is what I focused[00:10:03] Sam Schillace: on. You say both Ls. There's no silent letters, you say[00:10:07] Alessio: both of those. And it's great to have you, Sam.[00:10:09] Alessio: You know, we've known each other now for a year and a half, two years, and our first conversation, well, it was at Lobby Conference, and then we had a really good one in the kind of parking lot of a Safeway, because we didn't want to go into Starbucks to meet, so we sat outside for about an hour, an hour and a half, and then you had to go to a Bluegrass concert, so it was great.[00:10:28] Alessio: Great meeting, and now, finally, we have you on Latent Space.[00:10:31] Sam Schillace: Cool, cool. Yeah, I'm happy to be here. It's funny, I was just saying to Swyx before you joined that, like, it's kind of an intimidating podcast. Like, when I listen to this podcast, it seems to be, like, one of the more intelligent ones, like, more, more, like, deep technical folks on it.[00:10:44] Sam Schillace: So, it's, like, it's kind of nice to be here. It's fun. Bring your A game. Hopefully I'll, I'll bring mine. I[00:10:49] swyx: mean, you've been programming for longer than some of our listeners have been alive, so I don't think your technical chops are in any doubt. So you were responsible for Writely as one of your early wins in your career, which then became Google Docs, and obviously you were then responsible for a lot more of G Suite.[00:11:07] swyx: But did you know that you were covered in Acquired.
fm episode 9, which is one of the podcasts that we model after.[00:11:13] Sam Schillace: Oh, cool. I didn't, I didn't realize that. The most fun way to say this is that I still have to this day, in my personal GDocs account, the very first Google doc, like I actually have it.[00:11:24] Sam Schillace: And I looked it up, like it occurred to me like six months ago that it was probably around and I went and looked and it's still there. So it's like, and it's kind of a funny thing. Cause it's like the backend has been rewritten at least twice that I know of; the front end has been rewritten at least twice that I know of.[00:11:38] Sam Schillace: So. I'm not sure in what sense it's still the original one; it's sort of more the idea of the original one, like the NFT of it would probably be more authentic. I[00:11:46] swyx: still have it. It's a Ship of Theseus thing. Does it, does it say hello world or something more mundane?[00:11:52] Sam Schillace: It's, it's, it's me and Steve Newman trying to figure out if some collaboration stuff is working, and also a picture of Edna from the Incredibles that I probably pasted in later, because that's, that's too early for that, I think.[00:12:05] swyx: People can look up your LinkedIn, and we're going to link it on the show notes, but you're also SVP of engineering for Box, and then you went back to Google to do Google, to lead Google Maps, and now you're deputy CTO.[00:12:17] AI in 2020s vs the Cloud in 2000s[00:12:17] swyx: I mean, there's so many places to start, but maybe one place I like to start off with is, do you have a personal GPT 4 experience?[00:12:25] swyx: Obviously being at Microsoft, you have, you had early access and everyone talks about Bill Gates's[00:12:30] Sam Schillace: demo. Yeah, it's kind of, yeah, that's, it's kind of interesting. Like, yeah, we got access, I got access to it like in September of 2022, I guess, like before it was really released.
And it like almost instantly was just like mind blowing to me how good it was.[00:12:47] Sam Schillace: I would try experiments like very early on, like I play music. There's this thing called ABC notation. That's like an ASCII way to represent music. And like, I was like, I wonder if it can like compose a fiddle tune. And like it composed a fiddle tune. I'm like, I wonder if it can change key, change the key.[00:13:01] Sam Schillace: Like it's like really, it was like very astonishing. And I sort of, I'm very like abstract. My background is actually more math than CS. I'm a very abstract thinker and sort of categorical thinker. And the, the thing that occurred to me with, with GPT 4 the first time I saw it was: this is really like the beginning, it's the beginning of V2 of the computer industry completely.[00:13:23] Sam Schillace: I had the same feeling I had when, of like a category shifting that I had when the cloud stuff happened with the GDocs stuff, right? Where it's just like, all of a sudden this like huge vista opens up of capabilities. And I think the way I characterized it, which is a little bit nerdy, but I'm a nerd so lean into it, is like everything until now has been about syntax.[00:13:46] Syntax vs Semantics[00:13:46] Sam Schillace: Like, we have to do mediation. We have to describe the real world in forms that the digital world can manage. And so we're the mediation, and we, like, do that via things like syntax and schema and programming languages. And all of a sudden, like, this opens the door to semantics, where, like, you can express intention and meaning and nuance and fuzziness.[00:14:04] Sam Schillace: And the machine itself is doing, the model itself is doing a bunch of the mediation for you. And like, that's obviously like complicated. We can talk about the limits and stuff, and it's getting better in some ways.
And we're learning things and all kinds of stuff is going on around it, obviously.[00:14:18] Sam Schillace: But like, that was my immediate reaction to it, was just like, Oh my God.[00:14:22] Bill Gates vs GPT-4[00:14:22] Sam Schillace: Like, and then I heard about the build demo where like Bill had been telling Kevin Scott, this investment is a waste. It's never going to work. AI is blah, blah, blah. And come back when it can pass like an AP bio exam.[00:14:33] Sam Schillace: And they actually literally did that at one point, they brought in like the world champion of the, like the AP bio test or whatever the AP competition, and like ChatGPT or GPT 4 both did the AP bio and GPT 4 beat her. So that was the moment that convinced Bill that this was actually real.[00:14:53] Sam Schillace: Yeah, it's fun. I had a moment with him actually about three weeks after that when we had been, so I started like diving in on developer tools almost immediately and I built this thing with a small team that's called the Semantic Kernel, which is one of the very early orchestrators, just because I wanted to be able to put code and inference together.[00:15:10] Sam Schillace: And that's probably something we should dig into more deeply. Cause I think there's some good insights in there, but I had a bunch of stuff that we were building and then I was asked to go meet with Bill Gates about it and he's kind of famously skeptical and, and so I was a little bit nervous to meet him the first time.[00:15:25] Sam Schillace: And I started the conversation with, Hey, Bill, like three weeks ago, you would have called BS on everything I'm about to show you. And I would probably have agreed with you, but we've both seen this thing. And so we both know it's real. So let's skip that part and like, talk about what's possible.[00:15:39] Sam Schillace: And then we just had this kind of fun, open ended conversation and I showed him a bunch of stuff.
So that was like a really nice, fun, fun moment as well. Well,[00:15:46] swyx: that's a nice way to meet Bill Gates and impress[00:15:48] Sam Schillace: him. A little funny. I mean, it's like, I wasn't sure what he would think of me, given what I've done and his[00:15:54] Sam Schillace: Crown Jewel. But he was nice. I think he likes[00:15:59] swyx: GDocs. Crown Jewel as in Google Docs versus Microsoft Word? Office.[00:16:03] Sam Schillace: Yeah. Yeah, versus Office. Yeah, like, I think, I mean, I can imagine him not liking, I met Steven Sinofsky once and he sort of respectfully, but sort of grimaced at me. You know, like, because of how much trauma I had caused him.[00:16:18] Sam Schillace: So Bill was very nice to[00:16:20] swyx: me. In general it's like friendly competition, right? They keep you, they keep you sharp, you keep each[00:16:24] Sam Schillace: other sharp. Yeah, no, I think that's, it's definitely respect, it's just kind of funny.[00:16:28] Semantic Kernel and Schillace's Laws of AI Engineering[00:16:28] Sam Schillace: Yeah,[00:16:28] swyx: So, speaking of Semantic Kernel, I had no idea that you were that deeply involved, that you actually had laws named after you.[00:16:35] swyx: This only came up after looking into you for a little bit. Schillace's Laws, how did those, what's the, what's the origin[00:16:41] Sam Schillace: story? Hey! Yeah, that's kind of funny. I'm actually kind of a modest person and so I'm not sure how I feel about having my name attached to them. Although I do agree with all, I believe all of them, because I wrote all of them.[00:16:49] Sam Schillace: It was a designer, John Maeda, who works with me, who decided to stick my name on them and put them out there. Seriously, but like, well, but like, so this was just I, I'm not, I don't build models. Like I'm not an AI engineer in the sense of, of like AI researcher that's like doing inference. Like I'm somebody who's like consuming the models.[00:17:09] Sam Schillace: Exactly.
So it's kind of funny when you're talking about AI engineering, like it's a good way of putting it. Cause that's how like I think about myself. I'm like, I'm an app builder. I just want to build with this tool. Yep. And so we spent all of the fall and into the winter in that first year, like just trying to build stuff and learn how this tool worked.[00:17:29] Orchestration: Break it into pieces[00:17:29] Sam Schillace: And I guess those are a little bit in the spirit of like Jon Bentley's Programming Pearls or something. I was just like, let's kind of distill some of these ideas down of, like, how does this thing work? I saw something I still see today with people doing, like, inference is still kind of expensive.[00:17:46] Sam Schillace: GPUs are still kind of scarce. And so people try to get everything done in like one shot. And so there's all this like prompt tuning to get things working. And one of the first laws was like, break it into pieces. Like if it's hard for you, it's going to be hard for the model. But if it's you know, there's this kind of weird thing where like, it's. It's absolutely not a human being, but starting to think about, like, how would I solve the problem is often a good way to figure out how to architect the program so that the model can solve the problem. So, like, that was one of the first laws. That came from me just trying to, like, replicate a test of a, like, a more complicated reasoning process that you have to go through, that, that was Google's ReAct thing, and I was trying to get GPT 4 to do it on its own.[00:18:32] Sam Schillace: And, and so I'd ask it the question that was in this paper, and the answer to the question is like the year 2000. It's like, what year did this particular author who wrote this book live in this country? And you've kind of got to carefully reason through it.
And like, I could not get GPT 4 to just answer the question with the year 2000.[00:18:50] Sam Schillace: And if you're thinking about this as like the kernel is like a pipelined orchestrator, right? It's like very Unix y, where like you have a, some kind of command and you pipe stuff to the next parameters and output to the next thing. So I'm thinking about this as like one module in like a pipeline, and I just want it to give me the answer.[00:19:05] Sam Schillace: I don't want anything else. And I could not prompt engineer my way out of that. I just like, it was giving me a paragraph of reasoning. And so I sort of like anthropomorphized a little bit and I was like, well, the only way it can think about stuff is to think out loud, because there's nothing else that the model does.[00:19:19] Sam Schillace: It's just doing token generation. And so it's not going to be able to do this reasoning if it can't think out loud. And that's why it's always producing this. But if you take that paragraph of output, which did get to the right answer, and you pipe it into a second prompt that just says, read this conversation and just extract the answer and report it back.[00:19:38] Sam Schillace: That's an easier task. That would be an easier task for you to do or me to do. It's easier reasoning. And so it's an easier thing for the model to do and it's much more accurate. And that's like 100 percent accurate. It always does that. So like that was one of those, those insights that led to the, the Schillace Laws.[00:19:52] Prompt Engineering: Ask Smart to Get Smart[00:19:52] Sam Schillace: I think one of the other ones that's kind of interesting that I think people still don't fully appreciate is that GPT 4 is the rough equivalent of like a human being sitting down for centuries or millennia and reading all the books that they can find.
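The two-step pipeline Sam describes, let the model think out loud, then pipe that paragraph into a second extraction prompt, can be sketched in a few lines. This is only an illustration of the shape of the idea: `complete` is a hypothetical stand-in for a real model call, stubbed with canned strings here so the sketch is self-contained.

```python
# Two-stage pipeline: stage one lets the model "think out loud";
# stage two reads that transcript and reports only the answer.
# `complete` is a hypothetical stand-in for an LLM API call,
# stubbed with canned replies so the sketch runs on its own.
def complete(prompt: str) -> str:
    if prompt.startswith("Read this conversation"):
        return "2000"  # the extraction step: an easier task, so more accurate
    return ("The author wrote the book around that time and "
            "lived in that country then, so the year is 2000.")

def answer(question: str) -> str:
    # Stage 1: free-form reasoning -- token generation *is* the thinking.
    reasoning = complete(f"Reason step by step: {question}")
    # Stage 2: pipe the reasoning into a second, much simpler extraction prompt.
    return complete(f"Read this conversation and just extract the answer:\n{reasoning}")

print(answer("What year did this author live in this country?"))  # prints 2000
```

The point is the Unix-pipe shape: the first call is free to ramble, and the second call gets a far easier task, which is why it is more reliable than prompt-engineering a single call into answering tersely.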
It's this vast mind, right? And the embedding space, the latent space, is, like, a 100,000-dimensional space, right?[00:20:14] Sam Schillace: It's this huge, high-dimensional space, and we don't have good intuition about high-dimensional spaces. The topology works in really weird ways, connectivity works in weird ways. So a lot of what we're doing is aiming the attention of the model into some part of this very weirdly connected space.[00:20:30] Sam Schillace: That's kind of what prompt engineering is. But what we observed to begin with, that led to one of those laws, was, you know, ask smart to get smart. And I think we all understand this now, right? This is the whole field of prompt engineering. But if you ask a simplistic question of the model, you'll get kind of a simplistic answer,[00:20:50] Sam Schillace: 'cause you're pointing it at a simplistic part of that high-dimensional space. And if you ask it a more intelligent question, you get more intelligent stuff back out. And so I think that's part of how you think about programming as well. It's like, how are you directing the attention of the model?[00:21:04] Sam Schillace: And I think we still don't have a good intuitive feel for that.[00:21:08] Alessio: To me, the most interesting thing is how you tie the ask smart, get smart with the syntax and semantics piece. I gave a talk at GDC last week about the rise of full stack employees and how these models are like semantic representations of tasks that people do.[00:21:23] Alessio: But at the same time, code has also become a semantic representation. You know, I give you the example of, like, Python's sort: it's really a semantic function. It's not code, but there's actually code underneath.
How do you think about tying the two together, where you have code[00:21:39] Alessio: to then extract the smart parts, so that you don't have to ask smart every time, and kind of wrap them in higher-level functions?[00:21:46] Sam Schillace: Yeah, this is actually, we're skipping ahead to kind of later in the conversation, but I usually like to distill this stuff down into these little aphorisms that kind of help me remember them.[00:21:57] Think with the model, Plan with Code[00:21:57] Sam Schillace: You know, so we can dig into a bunch of them. One of them is pixels are free, one of them is bots are docs. But the one that's interesting here is think with the model, plan with code. So one of the things we've realized, we've been trying to do lots of these longer-running tasks.[00:22:13] Sam Schillace: Like we did this thing called the infinite chatbot, which was the successor to Semantic Kernel, which is an internal project. It's a lot like the OpenAI GPTs, but it's a little bit more advanced in some ways, kind of a deep exploration of a RAG-based bot system. And then we did multi-agents from that, trying to do some autonomy stuff, and we're kind of banging our head against this thing.[00:22:34] Sam Schillace: And one of the things I started to realize, this is going to get nerdy for a second, I apologize, but let me dig in on it for just a second. No apology needed. Um, what we realized is, again, this is a little bit of an anthropomorphism and an illusion that we're having. When we look at these models, we think there's something continuous there.[00:22:51] Sam Schillace: We're having a conversation with ChatGPT or whatever, with Azure OpenAI or whatever. But what's really happening, it's a little bit like watching claymation, right? When you watch claymation, you don't think that the clay model is actually alive.
You know that there's a bunch of still, disconnected shots that your mind is connecting into a continuous experience.[00:23:12] Metacognition vs Stochasticity[00:23:12] Sam Schillace: And that's kind of the same thing that's going on with these models. The prompts are all disconnected, no matter what. Which means you're putting a lot of weight on memory, right? This is the thing we talked about. You're putting a lot of weight on the precision and recall of your memory system.[00:23:27] Sam Schillace: And it turns out, because the models are stochastic, they're kind of random, they'll make stuff up if things are missing. If you're naive about your memory system, you'll get lots of accumulated similar memories that will kind of clog the system, things like that. So there's lots of ways in which memory is hard to manage well, and that's okay.[00:23:47] Sam Schillace: But what happens is, when you're doing plans and you're doing these longer-running things that you're talking about, that second level, the metacognition, is very vulnerable to that stochastic noise. I totally want to put this on a bumper sticker: metacognition is susceptible to stochasticity would be, like, the great bumper sticker.[00:24:07] Sam Schillace: So these things are very vulnerable to feedback loops when they're trying to do autonomy, and they're very vulnerable to getting lost. We've had these multi-agent, autonomous-agent things get kind of stuck on, like, complimenting each other, or they'll get stuck on being, quote unquote, frustrated and they'll go on strike.[00:24:22] Sam Schillace: There's all kinds of weird feedback loops you get into. So what we've learned, to answer your question of how you put all this stuff together, is: the model's good at thinking, but it's not good at planning. So you do planning in code.
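"Planning in code" can be made concrete with a minimal sketch: the control flow — the plan — is ordinary code, and the model only fills in the pieces. `call_model` here is a hypothetical stand-in for a real LLM call, stubbed so the sketch runs on its own:

```python
# Minimal sketch of "think with the model, plan with code": the fixed
# steps live in ordinary Python, so stochastic noise can't derail the
# overall plan. call_model is a hypothetical stand-in for a real LLM
# API call, stubbed here so the sketch is runnable standalone.

def call_model(prompt: str) -> str:
    # Stub: a real implementation would hit an LLM endpoint.
    return f"<model text for: {prompt}>"

def run_plan(task: str) -> list[str]:
    # The plan lives in code: fixed steps, fixed order, no model drift.
    outline = call_model(f"Outline three parts of: {task}")
    # The model only "thinks" inside each well-scoped step.
    parts = [call_model(f"Write part {i} of {outline}") for i in (1, 2, 3)]
    return parts

parts = run_plan("a short report on tide pools")
print(len(parts))  # prints 3
```

Because the loop and step order are code, the output shape is guaranteed even though each individual model call is stochastic.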
So you have to describe the larger process of what you're doing in code somehow,[00:24:38] Sam Schillace: a semantic intent or whatever, and then you let the model kind of fill in the pieces.[00:24:43] Generating Synthetic Textbooks[00:24:43] Sam Schillace: I'll give a less abstract example. It's a little bit of an old example, I did this last year, but at one point I wanted to see if I could generate textbooks. And so I wrote this thing called the textbook factory.[00:24:53] Sam Schillace: And it's tiny. It's, like, a Jupyter notebook with 200 lines of Python and, like, six very short prompts. You basically give it a sentence, and it pulls out the topic and the level from that sentence. So you say, I would like fifth grade reading, I would like eighth grade English,[00:25:11] Sam Schillace: ninth grade US history, whatever. That, by the way, all by itself, would've been an almost impossible job, like, three years ago. It's totally amazing that, by itself, just parsing an arbitrary natural language sentence to get those two pieces of information out is almost trivial now.[00:25:27] Sam Schillace: Which is amazing. So it takes that, and it just makes, like, a thousand calls to the API, and it goes and builds a full-year textbook. It decides what the curriculum is with one of the prompts, it breaks it into chapters, it writes all the lessons and lesson plans, it builds a teacher's guide with all the answers to all the questions,[00:25:42] Sam Schillace: it builds a table of contents, like, all that stuff. It's super reliable: you always get a textbook. It's super brittle: you never get a cookbook or a novel. But you could kind of define that domain pretty carefully. Like, I can describe
the metacognition, the high-level plan for how you write a textbook, right?[00:25:59] Sam Schillace: You decide the curriculum, and then you write all the chapters, and you write the teacher's guide, and you write the table of contents. You can describe that out pretty well. And so having that code exoskeleton wrapped around the model is really helpful. It keeps the model from drifting off, and then you don't have as many of these vulnerabilities around memory that you would normally have.[00:26:19] Sam Schillace: So that's kind of, I think, where the syntax and semantics come together right now.[00:26:24] Trade leverage for precision; use interaction to mitigate[00:26:24] Sam Schillace: And then I think the question for all of us is: how do you get more leverage out of that? Right? So one of the things that I don't love about virtually everything anyone's built for the last year and a half is that people are holding the hands of the model on everything.[00:26:37] Sam Schillace: The leverage is very low, right? You can't turn these things loose to do anything really interesting for very long. And the places where people are getting more work out per unit of work in are usually where somebody has done exactly what I just described: they've figured out what the pattern of the problem is in enough of a way that they can write some code for it.[00:26:59] Sam Schillace: So I've seen, like, sales support stuff.
I've seen, like, codebase tuning stuff. There's lots of things people are doing where you can get a lot of value in some relatively well-defined domain, using a little bit of the model's ability to think for you and a little bit of code.[00:27:18] Code is for syntax and process; models are for semantics and intent.[00:27:18] Sam Schillace: And then I think the next wave is, like, okay, do we do stuff like domain-specific languages to make the planning capabilities better? Do we start to build more sophisticated primitives? We're starting to think about and talk about, like, Power Automate and a bunch of stuff inside of Microsoft that we're going to wrap in these building blocks,[00:27:34] Sam Schillace: so the models have these chunks of reliable functionality that they can invoke as part of these plans, right? Because if you're going to ask the model to go do something and the output's going to be a hundred thousand lines of code, if it's got to generate that code every time, the randomness, the stochasticity, is going to make that basically not reliable.[00:27:54] Sam Schillace: You want it to generate, like, a 10- or 20-line high-level semantic plan for this thing that gets handed to some markup executor that runs it, and that invokes that API call, the 100,000 lines of code behind it. And that's a really nice, robust system for now. And then as the models get smarter, as new models emerge, we get better plans, we get more sophistication[00:28:17] Sam Schillace: in terms of what they can choose, things like that. Right. So that feels like that's probably the path forward for a little while, at least. There was a lot there. Sorry, you can tell I've been thinking about it a lot. This is kind of all I think about: how do you build[00:28:31] Sam Schillace: really high-value stuff out of this, and where do we go?
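The plan-plus-executor setup sketched here — a short semantic plan handed to an executor that invokes big, reliable chunks of code — might look roughly like this. The plan format, step names, and registry are all invented for illustration:

```python
# Sketch of "the model writes a 10-line semantic plan; a small executor
# runs it against reliable building blocks." The plan syntax, the step
# names, and the registry contents are hypothetical illustrations, not
# any real Microsoft or Power Automate API.

# Reliable, pre-built chunks of functionality the plan is allowed to invoke.
# In practice each of these could front thousands of lines of tested code.
REGISTRY = {
    "fetch_accounts": lambda arg: ["acme", "globex"],
    "draft_email": lambda arg: f"Draft email about {arg}",
}

def execute_plan(plan: str) -> list:
    # Each plan line looks like "step_name: argument".
    results = []
    for line in plan.strip().splitlines():
        step, _, arg = line.partition(":")
        # Only registered building blocks can run, so the stochastic part
        # (the plan text) can't produce arbitrary unreliable behavior.
        results.append(REGISTRY[step.strip()](arg.strip()))
    return results

# In a real system, the model would generate this short plan text.
plan = """
fetch_accounts: all
draft_email: renewal reminder
"""
print(execute_plan(plan))
```

The model's output stays small and checkable, while the heavy, deterministic work lives behind the registry.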
Yeah.[00:28:35] swyx: The intermixing of code and LLMs is a lot of the role of the AI engineer. And I think, in a very real way, you were one of the first to do it, because obviously you had early access. Honestly, I'm surprised.[00:28:46] Hands on AI Leadership[00:28:46] swyx: How are you so hands-on? How do you choose to dedicate your time? How do you advise other tech leaders? You have people working for you, you could not be hands-on, but you seem to be hands-on. What's the allocation that people should have, especially if they're senior tech leaders?[00:29:04] Sam Schillace: It's mostly just fun. Like, I'm a maker, and I like to build stuff. I'm a little bit idiosyncratic. I've got ADHD, and so I won't build anything or work on anything I'm bored with. So I have no discipline. If I'm not actually interested in the thing, I can't just force myself to do it.[00:29:17] Sam Schillace: But, I mean, if you're not interested in what's going on right now in the industry, like, go find a different industry, honestly.
Like, I seriously, well, it's funny, I don't mean to be snarky, but I was at a dinner, like, I don't know, six months ago or something, and I was sitting next to a CTO of a large, I won't name the corporation because it would name the person, but I was sitting next to the CTO of a very large Japanese technical company, and he was like, nothing has been interesting since the internet, and this is interesting now, like, this is fun again.[00:29:46] Sam Schillace: And I'm like, yeah, totally. This is, like, the most interesting thing that's happened in 35 years of my career. We can play with semantics and natural language, and we can have these things that are sort of active, can kind of be independent in certain ways, can do stuff for us, and can reach all of these interesting problems.[00:30:02] Sam Schillace: So part of it is it's just kind of fun to do stuff and to build stuff. I just can't resist. I'm not crazy hands-on, like, my engineering team's listening right now, and they're probably laughing, 'cause I don't really touch code directly, 'cause I'm so obsessive.[00:30:17] Sam Schillace: I told them, like, if I start writing code, that's all I'm gonna do, and it's probably better if I stay a little bit high level and think. I've got a bunch of really great engineers underneath me, a bunch of designers underneath me, really good folks that we just bounce ideas off of back and forth, and it's just really fun.[00:30:35] Sam Schillace: That's the role I came to Microsoft to do, really, was to just kind of bring some energy around innovation, some energy around consumer. We didn't know that this was coming when I joined. I joined, like, eight months before it hit us, but I think Kevin might've had an idea it was coming.
And then when it hit, I just kind of dove in with both feet, 'cause it's just so much fun to do.[00:30:55] Sam Schillace: Just to tie it back a little bit to the Google Docs stuff: when we did Writely originally, it's not like I built Writely in jQuery or anything. I built that thing on bare metal, back before there were decent JavaScript VMs.[00:31:10] Sam Schillace: I was just telling somebody today, like, you were rate limited. So just computing the diff when you type something, like doing the string diff, I had to write, like, a binary search on each end of the string diff, because you didn't have enough iterations of a for loop to search character by character.[00:31:24] Sam Schillace: I mean, that's how rough it was. None of the browsers implemented stuff directly, whatever. It was just really messy. And, as somebody who's been doing this for a long time, that's the place where you want to engage, right? If things are easy, and it's easy to go do something, it's too late.[00:31:42] Sam Schillace: Even if it's not too late, it's going to be crowded. The right time to do something new and disruptive and technical is, first of all, while it's still controversial, but second of all, when you can see the future, you ask this 'what if' question and you can see where it's going, but you have this pit in your stomach as an engineer as to how crappy it's going to be to do.[00:32:04] Sam Schillace: Like, that's really the right moment to engage with stuff.
We're just like, this is going to suck, it's going to be messy, I don't know what the path is, I'm going to get sticks and thorns in my hair, it's going to have false starts. This is why those Schillace Laws are kind of funny, because I wrote them down at one point because they were my best guess, and I'm like, half of these are probably wrong. I think they've all held up pretty well, but I'm just guessing along with everybody else. We're just trying to figure this thing out still, right? And I think the only way to do that is to just engage with it.[00:32:34] Sam Schillace: You just have to, like, build stuff. I can't tell you the number of execs I've talked to who have opinions about AI and have not sat down with anything for more than 10 minutes to actually try to get anything done. You know, it's just incomprehensible to me that you can watch this stuff through the lens of the press and, forgive me, podcasts, and feel like you actually know what you're talking about.[00:32:59] Sam Schillace: You have to build stuff. Break your nose on stuff and figure out what doesn't work.[00:33:04] swyx: Yeah, I mean, I view us as a starting point, as a way for people to get exposure to what they should be looking at, and they still have to do the work, as do we. Yeah, I'll basically endorse, like, I think, most of the laws.[00:33:18] Multimodality vs "Text is the universal wire protocol"[00:33:18] swyx: I think the one I question the most now is text is the universal wire protocol. There was a very popular article, "Text Is the Universal Interface," by Roon, who now works at OpenAI.
And actually, we just dropped a podcast with David Luan, who's CEO of Adept now, but he was VP of Eng, and he pitched Kevin Scott for the original Microsoft investment in OpenAI.[00:33:40] swyx: He's basically pivoting to, or just betting very hard on, multimodality. I think that's something that we don't really position very well. I think this year we're all trying to figure it out. I don't know if you have an updated perspective on multimodal models and how that affects agents or not.[00:33:55] Sam Schillace: Yeah, I think multimodality is really important, and I think it's only going to get better from here. For sure. Yeah, text is the universal wire protocol, you're probably right. I don't know that I would defend that one entirely. Note that it doesn't say English, right?[00:34:09] Sam Schillace: It's not even natural language. There's stuff like Steve Lucco, one of the guys who created TypeScript, created TypeChat, right? Which is this way to get LLMs to be very precise and return syntactically correct JavaScript. So, yeah, I think part of the challenge with multimodality is that it's a little harder to access[00:34:30] Sam Schillace: programmatically still. And I do think, like, when DALL-E and stuff started to come out, I was like, oh, Photoshop's in trouble, 'cause, you know, I'm just gonna describe images, and you don't need Photoshop anymore. Which hasn't played out that way; they're actually adding a bunch of tools. Like, for multimodality to be really supercharged, you need to be able to do stuff descriptively: okay, find the dog in this picture and mask around it.[00:34:58] Sam Schillace: Okay, now make it larger, and whatever.
You need to be able to interact with stuff textually, which we're starting to be able to do. You can do some of that stuff. But there's probably a whole bunch of new capabilities that are going to come out that are going to make it more interesting.[00:35:11] Sam Schillace: So, I don't know, I suspect we're going to wind up looking kind of like Unix at the end of the day, where there's pipes, and stuff goes over pipes, and some of the pipes are character pipes and some of them are binary pipes or whatever, and that's going to be compatible with a lot of the systems we have out there. I think there's a lot to be gotten from text as a language, but I suspect you're right: that particular law is not going to hold up super well. But we didn't have multimodal going when I wrote it. I'll take one out as well.[00:35:46] Azure OpenAI vs Microsoft Research vs Microsoft AI Division[00:35:46] swyx: I know. Yeah, I mean, the innovations that keep coming out of Microsoft. You mentioned multi-agent, I think you're talking about AutoGen.[00:35:52] swyx: But there's always research coming out of MSR. Yeah. Phi-1, Phi-2.[00:35:57] Sam Schillace: Yeah, there's a bunch of stuff. Yeah.[00:35:59] swyx: How should the outsider or the AI engineer, just as a sort of final word, view the Microsoft portfolio of things? I know you're not here to be a salesman, but how do you explain Microsoft's AI work to people?[00:36:13] Sam Schillace: There's a lot of stuff going on. First of all, I'll be a little tiny bit of a salesman for, like, two seconds and just point out that one of the things we have is the Microsoft for Startups Founders Hub. So you can get Azure credits and stuff from us.
Like, up to, like, 150 grand, I think, over four years.[00:36:29] Sam Schillace: So it's actually pretty easy to get credits. You can start with, like, 500 bucks or something, with very little other than just an idea. So that's pretty cool. Microsoft is very much all in on AI at many levels. You mentioned AutoGen; so I sit in the office of the CTO, and Microsoft Research sits under him, under the office of the CTO as well.[00:36:51] Sam Schillace: So the AutoGen group came out of somebody in MSR, in that group. So there's sort of a spectrum: very researchy things going on in research, where we're doing things like Phi, which is the small language model efficiency exploration that's really, really interesting. Lots of very technical folks there that are building different kinds of models.[00:37:10] Sam Schillace: And then there's groups like my group that are a little bit in the middle, that straddle product and research, have a foot in both worlds, and are trying to be a bridge into the product world. And then there's a whole bunch of stuff on the product side of things.[00:37:23] Sam Schillace: So there's all the Azure OpenAI stuff, and then there's all the stuff that's in Office and Windows. And so I think, I don't know, the way to think about Microsoft is we're just powering AI at every level we can, and making it as accessible as we can to both end users and developers.[00:37:42] Sam Schillace: There's this really nice research arm at one end of that spectrum that's really driving the cutting edge. The Phi stuff is really amazing. It broke the Chinchilla curves.
Right, that's the Textbooks Are All You Need paper, and it's still kind of controversial, but that was really a surprising result that came out of MSR.[00:37:58] Sam Schillace: And so I think Microsoft is being a thought leader on one end, and on the other end, with all the Azure OpenAI stuff, all the Azure tooling that we have, it's very much developer-centric, kind of the tinkerer's paradise that Microsoft always was. It's a great place to come and consume all these things.[00:38:14] Sam Schillace: There's really amazing stuff, ideas that we've had, like these very rich, long-running, RAG-based chatbots that we didn't talk about, that are now possible to just go build with Azure AI Studio for yourself. You can build and deploy a chatbot that's trained on your data specifically, very easily, and things like that.[00:38:31] Sam Schillace: So there's that end of things. And then there's all this stuff that's in Office, where you can just use the Copilots, both in Bing and just in your daily work. So it's kind of everywhere at this point. Everyone in the company thinks about it all the time.[00:38:43] Sam Schillace: There's no single answer to that question. That was way more salesy than I thought I was capable of, but that is actually the genuine truth. It is all the time, it is all levels, it is all the way from really pragmatic, approachable stuff for somebody starting out who doesn't know things, all the way to absolutely cutting-edge research, silicon, models, AI for science. We didn't talk about any of the AI for science stuff; I've seen magical stuff coming out of the research group on that topic, just crazy cool stuff that's coming.[00:39:14] swyx: You've called this since you joined Microsoft. I'll point listeners to the podcast that you did in 2022, pre-ChatGPT, with Kevin Scott.
And yeah, you've been saying this from the beginning. So this is not a new talk track for you. You've been a genuine believer for a long time.[00:39:28] Sam Schillace: And just to be clear, I haven't been at Microsoft that long. I've only been here a little over two years, and, you know, it's a little bit weird for me, 'cause for a lot of my career they were the competitor and the enemy, and it's kind of funny to be here. But it's really remarkable[00:39:40] On Satya[00:39:40] Sam Schillace: what's going on. I really, really like Satya. I've met and worked with a bunch of big tech CEOs, and I think he's a genuinely awesome person, and he's fun to work with and has a really great vision. And I obviously really like Kevin, we've been friends for a long time. So it's a cool place. I think there's a lot of interesting stuff.[00:39:57] swyx: We have some awareness Satya is a listener, so obviously he's super welcome on the pod anytime. You can just drop in a good word for us.[00:40:05] Sam Schillace: He's fun to talk to. It's interesting, because CEOs can be lots of different personalities, but, you were asking me about how I'm so hands-on and engaged,[00:40:14] Sam Schillace: I'm amazed at how hands-on and engaged he can be, given the scale of his job. He's super, super engaged with stuff, super in the details, understands a lot of what's going on, on the science side of things as well as the product and the business side. I mean, it's really remarkable.
I don't say that because he's listening or because I'm trying to pump the company. I'm genuinely really, really impressed. I look at him and I'm like, I love this stuff, I spend all my time thinking about it, and I could not do what he's doing.[00:40:42] Sam Schillace: It's just incredible how much he can get into his head.[00:40:44] Sam at AI Leadership Track[00:40:44] Ben Dunphy: Sam, it's been an absolute pleasure to hear from you here, hear the war stories. So thank you so much for coming on. Quick question, though: you're here on the podcast as the presenting sponsor for the AI Engineer World's Fair. Will you be taking the stage there, or are we going to defer that to Satya?[00:41:02] Sam Schillace: I'm happy to talk to folks, and I'm happy to be there. It's always fun. I like talking to people more than talking at people, so I don't love giving keynotes. I love giving Q&As and engaging with engineers. I really am at heart just a builder and an engineer, and that's what I'm happiest doing: being creative, building things, and figuring stuff out.[00:41:22] Sam Schillace: That would be really fun to do, and I'll probably go just to hang out with people and hear what they're working on.[00:41:28] swyx: The AI leadership track is just AI leaders, and then it's closed doors, so, you know, more sort of an unconference style where people just talk about their issues.[00:41:35] Sam Schillace: Yeah, that would be much more fun. That's really, because we are all wrestling with this, trying to figure out what it means. Right.
So, the reason the Schillace Laws kind of give me the willies a little bit is, I was joking that we should just call them the Schillace Best Guesses, because I don't want people to think that they're some iron law.[00:41:52] Sam Schillace: We're all trying to figure this stuff out, right? Some of it's right, some of it's not. It's going to be messy, we'll have false starts, but yeah, we're all working it out. So that's the fun conversation.[00:42:02] Ben Dunphy: All right. Thanks for having me. Yeah, thanks so much for coming on.[00:42:05] Final Plug for Tickets & CFP[00:42:05] Ben Dunphy: For those of you listening, interested in attending AI Engineer World's Fair, you can purchase your tickets today.[00:42:11] Ben Dunphy: Learn more about the event at ai.engineer. You can even purchase group discounts: if you purchase four or more tickets, use the code GROUP, and one of those four tickets will be free. If you want to speak at the event, the CFP closes April 8th, so check out the link at ai.engineer and send us your proposals for talks, workshops, or discussion groups.[00:42:33] Ben Dunphy: So if you want to come to THE event of the year for AI engineers, the technical event of the year for AI engineers, that's June 25, 26, and 27 in San Francisco. That's it! Get full access to Latent Space at www.latent.space/subscribe
This week…OpenAI uses Sora to court Hollywood, the Humane AI pin isn't bad, Microsoft yoinks Pi AI's team and RIP Stable Diffusion? Hopefully not but big changAIs are afoot. Plus, Mixtral plays DOOM via ASCII, Ubisoft opens the door to AI with NEO NPC, free AI tool Viggle helps you animate photos and PFFT.ai tells us AI jokes, not good ones but still. AND THEN… an interview with Creamteam member and all around awesome person Fiona Nova! We discuss her feelings about AI and then see what she thinks of the Shy Kids' new Sora film “Air Head”. It's an endless cavalcade of ridiculous and informative AI news, AI tools, and AI entertainment cooked up just for you. Follow us for more AI discussions, AI news updates, and AI tool reviews on X @AIForHumansShow Join our vibrant community on TikTok @aiforhumansshow For more info, visit our website at https://www.aiforhumans.show/ /// Show links /// OpenAI In Talks With Filmmakers https://www.bloomberg.com/news/articles/2024-03-22/openai-courts-hollywood-in-meetings-with-film-studios-directors Factorial Funds Post on Sora https://www.factorialfunds.com/blog/under-the-hood-how-openai-s-sora-model-works OpenAI Discusses Sora + Filmmakers (Shy Kids Film) https://openai.com/blog/sora-first-impressions Pi AI Team Moves To Microsoft https://www.nytimes.com/2024/03/19/technology/mustafa-suleyman-google-gemini.html Stability.AI CEO Leaves https://stability.ai/news/stabilityai-announcement Ubisoft's NEO NPCs https://www.theverge.com/2024/3/19/24105748/nvidia-neo-npc-prototypes-gdc-2024 Humane AI Pin Coming Out https://www.theverge.com/24084444/humane-ai-pin-hands-on 01 Light Preview https://twitter.com/OpenInterpreter/status/1770821439458840846 Compass https://x.com/ItsMartynasK/status/1771890769865187648?s=20 Mistral plays DOOM via ASCII (very nerdy) https://x.com/reach_vb/status/1772008460122509525?s=20 LLM Street Fighter Showdown!
https://x.com/_StanGirard/status/1772023888211571197?s=2 Viggle AI https://twitter.com/ViggleAI PFFT.ai https://pfft.ai/ Fiona Nova https://www.instagram.com/fiona_nova/
As we often discuss, AI is moving fast. Many people say that Apple is getting left behind. Well, this week, Apple's AI plan is becoming clearer. There's plenty of other tech news to discuss, like the looming TikTok ban, Walmart's selling the MacBook Air, and the tech layoffs continue. We also have some witty humor, pro tips, and picks for your enjoyment. Watch on YouTube! INTRO (00:00) About that TikTok ban (03:45) Apple and AI Apple Is in Talks to Let Google Gemini Power iPhone AI Features (06:00) Apple acquires startup DarwinAI as AI efforts ramp up (08:10) Apple researchers achieve breakthroughs in multimodal AI as company ramps up investments (11:50) Nvidia's GTC 2024 Keynote - Blackwell, NVLink Switch, NIM, Project GROOT (16:25) CryptoWatch: Ethereum network finishes cost-cutting Dencun software upgrade (18:20) DAVE'S PRO-TIP OF THE WEEK: Measure AR (20:40) JUST THE HEADLINES: (27:25) ASCII art elicits harmful responses from 5 major AI chatbots Cisco completes $28 Billion acquisition of Splunk Playing thriving reef sounds on underwater speakers ‘could save damaged corals' Monopoly Go hits $2B in revenue just 10 months after launch Neil Young Says His Music Is Returning to Spotify xAI is releasing the weights and architecture of their 314 billion parameter Mixture-of-Experts model, Grok-1 Games are coming to LinkedIn Google DeepMind's latest AI agent learned to play Goat Simulator 3 McDonald's IT systems outage impacts restaurants worldwide TAKES: Walmart brings the M1 MacBook Air to its shelves for $699 (29:30) Apple Vision Pro used in UK spinal fusion operation (34:45) Laid-off techies face ‘sense of impending doom' with job cuts at highest since dot-com crash (36:20) Automakers are sharing consumers' driving behavior with insurance companies (40:20) BONUS ODD TAKE: The Great Toilet Rebrand (45:30) PICKS OF THE WEEK: Dave: TP-Link Tapo Pan/Tilt Security Camera for Baby Monitor, Pet Camera w/ Motion Detection, 1080P, 2-Way Audio, Night Vision, Cloud & SD Card
Storage, Works with Alexa & Google Home Tapo C200 (49:20) Nate: Television by Sandwich (54:25) RAMAZON PURCHASE (59:50) Find us elsewhere: https://notpicks.com https://notnerd.com https://www.youtube.com/c/Notnerd https://www.instagram.com/n0tnerd https://www.facebook.com/n0tnerd/ info@Notnerd.com
11 years & nothing's changed; IG impersonators & weird animal video creepiness; Apple restores Epic, for now; TikTok ban bill passes House but will most likely die in Senate; X usage declines, launching video app, Elon cancels Don Lemon show because he's a little beyatch; running out of power; AI Marilyn Monroe; NeMo, Devin & Gemini - one Poe to rule them all; bypassing AI guardrails; more bad Tesla news; Bluesky outsourcing - and charging - for moderation; Schmactors; Somebody Feed Phil; no Riker in Disco; Airbnb tells people to stop filming guests; AirPod hearing aids; Goodbye, Jumbo.

Sponsors:
Factor - Head to Factor and use code grumpy50 to get 50% off. That's code grumpy50 at Factor to get 50% off!
1Password - Get a great deal on the only password manager recommended by Grumpy Old Geeks! gog.show/1password
Private Internet Access - Go to GOG.Show/vpn and sign up today. For a limited time only, you can get OUR favorite VPN for as little as $2.03 a month.
SetApp - With a single monthly subscription you get 240+ apps for your Mac. Go to SetApp and get started today!!!

Show notes at: https://gog.show/640

FOLLOW UP
Joe Zieja's Voice Acting Academy
Apple reinstates Epic's developer account two days after banning it
Joe Biden says he would sign bill that would force a sale or ban of TikTok
The House Passed the Bill That Could Ban TikTok—and It Wasn't Close
Addicted to TikTok? Here's what the House vote to effectively ban it could mean for you

IN THE NEWS
Twitter/X Usage Sees Sharp Decline
X will launch a YouTube-like video app on Samsung and Amazon TVs
Elon Musk Cancels Contract with Don Lemon After Being Asked About Ketamine Usage
Amid explosive demand, America is running out of power
AI Marilyn Monroe Marks Another Step Forward In Extending Celebrity Brand Value Beyond The Grave
Now it's NVIDIA being sued over AI copyright infringement
SXSW Audience Boos Sizzle Reel About the Virtues of AI
Introducing Devin, the first AI software engineer
Google restricts AI chatbot Gemini from answering queries on global elections
Researchers jailbreak AI chatbots with ASCII art — ArtPrompt bypasses safety measures to unlock malicious queries
ElevenLabs Block on Cloning Biden's Voice Easily Bypassed
EU votes to ban riskiest forms of AI and impose restrictions on others
Exclusive: U.S. Must Move ‘Decisively' to Avert ‘Extinction-Level' Threat From AI, Government-Commissioned Report Says
Mix-Up With Tesla Touchscreen Killed Angela Chao: Report
Tesla Didn't Pay Any Federal Income Taxes for 5 Years, Got $1 Million Tax Refund
Apple's Vision Pro was used in surgery to help perform spinal operations
Marc Andreessen's VC Was Reportedly Behind Kickstarter's Disastrous Pivot to Crypto
The US Government says IP infringement is all over NFT marketplaces
Bluesky snags former Twitter/X Trust & Safety exec cut by Musk
Bluesky launches Ozone, a tool that lets users create and run their own independent moderation services

MEDIA CANDY
Schmactors with James Marsters and Mark Devine
Somebody Feed Phil Season 7
Discovery Will Be the First Star Trek Show in 50 Years to End Without a Jonathan Frakes Appearance

APPS & DOODADS
Poe
Apple will allow iOS apps to be distributed on websites in the EU
Airbnb to hosts: please stop filming the guests
Apple's AirPods Pro could be getting a "hearing aid mode" later this year

CLOSING SHOUT-OUTS
World Party Frontman Karl Wallinger Dies at 66

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
A majority of internet traffic now originates from APIs, and cybercriminals are taking advantage. Increasingly, APIs are used as a common attack vector because they're a direct pathway to sensitive data. In this discussion, Lebin Cheng shares the API attack trends Imperva, a Thales company, has observed over the past year, and what steps organizations can take to protect their APIs. This segment is sponsored by Imperva. Visit https://www.securityweekly.com/imperva to learn more about them!

The trivial tweaks to bypass authentication in TeamCity, ArtPrompt attacks use ASCII art against LLMs, annoying developers with low-quality vuln reports, removing dependencies as part of secure by design, removing overhead with secure by design, and more! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-276
In this episode of The Cybersecurity Defenders Podcast, we take a close look at weaponizing ASCII escape sequences with Fredrik (STÖK) Alexandersson from Truesec.

Fredrik (STÖK) Alexandersson is a dynamic individual driven by a boundless curiosity and a passion for sharing knowledge. With over three decades of professional experience, he's hacked his way through realms ranging from computers and technology to marketing, fashion, communication, and even the human psyche. Renowned for his lightning-fast presentations and his knack for making complex technical subjects entertaining, STÖK is a prominent figure in the cybersecurity community. His meticulous attention to detail, insatiable curiosity, and "Good Vibes Only" attitude have inspired millions worldwide and earned him recognition from industry giants like Salesforce, Microsoft, and Verizon Media, among many others. Currently, he works as a Hacker and Creative Director at Truesec.

You can follow him on Twitter/X here. And you can watch his talk on Weaponizing ASCII escape sequences here.
On today's episode of the Business of Tech podcast, key topics include the struggles of mid-market tech companies, the introduction of Microsoft's PyRIT toolkit for enhancing AI security, the rise of fractional CAIOs for steering AI strategies, and updates from ASCII, CompTIA, PIA, and Silence Laboratories. Market data from Canalys reveals a projected 20% growth in worldwide cloud service spending for 2024, with significant contributions from Amazon Web Services, Microsoft Azure, and Google Cloud.

Four things to know today:
00:00 Mid-Market Tech Struggles: Advania Study Highlights Need for Better Solutions
04:24 Microsoft's PyRIT: A New Toolkit for Enhancing Generative AI Security Practices
07:02 The Emergence of Fractional CAIOs: A Flexible Approach to Steering AI Strategies
09:26 ASCII, CompTIA, PIA, and Silence Laboratories in the news

Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/

Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/
Support the show on Patreon: https://patreon.com/mspradio/
Want our stuff? Cool Merch? Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com

Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Moser people got jokes! We asked the consultants at Moser to share their favorite IT jokes with us, and they delivered! (An important point to make is that we said "favorite," not "best," and those are two different things.) We have knee-slappers and eye-rollers to share with you this week. If you have a favorite IT joke that you want to share, please send it to us!
1.) MSP Question of The Week
How important is work culture as it relates to an acquisition or merger? And why everything boils down to customer experience... See: https://www.channele2e.com/native/customer-experience-is-everything

2.) Five Minutes with a Smart Person - Val King of Whitehat Virtual Technologies
Val is the CEO of Whitehat Virtual Technologies, responsible for day-to-day operations as well as leading the company's product development and technology strategy. Val has 20 years of experience in technology, compliance, and security in regulated industries, particularly in financial services and healthcare. Currently, Val serves in a dual role as CIO for a regional healthcare system.

“The last 18 inches between the screen and the end user is the Point of Execution, where end-user experience matters, where technology either improves productivity, and thus the business, or hinders it.

"Every business needs IT. Every business buys IT, but not every business is able to confidently cross that divide from IT being just a cost of doing business to a tool delivering outcomes that makes business measurably better."

Val is focused on making this a core competency of Whitehat Virtual Technologies, benefiting every customer we serve.
https://www.whitehatvirtual.com/
https://www.linkedin.com/in/valking/

---

Our upcoming events:
AUSTIN TX – MASTERMIND LIVE (March 28-29th)
http://bit.ly/kernanmastermind
https://kernanconsulting-mastermind.mykajabi.com/mastermind-event
Use “EARLYBIRD” as the coupon code to save $200!

Irvine CA – SMB Techfest (Feb 8th-9th)
Make sure you catch Amy at SMB Techfest! https://www.smbtechfest.com/events.asp

Our Social Links:
https://www.linkedin.com/in/james-kernan-varcoach/
https://www.facebook.com/james.kernan
https://www.facebook.com/karlpalachuk/
https://www.linkedin.com/in/karlpalachuk/
https://www.linkedin.com/in/amybabinchak/
https://www.facebook.com/amy.babinchak/
**Dark Indulgence 01.01.24 Industrial | EBM | Dark Disco Mixshow by Dj Scott Durand featuring new music from Eisfabrik | Spark! | Black Light Odyssey | Black Nail Cabaret | Tiger Club | Cassian ft Icehouse | Maelstrom & Louisahhh | spankthenun | Sine | Paul Handley | Die Sexual | Empirion | Ian Vale | Madil Hardis | Sad Devotion ft Azure Blue & more. Help spread the word by sharing the show! Happy New Year my friends!**

Madil Hardis – Mad World (Tears for Fears cover)
Ruined Conflict – Home
Sine - Dark Matters
Sad Devotion ft Azure Blue - Where Are You
Beyond Border - New Start
Rosenkammaren - Kall stjärna
Hergeth - Source
Disco Morato & Varvara Pavlovna - Princess
Machino & Drvg Cvltvre - Llave
Hard Facts - Repulsion Field (Alexithymic Version)
Tiger Club ft Graziano & G.J. Lunghi - Star (Original Version)
False Dimitrii - Superposition
Maelstrom and Louisahhh - Vixen (Cate Hortl Remix)
Black Nail Cabaret - Autogenic
Paul Handley - For the Love of Something (Emika Remix Rework)
Ana Laura Aláez ft Ascii. Disko - Error (Arya Zappa Remix)
Her Absence Fill the World - Give me my Ground
Argy ft Omnya - Aria (Extended Mix)
Black Light Odyssey - Wie Ein Gott
Silscodisco - Labyrinth Of Emotions (Original Mix)
Adriatique ft WhoMadeWho - Miracle (Original Mix)
Slighter ft Craig Joseph Huxtable - Lights Out [The Legendary House Cats Mix]
Eisfabrik - All My Life
Cassian ft Icehouse - Great Southern Land (Original Mix)
Die Sexual - Tremble For Me
CMIND - May never come (Jose Rodriguez remix)
Ian Vale - Change (Breakbeat Mix)
Man 2.0 - March of the Unforgiven (Parissior Remix)
Pending Position feat. KY - Lick My Legs (Ruined Conflict Version)
I Speak Machine - War
ZyeKali - Past The Edge
Figure Section - Agata S dismembered (Front 242 P. Codenys remix)
SPANKTHENUN - Blot Out The Sun (Black Hole Version)
The Log Equation - Swamp Macaque
DarkVolt - Electric Angel (Club mix)
DAVMA - Voices From The Earth (Original Mix)
Empirion - Breakbeat Madness
Missing In Stars - I believe you (Calibeats Radio Remix)
Silent Crowd - Meeting in the Dark Variant
Boris Brejcha - Gravity (Remix)
Spark! - 66 ton krom
Joël got to do some pretty fancy single sign-on work. And when it came time to commit, he documented the ridiculous number of redirects to give people a sense of what was happening. Stephanie has been exploring Rails callbacks and Ruby debugging tools, using methods like save_callbacks and Kernel.caller, and creating a function call graph to better understand and manage complex code dependencies. Stephanie is also engaged in an independent project and seeking strategies to navigate the challenges of solo work. She and Joël explore how to find external support and combat isolation, consider ways to stimulate creativity, and obtain feedback on her work without a direct team. Additionally, they ponder succession planning to ensure project continuity after her involvement ends. They also reflect on the unique benefits of solo work, such as personal growth and flexibility. Stephanie's focus is on balancing the demands of working independently while maintaining a connected and sustainable professional approach.

ASCII Sequence Diagram Creator (https://textart.io/sequence)
Callback debugging methods (https://andycroll.com/ruby/find-list-debug-active-record-callbacks-in-the-console/)
Kernel.caller (https://ruby-doc.org/core-3.0.2/Kernel.html#method-i-caller)
Method.source_location (https://ruby-doc.org/core-3.0.2/Method.html#method-i-source_location)
Building web apps by your lonesome by Jeremy Smith (https://www.youtube.com/watch?v=Rr871vmV4YM)

Transcript:

STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way. STEPHANIE: So, Joël, what's new in your world? JOËL: I got to do something really fun this week, where I was doing some pretty fancy single sign-on work.
And when it came time to commit, I wanted to document the kind of ridiculous number of redirects that happen and give people a sense of what was going on. And for my own self, what I had been doing is, I had done a sequence diagram that sort of shows, like, three different services that are all talking to each other and where they redirect to each other as they all go through the sequence to sign someone in. And I was like, how could I embed that in the commit message? Because I think it would be really useful context for someone trying to get an overview of what this commit is doing. And the answer, for me, was, can I get this sequence diagram in ASCII form somewhere? And I found a website that allows me to do this in ASCII art. It's the textart.io/sequence. And that allows me to create a sequence diagram that gets generated as ASCII art. I can copy-paste that into a commit message. And now anybody else who is like, "What is it that Joël is trying to do here?" can look at that and be like, "Oh, oh okay, so, we got these, like, four different places that are all talking to each other in this order. Now I see what's happening." STEPHANIE: That's super neat. I love the idea of having it directly in your commit message just because, you know, you don't have to go and find a graph elsewhere if you want to understand what's going on. It's right there for you, for future commit explorers [laughs] trying to understand what was going on in this snippet of time. JOËL: I try as much as possible to include those sorts of things directly in the commit message because you never know who's reading the commit. They might not have access to some sort of linked resource. So, if I were like, "Hey, go to our wiki and see this link," like, sure, that would be helpful, but maybe the person reading it doesn't have access to the wiki. Maybe they do have access, but they're not on the internet right now, and so they don't have access to the wiki. 
Maybe the wiki no longer exists, and that's a dead link. So, as much as possible, I try to embed context directly in my commit messages. STEPHANIE: That's really cool. And just another shout out to ASCII art, you know [laughs], persevering through all the times with our fancy tools. It's still going strong [laughs]. JOËL: Something about text, right? STEPHANIE: Exactly. I actually also have a diagram graph thing to share about what's new in my world that is kind of in a similar vein. Another thoughtboter and former guest on the show, Sara Jackson, shared in our dev channel about this really cool mural graph that she made to figure out what was going on with callbacks because she was working on, you know, understanding the lifecycle of this model and was running into, like, a lot of complex behavior. And she linked to a really neat blog post by Andy Croll, too, that included a little snippet sharing a few callback debugging methods that are provided by ActiveRecord. So, basically, you can have your model and just call double underscore callbacks. And it returns a list of all the callbacks that are defined for that model, and I thought that was really neat. So, I played around with it and copypastad [laughs] the snippet into my Rails console to figure out what's going on with basically, like, the god object of that that I work in. And the first issue I ran into was that it was undefined because it turns out that my application was on an older [laughs] version of Rails than that method was provided on. But, there are more specific methods for the types of callbacks. So, if you are looking specifically for all the callbacks related to a save or a destroy, I think it's save underscore callbacks, right? And that was available on the Rails version I was on, which was, I think, 4. But that was a lot of fun to play around with. And then, I ended up chatting with Sara afterwards about her process for creating the diagram after, you know, getting a list of all these methods. 
And I actually really liked this hybrid approach she took where, you know, she automated some parts but then also manually, like, went through and stepped through the code and, like, annotated notes for the methods as she was traversing them. And, you know, sometimes I think about, like, wow, like, it would be so cool if this graph just generated automatically, but I also think there is some value to actually creating it yourself. And there's some amount of, like, mental processing that happens when you do that, as opposed to, like, looking at a thing that was just, you know, generated afterwards, I think. JOËL: Do you know what kind of graph Sara generated? Was it some kind of, like, function call graph, or was it some other way of visualizing the callbacks? STEPHANIE: I think it was a function call graph, essentially. It even kind of showed a lot of the dependencies, too, because some of the callback functions were quite complicated and then would call other classes. So, there was a lot of, I think, hidden dependencies there that were unexpected, you know, when you think you're just going to create a regular old [laughs] record. JOËL: Yeah, I've been burned by unexpected callbacks or callbacks that do things that you wouldn't want in a particular context and then creating bad data or firing off external services that you really didn't want, and that can be an unpleasant surprise. I appreciate it when the framework offers debugging tools and methods kind of built-in, so these helpers, which I was not aware of. It's really cool because they allow you to kind of introspect and understand the code that you're going through. Do you have any others like that from Rails or Ruby that you find yourself using from time to time to help better understand the code? STEPHANIE: I think one I discovered recently was Kernel.caller, which gives you the stack trace wherever you are when executing. 
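For readers following along in the show notes, here is a minimal, Rails-free sketch of the two Ruby introspection helpers that come up in this exchange, `Kernel#caller` and `Method#source_location`. The method names `charge_card` and `checkout` are hypothetical, purely for illustration:

```ruby
# Kernel#caller returns the current call stack as an array of
# "file:line:in 'method'" strings -- no exception needs to be raised.
def charge_card
  puts caller.first(2) # the chain of callers that led here
end

def checkout
  charge_card
end

checkout

# Method#source_location returns the [file, line] pair where a method
# is defined, which is handy for metaprogrammed or gem-provided methods.
file, line = method(:checkout).source_location
puts "checkout is defined at #{file}:#{line}"

# Methods implemented in C (like String#upcase) have no Ruby source:
p "".method(:upcase).source_location # nil for C-defined core methods
```

The same trick works in a Rails console: grab the method object for a callback and call `source_location` on it to find where a framework or gem actually defined it.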
And that was really helpful when you're not raising an exception in certain places, and you need to figure out the flow of the code. I think that was definitely a later discovery. And I'm glad to have it in my back pocket now as something I can use in any kind of Ruby code. JOËL: That can, yeah, definitely be a really useful context to have even just in, like, an interactive console. You're like, wait a minute, where's this coming from? What is the call stack right now? STEPHANIE: Do you have any debugging tools or methods that you like to use that maybe are under the radar a little bit? JOËL: One that I really appreciate that's built into Ruby is the source location method on the method object, so Ruby has a method object. And so, when you're dealing with some sort of method and, like, maybe it got generated programmatically through metaprogramming, or maybe it's coming from a gem or something like that, and you're just like, where is this define? I'm trying to find it. If you're in your editor and you're doing stuff, maybe you could run some sort of search, or maybe it has some sort of keyword lookup where you can just find the definition of what's under your cursor. But if you're in an interactive console, you can create a method object for that method name and then call dot source location on it. And it will tell you, here's where it's defined. So, very handy in the right circumstances. STEPHANIE: Awesome. That's a great tip. JOËL: Of course, one of the most effective debugging tools is having a pair, having somebody else work with you, but that's not always something that you have. And you and I were talking recently about what it's like to work solo on a project. Because you're currently on a project, you're solo, at least from the thoughtbot side of things. You're embedding with a team, with a client. Are you working on kind of, like, a solo subtask within that, or are you still kind of embedding and interacting with the other teammates on a regular basis? 
STEPHANIE: Yeah. So, the past couple of weeks, I am working on more of a solo initiative. The other members of my client team are kind of ramping up on some other projects for this next quarter. And since my engagement is ending soon, I'm kind of left working on some more residual tasks by myself. And this is new for me, actually. I've not really worked in a super siloed by-myself kind of way before. I usually have at least one other dev who I'm, like, kind of partnering up with on a project, or an epic, or something like that. And so, I've had a very quiet week where no one is, you know, kind of, like, reaching out to me and asking me to review their code, or kind of checking in, or, you know, asking me to check in with them. And yeah, it's just a little bit different than how I think I like to normally work. I do like to work with other people. So, this week has been interesting in terms of just kind of being a more different experience where I'm not as actively collaborating with others. JOËL: What do you think are some of the biggest challenges of being kind of a little bit out in your own world? STEPHANIE: I think the challenges for me can definitely be the isolation [laughs], and also, what kind of goes hand in hand with that is when you need help, you know, who can you turn to? There's not as much of an obvious person on your team to reach out to, especially if they're, like, involved with other work, right? And that can be kind of tough. Some of the other ones that I've been thinking about have been, you know, on one hand, like, I get to make all of the decisions that I want [laughs], but sometimes you kind of get, like, really in your own head about it. And you're not in that space of, like, evaluating different solutions that you maybe might not think of. And I've been trying to figure out how to, like, mitigate some of that risk. JOËL: What are some of the strategies that you use to try to balance, like making good decisions when you're a bit more solo? 
Do you try to pull in someone from another team to talk ideas through? Do you have some sort of internal framework that you use to try to figure out things on your own? What does that look like? STEPHANIE: Yeah, luckily, the feature I'm working on is not a huge project. Well, if it were, I think then I wouldn't be alone on it. But, you know, sometimes you find yourself kind of tasked with one big thing for a while, and you are responsible for from start to finish, like all of the architectural decisions to implementation. But, at least for me, the scope is a little more narrow. And so, I don't feel as much of a need to get a lot of heads together because I at least feel somewhat confident in what I'm doing [laughs]. But I have found myself being a bit more compelled to kind of just verbalize what I'm doing more frequently, even to, like, myself in Slack sometimes. It's just like, I don't know who's reading this, but I'm just going to put it out there because maybe someone will see this and jump in and say, "Oh, like, interesting. Here's some other context that I have that maybe might steer you away from that," or even validating what I have to say, right? Like, "That sounds like a good idea," or, you know, just giving me an emoji reaction [laughs] is sometimes all I need. So, either in Slack or when we give our daily sync updates, I am, I think, offering a little more details than I might if I already was working with someone who I was more in touch with in an organic way. JOËL: And I think that's really powerful because it benefits you. Sort of by having to verbalize that or type it out, you, you know, gain a little bit of self-awareness about what you're trying to do, what the struggles are. But also, it allows anybody else who has potentially helpful information to jump in. I think that's not my natural tendency. When I'm on something solo, I tend to kind of, like, zoom in and focus in on something and, like, ignore a little bit of the world around me. 
Like, that's almost the time when I should look at overcommunicating. So, I think most times I've been on something solo, I sort of keep relearning this lesson of, like, you know, it's really important to constantly be talking out about the things that you're doing so that other people who are in a broader orbit around you can jump in where necessary. STEPHANIE: Yeah, I think you actually kind of touched on one of the unexpected positives, at least for me. Something I wasn't expecting was how much time I would have to just be with my thoughts. You know, as I'm implementing or just in my head, I'm mulling over a problem. I have less frequent, not distractions necessarily, but interruptions. And sometimes, that has been a blessing because I am not in a spot where I have a lot of meetings right now. And so, I didn't realize how much generative thought happens when you are just kind of, like, doing your own thing for a little bit. I'm curious, for you, is that, like, a space that you enjoy being when you're working by yourself? And I guess, you know, you were saying that it's not your natural state to kind of, like, share what's going on until maybe you've fully formed an idea. JOËL: I think I often will regret not having shared out before everything is done. The times that I have done it, I've been like, that was a really positive experience; I should do that more. I think it's easy to sort of wait too long before sharing something out. And with so many things, it feels like there's only one more small task before it's done. Like, I just need to get this one test to go green, and then I can just put up a PR, and then we'll have a conversation about it. But then, oh, this other test broke, or this dependency isn't installing correctly. And before you know it, you've spent a whole day chasing down these things and still haven't talked. 
And so, I think if some of those things were discussed earlier, it would help both to help me feel more plugged in, but also, I think everybody else feels like they're getting a chance to participate as well. STEPHANIE: So, you mentioned, you know, obviously, there's, like, the time spent just arriving at the solution before sharing it out for feedback. But have you ever been in a position where there is no one to give you feedback and, like, not even a person to review your code? JOËL: That's really challenging. So, occasionally, if I'm working on a project, maybe it would be, like, very early-stage startup that maybe just has, like, a founder, and then I'm, like, the only technical person on the team, generally, what I'll try to do is to have some kind of review buddy within thoughtbot, so some other developer who's not staffed on my project but who has access to the code such that I can ask them to say, "Hey, can you just take a look at this and give me a code review?" That's the ideal situation. You know, some companies tend to lock things down a lot more if you're dealing with something like healthcare or something like that, where there might be some concerns around personal information, that kind of thing. But generally, in those cases, you can find somebody else within the company who will have some technical knowledge who can take a look at your code; at least, that's been my experience. STEPHANIE: Nice. I don't think I've quite been in that position before; again, I've really mostly worked within a team. But there was a conference talk I watched a little bit ago from Jeremy Smith, and it was called Building Web Apps by Your Lonesome. And he is a, like, one-man agency. And he talked about, you know, what it's like to be in that position where you pretty much don't have other people to collaborate with, to review your code. And one thing that he said that I really liked was shifting between writer and editor mode. 
If you are the person who has to kind of just decide when your code is good enough to merge, I like that transition between, like, okay, I just spent however many hours putting together the solution, and now I'm going to look at it with a critical eye. And sometimes I think that might require stepping away for a little bit or, like, revisiting it even the next day. That might be able to help see things that you weren't able to notice when you were in that writing mode. But I have found that distinction of roles really helpful because it does feel different when you're looking at it from those two lenses. JOËL: I've definitely done that for some, like, personal solo projects, where I'm participating in a game jam or something, and then I am the only person to review my code. And so, I will definitely, at that point, do a sort of, like, personal code review where I'll look at it. Maybe I'm doing PRs on GitHub, and I'm just merging. Maybe I'm just doing a git diff and looking at a commit in the command line on my own machine. But it is useful, even for myself, to sort of switch into that editor mode and just kind of look at everything there and say, "Is it in a good place?" Ideally, I think I do that before putting it out for a co-worker's review, so you kind of get both. But on a solo project, that has worked actually pretty well for me as well. STEPHANIE: One thing that you and I have talked about before in a different context, I think, when we have chatted about writing conference talks, is you are really great about focusing on the audience. And I was thinking about this in relation to working solo because even when you are working by yourself on a project, you're not writing the code for yourself, even though you might feel like [laughs] it in the moment. And I also kind of like the idea of asking, like, who are you building for? You know, can you ask the stakeholder or whoever has hired you, like, "Who will maintain this project in the future?" 
Because likely, it won't be you. Hopefully, it won't be you unless that's what you want to be doing. There's also what my friend coined the circus factor as opposed to the bus factor, which is, like, if you ran away to the circus tomorrow [laughs], you know, what is the impact that would have? And yeah, I think working solo, you know, some people might think, like, oh, that gives me free rein to just write the code exactly how I want to, how I want to read it. But I think there is something to be said about thinking about the future of who will be [inaudible 18:10] what you just happen to be working on right now. JOËL: And keep in mind that that person might be future you who might be coming back and be like, "What is going on here?" So, yeah, audience, I think, is a really important thing to keep in mind. I like to ask the question, if somebody else were looking at this code, and somebody else might be future me, what parts would they be confused by? If I was walking somebody else through the code for the first time, where would they kind of stop me through the walkthrough and be like, "Hey, why is this happening? What's the connection between these two things? I can see they're calling each other, but I don't know why." And that's where maybe you put in a comment. Maybe you find a better method or a class name to better explain what happens. Maybe you need to put more context in a commit message. There's all sorts of tools that we can use to better increase documentation. But having that pause and asking, "What will confuse someone?" is, I think, one of the more powerful techniques I do when I'm doing self-review. STEPHANIE: That's really cool. I'm glad you mentioned that, you know, it could also be future you. Because another thing that Jeremy says in this talk that I was just thinking about is the idea of optimizing for autonomy. 
And there's a lot to be said there because autonomy is like, yeah, like, you end up being the person who has to deal with problems [laughs], you know, if you run into something that you can't figure out, and, ideally, you'll have set yourself up for success. But I think working solo doesn't mean that you are in your own universe by yourself completely. And thinking about future, you, too, is kind of, like, part of the idea that the person in this moment writing code will change [laughs]. You'll get new information. Maybe, like, you'll find out about, like, who might be working on this in the future. And it is kind of a fine balance between making sure that you're set up to handle problems, but at the same time, maybe it's that, like, you set anyone up to be able to take it away from where you left it. JOËL: I want to take a few moments to sort of talk a little bit about what it means to be solo because I think there are sort of multiple different solo experiences that can be very different but also kind of converge on some similar themes. Maybe some of our listeners are listening to us talking and being like, "Well, I'm not at a consultancy, so this never happens to me." But you might find yourself in that position. And I think one that we mentioned was maybe you are embedded on a team, but you're kind of on a bit of a larger project where you're staffed solo. So, even though you are part of a larger team, you do feel like the initiative that you're on is siloed to you a little bit. Are there any others that you'd like to highlight? STEPHANIE: I think we also mentioned, you know, if you're a single developer working on an application because you might be the first technical hire, or a one-person agency, or something, that is different still, right? Because then your community is not even your company, but you have to kind of seek out external communities on social networks, or Slack groups, or whatever. 
I've also really been interested in the idea of developers kind of being able to be rotated with some kind of frequency where you don't end up being the one person who knows everything about a system and kind of becomes this dependency, right? But how can we make projects so, like, well functioning that, like, anyone can step in to do some work and then move on? If that's just for a couple of weeks, for a couple of months. Do you have any thoughts about working solo in that kind of situation where you're just stepping into something, maybe even to help someone out who's, you know, on vacation, or kind of had to take an unexpected leave? JOËL: Yeah, that can be challenging. And I think, ideally, as a team, if you want to make that easier, you have to set up some things both on a, like, social level and on a tactical level, so all the classic code quality things that you want in place, well structured, encapsulated code, good documentation, things like that. To a certain extent, even breaking down tasks into smaller sort of self-sufficient stories. I talk a lot about working incrementally. But it's a lot easier to say, "Hey, we've got this larger story. It was broken down into 20 smaller pieces that can all be shipped independently, and a colleague got three of them done and then had to go on leave for some reason. Can you step in and do stories 4 through 20?" As opposed to, "Hey, we have this big, amorphous story, and your colleague did some work, and it kind of is done. There's a branch with some code on it. They left a few notes or maybe sent us an email. But they had to go on leave unexpectedly. Can you figure it out and get it done?" The second scenario is going to be much more challenging. STEPHANIE: Yeah, I was just thinking about basically what you described, right? 
Where you might be working on your own, and you're like, well, I have this one ticket, and it's capturing everything, and I know all that's going on [laughs], even though it's not quite documented in the ticket. But it's, you know, maybe on my branch, or in my head, or, worst of all, on my local machine [laughs] without being pushed up. JOËL: I think maybe that's an anti-pattern of working solo, right? A lot of these disciplines that you build when you're working in a team, such as breaking up tickets into smaller pieces, it's easy to kind of get a little bit lazy with them when you're working solo and let your tickets inflate a little bit, or just have stuff thrown together in branches on your local machine, which then makes it harder if somebody does need to come in to either collaborate with you or take over from you if you ever need to step aside. STEPHANIE: Right. I have definitely seen some people, even just for their personal projects, use, like, a Trello board or some other project management tool. And I think that's really neat because then, you know, obviously, it's maybe just for their own, like, self-organization needs, but it's, like, that recognition that it's still a complicated project. And just because they're working by themselves doesn't mean that they can't utilize a tool for project management that is meant for teams or not even teams [laughs], you know, people use them for their own personal stuff all the time. But I really like that you can choose different levels of how much you're documenting for your future self or for anyone else. You had mentioned earlier kind of the difference between opening up a PR for you...you have to merge your branch into main or whatever versus just committing to main. And that distinction might seem, like, if you were just working on a personal project, like, oh, you know, why go through the extra step? But that can be really valuable in terms of just seeing, like, that history, right? 
JOËL: I think on solo projects, it can really depend on the way you tend to treat your commit history. I'm very careful with the history on the main branch where I want it to tell a sort of, like, cohesive story. Each commit is kind of, like, crafted a little bit. So, even when I'm working solo and I'm committing directly to master or to the main branch, I'm not just, like, throwing random things there. Ideally, every commit is green and builds and is, like, self-contained. If you don't have that discipline, then it might be particularly valuable to go through the, like, a branching system or a PR system. Or if you just want, like, a place to experiment, just throw a bunch of code together, a bunch of things break; nothing is cohesive, that's fine. It's all a work in progress until you finally get to your endpoint, and then you squash it down, or you merge it, or whatever your workflow is, and then it goes back into the main branch. So, I think that for myself, I have found that, oftentimes, I get not really a whole lot of extra value by going through a branching and PR system when it's, like, a truly solo project, you know, I'm building a side project, something like that. But that's not necessarily true for everyone. STEPHANIE: I think one thing I've seen in other people's solo projects is using a PR description and, you know, having the branching strategy, even just to jot down future improvements or future ideas that they might take with the work, especially if you haven't kind of, like, taken the next step of having that project management system that we talked about. But there is, like, a little more room for some extra context or to, like, leave yourself little notes that you might not want necessarily in your commit history but is maybe more related to this project being, like, a work in progress where it could go in a lot of different directions, and you're figuring that out by yourself. 
JOËL: Yeah, I mean, definitely something like a draft PR can be a great place to have work in progress and experiment and things like that. Something you were saying got me wondering what distinction you typically have between what you would put in a commit message versus something that you would put in a PR description, particularly given that if you've got, like, a single commit PR, GitHub will automatically make the commit message your PR message as well. STEPHANIE: This has actually evolved for me over time, where I used to be a lot more reliant on PR descriptions holding a lot of the context in terms of the decision-making. I think that was because I thought that, like, that was the most accessible place of information for reviewers to find out, you know, like, why certain decisions were made. And we were using, you know, PR templates and stuff like that. But now the team that I'm working on uses commit message templates that kind of contain the information I would have put in a PR, including, like, motivation for the change, any risks, even deployment steps. So, I have enjoyed that because I think it kind of shortens the feedback loop, too, right? You know, you might be committing more frequently but not, you know, opening a PR until later. And then you have to revisit your commits to figure out, like, okay, what did I do here? But if you are putting that thought as soon as you have to commit, that can save you a little bit of work down the line. What you said about GitHub just pulling your commit message into the PR description has been really nice because then I could just, like, open a thing [laughs]. And that has been nice. I think one aspect that I really like about the PR is leaving myself or reviewers, like, notes via comments, like, annotating things that should not necessarily live in a more permanent form. 
But maybe I will link to documentation for a method that I'm using that's a little less common or just add some more information about why I made this decision over another at a more granular level. JOËL: Yeah, I think that's probably one of the main things that I tend to put in a PR message rather than the commit message is any sort of extra information that will be helpful at review time. So, maybe it's a comment that says, "Hey, there is a lot of churn in this PR. You will probably have a better experience if you review this in split view versus unified view," things like that. So, kind of, like, meta comments about how you might want to approach reviewing this PR, as opposed to something that, let's say somebody is reviewing the history or is, like, browsing the code later, that wouldn't be relevant to them because they're not in a code review mindset. They're in a, like, code reading, code understanding mindset or looking at the message to say, "Why did you make the changes? I saw this weird method. Why did you introduce that?" So, hopefully, all of that context is in the commit message. STEPHANIE: Yeah, you reminded me of something else that I do, which is leave notes to my future self to revisit something if I'm like, oh, like, this was the first idea I had for the, you know, the way to solve this problem but, you know, note to self to look at this again tomorrow, just in case I have another idea or even to, like, you know, do some more research or ask someone about it and see if they have any other ideas for how to implement what I was aiming for. And I think that is the editor mode that we were talking about earlier that can be really valuable when you're working by yourself to spend a little extra time doing. You know, you are essentially optimizing for autonomy by being your own reviewer or your own critic in a healthy and positive way [laughs], hopefully. JOËL: Exactly. 
STEPHANIE: So, at the beginning of this episode, I mentioned that this is a new experience for me, and I'm not sure that I would love to do it all of the time. But I'm wondering, Joël, if there are any, you know, benefits or positives to working solo that you enjoy and find that you like to do just at least for a short or temporary amount of time. JOËL: I think one that I appreciate that's maybe a classic developer response is the heads downtime, the focus, being able to just sit down with a problem and a code editor and trying to figure it out. There are times where you really need to break out of that. You need somebody else to challenge you to get through a problem. But there are also just amazing times where you're in that flow state, and you're getting things done. And that can be really nice when you're solo. STEPHANIE: Yeah, I agree. I have been enjoying that, too. But I also definitely am looking forward to working with others on a team, so it's kind of fun having to get to experience both ways of operating. On that note, shall we wrap up? JOËL: Let's wrap up. STEPHANIE: Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeeeeeee!!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at tbot.io/referral. 
Or you can email us at referrals@thoughtbot.com with any questions.
Japan suspends rural autonomous bus / Biotech vaccine against cocaine / The Dagestan attack could be the end of Telegram / Judicial OK for the Encrochat hacks. This week I have no sponsor. Sign up for the Patreon or something. I don't know.
We're back this week with another round of hot tips for making your computing life less annoying, including super secret UI settings, methods of bending digital voice assistants to your will, a low-level Windows hotkey not even Will knew about, the latest PowerToys (since the last time we talked about PowerToys), an easy way to trim videos without encoding them again, the fastest video player in the West, and other tips you won't want to miss!

The apps we mentioned in this ep include LosslessCut, mpv, Authy, and the ever-growing PowerToys.

The Windows hotkeys we mentioned:
Win + . - emoji/Unicode picker
Win + Shift + S - easy screenshots
Win + R - run applications
Win + L - lock your computer
Win + Alt + B - disable/enable HDR at the OS level
Win + Ctrl + Shift + B - restart the video driver
Win + P - select display output
Win + Alt + R - record video of the open game window (needs Game Bar)
Win + Alt + G - record the last 30 seconds, but needs to be turned on first (also Game Bar)
Win + V - clipboard history (needs to be enabled in privacy settings)

Support the Pod! Contribute to the Tech Pod Patreon and get access to our booming Discord, your name in the credits, and other great benefits! You can support the show at: https://patreon.com/techpod
Stephanie is consciously trying to make meetings better for herself by limiting distractions. A few episodes ago, Joël talked about a frustrating bug he was chasing down and couldn't get closure on, so he had to move on. This week, that bug popped up again and he chased it down! AND he got to use binary search to find its source–which was pretty cool! Together, Stephanie and Joël discuss dependency graphs as a mental model, and while they apply to code, they also help when it comes to planning tasks and systems. They talk about coupling, cycles, re-structuring, and visualizations. Ruby Graph Library (https://github.com/monora/rgl) Graphviz (https://graphviz.org/) Using a Dependency Graph to Visualize RSpec let (https://thoughtbot.com/blog/using-a-dependency-graph-to-visualize-rspec-let) Mermaid.js (https://mermaid.js.org/) Strangler Fig pattern (https://martinfowler.com/bliki/StranglerFigApplication.html) Transcript: JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way. JOËL: So, Stephanie, what's new in your world? STEPHANIE: So, I'm always trying to make meetings better for me [chuckles], more tolerable or more enjoyable. And in meetings a lot, I find myself getting distracted when I don't necessarily want to be. You know, oftentimes, I really do want to try to pay attention to just what I'm doing in that meeting in the moment. In fact, just now, I was thinking about the little tidbit I had shared on a previous episode about priorities, where really, you know, you can only have one priority [laughs] at a time. And so, in that moment, hopefully, my priority is the meeting that I'm in. But, you know, I find myself, like, accidentally opening Slack or, like, oh, was I running the test suite just a few minutes before the meeting started? 
Let me just go check on that really quick. And, oh no, there's a failure, oh God, that red is really, you know, drawing my eye. And, like, could I just debug it really quick and get that satisfying green so then I can pay attention to the meeting? And so on and so forth. I'm sure I'm not alone in this [laughs]. And I end up not giving the meeting my full attention, even though I want to be, even though I should be. So, one thing that I started doing about a year ago is origami. [laughs] And that ended up being a thing that I would do with my hands during meetings so that I wasn't using my mouse, using my keyboard, and just, like, looking at other stuff in the remote meeting world that I live in. So, I started with paper stars, made many, many paper stars, [laughs] and then, I graduated to paper cranes. [laughs] And so, that's been my origami craft of choice lately. Then now, I have little cranes everywhere around the house. I've kind of created a little paper crane army. [laughs] And my partner has enjoyed putting them in random places around the house for me [laughs] to find. So, maybe I'll open a cabinet, and suddenly, [laughs] a paper crane is just there. And I think I realized that I've actually gotten quite good at doing these crafts. And it's been interesting to kind of be putting in the hours of doing this craft but also not be investing time, like, outside of meetings. And I'm finding that I'm getting better at this thing, so that seemed pretty cool. And it is mindless enough that I'm mentally just paying attention but, yeah, like, building that muscle memory to perfecting the craft of origami. JOËL: I'm curious, for your army of paper cranes, is there a standard size that you make, or do you have, like, a variety of sizes? STEPHANIE: I have this huge stack of, like, 500 sheets of origami paper that are all the same size. So, they're all about, let's say, two or three inches large. 
But I think the tiny ones I've seen, really small paper cranes, maybe that would be, like, the next level to tackle because working with smaller paper seems, you know, even more challenging. JOËL: I'd imagine the ratio of, like, paper thickness to the size of the thing that you're making is different. STEPHANIE: At this point, they say that if you make 1,000, then you bring good luck. I think I'm well on my way [laughs] to hopefully being blessed with good luck in this household of my little paper crane army. JOËL: It's interesting that you mentioned the power of having something tactile to do with your hands during a meeting, and I definitely relate to that. I feel like it's so easy, even, like, mindlessly, to just hit Command-Tab when I'm doing things on a screen. Like, my hands are on the keyboard. If I'm not doing something, I'm just going to mindlessly hit Command-Tab. It's kind of like on your phone sometimes. I don't know if you do this, like, just scrolling side to side. You're not actually doing anything. You just want motion with your fingers. STEPHANIE: Yes. I know exactly what you're talking about. And it's funny because it's a bit of a duality where, you know, when you are in your development workflow, you want things to be as quick and convenient as possible, so that Command-Tab, you know, is very easy. It's just built in, and that helps speed up your, you know, day-to-day work. But then it's also that little bit of mindlessness, I think, that can get you down the distraction path. When I was first looking for something to do with my hands, to have, like, a little tactile thing to keep me focused in meetings, I did explore getting one of those fidget cubes; I have to say. [laughs] It's just a little toy, you know, that comes with a bunch of different settings for you to fidget with. There's, like, a ball you can roll, you know, with your thumb, or maybe some buttons to click, and it gives you that really satisfying tactile experience. 
And I know they work really well for a lot of people, but I've really enjoyed the, I guess, the unexpected benefits [chuckles] of getting better at a hobby [laughs] while spending my time at my work. Joël, what is new with you? JOËL: So, a few episodes ago, I talked about a really kind of frustrating bug that I was chasing down that was due to some, like, non-determinism in the environment. And it kind of came, and then it went away. And I wasn't able to get sort of closure on that and had to move on. Well, this week, that bug popped up again, and this time, I was actually able to chase it down. So, that felt really exciting. And I got to use binary search to try to find the source of it, which made me feel really cool. STEPHANIE: Oooh, do tell. What ended up being the issue? JOËL: I'm connecting to an external Snowflake data warehouse, and ActiveRecord tries to fetch the schema and crashes as part of that with some cryptic error that originates from the C extension ODBC Ruby driver package. I figured out that it's probably something to do with, like, a particular table name or something in the table metadata when we're pulling this schema that we're not happy about. But I don't know which table is the one that it's not happy with. Well, this time, I was able to figure out, by reading through some of the documentation, that I can pull subsets of the schema. So, I can pull the first n values of that schema, and it won't crash. It only crashes if I try to fetch the entire set, which is what is happening under the hood. At that point, you know, I could fetch each row individually, but there's hundreds of these. So, you know, I try, okay, what happens if I try to fetch 1,000 of these? Is it going to crash? Because it's a massive system. So, yes, I get a crash. So, I know that a table less than a thousandth in the list of tables is what's causing the problems. So, okay, fetch 500 halfway in between there. It's still going to crash. Okay, 250, 125. 
I then kind of keep halving all the time until I find one that doesn't crash. And now I know that it is somewhere between the last crash and this one. So, I think it was between 125 and 250. And now I can say, okay, well, let's fetch the first, you know, maybe 200 tables, okay, that crashes. And I keep halving that space until you finally find it. And then, like, okay, so it's this one right here. Now, the problem is the bad table actually crashes. So, I think it ended up being, like, number 175 or something like that. So, I never get to see the actual table itself. But because the list of tables is in alphabetical order, and I can see because I can fetch the first 174 and it succeeds, so I can tell what the previous 5, 6, you know, previous 174 are. I can pretty easily go and look at the actual database and the list of tables and say, okay, well, it's in the same order. And the next one is this one, and hey, look, there is some metadata there that has some very long fields that are longer than one might expect, specifically going over a potentially implied 256-character limit. That seems somewhat suspicious. And, oh, if we remove this table, all of a sudden, everything works. STEPHANIE: Wow, binary search, an excellent debugging tool [laughs] when you have no idea, you know, what could possibly be causing your issue. JOËL: It's such a cool tool. Like, I'm always so happy when I get a chance to use it. The problem is, you need a way to be able to answer the question, like, have I found it? Yes or no? Or, generally, is it greater or less than this current position? STEPHANIE: Well, that's really exciting that you ended up figuring out how to solve the bug. I know last time we talked about it, you kind of had left off in a space of, hopefully, we won't run into this issue again because it's no longer happening. But it seems like you were also set up this time around to be able to debug once it cropped up again. JOËL: Yes. So, binary search is really cool. 
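Joël's halving procedure can be sketched in plain Ruby. Here `fetch_tables` is a hypothetical stand-in for the ActiveRecord/ODBC schema call (the real crash happened inside the C extension), simulated to raise once the fetch size reaches the bad table:

```ruby
# Simulate the crash: fetching the first n tables fails once n
# reaches the index of the bad table (here, table 175 of 500).
BAD_TABLE_INDEX = 175

def fetch_tables(count)
  raise "ODBC driver crash" if count >= BAD_TABLE_INDEX
  count # stand-in for the fetched schema subset
end

def crashes?(count)
  fetch_tables(count)
  false
rescue RuntimeError
  true
end

# Binary search for the smallest fetch size that crashes.
# Invariant: `low` tables always fetch fine, `high` tables crash,
# so the bad table is the one right after the last good fetch.
def find_bad_table_index(total)
  low = 0      # known-good fetch size
  high = total # known-bad fetch size
  while high - low > 1
    mid = (low + high) / 2
    if crashes?(mid)
      high = mid
    else
      low = mid
    end
  end
  high
end

puts find_bad_table_index(500) # prints 175
```

Because the table list is alphabetical, fetching the first 174 successfully tells you exactly which table comes next, even though you never see the bad row itself.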
It's got this, like, very, like, fancy computer science name. But in reality, it's a fairly simple, straightforward technique that I use fairly frequently in my development. And there's another kind of computer sciency fancy-sounding concept that I use all the time. You've all heard me reference this multiple times on the show. You're right; we're finally doing it. This is the dependency graph episode. STEPHANIE: Woo. [laughter] It's time. I'm excited to really dig into it because, you know, as someone who has heard you talk about it a lot, you know, and is maybe a little less familiar with graph theory and how, you know, it can be applied to my day to day work, I'm really excited to dig into a little bit about, you know, what a regular developer needs to know about dependency graphs to add to their toolbox of skills. JOËL: So, I think at its core, the idea of a dependency graph is that you have a group of entities, some of which depend on each other. They can't do a task, or they can't be created unless some other subtasks or dependent actions take place. And so, we have a sort of formal structural way of describing these things. Visually, we often draw these things out where each of the pieces is like a little bubble or a circle, and then we draw arrows towards the things that it depends on. So, if A cannot be done without B being done first, we draw an arrow from A to B. That's kind of how it is in the abstract. More concretely, this kind of thing shows up constantly throughout the work that we do because a lot of what we do as developers is managing things that are connected to each other or that depend on each other. We build complex systems out of smaller components that all rely on each other. STEPHANIE: Yeah, I think it's interesting because I use the word dependency, you know, very frequently when talking about normal work that I'm doing, you know, dependencies as in libraries, right? 
That we've pulled into our application, or dependencies, like, talking about other classes that are referenced in this class that I'm working in. And I never really thought about what could be explored further or, like, what could be learned from really digging into those connections. JOËL: It's a really powerful mental model. And, like you said, dependencies exist all over our work, and we often use that word. So, you mentioned something like packages, where your application depends on Rails, which in turn depends on ActiveRecord, which in turn depends on a bunch of other things. And so, you've got this whole chain of maybe immediate dependencies, and then those dependencies have dependencies, and those dependencies have dependencies, and it kind of, like, grows outward from there. And in a very kind of simplistic model, you might think, oh, well, it's more, like, a kind of a tree structure. But oftentimes, you'll have things like branches on one side that connect back to branches on the other. And now you've got something that's no longer really tree-like. It's more of a sort of interconnected web, and that is a graph. STEPHANIE: I think understanding the dependencies of your system has also become more important to me as I learn about things that can go wrong when I don't know enough about what my system is, you know, relying on that I had kind of taken for granted previously. I'm especially thinking about packages like we were mentioning, and, you know, not realizing that your application is dependent on this other library, right? That's brought in by a gem that you're using. And there's maybe, like, a security issue, right? With that. And suddenly, you have this problem on your hands that you didn't realize before. 
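The arrow-drawing model Joël describes maps directly onto a tiny data structure. A minimal sketch in plain Ruby (the package names are just illustrative), representing each entity as a key and the things it depends on as its values:

```ruby
# An edge from key to value means "key depends on value":
# app -> rails, rails -> activerecord and activesupport, etc.
DEPENDENCIES = {
  "app"           => ["rails"],
  "rails"         => ["activerecord", "activesupport"],
  "activerecord"  => ["activesupport"],
  "activesupport" => []
}

# Walk the graph to collect everything a node ultimately depends
# on, following the arrows transitively and skipping anything
# already visited (which is what makes webs, not just trees, safe).
def all_dependencies(node, graph = DEPENDENCIES, seen = [])
  graph.fetch(node, []).each do |dep|
    next if seen.include?(dep)
    seen << dep
    all_dependencies(dep, graph, seen)
  end
  seen
end

all_dependencies("app") # => ["rails", "activerecord", "activesupport"]
```

This transitive walk is essentially the question a security upgrade forces you to answer: which packages, directly or indirectly, does this thing reach?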
And I know that that has been more of a common discussion now in terms of security practices, just being more aware of all the things that you are depending on as really our work becomes more and more interconnected with the things available to us with open source. JOËL: I think where understanding the graph-like nature of this becomes really important is when you're doing something like an upgrade. So, let's say you do have a gem that has a security problem, and you want to upgrade it to fix that security issue. But the upgrade that includes the security patch is also a breaking upgrade. And so, now everything else in your system that depends on that gem or on that package is going to break unless you have them in a version that is compatible with the new version of that gem. And so, you might have to then go downstream and upgrade those packages in a way that's compatible with your app before you can bring in the security patch. And a lot of that can be done automatically by Bundler. Bundler is software that is built around navigating dependency graphs like that and finding versions that are compatible with each other. But sometimes, your code will need to change in order to upgrade one of these downstream gems so that you can then pull in the upgrade from the gem that needs a security patch. And so, understanding a little bit of that graph is going to be important to safely upgrading that gem. STEPHANIE: So, I know another application of dependency graphs that you have thought about and written a blog post for is RSpec let declarations and how a lot of the time when we are using let, you know, we are likely calling other variables defined by let. And so, when you are encountering a test file, it can be really hard to grok what data is being set up in your test. JOËL: Yeah, so that is really interesting because you can define something that will get executed in a lazy fashion if it gets referenced. 
But then not only is the let lazy and will not trigger unless it's referenced, but a let can reference other lets, which are also lazy, and only get triggered if they get referenced. So, you might have a bunch of lets defined in any order you want throughout a file, and they're all kind of interconnected with these references to each other. But they only get triggered if something calls it directly or it's in this, like, chain of dependencies. And getting a grasp on what actually gets created, which lets will actually execute, which ones don't in a file can quickly get out of hand. And so, thinking of this in terms of a dependency graph has been a really helpful mental model for me to understand what's going on in a complex test file. STEPHANIE: Yeah, absolutely. Especially when sometimes the lets are coming from all over the place, you know, maybe a describe block hundreds of lines away, or even a completely different file if you are using a shared context that's being pulled in. So, I can see why this was a complex problem that could be made a little simpler with plotting out a dependency graph. And in preparation for this episode, I was doing a little bit of my own exploration on this because I certainly know, you know, the pain of trying to figure out what is being executed in my tests when there are a lot of lets that reference each other. And in the blog post, you kind of gave a little step-by-step of how you could start with creating a dependency graph for the test that you're working with. And I was really curious if this process could be automated because, you know, I do enjoy, you know, pulling out the pen and paper [chuckles] every now and then. But I'm not, like, a particularly visual person. God forbid I, like, draw a circle, but then, like, don't have enough space for the rest of the circles. 
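As an aside, the lazy, chained behavior Joël describes can be sketched in a few lines of plain Ruby. This is only an illustration of the idea, not RSpec's actual implementation, and all the names here are made up:

```ruby
# A minimal sketch of lazy `let`-style definitions: a block only runs when
# referenced, it may reference other lazy definitions, and unreferenced
# definitions never execute at all.
class LazyLets
  def initialize
    @blocks = {}
    @values = {}
    @evaluated = [] # records which definitions actually ran, in order
  end

  # Register a named lazy definition, like RSpec's `let(:name) { ... }`.
  def let(name, &block)
    @blocks[name] = block
  end

  # Referencing a name triggers its block (once); the block may reference others.
  def fetch(name)
    return @values[name] if @values.key?(name)
    @evaluated << name
    @values[name] = instance_exec(&@blocks.fetch(name))
  end

  # Allow blocks to reference other lets by bare name.
  def method_missing(name, *args)
    @blocks.key?(name) ? fetch(name) : super
  end

  def respond_to_missing?(name, include_private = false)
    @blocks.key?(name) || super
  end

  attr_reader :evaluated
end

lets = LazyLets.new
lets.let(:user)    { { name: "Stephanie" } }
lets.let(:account) { { owner: user } }      # depends on :user
lets.let(:unused)  { raise "never runs" }   # never referenced, so never raises

lets.fetch(:account)
lets.evaluated # => [:account, :user] -- :unused was never triggered
```

Referencing `:account` pulls in `:user` transitively, while `:unused` sits inert, which is exactly the property that makes a complex spec file hard to read at a glance.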
[laughs] So, I was really hoping for a tool that could do this for me, especially if, you know, you do, you have a lot of tests that you have to try to understand in a relatively short amount of time. And so, I ended up doing something kind of hacky with RSpec and overriding let definitions to automate this process. JOËL: That's really cool. So, is the tool that you're trying to build something where you feed it in a spec file, and it gives you some kind of graphical representation like an SVG or something as output? STEPHANIE: Yeah. I did consider that approach first, where you feed in the file, but then I ended up going with something more dynamic where you are running the test, and then as it gets executed, tracing the let definitions and then registering them to build your dependency graph. JOËL: So, you've got some sort of internal modeling that describes a dependency graph. And then, somehow, you're going to turn that, you know, a series of Ruby objects into some kind of visual. STEPHANIE: Yeah, exactly. And the bulk of that work was actually done with a library called RGL, which stands for just Ruby Graph Library. [laughs] And what's nice is that it has a really easy interface for plugging in the vertices and edges of the dependency graph that you want to build. And then, it is already hooked up with Graphviz to, you know, write the SVG to a file. And so, I ended up really just having to build up an array of my dependencies and the connections to each other and then feed it into the constructor of the graph. JOËL: And for all of our listeners, you mentioned Graphviz. That is a third-party tool that can be installed on your machine that can generate these SVG diagrams from...I believe it has its own sort of syntax. So, you create, I believe it's dot, D-O-T, so a .dot file. And based off of that, it generates all sorts of things, but SVG being potentially one of them. STEPHANIE: Yeah.
The nice thing was that I actually didn't end up having to use the DSL of Graphviz because the RGL gem was doing that for me. JOËL: Nice. So, it plugs in directly. STEPHANIE: Yeah, exactly. And I was really curious about using this gem because I, you know, just wanted to write Ruby, especially to plug into other things that are already in Ruby. And I found that surprisingly easy, thanks to all of the RSpec config options that they make available to you, including an option to extend the example group class, which is actually where let and let bang are defined. And so, I ended up overriding those methods and using, you know, the name of the let that you're defining and then the block to basically register the dependencies. And I also ended up exploring a little bit with using Ruby's built-in parser to figure out, in the block that's being passed to the let, what parts of that block could potentially be a reference to another let. JOËL: That's really cool. Did you get any fun results from that? STEPHANIE: I did. It worked pretty well in being able to capture all of the let declarations and the other lets they reference. And so, I was able to successfully, you know, like, generate a visual dependency graph of all of the lets, so that was really neat. The part that I was really kind of excited about trying next, though I didn't end up having time to yet, was figuring out which of those let values get executed, whether by way of the eager let bang, right? Or by being referenced in the test, which then gets executed as well. And so, the RGL library is pretty neat and has some formatting options, too, with the Graphviz output. So, you can change the font color or styling options for different, you know, nodes and edges. And so, I was really curious to pursue this further, maybe, and use it to show exactly what gets evaluated now that I have successfully mapped my let graph. JOËL: Right.
Because the whole point of this exercise is that not the entire graph is going to get evaluated. The underlying question is, what data actually gets created when my test runs? And so, you build out this whole dependency graph, and then you can follow a few simple rules to say, okay, this branch gets called, this branch gets called, this series of things gets called. And okay, this subset of let blocks trigger, and therefore this data has been created for my given test. STEPHANIE: Yeah. Though I will say that even as far as I got, just seeing all of the let definitions in a spec file was really helpful to have a better understanding, you know, if I do have to add a test in here, and I'm thinking about reaching for a pre-existing let declaration, to be like, oh, like, it actually, you know, goes on to reference all of these other things, maybe factories [chuckles] that get created, which might make me, you know, think twice, or just have a little better understanding of what I'm really dealing with. JOËL: Right. The idea that when you're calling out to a let, or a factory, or something else that's just a node in a large graph, you're not necessarily referencing just one thing. You might actually be referencing the head of a very long chain of things, and maybe you don't intend to trigger the whole thing. STEPHANIE: Yeah, exactly. JOËL: So, in that sense, having a sort of visual or at least an idea of the graph can give you a much better sense of the cost of certain operations that you might have to do. STEPHANIE: The cost of the operations certainly, especially when, you know, you are working in a legacy codebase, and you, you know, like, maybe don't know how everything plays together or is connected. And it's very tempting to just reach for [chuckles] the things that have been, you know, created or built for you. And I'm certainly guilty of that sometimes on this client project, where the domain is so complex, and there are so many associated models.
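The "which lets actually trigger" question Joël raises is a reachability problem on the graph. A minimal sketch, assuming the dependencies have already been extracted into a hash (the let names here are invented for illustration):

```ruby
# Given a let dependency graph (name => names it references), compute
# everything that actually executes, starting from the names the test body
# references (plus any eager `let!`s, which would just be extra roots).
DEPS = {
  user:    [],
  account: [:user],
  invoice: [:account, :user],
  admin:   [],            # defined but never referenced by this test
}

def evaluated_lets(deps, roots)
  seen = []
  visit = lambda do |name|
    next if seen.include?(name)
    seen << name
    deps.fetch(name, []).each { |ref| visit.call(ref) }
  end
  roots.each { |root| visit.call(root) }
  seen
end

# The test references :invoice; :admin never runs even though it's defined.
evaluated_lets(DEPS, [:invoice]) # => [:invoice, :account, :user]
```

This is the same traversal whether the nodes are lets, factories, or gems: start from what's referenced and walk the edges.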
And I'm like, well, like, let me just, you know, use this let that already, you know, has a factory set up for what I think I need for this test. But then realizing, oh, actually, like, it is creating all these things, and do I really need them? I think it can be really challenging to unravel all of that in your head. And so, with this very scrappy tool that I [chuckles] built for my own purposes, you know, maybe it makes it, like, one step easier to try to fully understand what I'm working with and maybe do something different. JOËL: One aspect that I think is really powerful about dependency graphs is that it takes this kind of, like, abstract concept that we oftentimes have an intuitive sense around, the idea that we have different components that depend on each other, and it shows it to us visually on, like, a 2D plane. And that can be really helpful to get an understanding or an overview of a system. You mentioned that RGL uses Graphviz to generate some SVGs. A visual tool that I've been using to draw some of my dependency graphs has been mermaid.js. It has a syntax that's, like, a text-based syntax, but it's almost visual in that you have a piece of text and name of a node. And then, you'll draw a little ASCII arrow, you know, two dashes and a greater than sign to say this thing depends on, and then write another name, and just have a row, like, a bunch of entries to say; A depends on B. A also depends on C. C depends on D, and so on, and, like, build up that list. And then Mermaid will just generate that diagram for you. STEPHANIE: Yeah. I've used Mermaid a few times. One really helpful use that I had for it was diagramming out a bunch of React components that I had and wanting to understand the connections between them. And I think you can even paste the Mermaid syntax into your GitHub pull request description, and it'll render as the graph image. 
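The Mermaid text Joël describes is simple enough that you can generate it from a hash of edges; a small hypothetical sketch (the dependency names are made up):

```ruby
# Emit Mermaid flowchart syntax ("graph TD" plus one "A --> B" line per edge)
# from a hash mapping each node to the nodes it depends on.
def to_mermaid(deps)
  lines = ["graph TD"]
  deps.each do |node, targets|
    targets.each { |t| lines << "  #{node} --> #{t}" }
  end
  lines.join("\n")
end

puts to_mermaid(app: [:rails], rails: [:activerecord, :activesupport])
# graph TD
#   app --> rails
#   rails --> activerecord
#   rails --> activesupport
```

Pasting that output into a Markdown code fence tagged `mermaid` is what GitHub and Obsidian render as a diagram.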
JOËL: Yeah, that's what's really cool is that Mermaid syntax has become embedded in a lot of other places in the past few years. So, it's really easy to embed graphs now into all sorts of things. You mentioned GitHub. It works in pull request descriptions, comments, I think pretty much anywhere that Markdown is accepted. So, you could put one in your README if you wanted. Another place that I use a lot, Obsidian, my note-taking tool, allows me to embed graphs directly in there, which is really much nicer than before; previously, when I wanted to express something as a visual, I would use some sort of drawing tool, export an image, and then embed that in my note. But now I can just put in this text, and it will automatically render that as a diagram. And part of what's really nice about that is that then it's really easy for me to go and change that if I'm like, oh, but actually, I want to add one more connection in here. I don't have to go back to, hopefully, a file that I've saved somewhere and, like, change the image file and re-export it. I just, you know, I add one line of text to my note, and it just works. STEPHANIE: That's awesome. Yeah, the ability to change it seems really useful. So, we've talked a little bit about tools for creating a visual aid for understanding our dependencies. And now that we have our graph, maybe we might have some concerning observations about what we see, especially when perhaps some of our dependencies are pointing back to each other. JOËL: Yes. So, I think you're referencing cycles, in particular. That would be the formal term for it. And those are really interesting. They happen in dependency graphs. And I would say, in many cases, they can be a bit of a smell. There's definitely situations where they're fine. But there are things that you look at, and you're like, okay, this is going to be a more complex kind of tricky bit of the graph to work with. In some cases, you just straight up can't have them.
So, I want to say that the way RSpec lets are set up, you cannot write code that produces cycles. But you might have...I think Ruby allows classes to reference each other in such a way that it creates a cycle, and not all languages do that. So, Elm and F#, I believe, require that modules cannot circularly reference each other. The fancy term for this is a directed acyclic graph, or DAG, which basically just means that there are no cycles in that graph. STEPHANIE: Yeah. What you said about classes referencing each other is very interesting because I've definitely seen that. And then, if I have to go about changing something, maybe even it's just the class name, right? Now there's no way in which I can really make just one change. I have to kind of do it all in one go. JOËL: I think that's a common property of a cycle in a graph: changes that happen somewhere in that cycle often need to be all shipped together as one piece. You can't break it up into smaller chunks because everything depends on everything else. So, it has to be kind of boxed together and shipped as one thing. STEPHANIE: And you'd mentioned that cycles, you know, can be a bit of a code smell. And if the goal is to be able to break it up so that it is a little bit more manageable to work with, how would you go about breaking a cycle? JOËL: So, I think breaking a cycle is going to vary a little bit based on your problem domain. So, are you modeling a series of classes that are referencing each other? Is this a function call graph? Is this even, like, a series of tasks that you're trying to do? But typically, what you want to do is make sure that eventually, at some point, like, something doesn't loop back to referencing something higher up in your hierarchy. And so, oftentimes, it ends up being about what is allowed to know about what? Do you have higher-level concepts that can know and depend on lower-level concepts but not vice versa?
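Ruby's standard library actually ships with graph machinery for exactly this: TSort raises TSort::Cyclic when it hits a cycle, so it doubles as a cycle detector. A minimal sketch (the graphs here are invented):

```ruby
require "tsort"

# Wrap a dependency hash in TSort's required interface: how to enumerate
# nodes and how to enumerate each node's children (its dependencies).
class DepGraph
  include TSort

  def initialize(deps)
    @deps = deps
  end

  def tsort_each_node(&block)
    @deps.each_key(&block)
  end

  def tsort_each_child(node, &block)
    @deps.fetch(node, []).each(&block)
  end
end

acyclic = DepGraph.new(a: [:b], b: [:c], c: [])
acyclic.tsort # => [:c, :b, :a], dependencies first

cyclic = DepGraph.new(a: [:b], b: [:a])
begin
  cyclic.tsort
rescue TSort::Cyclic
  # a -> b -> a cannot be ordered, which is the formal version of
  # "everything in the cycle has to ship together"
end
```

This is the same machinery Bundler-style tools build on when they need a safe order to resolve things in.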
And again, we are talking about this a little bit at the abstract level. But in terms of, let's say, different code modules, or classes, or something like that, commonly, you might say, well, we want some sort of layering where we have almost, like, more primitive types of classes at the bottom. And they don't get to know about anything above them. But the ones above that might be more complex that are composed of smaller pieces know about the ones below them. And you might have multiple layers kind of like that that all kind of point down, but nothing points up. STEPHANIE: That is a very common heuristic. [chuckles] I think you were basically just describing how I also understand creating React components, where you want to separate your presentational ones from your functional ones. And, yeah, it makes a lot of sense that as soon as you start adding that complexity of, you know, those primitive classes at the bottom, starting to, you know, point to things higher up or to know about things higher up, that is where a cycle may be accidentally introduced. JOËL: It's interesting just how many design principles that we have in software. If you dig into them a little bit, you find out that they're about decoupling things, and oftentimes, it's specifically breaking up cycles. So, one way that you might have something like this that actually has dependency in the name, the dependency inversion principle, where what you're effectively doing is you're taking one of those dependency arrows, and you're flipping it the other way. So, instead of A depending on B, you're flipping it. Now B depends on A, and that can be enough to break a cycle. STEPHANIE: So, one thing I've picked up from our conversations about dependency graphs is that oftentimes, you know, when you're trying to figure out where to start, you want to look for those areas or those nodes where there's nothing else that depends on it. JOËL: Yeah. 
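A tiny plain-Ruby illustration of the dependency inversion Joël mentions, flipping the arrow so the high-level class no longer points at a concrete low-level one (all class names here are made up):

```ruby
# Before inversion, Report would instantiate a concrete writer itself
# (Report -> PdfWriter). After inversion, Report depends only on an
# interface it defines (#write), and concrete writers point at that.
class Report
  def initialize(writer)
    @writer = writer # any object responding to #write
  end

  def publish(text)
    @writer.write("REPORT: #{text}")
  end
end

# One concrete writer; a PdfWriter, LogWriter, etc. could be swapped in.
class MemoryWriter
  attr_reader :output

  def write(text)
    @output = text
  end
end

writer = MemoryWriter.new
Report.new(writer).publish("Q3 numbers")
writer.output # => "REPORT: Q3 numbers"
```

The arrow from Report to a concrete writer is gone, which is often enough to break a cycle between layers.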
I think you have those nodes that, if this were a tree, you would call them the leaf nodes. In the case of a graph, I'm not sure if that's technically correct, but they don't depend on anything. They're kind of your base case. And so, you can, you know, if it's a function, you can run it. If it's a file, you can load it; if it's a class, also you can load it up and not have to do anything else because it has no dependencies. And knowing that those are there, I think, can be really useful in terms of knowing an order you might want to execute something in. And this is really interesting for one of my favorite uses of a graph, which is breaking down a series of tasks that you need to do. So, commonly, you might say, okay, I have a large task I need to do. I break it down into a series of subtasks. And, you know, maybe I draw out, like, a bulleted list and, you know, task 1, 2, 3, 4, 5. The problem is that they're not necessarily just a flat list. They all have, like, an order, like dependencies between each other. So, maybe task 1 has to happen before task 2, but it also has to happen before task 3, and, like, there's all these interconnections. And then, you find out that you can't ship them independently the way you thought initially. So, by building up a graph, you end up with something that shows you exactly what depends on what. And then, like you said, the parts that are really interesting where you can start doing work are the ones that have no dependencies themselves. Other things might depend on them, but they have no dependencies. Therefore, they can be safely built, shipped, deployed to production, and they can be done independently of the other subtasks. STEPHANIE: Yeah. I was also thinking about things that could be done in parallel as well. So, if you do have multiple of those items with no dependencies, like, that is a really good way to be able to break up that work and, yeah, identify things that are not blocked.
JOËL: For a complex set of tasks, it's great to see, okay, these two pieces have no dependencies. We can have them be done in parallel, shipped independently. And then you can just kind of keep repeating that process. Because once all of the tasks that have no dependencies have been done, well, you can almost, like, remove them from the graph and see, okay, what's the new set of things that have no dependencies? And then, keep doing that until you've eventually done the whole graph. And that may sound like, oh okay, we're just kind of using a little bit of intuition and working through the graph. It turns out that this is a, like, actual, like, formal thing. When it comes to graphs, it's a traversal algorithm; topological sort is the fancy name for it, and it basically, yeah, goes through that process. It gives you a list of nodes in order where each node that you're given has no dependencies that have not been evaluated yet. So, it works, effectively, to use our tree terminology, from the leaf nodes to the root, potentially roots plural, of the graph, and each step is independent. So that's a lot of, like, fancy terminology, and getting a little bit of, like, computer science graph theory into here. So, my, like, general heuristic is that graphs should be evaluated from the bottom up when you're trying to evaluate each piece independently. So, when you do that, you get to do each piece independently, as opposed to if you're evaluating from the top down. So, starting from the one thing that depends on everything else, well, it can't be shipped until all of its dependencies have been shipped. And all the transitive dependencies can't be shipped until their dependencies have been shipped. And so, you end up being not able to ship anything until you've built the entire graph. And that's when you end up with, you know, a 2,000-line PR that took you multiple weeks and might be buggy. And it's going to take a long time to review.
And it's just not what anybody wants. STEPHANIE: I'm glad you brought this up because I think this is where I am really curious to get better at because oftentimes, when I am breaking down a complex task, it's quite hard for me to see all of the steps that need to happen. And so, you know, you maybe start out with that, like, top-level node, like, the task that needs to be done as you understand it immediately. And it's really hard to actually identify the dependencies and, like, the smaller pieces along the way. And because you're not able to identify that, you think that you do have to just do it all in one go. JOËL: Yeah, that sort of root node is typically the overarching task, the goal of what you want to do. And a common, I think, scenario for something like this would be, let's say, you're doing a Rails upgrade. And so, that root node is upgrade Rails. And a common thing that you might want to do is say, okay, let's go to the Gemfile, upgrade Rails, see what breaks, and then just keep fixing those things. That's working from the top down. And you're going to be in a long-running branch, and you're going to keep fixing things, fixing things, fixing things until you have found all the things and done all the things. And then you do a big bang upgrade that may have taken you weeks. As opposed to if you're working from the bottom up, you try to figure out, okay, what are all the subtasks? And that might take some exploration. You might not know upfront. But then you might say, okay, here, I can upgrade RSpec, which is a dependency, or I need to change the interface of this class, and ship all these pieces one at a time. And then, the final step is flipping that upgrade in the Gemfile, saying, okay, now I've upgraded Rails from 4 to 5, or whatever the version is that you're trying to do.
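That bottom-up process can be sketched as repeated "waves" of dependency-free tasks, which is essentially Kahn's algorithm for topological sorting. The task names below are invented, loosely echoing the Rails upgrade example:

```ruby
# Repeatedly take every task with no unfinished dependencies (those can be
# shipped in parallel), remove them from the graph, and repeat until done.
def shipping_waves(deps)
  remaining = deps.transform_values(&:dup)
  waves = []
  until remaining.empty?
    ready = remaining.select { |_task, ds| ds.empty? }.keys
    raise "cycle detected" if ready.empty?
    waves << ready
    remaining = remaining.reject { |task, _| ready.include?(task) }
    remaining.each_value { |ds| ds.reject! { |d| ready.include?(d) } }
  end
  waves
end

tasks = {
  upgrade_rails: [:upgrade_rspec, :fix_interface],
  upgrade_rspec: [],
  fix_interface: [:extract_class],
  extract_class: [],
}
shipping_waves(tasks)
# => [[:upgrade_rspec, :extract_class], [:fix_interface], [:upgrade_rails]]
```

Each inner array is a set of tasks that can ship independently and in parallel; the root task, the Rails upgrade itself, falls out as the final wave.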
STEPHANIE: I think you've really hit the nail on the head when it comes to trying to do something but not knowing what subtasks it may be composed of and getting into that problem of, you know, having not broken it down, like, enough to really see all the dependencies. And, you know, maybe this is a conversation [chuckles] for another episode, but the skill of breaking up those tasks and exploring what those dependencies are, and being able to figure them out upfront before you start to just do that upgrade and then see what happens, that's definitely an area that I want to keep investing in. And I'm sure other people would be really curious about it, too, to help them make their jobs easier. JOËL: I think one tip that I've learned that's really fun and that connects into all of this is sometimes you do end up with a cycle in your dependencies of tasks. A technique for breaking that up is a pattern that I have pitched multiple times on the show: the strangler fig pattern. And part of why it's so powerful is that it allows you to work incrementally by breaking up some of these cycles in your dependency graph. And one of the lessons that I've learned from that is that just because you have sort of an initial set of subtasks and you have a graph of them doesn't mean that you can't change them. If you're following strangler fig, what you're actually doing is introducing one or more new subtasks to that graph. But the way you introduce them breaks up that cycle. So, you can always add new tasks or split up existing ones as you get a better understanding of the work you need to do. It's not something that is fixed or set in stone upfront. STEPHANIE: Yeah, that's a really great tip. I think next time, what I really want to explore, you know, your heuristic of going from bottom up, yeah, sure, it sounds all fine and dandy. But how to get to a point where you're able to see everything at the bottom, right?
And, like, when you are tasked, or you do start with the thing at the top, like, the end goal. Yeah, I'm sure that's something we'll explore [chuckles] another day. JOËL: On that note, shall we wrap up? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeee!!!!!! ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.