POPULARITY
Cloud native patterns and open source developments were on display at the KubeCon + CloudNativeCon Europe conference. The biannual gathering was showing how the container ecosystem continues to mature and analysts Jean Atelsek and William Fellows join host Eric Hanselman to explore their insights. The Cloud Native Computing Foundation (CNCF), part of the Linux Foundation, continues to expand the event and advance the maturity of the open source projects that are part of its purview. Day 2 operations have been gaining focus and the pre-conference FinOps X event was an indication of the emphasis on operational controls as it digs into infrastructure cost management. The opening “Day 0” events at KubeCon, which have been the forum for specialized project meetings, have become a key part of the conference, with over 6,000 attendees, almost half of the reported 13,000 total. The Kubernetes container management project is now over ten years old and one of the other signs of technology evolution was the integration of the OpenInfra Foundation, which managed the OpenStack project and other infrastructure elements, into the Linux Foundation. Open source projects are gaining wider adoption and one of the messages from projects and vendors at KubeCon, was the hope that it could offer alternatives to enterprise infrastructure stalwart, VMware. The CNCF is expanding its investments in improving security across the projects under its umbrella. There was also continued development of platform engineering initiatives. Bounding the expanding world of open source projects to create consistent development and operational tool chains for enterprise is one more sign of maturity in the container world. More S&P Global Content: AWS, Microsoft Azure and Google Cloud enter the FinOps vortex For S&P Global subscribers: Kubernetes meets the AI moment in Europe with technology, security, investment Data management, GenAI, hybrid cloud are top Kubernetes workloads – Highlights from VotE: DevOps Kubernetes ecosystem tackles new technical and market challenges Kubernetes, serverless adoption evolve with cloud-native maturity – Highlights from VotE: DevOps Credits: Host/Author: Eric Hanselman Guests: Jean Atelsek, William Fellows Producer/Editor: Adam Kovalsky Published With Assistance From: Sophie Carr, Feranmi Adeoshun, Kyra Smith
In this episode of I Want That Too, Jim Hill and Lauren Hersey explore the colorful world of theme park merchandise—from Epcot's Flower & Garden Festival to Universal's Epic Universe rollout—and spotlight a pastel plush lineup that's taking over Japan (and maybe your shelf next). Figment's Flattening at Flower & Garden – Lauren shares her thoughts on Disney's 3D-printed topiaries, the orange spirit jersey, and whether Figment is looking a little… squished this year. Captain Cacao and Universal Merch Madness – Jim breaks down Universal's plans to “Adopt a Dragon,” upgrade wands with haptic tech, and introduce new mascots, including a butter beer-loving bear. Cherry Blossom Duffy and the Pastel Plush Invasion – From Baymax in bloom to sleepy Marie, Jim and Lauren review Japan's newest spring merch—and speculate if Duffy's soft-spoken friends will ever click stateside. Bounding for the Gala – Lauren previews her Disneybound look for the Epcot 40th Anniversary Gala, complete with Cinderella blue and glass slipper energy. Flower & Garden Tips – A rundown of the Egg-stravaganza scavenger hunt, the collectible prize system, and why it's one of the best festival activities for families on a budget. This episode is filled with plush previews, theme park strategies, and the delightful chaos of spring merch season. Be Our Guest Vacations Planning your next Disney vacation? Be Our Guest Vacations is a Platinum-level Earmarked travel agency with concierge-level service to make every trip magical. Their team of expert agents plans vacations across the globe, from Disney and Universal to cruises and adventures, ensuring you have the best possible experience without the stress. Learn More Learn more about your ad choices. Visit megaphone.fm/adchoices
There's just something wonderful about accomplishing a goal, especially one that you've been working on for years! Add in that you accomplished it with great friends and it's truly magical. Dinglehoppers, we did it! We did the thing! And now you get to hear about it... (Oh, and we Disney bounded as Marvel characters too.)Check out our enchanting extras:https://linktr.ee/WonderfulThingAboutDisney
March 12, 2025: From the floor of HIMSS 2025 in Vegas Colin Banas, CMO of DrFirst, and Thomas Wells, Medical director of Piedmont, explore the evolving landscape of healthcare technology. How might AI transform physician-patient relationships rather than diminishing them? What would need to change for true interoperability to become reality instead of remaining an endless talking point? The physicians discuss Piedmont's vast Georgia network and their innovative approaches to telehealth, virtual specialists, and the pressing need to address behavioral health through technology. Key Points:03:52 Interoperability and EHR Systems05:55 AI in Medication Management10:01 Telemedicine and Behavioral Health11:24 Closing Thoughts and Fun QuestionSubscribe: This Week HealthTwitter: This Week HealthLinkedIn: This Week HealthDonate: Alex's Lemonade Stand: Foundation for Childhood Cancer
Chris & Angie discuss the latest updates on Bengal tigers, highlighting the successful conservation efforts in India that have nearly doubled the tiger population over the past 20 years. The podcast explores the critical ecological role of tigers, the cultural significance in Asia, and the impact of conservation projects on local communities and economies. Together, we emphasize the importance of continuing efforts and global collaboration to protect these majestic big cats and their habitats. Finally, we touch on the role of zoos in preserving genetic diversity and share insights from zookeeper John about working with tigers. The conclusion highlights the positive conservation news, potential challenges, and actionable steps individuals can take to support tiger conservation. We also mention the revamp of our website and merchandise store! We encourage you to utilize the features to learn more about your favorite species, and we also provide insights into other species, many of which remain endangered. Check it out HERE Podcast Timeline 00:00 Welcome and Website Updates 01:39 Exciting Bengal Tiger News 03:37 Conservation Efforts and Challenges 06:37 Bengal Tiger Characteristics 08:40 Global Tiger Populations 14:08 Ecological Importance of Tigers 18:30 Conservation Success Stories 25:36 Tiger Subspecies Overview 33:13 Zookeeper Insights on Tigers 36:54 The Importance of Zoos in Conservation 38:32 Tiger Personalities and Subspecies 43:01 Tiger Communication and Vocalizations 47:12 Tiger Hunting Skills and Behavior 01:03:04 Reproduction and Raising Cubs 01:09:39 Conservation Efforts and Organizations ------------------------------------------------------------- Another thank you to all our Patreon supporters. You too can join for one cup of "good" coffee a month. With your pledge you can support your favorite podcast on Patreon and give back to conservation. With the funds we receive each month, we are have been sending money to conservation organizations monthly. We now send a check to every organization we cover, as we feel they all are deserving of our support. Thank you so much for your support and for supporting animal conservation. Please considering supporting us at Patreon HERE. We also want to thank you to all our listeners. We are giving back to every conservation organization we cover and you make that possible. We are committed to donating large portions of our revenue (at minimum 25%) to every organization we cover each week. Thank you for helping us to grow, and for helping to conserve our wildlife. Please contact us at advertising@airwavemedia.com if you would like to advertise on our podcast You can also visit our website HERE. Learn more about your ad choices. Visit megaphone.fm/adchoices
Elvira sashays her way onto the Amiga in her first platformer outing!
Elvira sashays her way onto the Amiga in her first platformer outing!
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can a Bayesian Oracle Prevent Harm from an Agent? (Bengio et al. 2024), published by Matt MacDermott on September 1, 2024 on The AI Alignment Forum. Yoshua Bengio wrote a blogpost about a new AI safety paper by him, various collaborators, and me. I've pasted the text below, but first here are a few comments from me aimed at an AF/LW audience. The paper is basically maths plus some toy experiments. It assumes access to a Bayesian oracle that can infer a posterior over hypotheses given data, and can also estimate probabilities for some negative outcome ("harm"). It proposes some conservative decision rules one could use to reject actions proposed by an agent, and proves probabilistic bounds on their performance under appropriate assumptions. I expect the median reaction in these parts to be something like: ok, I'm sure there are various conservative decision rules you could apply using a Bayesian oracle, but isn't obtaining a Bayesian oracle the hard part here? Doesn't that involve advances in Bayesian machine learning, and also probably solving ELK to get the harm estimates? My answer to that is: yes, I think so. I think Yoshua does too, and that that's the centre of his research agenda. Probably the main interest of this paper to people here is to provide an update on Yoshua's research plans. In particular it gives some more context on what the "guaranteed safe AI" part of his approach might look like -- design your system to do explicit Bayesian inference, and make an argument that the system is safe based on probabilistic guarantees about the behaviour of a Bayesian inference machine. This is in contrast to more hardcore approaches that want to do formal verification by model-checking. You should probably think of the ambition here as more like "a safety case involving proofs" than "a formal proof of safety". Bounding the probability of harm from an AI to create a guardrail Published 29 August 2024 by yoshuabengio As we move towards more powerful AI, it becomes urgent to better understand the risks, ideally in a mathematically rigorous and quantifiable way, and use that knowledge to mitigate them. Is there a way to design powerful AI systems based on machine learning methods that would satisfy probabilistic safety guarantees, i.e., would be provably unlikely to take a harmful action? Current AI safety evaluations and benchmarks test the AI for cases where it may behave badly, e.g., by providing answers that could yield dangerous misuse. That is useful and should be legally required with flexible regulation, but is not sufficient. These tests only tell us one side of the story: If they detect bad behavior, a flag is raised and we know that something must be done to mitigate the risks. However, if they do not raise such a red flag, we may still have a dangerous AI in our hands, especially since the testing conditions might be different from the deployment setting, and attackers (or an out-of-control AI) may be creative in ways that the tests did not consider. Most concerningly, AI systems could simply recognize they are being tested and have a temporary incentive to behave appropriately while being tested. Part of the problem is that such tests are spot checks. They are trying to evaluate the risk associated with the AI in general by testing it on special cases. Another option would be to evaluate the risk on a case-by-case basis and reject queries or answers that are considered to potentially violate or safety specification. With the long-term goal of obtaining a probabilistic guarantee that would apply in every context, we thus consider in this new paper (see reference and co-authors below) the objective of estimating a context-dependent upper bound on the probability of violating a given safety specification. Such a risk evaluation would need to be performed at ru...
I'm always on the lookout for people to ask to come on the podcast and bringing new topics to the podcast. I'm excited that Corazon aka VintageFilAmGlam on Instagram said yes, and now you get to listen and learn from her! We kick things off with one of the long running fandoms, Dr. Who. Which I still have yet to watch. Corazon talks about how she got into the fandom, some of her favorite Doctors, and we talk about who she'd love to see play the Doctor in the future! From there, we bounce over to Disney. We talk about Corazon's history with Disney from childhood to present. We delve into some of the products under the Disney umbrella, like Marvel, Star Wars, and Pixar. It's a wide ranging conversation about the House of Mouse. Then Corazon leads a discussion about Asian Representation in Media. We talk about some of the representation of recent, what more needs to be done, and more. I'm grateful for the discussion and education. Lastly, we wrap up with a triple threat of fashion. We're talking vintage fashion, bounding, and cosplay. Corazon talks about how she got into all three, what she looks for in vintage fashion, how she bounds, and the characters she represents! You can find Corazon at: https://www.instagram.com/vintagefilamglam/ https://www.tiktok.com/@vintagefilamglam https://www.facebook.com/vintagefilamglam
I'm excited to have Nesreen aka thenessdiaries_ on Instagram join me for this latest episode, after having followed her account for awhile. Nesreen brings new fandoms and a wonderful conversation. We start with Star Wars. Nesreen talks about how she got into Star Wars from a young age, to rediscovering it later in life. We talk about some of her favorite movies and shows, including Star Wars Rebels. Plus, we of course talk favorite characters, unexpected ones we fell in love with, and the future of the franchise (this was recorded prior to The Acolyte releasing). From there, we talk about a show that has not been talked about on this podcast! We're talking about HBO's True Blood series. Nesreen did a rewatch, inspiring me to rewatch (and finish) the series. We talk about what we loved about the show including the wonderful character, Lafayette. Nesreen answers the Bill vs Eric question, and we talk about the show overall and how it was watching it during the height of vampire popularity. Another new fandom to the podcast is F1 Racing. Nesreen educates me about the sport of F1 racing. She talks about the trials and tribulations of being able to watch it live, and we talk about how she got into the sport and her favorite driver. Lastly, we talk about a combination of content creation. From Nesreen doing Disneybounds and bounding in general, to creating content at Disney, and balancing that side of life with social activism when it comes to Palestine, genocide, and more. This is a very open and honest discussion and I really appreciate and thank Nesreen for providing the education and information that she did on the podcast, and with what she does on her social media. You can find Nesreen at: https://www.instagram.com/thenessdiaries_/ https://www.tiktok.com/@thenessdiaries_ https://linktr.ee/thenessdiaries
In this enchanting episode, ‘Bounding to the Magic,' we delve into the whimsical world of Disney Bounding. Join us as we explore how fans express their love for Disney characters through creative fashion choices, without donning full costumes. We'll share tips on how to craft the perfect Disney-inspired outfit, discuss the latest trends in the Disney Bounding community, and hear from special guests about their magical experiences. Whether you're a seasoned bounder or just curious about this stylish way to show your Disney spirit, this episode is your guide to adding a sprinkle of pixie dust to your everyday wardrobe. Season 2 Episode 0013 show 0042
Cette semaine, avec mon co-host invité Cashboy Reg et son ami Roadman Stacks, on reçoit Sienna Katharios! On en apprend beaucoup sur la transexualité et sur la transphobie. Sienna nous partage les défis qu'elle a rencontrés liés à la dysphorie de genre. On parle de l'importance de la communication pour augmenter les plaisirs intimes. On explore l'univers des kinks. On se demande aussi si le trauma bonding est sain. Cet épisode est présenté par Eros et compagnie. Obtenez 15% de rabais sur votre prochain achat en utilisant le code promo "DAEDS" ou en utilisant le lien suivant: https://www.erosetcompagnie.com/?code=deads Comptoir Plaza Créole! Commandez en salle et obtenez un pâté gratuit avec le code promo: DAEDS: https://www.comptoirplazacreole.ca/ Rejoignez notre Patreon: https://www.patreon.com/damouretdesexe Vous avez des courriers du coeur, des commentaires et des suggestions? Envoyez nous un courriel au damouretdesexepodcast@gmail.com Suivez nous sur Instagram: https://www.instagram.com/damouretdesexeSuivez nous sur Twitter: https://www.twitter.com/DAEDS_podcastSuivez nous sur Tik Tok: https://www.tiktok.com/@damouretdesexe
Episode 3: Extropic is building a new kind of computer – not classical bits, nor quantum qubits, but a secret, more complex third thing. They call it a Thermodynamic Computer, and it might be many orders of magnitude more powerful than even the most powerful supercomputers today. Check out their “litepaper” to learn more: https://www.extropic.ai/future.======(00:00) - Intro(00:41) - Guillaume's Background(02:40) - Trevor's Background(04:02) - What is Extropic Building? High-Level Explanation(07:07) - Frustrations with Quantum Computing and Noise(10:08) - Scaling Digital Computers and Thermal Noise Challenges(13:20) - How Digital Computers Run Sampling Algorithms Inefficiently(17:27) - Limitations of Gaussian Distributions in ML(20:12) - Why GPUs are Good at Deep Learning but Not Sampling(23:05) - Extropic's Approach: Harnessing Noise with Thermodynamic Computers(28:37) - Bounding the Noise: Not Too Noisy, Not Too Pristine(31:10) - How Thermodynamic Computers Work: Inputs, Parameters, Outputs(37:14) - No Quantum Coherence in Thermodynamic Computers(41:37) - Gaining Confidence in the Idea Over Time(44:49) - Using Superconductors and Scaling to Silicon(47:53) - Thermodynamic Computing vs Neuromorphic Computing(50:51) - Disrupting Computing and AI from First Principles(52:52) - Early Applications in Low Data, Probabilistic Domains(54:49) - Vast Potential for New Devices and Algorithms in AI's Early Days(57:22) - Building the Next S-Curve to Extend Moore's Law for AI(59:34) - The Meaning and Purpose Behind Extropic's Mission(01:04:54) - Call for Talented Builders to Join Extropic(01:09:34) - Putting Ideas Out There and Creating Value for the Universe(01:11:35) - Conclusion and Wrap-Up======Links:Christian Keil – https://twitter.com/pronounced_kyleGuillaume Verd - https://twitter.com/GillVerdBeff Jezos - https://twitter.com/BasedBeffJezosTrevor McCourt - https://twitter.com/trevormccrt1First Principles:Gaussian Distribution: https://en.wikipedia.org/wiki/Normal_distributionEnergy-Based Models: https://en.wikipedia.org/wiki/Energy-based_modelShannon's Theorem: https://en.wikipedia.org/wiki/Noisy-channel_coding_theorem======Production and marketing by The Deep View (https://thedeepview.co). For inquiries about sponsoring the podcast, email team@firstprinciples.fm======Checkout the video version here → http://tinyurl.com/4fh497n9
For some reason the guys start out talking about dead farm animals, horses, sheep, cows. Foo has a lame horse story. Gym rode a donkey in Yosemite. Farmers agree on stuff with a handshake and that's cool. Some local areas have a strange mix of new homes and farmland. Gym admits that he is not farm at all and his calling card is the ocean and beach areas. Foo talks about his less than stellar food experience when he went snowboarding. Both guys pine for visiting Disneyland and Foo is interested in Bounding. Plus more!
Episode: 1716 Bounding Billies: The century-long reign of the wound-rubber golf ball. Today, our guest, operations researcher Andrew Boyd, tells about the simplest of toys, the golf ball.
After stumbling across her work from previous guests, I was so excited to have Brittany aka BajanPrincessBrittany on IG join me for this latest episode! We start things off with talking about Bridgerton. Brittany talks about how she first got into the TV series and then the books. We talk about some of her favorite characters and storylines, how she connects to the show, and more. We also segue into a James Bond discussion before circling back to wrap up Bridgerton and its upcoming season. Then we switch over to the Marvel Cinematic Universe. We talk about her first introduction into the MCU, some of her favorite movies, shows, and characters. Plus, Brittany assembles her own Avengers squad. Find out who she chooses! Brittany then takes us to Disney. Not literally though, sorry everyone. Disney is so all encompassing that we do our best to tackle movies, TV shows, and theme parks. Plus, we discuss if Powerline is overrated or underrated. What do you think? We wrap up with a conversation about Bounding, and how I found her work. Brittany got into doing Disneybounding (and other bounds from things like the MCU). We talk about her wardrobe, self shoots, some of the Bound Challenges people post, and some other wonderful creators. You can find Brittany at: https://www.instagram.com/bajanprincessbrittany/ https://linktr.ee/bajanprincessbrittany Get 10% off your order of Woodmarks, Tolkien style maps, and more from In The Reads by using code TALES10 at checkout. Visit them at: https://inthereads.com/
AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion
In this episode of the AI Today podcast hosts Kathleen Walch and Ron Schmelzer define the terms Data, Dataset, Big Data, DIKUW Pyramid, explain how these terms relate to AI and why it's important to know about them. Show Notes: FREE Intro to CPMAI mini course CPMAI Training and Certification AI Glossary AI Glossary Series – DevOps, Machine Learning Operations (ML Ops) AI Glossary Series – Automated Machine Learning (AutoML) AI Glossary Series – Data Preparation, Data Cleaning, Data Splitting, Data Multiplication, Data Transformation AI Glossary Series – Data Augmentation, Data Labeling, Bounding box, Sensor fusion AI Glossary Series – Data, Dataset, Big Data, DIKUW Pyramid Continue reading AI Today Podcast: AI Glossary Series – Data Warehouse, Data Lake, Extract Transform Load (ETL) at Cognilytica.
AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion
In this episode of the AI Today podcast hosts Kathleen Walch and Ron Schmelzer define the terms Data, Dataset, Big Data, DIKUW Pyramid, explain how these terms relate to AI and why it's important to know about them. Show Notes: FREE Intro to CPMAI mini course CPMAI Training and Certification AI Glossary AI Glossary Series – DevOps, Machine Learning Operations (ML Ops) AI Glossary Series – Automated Machine Learning (AutoML) AI Glossary Series – Data Preparation, Data Cleaning, Data Splitting, Data Multiplication, Data Transformation AI Glossary Series – Data Augmentation, Data Labeling, Bounding box, Sensor fusion AI Glossary Series – Data, Dataset, Big Data, DIKUW Pyramid Continue reading AI Today Podcast: AI Glossary Series – Structured Data, Unstructured Data, Semi-structured Data at Cognilytica.
AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion
In this episode of the AI Today podcast hosts Kathleen Walch and Ron Schmelzer define the terms V's of Big Data, Data Volume, Exabyte / Petabyte / Yottabyte / Zettabyte, Data Variety, Data Velocity, Data Veracity, explain how these terms relate to AI and why it's important to know about them. Show Notes: FREE Intro to CPMAI mini course CPMAI Training and Certification AI Glossary AI Glossary Series – DevOps, Machine Learning Operations (ML Ops) AI Glossary Series – Automated Machine Learning (AutoML) AI Glossary Series – Data Preparation, Data Cleaning, Data Splitting, Data Multiplication, Data Transformation AI Glossary Series – Data Augmentation, Data Labeling, Bounding box, Sensor fusion AI Glossary Series – Data, Dataset, Big Data, DIKUW Pyramid Continue reading AI Today Podcast: AI Glossary Series – V's of Big Data, Data Volume, Exabyte / Petabyte / Yottabyte / Zettabyte, Data Variety, Data Velocity, Data Veracity at Cognilytica.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Autonomous replication and adaptation: an attempt at a concrete danger threshold, published by Hjalmar Wijk on August 17, 2023 on The AI Alignment Forum. Note: This is a rough attempt to write down a more concrete threshold at which models might pose significant risks from autonomous replication and adaptation (ARA). It is fairly in the weeds and does not attempt to motivate or contextualize the idea of ARA very much, nor is it developed enough to be an official definition or final word - but it still seemed worth publishing this attempt, to get feedback and input. It's meant to have epistemic status similar to "a talk one might give at a lab meeting" and not "an official ARC Evals publication." It draws heavily on research and thinking done at ARC Evals, (including the recent pilot report), and credit for many of these ideas go to my colleagues. That said, this document speaks for me and not the organization as a whole, and any errors or omissions are my own. I'm especially interested in feedback on whether the suite of tasks is targeting an appropriate threshold of difficulty (neither too early nor too late), and whether this definition of ARA feels useful for pinning down an appropriate task list. I have been exploring the ability of language model agents to acquire resources, create copies of themselves, and adapt to novel challenges they encounter in the wild. In previous work, my colleagues and I have determined that the risks from autonomous behavior of particular existing agents seems low, because they fail on a sufficient variety of simple tasks related to autonomous behavior. This post will go in the other direction and try to sketch a tentative qualitative threshold ('autonomous replication and adaptation') at which significant concerns about autonomous capabilities do seem warranted. While this threshold provides qualitative guidance, it lacks legibility and reproducibility, so evaluation efforts may want to develop legible conservative approximations of ARA through concrete tasks, and I present a sketch of such tasks. The risk: AI systems may soon autonomously: Make money, for example through cybercrime, or freelancing work. Use money (or other resources) to acquire more computing power. Install and maintain more copies of itself or improve the scale and speed of its operation. Overcome obstacles and adapt to changes through research and self-improvement. Either keep itself well hidden such that law enforcement is unlikely to ever discover it, or make itself secure to even somewhat serious attempts to shut it down. Bounding the risks from a system that is acting autonomously beyond human control and improving itself seems very challenging, as it could potentially: Command very significant resources (if it can scale to a fraction of the cybercrime or outsourcing industry that could represent $billions). Have no human moral constraints. Lack many human weaknesses and incentives. Specifically improve its abilities in domains relevant for causing catastrophic harm (such as bioweapons design). It thus seems important to develop an evaluation that could rule out these sorts of threats. The autonomous replication and adaptation (ARA) threshold: Directly evaluating whether a system will, in practice, be able to avoid being shut down or accumulate $billions doesn't seem possible in a lab. The ARA threshold thus evaluates whether the agent can replicate itself under 4 primary conservative assumptions, to make evaluations more tractable (see appendix for slightly more detail): The system only has to do effective replication at small scales (going from one to a few copies), and we assume this scales to many copies. The system only has to be plausibly resilient to active opposition from the world. The system does not have to deal with changes to the world larger than...
Ali aka The Ali Way on social media joins me to talk about his life and love of fandoms in a deeply personal talk. We first start off with talking about the world of Disney and Pixar. Ali talks about some of his favorite movies and characters and what draws him to these two fandoms. Then, we hit up the Marvel Cinematic Universe. We talk about his connection with Kamala Khan / Ms. Marvel, and we touch on a number of the MCU film and TV show properties. Interspersed through all of this is a deeply personal talk and reflection from Ali about his life and journey thus far. Then, we get into his love of dancing. From a young age, to what spurred his desire to dance and get involved in it. We also talk about how he started bounding. From the characters he does, how he chooses his outfits, gets ready, collaborates, and so much more. Honestly, words don't do justice to Ali's episode. You'll need to listen, because there is a sense of just pure joy and heart that comes through. You can find Ali at: https://www.instagram.com/the.ali.way/ https://www.tiktok.com/@the.ali.way Get 10% off your order of Woodmarks, Tolkien style maps, and more from In The Reads by using code TALES10 at checkout. Visit them at: https://inthereads.com/ Disclaimer- this was a commitment agreed upon prior to the strike. We stand with SAG and the WGA. Give them what they deserve!
AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion
Data is critical to making AI and machine learning work. In this episode of the AI Today podcast hosts Kathleen Walch and Ron Schmelzer define the terms Data Augmentation, Data Labeling, Bounding box, Sensor fusion. Data Augmentation are the techniques used to enhance existing data through the use of additional data, manipulations on existing data, or combinations of data in various ways. Continue reading AI Today Podcast: AI Glossary Series – Data Augmentation, Data Labeling, Bounding box, Sensor fusion at Cognilytica.
The CHGO Bulls crew are live for the NBA Draft Lottery! The Chicago Bulls have a 1.8% chance at landing the #1 overall pick in the 2023 NBA Draft. They have an 8.5% chance for their pick to land in the top 4, which would allow them to keep their pick. If it doesn't land inside the top 4, the pick will be transferred to the Orlando Magic, thus completing the trade for Nikola Vucevic. Will Gottlieb will be sequestered in the drawing room, while Matt and Big Dave watch the results unfold from our CHGO studios! UPDATE: The Bulls didn't jump into the top 4. For a few minutes, we thought they did. Bounding jubilation followed by absolute devastation. After seeing the results, Matt and Big Dave talk about the potential impact of the Blazers jumping into the top 4. Can the Bulls get the Knicks pick from the Blazers in the 2023 Draft? Will joins the guys from the Draft Lottery to offer his insights from the experience and the lottery results. Score the best seats in the house at Gametime: https://gametime.hnyj8s.net/c/3442941/1441553/10874 SUBSCRIBE: https://www.youtube.com/c/CHGOSports WEBSITE: http://allCHGO.com/ BUY MERCH: http://CHGOLocker.com FOLLOW ON SOCIAL: Twitter: @CHGO_Bulls / @Bulls_Peck / @BawlSports / @will_gottlieb Instagram: @CHGO_Sports GET OUR FREE NEWSLETTER: http://www.allchgo.com/newsletter Support us by supporting our sponsors! | Offers from our sponsors: DraftKings: Download the DraftKings Sportsbook app now, use promo code “CHGO”, make ANY $5 bet this week and get $200 in BONUS BETS win or lose! fuboTV: Watch the Cubs, White Sox, Bulls & Blackhawks on Marquee and NBC Sports Chicago with fuboTV! Go to fubotv.com/chgo for 15% off your first month of Fubo Pro! Shady Rays: Go to shadyrays.com and use code CHGO for 50% OFF 2+ pairs of polarized sunglasses. Goose Island: Chicago's beer since 1988. Grab a beer from their Innovation tanks at the Goose Island Taproom or get a smash burger and a fresh beer of the week at the Clybourn Brewhouse. For reservations and pick up, go to gooseisland.com/locations. Athletic Greens: Athletic Greens is going to give you a FREE 1 year supply of immune-supporting Vitamin D AND 5 FREE travel packs with your first purchase. Just visit https://athleticgreens.com/CHGOBulls Roman: To learn more about how you can achieve your personal sexual health goals, go to ro.co/CHGO to get 20% off your entire first order. FOCO: CHGO has teamed up with FOCO to secure your access to the best sports collectibles and gear around! Get 10% off your order at FOCO.com with promo code “CHGO”. ComEd: Get started saving money and energy today! For energy saving tips and to schedule your free Facility Assessment, go to ComEd.com/PoweringBiz. Pins & Aces: Pins & Aces is the official golf apparel partner of CHGO. Check out PinsAndAces.com and use promo code “CHGO” to receive 15% off your first order and get free shipping. Learn more about your ad choices. Visit megaphone.fm/adchoices
(A)bounding to The Congo --- Support this podcast: https://podcasters.spotify.com/pod/show/keith-stensaas/support
Listen in as Michael Grumbine (from our Exploring Tolkien), John Trent from Bounding into Comics, John Carswell From The Tolkien Road podcast, The MIddle-earth Mixer, and Steve Babb from Glass Hammer talk Tolkien, movies, games, Steven Spielberg and more!
Listen in as Michael Grumbine (from our Exploring Tolkien), John Trent from Bounding into Comics, John Carswell From The Tolkien Road podcast, The MIddle-earth Mixer, and Steve Babb from Glass Hammer talk Tolkien, movies, games, Steven Spielberg and more!
2023 is the year of Multimodal AI, and Latent Space is going multimodal too! * This podcast comes with a video demo at the 1hr mark and it's a good excuse to launch our YouTube - please subscribe! * We are also holding two events in San Francisco — the first AI | UX meetup next week (already full; we'll send a recap here on the newsletter) and Latent Space Liftoff Day on May 4th (signup here; but get in touch if you have a high profile launch you'd like to make). * We also joined the Chroma/OpenAI ChatGPT Plugins Hackathon last week where we won the Turing and Replit awards and met some of you in person!This post featured on Hacker News.Out of the five senses of the human body, I'd put sight at the very top. But weirdly when it comes to AI, Computer Vision has felt left out of the recent wave compared to image generation, text reasoning, and even audio transcription. We got our first taste of it with the OCR capabilities demo in the GPT-4 Developer Livestream, but to date GPT-4's vision capability has not yet been released. Meta AI leapfrogged OpenAI and everyone else by fully open sourcing their Segment Anything Model (SAM) last week, complete with paper, model, weights, data (6x more images and 400x more masks than OpenImages), and a very slick demo website. This is a marked change to their previous LLaMA release, which was not commercially licensed. The response has been ecstatic:SAM was the talk of the town at the ChatGPT Plugins Hackathon and I was fortunate enough to book Joseph Nelson who was frantically integrating SAM into Roboflow this past weekend. As a passionate instructor, hacker, and founder, Joseph is possibly the single best person in the world to bring the rest of us up to speed on the state of Computer Vision and the implications of SAM. I was already a fan of him from his previous pod with (hopefully future guest) Beyang Liu of Sourcegraph, so this served as a personal catchup as well. Enjoy! and let us know what other news/models/guests you'd like to have us discuss! - swyxRecorded in-person at the beautiful StudioPod studios in San Francisco.Full transcript is below the fold.Show Notes* Joseph's links: Twitter, Linkedin, Personal* Sourcegraph Podcast and Game Theory Story* Represently* Roboflow at Pioneer and YCombinator* Udacity Self Driving Car dataset story* Computer Vision Annotation Formats* SAM recap - top things to know for those living in a cave* https://segment-anything.com/* https://segment-anything.com/demo* https://arxiv.org/pdf/2304.02643.pdf * https://ai.facebook.com/blog/segment-anything-foundation-model-image-segmentation/* https://blog.roboflow.com/segment-anything-breakdown/* https://ai.facebook.com/datasets/segment-anything/* Ask Roboflow https://ask.roboflow.ai/* GPT-4 Multimodal https://blog.roboflow.com/gpt-4-impact-speculation/Cut for time:* WSJ mention* Des Moines Register story* All In Pod: timestamped mention* In Forbes: underrepresented investors in Series A* Roboflow greatest hits* https://blog.roboflow.com/mountain-dew-contest-computer-vision/* https://blog.roboflow.com/self-driving-car-dataset-missing-pedestrians/* https://blog.roboflow.com/nerualhash-collision/ and Apple CSAM issue * https://www.rf100.org/Timestamps* [00:00:19] Introducing Joseph* [00:02:28] Why Iowa* [00:05:52] Origin of Roboflow* [00:16:12] Why Computer Vision* [00:17:50] Computer Vision Use Cases* [00:26:15] The Economics of Annotation/Segmentation* [00:32:17] Computer Vision Annotation Formats* [00:36:41] Intro to Computer Vision & Segmentation* [00:39:08] YOLO* [00:44:44] World Knowledge of Foundation Models* [00:46:21] Segment Anything Model* [00:51:29] SAM: Zero Shot Transfer* [00:51:53] SAM: Promptability* [00:53:24] SAM: Model Assisted Labeling* [00:56:03] SAM doesn't have labels* [00:59:23] Labeling on the Browser* [01:00:28] Roboflow + SAM Video Demo * [01:07:27] Future Predictions* [01:08:04] GPT4 Multimodality* [01:09:27] Remaining Hard Problems* [01:13:57] Ask Roboflow (2019)* [01:15:26] How to keep up in AITranscripts[00:00:00] Hello everyone. It is me swyx and I'm here with Joseph Nelson. Hey, welcome to the studio. It's nice. Thanks so much having me. We, uh, have a professional setup in here.[00:00:19] Introducing Joseph[00:00:19] Joseph, you and I have known each other online for a little bit. I first heard about you on the Source Graph podcast with bian and I highly, highly recommend that there's a really good game theory story that is the best YC application story I've ever heard and I won't tease further cuz they should go listen to that.[00:00:36] What do you think? It's a good story. It's a good story. It's a good story. So you got your Bachelor of Economics from George Washington, by the way. Fun fact. I'm also an econ major as well. You are very politically active, I guess you, you did a lot of, um, interning in political offices and you were responding to, um, the, the, the sheer amount of load that the Congress people have in terms of the, the support.[00:01:00] So you built, representing, which is Zendesk for Congress. And, uh, I liked in your source guide podcast how you talked about how being more responsive to, to constituents is always a good thing no matter what side of the aisle you're on. You also had a sideline as a data science instructor at General Assembly.[00:01:18] As a consultant in your own consultancy, and you also did a bunch of hackathon stuff with Magic Sudoku, which is your transition from N L P into computer vision. And apparently at TechCrunch Disrupt, disrupt in 2019, you tried to add chess and that was your whole villain origin story for, Hey, computer vision's too hard.[00:01:36] That's full, the platform to do that. Uh, and now you're co-founder c e o of RoboFlow. So that's your bio. Um, what's not in there that[00:01:43] people should know about you? One key thing that people realize within maybe five minutes of meeting me, uh, I'm from Iowa. Yes. And it's like a funnily novel thing. I mean, you know, growing up in Iowa, it's like everyone you know is from Iowa.[00:01:56] But then when I left to go to school, there was not that many Iowans at gw and people were like, oh, like you're, you're Iowa Joe. Like, you know, how'd you find out about this school out here? I was like, oh, well the Pony Express was running that day, so I was able to send. So I really like to lean into it.[00:02:11] And so you kind of become a default ambassador for places that. People don't meet a lot of other people from, so I've kind of taken that upon myself to just make it be a, a part of my identity. So, you know, my handle everywhere Joseph of Iowa, like I I, you can probably find my social security number just from knowing that that's my handle.[00:02:25] Cuz I put it plastered everywhere. So that's, that's probably like one thing.[00:02:28] Why Iowa[00:02:28] What's your best pitch for Iowa? Like why is[00:02:30] Iowa awesome? The people Iowa's filled with people that genuinely care. You know, if you're waiting a long line, someone's gonna strike up a conversation, kinda ask how you were Devrel and it's just like a really genuine place.[00:02:40] It was a wonderful place to grow up too at the time, you know, I thought it was like, uh, yeah, I was kind of embarrassed and then be from there. And then I actually kinda looking back it's like, wow, you know, there's good schools, smart people friendly. The, uh, high school that I went to actually Ben Silverman, the CEO and, or I guess former CEO and co-founder of Pinterest and I have the same teachers in high school at different.[00:03:01] The co-founder, or excuse me, the creator of crispr, the gene editing technique, Dr. Jennifer. Doudna. Oh, so that's the patent debate. There's Doudna. Oh, and then there's Fang Zang. Uh, okay. Yeah. Yeah. So Dr. Fang Zang, who I think ultimately won the patent war, uh, but is also from the same high school.[00:03:18] Well, she won the patent, but Jennifer won the[00:03:20] prize.[00:03:21] I think that's probably, I think that's probably, I, I mean I looked into it a little closely. I think it was something like she won the patent for CRISPR first existing and then Feng got it for, uh, first use on humans, which I guess for commercial reasons is the, perhaps more, more interesting one. But I dunno, biolife Sciences, is that my area of expertise?[00:03:38] Yep. Knowing people that came from Iowa that do cool things, certainly is. Yes. So I'll claim it. Um, but yeah, I, I, we, um, at Roble actually, we're, we're bringing the full team to Iowa for the very first time this last week of, of April. And, well, folks from like Scotland all over, that's your company[00:03:54] retreat.[00:03:54] The Iowa,[00:03:55] yeah. Nice. Well, so we do two a year. You know, we've done Miami, we've done. Some of the smaller teams have done like Nashville or Austin or these sorts of places, but we said, you know, let's bring it back to kinda the origin and the roots. Uh, and we'll, we'll bring the full team to, to Des Moines, Iowa.[00:04:13] So, yeah, like I was mentioning, folks from California to Scotland and many places in between are all gonna descend upon Des Moines for a week of, uh, learning and working. So maybe you can check in with those folks. If, what do they, what do they decide and interpret about what's cool. Our state. Well, one thing, are you actually headquartered in Des Moines on paper?[00:04:30] Yes. Yeah.[00:04:30] Isn't that amazing? That's like everyone's Delaware and you're like,[00:04:33] so doing research. Well, we're, we're incorporated in Delaware. Okay. We we're Delaware Sea like, uh, most companies, but our headquarters Yeah. Is in Des Moines. And part of that's a few things. One, it's like, you know, there's this nice Iowa pride.[00:04:43] And second is, uh, Brad and I both grew up in Brad Mc, co-founder and I grew up in, in Des Moines. And we met each other in the year 2000. We looked it up for the, the YC app. So, you know, I think, I guess more of my life I've known Brad than not, uh, which is kind of crazy. Wow. And during yc, we did it during 2020, so it was like the height of Covid.[00:05:01] And so we actually got a house in Des Moines and lived, worked outta there. I mean, more credit to. So I moved back. I was living in DC at the time, I moved back to to Des Moines. Brad was living in Des Moines, but he moved out of a house with his. To move into what we called our hacker house. And then we had one, uh, member of the team as well, Jacob Sorowitz, who moved from Minneapolis down to Des Moines for the summer.[00:05:21] And frankly, uh, code was a great time to, to build a YC company cuz there wasn't much else to do. I mean, it's kinda like wash your groceries and code. It's sort of the, that was the routine[00:05:30] and you can use, uh, computer vision to help with your groceries as well.[00:05:33] That's exactly right. Tell me what to make.[00:05:35] What's in my fridge? What should I cook? Oh, we'll, we'll, we'll cover[00:05:37] that for with the G P T four, uh, stuff. Exactly. Okay. So you have been featured with in a lot of press events. Uh, but maybe we'll just cover the origin story a little bit in a little bit more detail. So we'll, we'll cover robo flow and then we'll cover, we'll go into segment anything.[00:05:52] Origin of Roboflow[00:05:52] But, uh, I think it's important for people to understand. Robo just because it gives people context for what you're about to show us at the end of the podcast. So Magic Sudoku tc, uh, techers Disrupt, and then you go, you join Pioneer, which is Dan Gross's, um, YC before yc.[00:06:07] Yeah. That's how I think about it.[00:06:08] Yeah, that's a good way. That's a good description of it. Yeah. So I mean, robo flow kind of starts as you mentioned with this magic Sudoku thing. So you mentioned one of my prior business was a company called Represent, and you nailed it. I mean, US Congress gets 80 million messages a year. We built tools that auto sorted them.[00:06:23] They didn't use any intelligent auto sorting. And this is somewhat a solved problem in natural language processing of doing topic modeling or grouping together similar sentiment and things like this. And as you mentioned, I'd like, I worked in DC for a bit and been exposed to some of these problems and when I was like, oh, you know, with programming you can build solutions.[00:06:40] And I think the US Congress is, you know, the US kind of United States is a support center, if you will, and the United States is sports center runs on pretty old software, so mm-hmm. We, um, we built a product for that. It was actually at the time when I was working on representing. Brad, his prior business, um, is a social games company called Hatchlings.[00:07:00] Uh, he phoned me in, in 2017, apple had released augmented reality kit AR kit. And Brad and I are both kind of serial hackers, like I like to go to hackathons, don't really understand new technology until he build something with them type folks. And when AR Kit came out, Brad decided he wanted to build a game with it that would solve Sudoku puzzles.[00:07:19] And the idea of the game would be you take your phone, you hover hold it over top of a Sudoku puzzle, it recognizes the state of the board where it is, and then it fills it all in just right before your eyes. And he phoned me and I was like, Brad, this sounds awesome and sounds like you kinda got it figured out.[00:07:34] What, what's, uh, what, what do you think I can do here? It's like, well, the machine learning piece of this is the part that I'm most uncertain about. Uh, doing the digit recognition and, um, filling in some of those results. I was like, well, I mean digit recognition's like the hell of world of, of computer vision.[00:07:48] That's Yeah, yeah, MNIST, right. So I was like, that that part should be the, the easy part. I was like, ah, I'm, he's like, I'm not so super sure, but. You know, the other parts, the mobile ar game mechanics, I've got pretty well figured out. I was like, I, I think you're wrong. I think you're thinking about the hard part is the easy part.[00:08:02] And he is like, no, you're wrong. The hard part is the easy part. And so long story short, we built this thing and released Magic Sudoku and it kind of caught the Internet's attention of what you could do with augmented reality and, and with computer vision. It, you know, made it to the front ofer and some subreddits it run Product Hunt Air app of the year.[00:08:20] And it was really a, a flash in the pan type app, right? Like we were both running separate companies at the time and mostly wanted to toy around with, with new technology. And, um, kind of a fun fact about Magic Sudoku winning product Hunt Air app of the year. That was the same year that I think the model three came out.[00:08:34] And so Elon Musk won a Golden Kitty who we joked that we share an award with, with Elon Musk. Um, the thinking there was that this is gonna set off a, a revolution of if two random engineers can put together something that makes something, makes a game programmable and at interactive, then surely lots of other engineers will.[00:08:53] Do similar of adding programmable layers on top of real world objects around us. Earlier we were joking about objects in your fridge, you know, and automatically generating recipes and these sorts of things. And like I said, that was 2017. Roboflow was actually co-found, or I guess like incorporated in, in 2019.[00:09:09] So we put this out there, nothing really happened. We went back to our day jobs of, of running our respective businesses, I sold Represently and then as you mentioned, kind of did like consulting stuff to figure out the next sort of thing to, to work on, to get exposed to various problems. Brad appointed a new CEO at his prior business and we got together that summer of 2019.[00:09:27] We said, Hey, you know, maybe we should return to that idea that caught a lot of people's attention and shows what's possible. And you know what, what kind of gives, like the future is here. And we have no one's done anything since. No one's done anything. So why is, why are there not these, these apps proliferated everywhere.[00:09:42] Yeah. And so we said, you know, what we'll do is, um, to add this software layer to the real world. Will build, um, kinda like a super app where if you pointed it at anything, it will recognize it and then you can interact with it. We'll release a developer platform and allow people to make their own interfaces, interactivity for whatever object they're looking at.[00:10:04] And we decided to start with board games because one, we had a little bit of history there with, with Sudoku two, there's social by default. So if one person, you know finds it, then they'd probably share it among their friend. Group three. There's actually relatively few barriers to entry aside from like, you know, using someone else's brand name in your, your marketing materials.[00:10:19] Yeah. But other than that, there's no real, uh, inhibitors to getting things going and, and four, it's, it's just fun. It would be something that'd be bring us enjoyment to work on. So we spent that summer making, uh, boggle the four by four word game provable, where, you know, unlike Magic Sudoku, which to be clear, totally ruins the game, uh, you, you have to solve Sudoku puzzle.[00:10:40] You don't need to do anything else. But with Boggle, if you and I are playing, we might not find all of the words that adjacent letter tiles. Unveil. So if we have a, an AI tell us, Hey, here's like the best combination of letters that make high scoring words. And so we, we made boggle and released it and that, and that did okay.[00:10:56] I mean maybe the most interesting story was there's a English as a second language program in, in Canada that picked it up and used it as a part of their curriculum to like build vocabulary, which I thought was kind of inspiring. Example, and what happens just when you put things on the internet and then.[00:11:09] We wanted to build one for chess. So this is where you mentioned we went to 2019. TechCrunch Disrupt TechCrunch. Disrupt holds a Hackathon. And this is actually, you know, when Brad and I say we really became co-founders, because we fly out to San Francisco, we rent a hotel room in the Tenderloin. We, uh, we, we, uh, have one room and there's like one, there's room for one bed, and then we're like, oh, you said there was a cot, you know, on the, on the listing.[00:11:32] So they like give us a little, a little cot, the end of the cot, like bled and over into like the bathroom. So like there I am sleeping on the cot with like my head in the bathroom and the Tenderloin, you know, fortunately we're at a hackathon glamorous. Yeah. There wasn't, there wasn't a ton of sleep to be had.[00:11:46] There is, you know, we're, we're just like making and, and shipping these, these sorts of many[00:11:50] people with this hack. So I've never been to one of these things, but[00:11:52] they're huge. Right? Yeah. The Disrupt Hackathon, um, I don't, I don't know numbers, but few hundreds, you know, classically had been a place where it launched a lot of famous Yeah.[00:12:01] Sort of flare. Yeah. And I think it's, you know, kind of slowed down as a place for true company generation. But for us, Brad and I, who likes just doing hackathons, being, making things in compressed time skills, it seemed like a, a fun thing to do. And like I said, we'd been working on things, but it was only there that like, you're, you're stuck in a maybe not so great glamorous situation together and you're just there to make a, a program and you wanna make it be the best and compete against others.[00:12:26] And so we add support to the app that we were called was called Board Boss. We couldn't call it anything with Boggle cause of IP rights were called. So we called it Board Boss and it supported Boggle and then we were gonna support chess, which, you know, has no IP rights around it. Uh, it's an open game.[00:12:39] And we did so in 48 hours, we built an app that, or added fit capability to. Point your phone at a chess board. It understands the state of the chess board and converts it to um, a known notation. Then it passes that to stock fish, the open source chess engine for making move recommendations and it makes move recommendations to, to players.[00:13:00] So you could either play against like an ammunition to AI or improve your own game. We learn that one of the key ways users like to use this was just to record their games. Cuz it's almost like reviewing game film of what you should have done differently. Game. Yeah, yeah, exactly. And I guess the highlight of, uh, of chess Boss was, you know, we get to the first round of judging, we get to the second round of judging.[00:13:16] And during the second round of judging, that's when like, TechCrunch kind of brings around like some like celebs and stuff. They'll come by. Evan Spiegel drops by Ooh. Oh, and he uh, he comes up to our, our, our booth and um, he's like, oh, so what does, what does this all do? And you know, he takes an interest in it cuz the underpinnings of, of AR interacting with the.[00:13:33] And, uh, he is kinda like, you know, I could use this to like cheat on chess with my friends. And we're like, well, you know, that wasn't exactly the, the thesis of why we made it, but glad that, uh, at least you think it's kind of neat. Um, wait, but he already started Snapchat by then? Oh, yeah. Oh yeah. This, this is 2019, I think.[00:13:49] Oh, okay, okay. Yeah, he was kind of just checking out things that were new and, and judging didn't end up winning any, um, awards within Disrupt, but I think what we won was actually. Maybe more important maybe like the, the quote, like the co-founders medal along the way. Yep. The friends we made along the way there we go to, to play to the meme.[00:14:06] I would've preferred to win, to be clear. Yes. You played a win. So you did win, uh,[00:14:11] $15,000 from some Des Moines, uh, con[00:14:14] contest. Yeah. Yeah. The, uh, that was nice. Yeah. Slightly after that we did, we did win. Um, some, some grants and some other things for some of the work that we've been doing. John Papa John supporting the, uh, the local tech scene.[00:14:24] Yeah. Well, so there's not the one you're thinking of. Okay. Uh, there's a guy whose name is Papa John, like that's his, that's his, that's his last name. His first name is John. So it's not the Papa John's you're thinking of that has some problematic undertones. It's like this guy who's totally different. I feel bad for him.[00:14:38] His press must just be like, oh, uh, all over the place. But yeah, he's this figure in the Iowa entrepreneurial scene who, um, he actually was like doing SPACs before they were cool and these sorts of things, but yeah, he funds like grants that encourage entrepreneurship in the state. And since we'd done YC and in the state, we were eligible for some of the awards that they were providing.[00:14:56] But yeah, it was disrupt that we realized, you know, um, the tools that we made, you know, it took us better part of a summer to add Boggle support and it took us 48 hours to add chest support. So adding the ability for programmable interfaces for any object, we built a lot of those internal tools and our apps were kind of doing like the very famous shark fin where like it picks up really fast, then it kind of like slowly peters off.[00:15:20] Mm-hmm. And so we're like, okay, if we're getting these like shark fin graphs, we gotta try something different. Um, there's something different. I remember like the week before Thanksgiving 2019 sitting down and we wrote this Readme for, actually it's still the Readme at the base repo of Robo Flow today has spent relatively unedited of the manifesto.[00:15:36] Like, we're gonna build tools that enable people to make the world programmable. And there's like six phases and, you know, there's still, uh, many, many, many phases to go into what we wrote even at that time to, to present. But it's largely been, um, right in line with what we thought we would, we would do, which is give engineers the tools to add software to real world objects, which is largely predicated on computer vision. So finding the right images, getting the right sorts of video frames, maybe annotating them, uh, finding the right sort of models to use to do this, monitoring the performance, all these sorts of things. And that from, I mean, we released that in early 2020, and it's kind of, that's what's really started to click.[00:16:12] Why Computer Vision[00:16:12] Awesome. I think we should just kind[00:16:13] of[00:16:14] go right into where you are today and like the, the products that you offer, just just to give people an overview and then we can go into the, the SAM stuff. So what is the clear, concise elevator pitch? I think you mentioned a bunch of things like make the world programmable so you don't ha like computer vision is a means to an end.[00:16:30] Like there's, there's something beyond that. Yeah.[00:16:32] I mean, the, the big picture mission for the business and the company and what we're working on is, is making the world programmable, making it read and write and interactive, kind of more entertaining, more e. More fun and computer vision is the technology by which we can achieve that pretty quickly.[00:16:48] So like the one liner for the, the product in, in the company is providing engineers with the tools for data and models to build programmable interfaces. Um, and that can be workflows, that could be the, uh, data processing, it could be the actual model training. But yeah, Rob helps you use production ready computer vision workflows fast.[00:17:10] And I like that.[00:17:11] In part of your other pitch that I've heard, uh, is that you basically scale from the very smallest scales to the very largest scales, right? Like the sort of microbiology use case all the way to[00:17:20] astronomy. Yeah. Yeah. The, the joke that I like to make is like anything, um, underneath a microscope and, and through a telescope and everything in between needs to, needs to be seen.[00:17:27] I mean, we have people that run models in outer space, uh, underwater remote places under supervision and, and known places. The crazy thing is that like, All parts of, of not just the world, but the universe need to be observed and understood and acted upon. So vision is gonna be, I dunno, I feel like we're in the very, very, very beginnings of all the ways we're gonna see it.[00:17:50] Computer Vision Use Cases[00:17:50] Awesome. Let's go into a lo a few like top use cases, cuz I think that really helps to like highlight the big names that you've, big logos that you've already got. I've got Walmart and Cardinal Health, but I don't, I don't know if you wanna pull out any other names, like, just to illustrate, because the reason by the way, the reason I think that a lot of developers don't get into computer vision is because they think they don't need it.[00:18:11] Um, or they think like, oh, like when I do robotics, I'll do it. But I think if, if you see like the breadth of use cases, then you get a little bit more inspiration as to like, oh, I can use[00:18:19] CVS lfa. Yeah. It's kind of like, um, you know, by giving, by making it be so straightforward to use vision, it becomes almost like a given that it's a set of features that you could power on top of it.[00:18:32] And like you mentioned, there's, yeah, there's Fortune One there over half the Fortune 100. I've used the, the tools that Robel provides just as much as 250,000 developers. And so over a quarter million engineers finding and developing and creating various apps, and I mean, those apps are, are, are far and wide.[00:18:49] Just as you mentioned. I mean everything from say, like, one I like to talk about was like sushi detection of like finding the like right sorts of fish and ingredients that are in a given piece of, of sushi that you're looking at to say like roof estimation of like finding. If there's like, uh, hail damage on, on a given roof, of course, self-driving cars and understanding the scenes around us is sort of the, you know, very early computer vision everywhere.[00:19:13] Use case hardhat detection, like finding out if like a given workplace is, is, is safe, uh, disseminate, have the right p p p on or p p e on, are there the right distance from various machines? A huge place that vision has been used is environmental monitoring. Uh, what's the count of species? Can we verify that the environment's not changing in unexpected ways or like river banks are become, uh, becoming recessed in ways that we anticipate from satellite imagery, plant phenotyping.[00:19:37] I mean, people have used these apps for like understanding their plants and identifying them. And that dataset that's actually largely open, which is what's given a proliferation to the iNaturalist, is, is that whole, uh, hub of, of products. Lots of, um, people that do manufacturing. So, like Rivian for example, is a Rubal customer, and you know, they're trying to scale from 1000 cars to 25,000 cars to a hundred thousand cars in very short order.[00:20:00] And that relies on having the. Ability to visually ensure that every part that they're making is produced correctly and right in time. Medical use cases. You know, there's actually, this morning I was emailing with a user who's accelerating early cancer detection through breaking apart various parts of cells and doing counts of those cells.[00:20:23] And actually a lot of wet lab work that folks that are doing their PhDs or have done their PhDs are deeply familiar with that is often required to do very manually of, of counting, uh, micro plasms or, or things like this. There's. All sorts of, um, like traffic counting and smart cities use cases of understanding curb utilization to which sort of vehicles are, are present.[00:20:44] Uh, ooh. That can be[00:20:46] really good for city planning actually.[00:20:47] Yeah. I mean, one of our customers does exactly this. They, they measure and do they call it like smart curb utilization, where uhhuh, they wanna basically make a curb be almost like a dynamic space where like during these amounts of time, it's zoned for this during these amounts of times.[00:20:59] It's zoned for this based on the flows and e ebbs and flows of traffic throughout the day. So yeah, I mean the, the, the truth is that like, you're right, it's like a developer might be like, oh, how would I use vision? And then all of a sudden it's like, oh man, all these things are at my fingertips. Like I can just, everything you can see.[00:21:13] Yeah. Right. I can just, I can just add functionality for my app to understand and ingest the way, like, and usually the way that someone gets like almost nerd sniped into this is like, they have like a home automation project, so it's like send Yeah. Give us a few. Yeah. So send me a text when, um, a package shows up so I can like prevent package theft so I can like go down and grab it right away or.[00:21:29] We had a, uh, this one's pretty, pretty niche, but it's pretty funny. There was this guy who, during the pandemic wa, wanted to make sure his cat had like the proper, uh, workout. And so I've shared the story where he basically decided that. He'd make a cat workout machine with computer vision, you might be alone.[00:21:43] You're like, what does that look like? Well, what he decided was he would take a robotic arm strap, a laser pointer to it, and then train a machine to recognize his cat and his cat only, and point the laser pointer consistently 10 feet away from the cat. There's actually a video of you if you type an YouTube cat laser turret, you'll find Dave's video.[00:22:01] Uh, and hopefully Dave's cat has, has lost the weight that it needs to, cuz that's just the, that's an intense workout I have to say. But yeah, so like, that's like a, um, you know, these, uh, home automation projects are pretty common places for people to get into smart bird feeders. I've seen people that like are, are logging and understanding what sort of birds are, uh, in their background.[00:22:18] There's a member of our team that was working on actually this as, as a whole company and has open sourced a lot of the data for doing bird species identification. And now there's, I think there's even a company that's, uh, founded to create like a smart bird feeder, like captures photos and tells you which ones you've attracted to your yard.[00:22:32] I met that. Do, you know, get around the, uh, car sharing company that heard it? Them never used them. They did a SPAC last year and they had raised at like, They're unicorn. They raised at like 1.2 billion, I think in the, the prior round and inspected a similar price. I met the CTO of, of Getaround because he was, uh, using Rob Flow to hack into his Tesla cameras to identify other vehicles that are like often nearby him.[00:22:56] So he's basically building his own custom license plate recognition, and he just wanted like, keep, like, keep tabs of like, when he drives by his friends or when he sees like regular sorts of folks. And so he was doing like automated license plate recognition by tapping into his, uh, camera feeds. And by the way, Elliot's like one of the like OG hackers, he was, I think one of the very first people to like, um, she break iPhones and, and these sorts of things.[00:23:14] Mm-hmm. So yeah, the project that I want, uh, that I'm gonna work on right now for my new place in San Francisco is. There's two doors. There's like a gate and then the other door. And sometimes we like forget to close, close the gate. So like, basically if it sees that the gate is open, it'll like send us all a text or something like this to make sure that the gate is, is closed at the front of our house.[00:23:32] That's[00:23:32] really cool. And I'll, I'll call out one thing that readers and listeners can, uh, read out on, on your history. One of your most popular initial, um, viral blog post was about, um, autonomous vehicle data sets and how, uh, the one that Udacity was using was missing like one third of humans. And, uh, it's not, it's pretty problematic for cars to miss humans.[00:23:53] Yeah, yeah, actually, so yeah, the Udacity self-driving car data set, which look to their credit, it was just meant to be used for, for academic use. Um, and like as a part of courses on, on Udacity, right? Yeah. But the, the team that released it, kind of hastily labeled and let it go out there to just start to use and train some models.[00:24:11] I think that likely some, some, uh, maybe commercial use cases maybe may have come and, and used, uh, the dataset, who's to say? But Brad and I discovered this dataset. And when we were working on dataset improvement tools at Rob Flow, we ran through our tools and identified some like pretty, as you mentioned, key issues.[00:24:26] Like for example, a lot of strollers weren't labeled and I hope our self-driving cars do those, these sorts of things. And so we relabeled the whole dataset by hand. I have this very fond memory is February, 2020. Brad and I are in Taiwan. So like Covid is actually just, just getting going. And the reason we were there is we were like, Hey, we can work on this from anywhere for a little bit.[00:24:44] And so we spent like a, uh, let's go closer to Covid. Well, you know, I like to say we uh, we got early indicators of, uh, how bad it was gonna be. I bought a bunch of like N 90 fives before going o I remember going to the, the like buying a bunch of N 95 s and getting this craziest look like this like crazy tin hat guy.[00:25:04] Wow. What is he doing? And then here's how you knew. I, I also got got by how bad it was gonna be. I left all of them in Taiwan cuz it's like, oh, you all need these. We'll be fine over in the us. And then come to find out, of course that Taiwan was a lot better in terms of, um, I think, yeah. Safety. But anyway, we were in Taiwan because we had planned this trip and you know, at the time we weren't super sure about the, uh, covid, these sorts of things.[00:25:22] We always canceled it. We didn't, but I have this, this very specific time. Brad and I were riding on the train from Clay back to Taipei. It's like a four hour ride. And you mentioned Pioneer earlier, we were competing in Pioneer, which is almost like a gamified to-do list. Mm-hmm. Every week you say what you're gonna do and then other people evaluate.[00:25:37] Did you actually do the things you said you were going to do? One of the things we said we were gonna do was like this, I think re-release of this data set. And so it's like late, we'd had a whole week, like, you know, weekend behind us and, uh, we're on this train and it was very unpleasant situation, but we relabeled this, this data set, and one sitting got it submitted before like the Sunday, Sunday countdown clock starts voting for, for.[00:25:57] And, um, once that data got out back out there, just as you mentioned, it kind of picked up and Venture beat, um, noticed and wrote some stories about it. And we really rereleased of course, the data set that we did our best job of labeling. And now if anyone's listening, they can probably go out and like find some errors that we surely still have and maybe call us out and, you know, put us, put us on blast.[00:26:15] The Economics of Annotation (Segmentation)[00:26:15] But,[00:26:16] um, well, well the reason I like this story is because it, it draws attention to the idea that annotation is difficult and basically anyone looking to use computer vision in their business who may not have an off-the-shelf data set is going to have to get involved in annotation. And I don't know what it costs.[00:26:34] And that's probably one of the biggest hurdles for me to estimate how big a task this is. Right? So my question at a higher level is tell the customers, how do you tell customers to estimate the economics of annotation? Like how many images do, do we need? How much, how long is it gonna take? That, that kinda stuff.[00:26:50] How much money and then what are the nuances to doing it well, right? Like, cuz obviously Udacity had a poor quality job, you guys had proved it, and there's errors every everywhere. Like where do[00:26:59] these things go wrong? The really good news about annotation in general is that like annotation of course is a means to an end to have a model be able to recognize a thing.[00:27:08] Increasingly there's models that are coming out that can recognize things zero shot without any annotation, which we're gonna talk about. Yeah. Which, we'll, we'll talk more about that in a moment. But in general, the good news is that like the trend is that annotation is gonna become decreasingly a blocker to starting to use computer vision in meaningful ways.[00:27:24] Now that said, just as you mentioned, there's a lot of places where you still need to do. Annotation. I mean, even with these zero shot models, they might have of blind spots, or maybe you're a business, as you mentioned, that you know, it's proprietary data. Like only Rivian knows what a rivian is supposed to look like, right?[00:27:39] Uh, at the time of, at the time of it being produced, like underneath the hood and, and all these sorts of things. And so, yeah, that's gonna necessarily require annotation. So your question of how long is it gonna take, how do you estimate these sorts of things, it really comes down to the complexity of the problem that you're solving and the amount of variance in the scene.[00:27:57] So let's give some contextual examples. If you're trying to recognize, we'll say a scratch on one specific part and you have very strong lighting. You might need fewer images because you control the lighting, you know the exact part and maybe you're lucky in the scratch. Happens more often than not in similar parts or similar, uh, portions of the given part.[00:28:17] So in that context, you, you, the function of variance, the variance is, is, is lower. So the number of images you need is also lower to start getting up to work. Now the orders of magnitude we're talking about is that like you can have an initial like working model from like 30 to 50 images. Yeah. In this context, which is shockingly low.[00:28:32] Like I feel like there's kind of an open secret in computer vision now, the general heuristic that often. Users, is that like, you know, maybe 200 images per class is when you start to have a model that you can rely[00:28:45] on? Rely meaning like 90, 99, 90, 90%, um,[00:28:50] uh, like what's 85 plus 85? Okay. Um, that's good. Again, these are very, very finger in the wind estimates cuz the variance we're talking about.[00:28:59] But the real question is like, at what point, like the framing is not like at what point do it get to 99, right? The framing is at what point can I use this thing to be better than the alternative, which is humans, which maybe humans or maybe like this problem wasn't possible at all. And so usually the question isn't like, how do I get to 99?[00:29:15] A hundred percent? It's how do I ensure that like the value I am able to get from putting this thing in production is greater than the alternative? In fact, even if you have a model that's less accurate than humans, there might be some circumstances where you can tolerate, uh, a greater amount of inaccuracy.[00:29:32] And if you look at the accuracy relative to the cost, Using a model is extremely cheap. Using a human for the same sort of task can be very expensive. Now, in terms of the actual accuracy of of what you get, there's probably some point at which the cost, but relative accuracy exceeds of a model, exceeds the high cost and hopefully high accuracy of, of a human comparable, like for example, there's like cameras that will track soccer balls or track events happening during sporting matches.[00:30:02] And you can go through and you know, we actually have users that work in sports analytics. You can go through and have a human. Hours and hours of footage. Cuz not just watching their team, they're watching every other team, they're watching scouting teams, they're watching junior teams, they're watching competitors.[00:30:15] And you could have them like, you know, track and follow every single time the ball goes within blank region of the field or every time blank player goes into, uh, this portion of the field. And you could have, you know, exact, like a hundred percent accuracy if that person, maybe, maybe not a hundred, a human may be like 95, 90 7% accuracy of every single time the ball is in this region or this player is on the field.[00:30:36] Truthfully, maybe if you're scouting analytics, you actually don't need 97% accuracy of knowing that that player is on the field. And in fact, if you can just have a model run at a 1000th, a 10000th of the cost and goes through and finds all the times that Messi was present on the field mm-hmm. That the ball was in this region of the.[00:30:54] Then even if that model is slightly less accurate, the cost is just so orders of magnitude different. And the stakes like the stakes of this problem, of knowing like the total number of minutes that Messi played will say are such that we have a higher air tolerance, that it's a no-brainer to start to use Yeah, a computer vision model in this context.[00:31:12] So not every problem requires equivalent or greater human performance. Even when it does, you'd be surprised at how fast models get there. And in the times when you, uh, really look at a problem, the question is, how much accuracy do I need to start to get value from this? This thing, like the package example is a great one, right?[00:31:27] Like I could in theory set up a camera that's constantly watching in front of my porch and I could watch the camera whenever I have a package and then go down. But of course, I'm not gonna do that. I value my time to do other sorts of things instead. And so like there, there's this net new capability of, oh, great, I can have an always on thing that tells me when a package shows up, even if you know the, the thing that's gonna text me.[00:31:46] When a package shows up, let's say a flat pack shows up instead of a box and it doesn't know what a flat pack likes, looks like initially. Doesn't matter. It doesn't matter because I didn't have this capability at all before. And I think that's the true case where a lot of computer vision problems exist is like it.[00:32:00] It's like you didn't even have this capability, this superpower before at all, let alone assigning a given human to do the task. And that's where we see like this explosion of, of value.[00:32:10] Awesome. Awesome. That was a really good overview. I want to leave time for the others, but I, I really want to dive into a couple more things with regards to Robo Flow.[00:32:17] Computer Vision Annotation Formats[00:32:17] So one is, apparently your original pitch for Robo Flow was with regards to conversion tools for computer vision data sets. And I'm sure as, as a result of your job, you have a lot of rants. I've been digging for rants basically on like the best or the worst annotation formats. What do we know? Cause most of us, oh my gosh, we only know, like, you know, I like,[00:32:38] okay, so when we talk about computer vision annotation formats, what we're talking about is if you have an image and you, you picture a boing box around my face on that image.[00:32:46] Yeah. How do you describe where that Monty box is? X, Y, Z X Y coordinates. Okay. X, y coordinates. How, what do you mean from the top lefts.[00:32:52] Okay. You, you, you, you take X and Y and then, and then the. The length and, and the width of the, the[00:32:58] box. Okay. So you got like a top left coordinate and like the bottom right coordinate or like the, the center of the bottom.[00:33:02] Yeah. Yeah. Top, left, bottom right. Yeah. That's one type of format. Okay. But then, um, I come along and I'm like, you know what? I want to do a different format where I wanna just put the center of the box, right. And give the length and width. Right. And by the way, we didn't even talk about what X and Y we're talking about.[00:33:14] Is X a pixel count? Is a relative pixel count? Is it an absolute pixel count? So the point is, the number of ways to describe where a box lives in a freaking image is endless, uh, seemingly and. Everyone decided to kind of create their own different ways of describing the coordinates and positions of where in this context of bounding Box is present.[00:33:39] Uh, so there's some formats, for example, that like use re, so for the x and y, like Y is, uh, like the left, most part of the image is zero. And the right most part of the image is one. So the, the coordinate is like anywhere from zero to one. So 0.6 is, you know, 60% of your way right up the image to describe the coordinate.[00:33:53] I guess that was, that was X instead of Y. But the point is there, of the zero to one is the way that we determined where that was in the position, or we're gonna do an absolute pixel position anyway. We got sick, we got sick of all these different annotation formats. So why do you even have to convert between formats?[00:34:07] Is is another part of this, this story. So different training frameworks, like if you're using TensorFlow, you need like TF Records. If you're using PyTorch, it's probably gonna be, well it depends on like what model you're using, but someone might use Coco JSON with PyTorch. Someone else might use like a, just a YAML file and a text file.[00:34:21] And to describe the cor it's point is everyone that creates a model. Or creates a dataset rather, has created different ways of describing where and how a bounding box is present in the image. And we got sick of all these different formats and doing these in writing all these different converter scripts.[00:34:39] And so we made a tool that just converts from one script, one type of format to another. And the, the key thing is that like if you get that converter script wrong, your model doesn't not work. It just fails silently. Yeah. Because the bounding boxes are now all in the wrong places. And so you need a way to visualize and be sure that your converter script, blah, blah blah.[00:34:54] So that was the very first tool we released of robo. It was just a converter script, you know, like these, like these PDF to word converters that you find. It was basically that for computer vision, like dead simple, really annoying thing. And we put it out there and people found some, some value in, in that.[00:35:08] And you know, to this day that's still like a surprisingly painful[00:35:11] problem. Um, yeah, so you and I met at the Dall-E Hackathon at OpenAI, and we were, I was trying to implement this like face masking thing, and I immediately ran into that problem because, um, you know, the, the parameters that Dall-E expected were different from the one that I got from my face, uh, facial detection thing.[00:35:28] One day it'll go away, but that day is not today. Uh, the worst format that we work with is, is. The mart form, it just makes no sense. And it's like, I think, I think it's a one off annotation format that this university in China started to use to describe where annotations exist in a book mart. I, I don't know, I dunno why that So best[00:35:45] would be TF record or some something similar.[00:35:48] Yeah, I think like, here's your chance to like tell everybody to use one one standard and like, let's, let's, can[00:35:53] I just tell them to use, we have a package that does this for you. I'm just gonna tell you to use the row full package that converts them all, uh, for you. So you don't have to think about this. I mean, Coco JSON is pretty good.[00:36:04] It's like one of the larger industry norms and you know, it's in JS O compared to like V xml, which is an XML format and Coco json is pretty descriptive, but you know, it has, has its own sort of drawbacks and flaws and has random like, attribute, I dunno. Um, yeah, I think the best way to handle this problem is to not have to think about it, which is what we did.[00:36:21] We just created a, uh, library that, that converts and uses things. Uh, for us. We've double checked the heck out of it. There's been hundreds of thousands of people that have used the library and battle tested all these different formats to find those silent errors. So I feel pretty good about no longer having to have a favorite format and instead just rely on.[00:36:38] Dot load in the format that I need. Great[00:36:41] Intro to Computer Vision Segmentation[00:36:41] service to the community. Yeah. Let's go into segmentation because is at the top of everyone's minds, but before we get into segment, anything, I feel like we need a little bit of context on the state-of-the-art prior to Sam, which seems to be YOLO and uh, you are the leading expert as far as I know.[00:36:56] Yeah.[00:36:57] Computer vision, there's various task types. There's classification problems where we just like assign tags to images, like, you know, maybe safe work, not safe work, sort of tagging sort of stuff. Or we have object detection, which are the boing boxes that you see and all the formats I was mentioning in ranting about there's instant segmentation, which is the polygon shapes and produces really, really good looking demos.[00:37:19] So a lot of people like instant segmentation.[00:37:21] This would be like counting pills when you point 'em out on the, on the table. Yeah. So, or[00:37:25] soccer players on the field. So interestingly, um, counting you could do with bounding boxes. Okay. Cause you could just say, you know, a box around a person. Well, I could count, you know, 12 players on the field.[00:37:35] Masks are most useful. Polygons are most useful if you need very precise area measurements. So you have an aerial photo of a home and you want to know, and the home's not a perfect box, and you want to know the rough square footage of that home. Well, if you know the distance between like the drone and, and the ground.[00:37:53] And you have the precise polygon shape of the home, then you can calculate how big that home is from aerial photos. And then insurers can, you know, provide say accurate estimates and that's maybe why this is useful. So polygons and, and instant segmentation are, are those types of tasks? There's a key point detection task and key point is, you know, if you've seen those demos of like all the joints on like a hand kind of, kind of outlined, there's visual question answering tasks, visual q and a.[00:38:21] And that's like, you know, some of the stuff that multi-modality is absolutely crushing for, you know, here's an image, tell me what food is in this image. And then you can pass that and you can make a recipe out of it. But like, um, yeah, the visual question in answering task type is where multi-modality is gonna have and is already having an enormous impact.[00:38:40] So that's not a comprehensive survey, very problem type, but it's enough to, to go into why SAM is significant. So these various task types, you know, which model to use for which given circumstance. Most things is highly dependent on what you're ultimately aiming to do. Like if you need to run a model on the edge, you're gonna need a smaller model, cuz it is gonna run on edge, compute and process in, in, in real time.[00:39:01] If you're gonna run a model on the cloud, then of course you, uh, generally have more compute at your disposal Considerations like this now, uh,[00:39:08] YOLO[00:39:08] just to pause. Yeah. Do you have to explain YOLO first before you go to Sam, or[00:39:11] Yeah, yeah, sure. So, yeah. Yeah, we should. So object detection world. So for a while I talked about various different task types and you can kinda think about a slide scale of like classification, then obvious detection.[00:39:20] And on the right, at most point you have like segmentation tasks. Object detection. The bounding boxes is especially useful for a wide, like it's, it's surprisingly versatile. Whereas like classification is kind of brittle. Like you only have a tag for the whole image. Well, that doesn't, you can't count things with tags.[00:39:35] And on the other hand, like the mask side of things, like drawing masks is painstaking. And so like labeling is just a bit more difficult. Plus like the processing to produce masks requires more compute. And so usually a lot of folks kind of landed for a long time on obvious detection being a really happy medium of affording you with rich capabilities because you can do things like count, track, measure.[00:39:56] In some CAGR context with bounding boxes, you can see how many things are present. You can actually get a sense of how fast something's moving by tracking the object or bounding box across multiple frames and comparing the timestamp of where it was across those frames. So obviously detection is a very common task type that solves lots of things that you want do with a given model.[00:40:15] In obviously detection. There's been various model frameworks over time. So kind of really early on there's like R-CNN uh, then there's faster rc n n and these sorts of family models, which are based on like resnet kind of architectures. And then a big thing happens, and that is single shot detectors. So faster, rc n n despite its name is, is very slow cuz it takes two passes on the image.[00:40:37] Uh, the first pass is, it finds par pixels in the image that are most interesting to, uh, create a bounding box candidate out of. And then it passes that to a, a classifier that then does classification of the bounding box of interest. Right. Yeah. You can see, you can see why that would be slow. Yeah. Cause you have to do two passes.[00:40:53] You know, kind of actually led by, uh, like mobile net was I think the first large, uh, single shot detector. And as its name implies, it was meant to be run on edge devices and mobile devices and Google released mobile net. So it's a popular implementation that you find in TensorFlow. And what single shot detectors did is they said, Hey, instead of looking at the image twice, what if we just kind of have a, a backbone that finds candidate bounding boxes?[00:41:19] And then we, we set loss functions for objectness. We set loss function. That's a real thing. We set loss functions for objectness, like how much obj, how object do this part of the images. We send a loss function for classification, and then we run the image through the model on a single pass. And that saves lots of compute time and you know, it's not necessarily as accurate, but if you have lesser compute, it can be extremely useful.[00:41:42] And then the advances in both modeling techniques in compute and data quality, single shot detectors, SSDs has become, uh, really, really popular. One of the biggest SSDs that has become really popular is the YOLO family models, as you described. And so YOLO stands for you only look once. Yeah, right, of course.[00:42:02] Uh, Drake's, uh, other album, um, so Joseph Redman introduces YOLO at the University of Washington. And Joseph Redman is, uh, kind of a, a fun guy. So for listeners, for an Easter egg, I'm gonna tell you to Google Joseph Redman resume, and you'll find, you'll find My Little Pony. That's all I'll say. And so he introduces the very first YOLO architecture, which is a single shot detector, and he also does it in a framework called Darknet, which is like this, this own framework that compiles the Cs, frankly, kind of tough to work with, but allows you to benefit from the speedups that advance when you operate in a low level language like.[00:42:36] And then he releases, well, what colloquially is known as YOLO V two, but a paper's called YOLO 9,000 cuz Joseph Redmond thought it'd be funny to have something over 9,000. So get a sense for, yeah, some fun. And then he releases, uh, YOLO V three and YOLO V three is kind of like where things really start to click because it goes from being an SSD that's very limited to competitive and, and, and superior to actually mobile That and some of these other single shot detectors, which is awesome because you have this sort of solo, I mean, him and and his advisor, Ali, at University of Washington have these, uh, models that are becoming really, really powerful and capable and competitive with these large research organizations.[00:43:09] Joseph Edmond leaves Computer Vision Research, but there had been Alexia ab, one of the maintainers of Darknet released Yola VI four. And another, uh, researcher, Glenn Yer, uh, jocker had been working on YOLO V three, but in a PyTorch implementation, cuz remember YOLO is in a dark implementation. And so then, you know, YOLO V three and then Glenn continues to make additional improvements to YOLO V three and pretty soon his improvements on Yolov theory, he's like, oh, this is kind of its own things.[00:43:36] Then he releases YOLO V five[00:43:38] with some naming[00:43:39] controversy that we don't have Big naming controversy. The, the too long didn't read on the naming controversy is because Glen was not originally involved with Darknet. How is he allowed to use the YOLO moniker? Roe got in a lot of trouble cuz we wrote a bunch of content about YOLO V five and people were like, ah, why are you naming it that we're not?[00:43:55] Um, but you know,[00:43:56] cool. But anyway, so state-of-the-art goes to v8. Is what I gather.[00:44:00] Yeah, yeah. So yeah. Yeah. You're, you're just like, okay, I got V five. I'll skip to the end. Uh, unless, unless there's something, I mean, I don't want, well, so I mean, there's some interesting things. Um, in the yolo, there's like, there's like a bunch of YOLO variants.[00:44:10] So YOLOs become this, like this, this catchall for various single shot, yeah. For various single shot, basically like runs on the edge, it's quick detection framework. And so there's, um, like YOLO R, there's YOLO S, which is a transformer based, uh, yolo, yet look like you only look at one sequence is what s stands were.[00:44:27] Um, the pp yo, which, uh, is PAT Paddle implementation, which is by, which Chinese Google is, is their implementation of, of TensorFlow, if you will. So basically YOLO has like all these variants. And now, um, yo vii, which is Glen has been working on, is now I think kind of like, uh, one of the choice models to use for single shot detection.[00:44:44] World Knowledge of Foundation Models[00:44:44] Well, I think a lot of those models, you know, Asking the first principal's question, like let's say you wanna find like a bus detector. Do you need to like go find a bunch of photos of buses or maybe like a chair detector? Do you need to go find a bunch of photos of chairs? It's like, oh no. You know, actually those images are present not only in the cocoa data set, but those are objects that exist like kind of broadly on the internet.[00:45:02] And so computer visions kind of been like us included, have been like really pushing for and encouraging models that already possess a lot of context about the world. And so, you know, if GB T's idea and i's idea OpenAI was okay, models can only understand things that are in their corpus. What if we just make their corpus the size of everything on the internet?[00:45:20] The same thing that happened in imagery, what's happening now? And that's kinda what Sam represents, which is kind of a new evolution of, earlier on we were talking about the cost of annotation and I said, well, good news. Annotations then become decreasingly necessary to start to get to value. Now you gotta think about it more, kind of like, you'll probably need to do some annotation because you might want to find a custom object, or Sam might not be perfect, but what's about to happen is a big opportunity where you want the benefits of a yolo, right?[00:45:47] Where it can run really fast, it can run on the edge, it's very cheap. But you want the knowledge of a large foundation model that already knows everything about buses and knows everything about shoes, knows everything about real, if the name is true, anything segment, anything model. And so there's gonna be this novel opportunity to take what these large models know, and I guess it's kind of like a form of distilling, like distill them down into smaller architectures that you can use in versatile ways to run in real time to run on the edge.[00:46:13] And that's now happening. And what we're seeing in actually kind of like pulling that, that future forward with, with, with Robo Flow.[00:46:21] Segment Anything Model[00:46:21] So we could talk a bit about, um, about SAM and what it represents maybe into, in relation to like these, these YOLO models. So Sam is Facebook segment Everything Model. It came out last week, um, the first week of April.[00:46:34] It has 24,000 GitHub stars at the time of, of this recording within its first week. And why, what does it do? Segment? Everything is a zero shot segmentation model. And as we're describing, creating masks is a very arduous task. Creating masks of objects that are not already represented means you have to go label a bunch of masks and then train a model and then hope that it finds those masks in new images.[00:47:00] And the promise of Segment anything is that in fact you just pass at any image and it finds all of the masks of relevant things that you might be curious about finding in a given image. And it works remarkably. Segment anything in credit to Facebook and the fair Facebook research team, they not only released the model permissive license to move things forward, they released the full data set, all 11 million images and 1.1 billion segmentation masks and three model sizes.[00:47:29] The largest ones like 2.5 gigabytes, which is not enormous. Medium ones like 1.2 and the smallest one is like 400, 3 75 megabytes. And for context,[00:47:38] for, for people listening, that's six times more than the previous alternative, which, which is apparently open images, uh, in terms of number images, and then 400 times more masks than open[00:47:47] images as well.[00:47:48] Exactly, yeah. So huge, huge order magnitude gain in terms of dataset accessibility plus like the model and how it works. And so the question becomes, okay, so like segment. What, what do I do with this? Like, what does it allow me to do? And it didn't Rob float well. Yeah, you should. Yeah. Um, it's already there.[00:48:04] You um, that part's done. Uh, but the thing that you can do with segment anything is you can almost, like, I almost think about like this, kinda like this model arbitrage where you can basically like distill down a giant model. So let's say like, like let's return to the package example. Okay. The package problem of, I wanna get a text when a package appears on my front porch before segment anything.[00:48:25] The way that I would go solve this problem is I would go collect some images of packages on my porch and I would label them, uh, with bounding boxes or maybe masks in that part. As you mentioned, it can be a long process and I would train a model. And that model it actually probably worked pretty well cause it's purpose-built.[00:48:44] The camera position, my porch, the packages I'm receiving. But that's gonna take some time, like everything that I just mentioned the
Bounding into Comics reports on culture and geek news, but doesn't ever hesitate to call out the hypocrisy, double-standards, and cronyism seen in those coastal, gilded, navel-gazing circles. Jonathan and Michael talk with John about Tolkien - past, present, and future!
Bounding into Comics reports on culture and geek news, but doesn't ever hesitate to call out the hypocrisy, double-standards, and cronyism seen in those coastal, gilded, navel-gazing circles. Jonathan and Michael talk with John about Tolkien - past, present, and future!
Bounding across the barren video game landscape in search of digital treasure and probably our parents, it's 1upsmanship! This time our boys take on the recently released and largely panned Playstation exclusive Forspoken in a desperate attempt to add their laundry list of gripes to the pile. But things take a turn toward the celestial as Michael and Adam discuss the value of the Hero's Journey as a story structure in 2023, the value of a game being made for financial considerations, and how the combat is actually sorta awesome when you finally get all your upgrades. Check it out!See omnystudio.com/listener for privacy information.
On the heels of launching my latest book, Our Book of Awesome, I'm enjoying the fellowship of two authors in my life — one of whom I met 22 years ago when I was in my final year at Queen's. Bounding into my life at the time came a young professor named Ajay Agrawal. And I mean bounding! He was cold calling left, right and center, dancing around the room, and extremely theatrical. As you listen to him you'll see why I found him so captivating and clairvoyant. Professor Ajay Agrawal has won Professor of the Year seven times! He's like Canada's Adam Grant. He is the co-author of the bestselling book, Prediction Machines: The Simple Economics of Artificial Intelligence, named one of the best tech books of the year by Forbes, The New York Times and The Economist. His latest book has just come out and it is called, Power and Prediction, also co-authored with Joshua Gans and Avi Goldfarb. Ajay is a tenured professor at Rotman, a research associate at The National Bureau of Economic Research in Cambridge, Massachusetts, founder of the Creative Destruction Lab, a not-for-profit program that helps start-ups launch, and co-founder of Next Canada dedicated to the development and training of young entrepreneurs. He is also a recent winner of the Order of Canada which is the highest civilian honor that Canada awards. Over the years I've gotten a chance to meet Ajay's truly lovely partner in life: Gina Buonaguro. And, guess what? She's a writer too! Ajay focuses on the future. Gina focuses on the past. Gina is originally from New Jersey but has been living in Toronto for many years. She started at Villanova University, all the way up to the University of British Columbia on a Fullbright Scholarship. Gina's written dozens of articles, won five writing grants and is the co-author of six historical fiction books, including her latest, The Virgins of Venice. You could not think of two books which are more different: The Virgins of Venice and Power and Prediction. One is a 500 year old historical fiction saga taking place in a convent with sexy nuns. And the other a deep dive into AI. I was intrigued by the relationship dynamics between them, what their books really say, and how their writing processes work. So I invited them, together, to come on 3 Books. I also asked Leslie to join the conversation. So the four of us sat down in Gina and Ajay's living room and we discussed questions like: what is the fate of girls in 16th century Venice, what does it mean for a city to be excommunicated, why has Uber been so revolutionary, what is the point vs systems solution in AI, how can books be shared and read together, what is an Untouchable Day, how can we think about living a little more intentionally, how does AI manipulate us today and it goes on and on and on. This is a wide ranging conversation that I think you will truly enjoy. Let's flip the page into Chapter 117 now… What You'll Learn: What does it mean for a city to be excommunicated? Why has Uber been so revolutionary? What is a point solution vs a system solution in AI? What is holding AI back? How can books be shared and read together? What is a writer's group? What is the power of reading aloud to our kids? How can we bring in more quiet into our busy urban lives? What is an untouchable day? How can we live intentionally? How does AI manipulate us? What are the challenges of raising kids in a tech centric world? Why are young people finding social interactions so awkward these days? What is the Chinese solution to screen time? What is the tension between ideology and critical thinking? How can we encourage more critical thinking? How can we temper cancel culture? What is the role of school today? You can find show notes and more information by clicking here: https://www.3books.co/chapters/117 Leave us a voicemail. Your message may be included in a future chapter: 1-833-READ-A-LOT. Sign up to receive podcast updates here: https://www.3books.co/email-list 3 Books is a completely insane and totally epic 15-year-long quest to uncover and discuss the 1000 most formative books in the world. Each chapter discusses the 3 most formative books of one of the world's most inspiring people. Sample guests include: Brené Brown, David Sedaris, Malcolm Gladwell, Angie Thomas, Cheryl Strayed, Rich Roll, Soyoung the Variety Store Owner, Derek the Hype Man, Kevin the Bookseller, Vishwas the Uber Driver, Roxane Gay, David Mitchell, Vivek Murthy, Mark Manson, Seth Godin, Judy Blume and Quentin Tarantino. 3 Books is published on the lunar calendar with each of the 333 chapters dropped on the exact minute of every single new moon and every single full moon all the way up to 5:21 am on September 1, 2031. 3 Books is an Apple "Best Of" award-winning show and is 100% non-profit with no ads, no sponsors, no commercials, and no interruptions. 3 Books has 3 clubs including the End of the Podcast Club, the Cover to Cover Club, and the Secret Club, which operates entirely through the mail and is only accessible by calling 1-833-READ-A-LOT. Each chapter is hosted by Neil Pasricha, New York Times bestselling author of The Book of Awesome, The Happiness Equation, Two-Minute Mornings, etc. For more info check out: https://www.3books.co
This episode continues our discussion with AI researcher Aleksa Gordić from DeepMind on understanding today's most advanced AI systems.00.07 This episode builds on Episode 501.05 We start with GANs – Generative Adversarial Networks01.33 Solving the problem of stability, with higher resolution03.24 GANs are notoriously hard to train. They suffer from mode collapse03.45 Worse, the model might not learn anything, and the result is pure noise03.55 DC GANs introduced convolutional layers to stabilise them and enable higher resolution04.37 The technique of outpainting05.55 Generating text as well as images, and producing stories06.14 AI Dungeon06.28 From GANs to Diffusion models06.48 DDPM (De-noising diffusion probabilistic models) does for diffusion models what DC GANs did for GANs07.20 They are more stable, and don't suffer from mode collapse07.30 They do have downsides. They are much more computation intensive08.24 What does the word diffusion mean in this context?08.40 It's adopted from physics. It peels noise away from the image09.17 Isn't that rewinding entropy?09.45 One application is making a photo taken in 1830 look like one taken yesterday09.58 Semantic Segmentation Masks convert bands of flat colour into realistic images of sky, earth, sea, etc10.35 Bounding boxes generate objects of a specified class from tiny inputs11.00 The images are not taken from previously seen images on the internet, but invented from scratch11.40 The model saw a lot of images during training, but during the creation process it does not refer back to them12.40 Failures are eliminated by amendments, as always with models like this12.55 Scott Alexander blogged about models producing images with wrong relationships, and how this was fixed within 3 months13.30 The failure modes get harder to find as the obvious ones are eliminated13.45 Even with 175 billion parameters, GPT-3 struggled to handle three digits in computation15.18 Are you often surprised by what the models do next?15.50 The research community is like a hive mind, and you never know where the next idea will come from16.40 Often the next thing comes from a couple of students at a university16.58 How Ian Goodfellow created the first GAN17.35 Are the older tribes described by Pedro Domingos (analogisers, evolutionists, Bayesians…) now obsolete?18.15 We should cultivate different approaches because you never know where they might lead19.15 Symbolic AI (aka Good Old Fashioned AI, or GOFAI) is still alive and kicking19.40 AlphaGo combined deep learning and GOFAI21.00 Doug Lennart is still persevering with Cyc, a purely GOFAI approach21.30 GOFAI models had no learning element. They can't go beyond the humans whose expertise they encapsulate22.25 The now-famous move 37 in AlphaGo's game two against Lee Sedol in 201623.40 Moravec's paradox. Easy things are hard, and hard things are easy24.20 The combination of deep learning and symbolic AI has been long urged, and in fact is already happening24.40 Will models always demand more and more compute?25.10 The human brain has far more compute power than even our biggest systems today25.45 Sparse, or MoE (Mixture of Experts) systems are quite efficient26.00 We need more compute, better algorithms, and more efficiency26.55 Dedicated AI chips will help a lot with efficiency26.25 Cerebros claims that GPT-3 could be trained on a single chip27.50 Models can increasingly be trained for general purposes and then tweaked for particular tasks28.30 Some of the big new models are open access29.00 What else should people learn about with regard to advanced AI?29.20 Neural Radiance Fields (NERF) models30.40 Flamingo and Gato31.15 We have mostly discussed research in these episodes, rather than engineering
Jesus becomes the intriguing guest of honor for a Pharisee and his religious companions' party this week on our Journey to Jerusalem. During the party, Jesus neglects to wash his hands before the meal. Gasp. And this seemingly insignificant decision tumbles into a robust discourse about religious practices, the inner health of the soul, and the vibrant outer life Jesus desires his friends to have in the world. And it's not really about following a detailed or rigorous list of religious rules. So what's the key to a vibrant kind of life? Join us this Sunday at 10 AM at Hillside Covenant to find out. Worship with Hillside Covenant Church as Dan Seitz teaches from Luke 12:13-34. To view or download a copy of this week's sermon notes follow this link: http://u.pc.cd/K16ctalK[ro1] Download the study guide “Facing the Journey Ahead” which accompanies Hillside's Fall sermon series on the Book of Luke “Journey to Jerusalem”. Follow this link to download the study guide: http://u.pc.cd/MlD If you are new to Hillside and are looking for ways to get connected and build community, visit our “Get Connected” page: https://hillsidecovenant.churchcenter.com/pages/get-connected We welcome you to Hillside and are so glad you joined us today! To give in support of Hillside Covenant and its ministries follow this link: https://hillsidecovenant.churchcenter.com The full service from Hillside Covenant Church, Sunday, September 25, 2022.
Jesus becomes the intriguing guest of honor for a Pharisee and his religious companions' party this week on our Journey to Jerusalem. During the party, Jesus neglects to wash his hands before the meal. Gasp. And this seemingly insignificant decision tumbles into a robust discourse about religious practices, the inner health of the soul, and the vibrant outer life Jesus desires his friends to have in the world. And it's not really about following a detailed or rigorous list of religious rules. So what's the key to a vibrant kind of life? Join us this Sunday at 10 AM at Hillside Covenant to find out. Worship with Hillside Covenant Church as Dan Seitz teaches from Luke 12:13-34. To view or download a copy of this week's sermon notes follow this link: http://u.pc.cd/K16ctalK Download the study guide “Facing the Journey Ahead” which accompanies Hillside's Fall sermon series on the Book of Luke “Journey to Jerusalem”. Follow this link to download the study guide: http://u.pc.cd/MlD If you are new to Hillside and are looking for ways to get connected and build community, visit our “Get Connected” page: https://hillsidecovenant.churchcenter.com/pages/get-connected We welcome you to Hillside and are so glad you joined us today! To give in support of Hillside Covenant and its ministries follow this link: https://hillsidecovenant.churchcenter.com The sermon from Hillside Covenant Church, Sunday, September 25, 2022.
Jeroomie presents with suspected hypovolemia after a long outing in the sun. Which of the following is the MOST anticipated finding: A. Arterial dilation B. Elevated heart rate with standing C. Bounding pulse D. Elevated central venous pressure LINKS MENTIONED: Did you get this question wrong?! If you were stuck between two answers and selected the wrong one, then you need to visit www.NPTEPASS.com, to learn about the #1 solution to STOP getting stuck. Are you looking for a bundle of Coach K's Top MSK Cheatsheets? Look no further: www.nptecheatsheets.com If you liked these tips, follow us on TikTok! --- Support this podcast: https://anchor.fm/thepthustle/support
What's your favourite thing to wear to the parks? The Wafflers sit down for a chat with good friend Robyn, to talk about different ways to show your Disney side by what you wear. Whether you go all out for Dapper Days, get creative with Disney bounding, or simply pack a case full of Disney-related t-shirts, there will be ideas here for you. Check out Robyn's designs- she has a great range of pocket tees, headbands, circle skirts and more! You can buy them from her online shop, Sew Tiki UK, at sewtiki.co.uk In addition, if you'd like to show your love of the Great British Mickey Waffle, stop by our website at gbmickeywaffle.com and click on ‘Support' to find links to our t-shirt designs on Redbubble or TeePublic. Hope you Enjoy the Show and Waffle On! Want More GB Mickey Waffle? Visit The Website: https://www.gbmickeywaffle.com Watch Here: https://www.youtube.com/TheGreatBritishMickeyWaffle Connect on Social Media: Facebook: https://www.facebook.com/gbmickeywaffle Instagram: https://www.instagram.com/gbmickeywaffle Twitter: https://www.twitter.com/gbmickeywaffle
Hey Howdy Hey! Welcome to episode 129 of Disney Assembled!!! We continue our report on our recent WDW trip. This week, we talk about Animal Kingdom and Disney's Hollywood Studios. If you enjoy the show, please consider helping us out. Buy us a Dole Whip! Become a member and get extra content on Patreon. Subscribe to the show on your podcast player of choice. If you are able, consider giving us a 5-star rating and review. Find and subscribe to our YouTube channel. Visit our Disney Assembled store to purchase some excellent Disney Assembled merch. Have a Disney Dad Joke of the Week for consideration? Have an idea for a future episode? Want to share a story for our "What the...?" series? Follow and shoot us a message on our socials: @DisneyAssembled on Facebook, Instagram, Twitter, and TikTok or email us at disneyassembled@gmail.com. Our show can also be heard on Tuesday, Thursday, and Saturday at 9am EST on Magic of the Mouse Radio. Be sure to check out our Spotify Playlist, "Disney Assembled's Favorites" - click here to listen. You can find links to all of the above, as well as links to where you can subscribe to the show, on our website: disneyassembled.com See you real soon! Troy & Mimi :) --- Send in a voice message: https://anchor.fm/disneyassembled/message
Intro Song: Politicking (By Eric July) https://ericjuly.bandcamp.com/album/rap-circle (https://ericjuly.bandcamp.com/album/rap-circle) All our links in one location! https://direct.me/theunderground (https://direct.me/theunderground) Bitcoin Address - 31oA5KzqUT7VQVZNeziaSF4B51xb921v1q Fountain Podcast App https://fountain.fm/ (https://fountain.fm/) The Rings of Power Main Teaser https://www.youtube.com/watch?v=ewgCqJDI_Nk (https://www.youtube.com/watch?v=ewgCqJDI_Nk) A Change in Marketing from Amazon https://ew.com/tv/lord-of-the-rings-the-rings-of-power-numenor-first-look/ (https://ew.com/tv/lord-of-the-rings-the-rings-of-power-numenor-first-look/) Empire Article where it contradicts the “based on” part https://www.empireonline.com/tv/news/lotr-the-rings-of-power-compete-peter-jackson-trilogy-game-of-thrones-exclusive/ (https://www.empireonline.com/tv/news/lotr-the-rings-of-power-compete-peter-jackson-trilogy-game-of-thrones-exclusive/) https://www.empireonline.com/movies/news/mpire-issue-preview-lord-of-the-rings-the-rings-of-power-bullet-train-tim-burton-stranger-things-copland/ (https://www.empireonline.com/movies/news/mpire-issue-preview-lord-of-the-rings-the-rings-of-power-bullet-train-tim-burton-stranger-things-copland/) Bounding into Comics LOTR Article https://boundingintocomics.com/2022/07/18/rumor-the-lord-of-the-rings-scholar-tom-shippey-fired-for-telling-prime-video-they-were-polluting-the-lore/ (https://boundingintocomics.com/2022/07/18/rumor-the-lord-of-the-rings-scholar-tom-shippey-fired-for-telling-prime-video-they-were-polluting-the-lore/)
Bounding into the cave today is artist and creator Celestian Brightmoon! We had a rad discussion about their path into the arts, the seeds X-Men planted, and then Celestian visits the Forbidden Restaurant. To find more of Celestian's work check out https://www.illustratoroffantasies.com/ Learn more about your ad choices. Visit megaphone.fm/adchoices
Designers often would like to get a clear view of what they are designing in their viewport. This can be disrupted by simple tools like safe-title markers and bounding boxes around objects in the scene. Now, XPression users can show/hide them with a keystroke. Living Live! with Ross Video www.rossvideo.com/XPression-U
Ang aka ThatMagicalOhana on Instagram joins me to talk about some of her favorite fandoms! She talks to me about Marvel, Star Wars and Disney. We talk about favorite movies, characters, and projects that Ang is looking forward to seeing. Ang also talks to me about fandom fashion, bounding (or Disneybounding), and cosplay. You can find Ang at: https://www.instagram.com/thatmagicalohana/
Join us on the Fireside Chat with John F. Trent, author & Editor in Chief of cultural website Bounding Into Comics, providing comic book exclusives & industry insights since 2014, a comprehensive hub for nerd culture.
Boundaries are a struggle to set, pain to enforce, and if they are placed wrong. Today we will examine them from a very high-level overview. I will share what I have uncovered on how to set enforceable, healthy, self-protecting boundaries. Oh and please excuse my voice. I was just recovering from COVID when I recorded this. Music is original and produced by Chad Rohrbaugh at Rohr Sound Studios. Support this podcast at — https://redcircle.com/that-emotional-/exclusive-content Advertising Inquiries: https://redcircle.com/brands Get bonus content on Patreon See acast.com/privacy for privacy and opt-out information.
The back half of the show is all about Harlequins. The front half is all about the hobby and mostly about basing your models. It’s an important topic. Continuing our … Read More
Learning how to help yourself insteading of listen to people that never have.
Cryptid Campfire heads to Australia this week to discuss Australia's living Dino! Said to roam the Outback, this T-Rex hunts Kangaroos and cattle, causing all who see it to run away in terror… or perhaps its a figment of their imagination! Visit us at www.cryptidcampfire.com where you can get yourself sweet Campfire Gear! Interested in hearing more? Subscribe to us on Patreon and get access to exclusive episodes, wall papers, behind the scenes looks and more!
In this episode I discuss the tool Strategy Bounding. Strategy Bounding allows us to make quantitative decisions regarding balancing major program and product factors like time to market, cost point, reliability, new feature development
Episode 50: Spontaneous Bounding BouldersAs Hazel floats downThe River of No Return, her rocky situation becomes even harder. Voice Talents (in no particular order)Ashton Haugen as The Babbling BrookCharlee Leonard as Hazel PeachwoodNorman Leonard as The NarratorMusicPip HeywoodCobbler's Gulch LogoMegan SwietonSound FX AttributionsSpecial ThanksNon WelsSubjective Special ThanksGwendolyn OxenhamAll the GratitudeBecky Leonard