Podcasts about input output

Communication between an information processing system and the outside world

  • 120 podcasts
  • 154 episodes
  • 35m average episode duration
  • 1 new episode per month
  • Latest episode: May 28, 2025

POPULARITY

[Popularity chart, 2017-2024]


Best podcasts about input output

Latest podcast episodes about input output

Late Confirmation by CoinDesk
Privacy Is Crypto's 'Number 1 Demon': Charles Hoskinson | Markets Daily

May 28, 2025 · 13:53


The latest price moves and insights with Input Output founder and CEO Charles Hoskinson. To get the show every day, follow the podcast here. Input Output founder and CEO Charles Hoskinson joins CoinDesk Live at Consensus to weigh in on the future of crypto, highlighting archaic financial middlemen, expected market cycles, and the increasing adoption of cryptocurrencies by global governments. This content should not be construed or relied upon as investment advice; it is for entertainment and general information purposes. This episode was hosted by Jennifer Sanasie and Sam Ewen. “Markets Daily” is produced by Jennifer Sanasie and edited by Victor Chen.

Markets Daily Crypto Roundup
Privacy Is Crypto's 'Number 1 Demon': Charles Hoskinson

May 28, 2025 · 13:53


The latest price moves and insights with Input Output founder and CEO Charles Hoskinson. To get the show every day, follow the podcast here. Input Output founder and CEO Charles Hoskinson joins CoinDesk Live at Consensus to weigh in on the future of crypto, highlighting archaic financial middlemen, expected market cycles, and the increasing adoption of cryptocurrencies by global governments. This content should not be construed or relied upon as investment advice; it is for entertainment and general information purposes. This episode was hosted by Jennifer Sanasie and Sam Ewen. “Markets Daily” is produced by Jennifer Sanasie and edited by Victor Chen.

Marcus Today Market Updates
Pre-Market Report – Wednesday 21st May – US Markets slip slightly – SPI up 48 – Gold bounces 1.8%

May 20, 2025 · 9:30


Wall Street recorded a negative session as markets contemplated US fiscal concerns while Congress debated a tax-cut bill, ending a streak of six consecutive gains for the S&P 500. S&P 500 down 0.39%, NASDAQ down 0.38%. Dow down 115 points. Dropped at open and maintained that level for most of the session. Ended mid range. Mostly negative sector performance. Energy once again the worst sector. Long-term global growth concerns. Growth sectors performed poorly. Tech down. Alphabet down 1.5% despite pushing AI at its annual Input/Output conference. Cyclicals also down. Tesla (+0.5%) a rare positive as Musk stated it's ready to trial its robotaxi in Austin this June. Financials also eased as growth and fiscal concerns came into the spotlight. JP Morgan (+0.3%) the exception as shareholders approved executive pay packages and the appointment of new directors. Utilities and Healthcare the only two sectors up. Utilities benefitted from a shift into defensives. Healthcare regained some of its recent losses after Trump stated tariffs on the sector are on the way – no update as of yet. Home Depot down 0.6% despite beating estimates. Stated it would swallow tariffs rather than pass them on to consumers. Resources up. A weaker dollar a boost. Aluminium, zinc, lead all up over 1%. ASX to rise. SPI futures up 48 points (+0.62%). Want to invest with Marcus Today? The Managed Strategy Portfolio is designed for investors seeking exposure to our strategy while we do the hard work for you. If you're looking for personal financial advice, our friends at Clime Investment Management can help. Their team of licensed advisers operates across most states, offering tailored financial planning services. Why not sign up for a free trial? Gain access to expert insights, research, and analysis to become a better investor.

Beyond The Valley
Cardano founder Hoskinson talks bitcoin's $250,000 future

Apr 17, 2025 · 52:51


Charles Hoskinson, founder of Input Output and the Cardano blockchain, speaks to CNBC's Arjun Kharpal about his views on the future of crypto, including why he thinks bitcoin could rally to $250,000. Hoskinson, who is also an Ethereum co-founder, explains why he thinks large technology companies like Apple and Microsoft will begin to use stablecoins. Plus, he discusses where Cardano is heading from a technology perspective. This episode was recorded at Paris Blockchain Week as part of a Beyond The Valley crypto mini-series.

The Brave Marketer
Blockchain & AI Revolution: Safeguarding Digital Transactions and Empowering User Privacy

Feb 26, 2025 · 34:59


Chris Ghent, Chief Growth Officer at Input Output, explores the intersection of blockchain technology and artificial intelligence, focusing on the empowerment of users through decentralization and privacy advocacy. He discusses strategies crucial for community engagement, the importance of developing meaningful relationships in tech, and the role of privacy in shaping the future of digital transactions.

Key Takeaways:
  • The evolution and integration of AI and blockchain
  • Strategies for effective marketing in the blockchain space
  • The role of privacy in digital transactions
  • The potential of AI to enhance user experience and security
  • Navigating volatility within the cryptocurrency market
  • The future of digital identities and user empowerment

Guest Bio: Chris Ghent is the Chief Growth Officer at Input Output. With 16+ years in marketing and entrepreneurship, he specializes in advancing human-centric blockchain technology and driving usability and adoption.

About this Show: The Brave Technologist is here to shed light on the opportunities and challenges of emerging tech, and to make it digestible, less scary, and more approachable for all. Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you're a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together. The Brave Technologist Podcast is hosted by Luke Mulks, VP Business Operations at Brave Software, makers of the privacy-respecting Brave browser and Search engine, and now powering AI everywhere with the Brave Search API. Music by: Ari Dvorin. Produced by: Sam Laliberte

Alpha and Omega Christian Fellowship

Input Output Outcomes by Alpha and Omega Christian Fellowship "Weekly Sermon"

Tech Lead Journal
#197 - Beyond Input & Output: Building Outcome-Oriented Engineering Teams - Balki Kodarapu

Nov 4, 2024 · 55:59


“Input, Output, Outcome, and Impact. It's an escalating way of where to spend my time as an engineering leader, and more importantly, where my engineering team is spending their time on.”

Balki Kodarapu is the VP of Engineering at Lōvu Health and a seasoned engineering leader with a wealth of experience from startups to large organizations. In this episode, Balki shares his valuable insights on how to build and lead high-performing engineering teams that go beyond just churning out code. We go deep into his practical framework for driving outcomes and impact, emphasizing why it's crucial for engineers to understand the 'why' behind their work. Balki also shares effective strategies for setting, communicating, and reinforcing engineering values. We also discuss the importance of connecting with your team, practicing gratitude and curiosity, and measuring engineering metrics effectively. Tune in to gain valuable insights and practical tips for building outcome-oriented engineering teams and becoming a more effective leader.

Listen out for:
  • Career Turning Points - [00:01:55]
  • Impact & Outcome Driven Engineering - [00:05:50]
  • Helping Engineering Connect to the Outcomes - [00:11:52]
  • Balancing Engineers' Focus Time - [00:16:18]
  • Key Engineering Metrics: Releasing with Joy & Confidence - [00:18:46]
  • Engineering Metrics Other Org Functions Care About - [00:23:01]
  • Setting Engineering Values - [00:30:33]
  • How to Create Engineering Values - [00:36:16]
  • Communicating Values - [00:40:18]
  • Practicing Gratitude & Curiosity - [00:43:59]
  • 3 Tech Lead Wisdom - [00:49:49]

Balki Kodarapu's Bio: Balki Kodarapu is an all-in engineering leader and entrepreneur at heart. Balki has a proven track record of leading SaaS products from inception to hyper-growth, helping companies achieve 2x to 10x revenue growth, including two successful exits. He loves being a hands-on engineer, director, and VP of Engineering (all at once!), contributing daily, shaping product strategy, and building high-performing teams. Currently, Balki leads engineering at Lōvu Health, where his team helps create positive, joyful, and healthy experiences for pregnant and postpartum moms every single day. Follow Balki: LinkedIn – linkedin.com/in/balki

Our Sponsors: Enjoy an exceptional developer experience with JetBrains. Whatever programming language and technology you use, JetBrains IDEs provide the tools you need to go beyond simple code editing and excel as a developer. Check out FREE coding software options and special offers on jetbrains.com/store/#discounts. Make it happen. With code. Manning Publications is a premier publisher of technical books on computer and software development topics for both experienced developers and new learners alike. Manning prides itself on being independently owned and operated, and on paving the way for innovative initiatives, such as early access book content and protection-free PDF formats that are now industry standard. Get a 40% discount for Tech Lead Journal listeners by using the code techlead24 for all products in all formats.

Like this episode? Show notes & transcript: techleadjournal.dev/episodes/197. Follow @techleadjournal on LinkedIn, Twitter, and Instagram. Buy me a coffee or become a patron.

Wits & Weights: Strength and Nutrition for Skeptics
Is Excess Protein Turned Into Sugar and Fat? (Input-Output Systems) | Ep 237

Oct 30, 2024 · 13:56 · Transcription available


Is that scoop of protein powder helping you build muscle, or is it just being wasted and turned into sugar and fat? Today, we're using the engineering concept of Input-Output Systems to bust this common protein myth and help you make informed decisions about your protein intake and supplements like whey and pea/rice powder. Listener Sara S. asked about claims that protein powder isn't used by the body and is instead converted to sugar and fat. We dig into the science of protein metabolism and explain why these claims don't hold up to scrutiny. Learn how to choose the right protein powder for your goals and why it can be a valuable tool in optimizing your overall nutrition strategy. To get your question answered on a future episode, send me a text message. Try 1st Phorm protein powder as mentioned on the episode.

Main Takeaways:
  • Your body is an efficient input-output system that uses protein powder (and any "extra" protein) in a very specific way
  • High-quality protein powders, especially whey, are highly bioavailable and can be just as good as (or superior to) many whole-food protein sources for muscle protein synthesis
  • When choosing a protein powder, there are specific objective elements you should look for rather than believing any particular marketing claim
  • Protein powder can help optimize your overall nutrition by making it easier to meet protein goals, and it has a surprising benefit when building muscle in a gaining phase

Men Talking Mindfulness
Your Precious Energy: Input, Output, Wasted, or Optimized

Sep 2, 2024 · 64:42


Want to optimize your energy and avoid the energy sucking vampires in your life? Learn how to budget your energy like a boss and avoid the drama that drains you. From breathwork to gratitude, we cover it all to keep your vibes high and your energy levels soaring. Learn how to kick negativity to the curb and embrace the power of gratitude to keep your energy on point. So, grab your whoop, tape your mouth shut, and get ready to level up your energy game with us! Timestamps: (00:00) "Mastering Your Energy: A Mindful Approach" (04:03) Focusing on Positive Energy (06:30) Optimizing Your Energy for a Balanced Life (07:30) Managing Energy for Optimal Time Use (09:20) Ways We Waste Our Energy (11:09) Navigating Energy Levels and Reducing Unnecessary Stress (15:27) Avoiding Unnecessary Stress and Energy Drain (16:56) Avoiding Burnout through Time Management and Setting Boundaries (18:27) Choosing Your Social Circle (19:49) Optimizing Energy for Productivity and Well-being (58:02) Optimizing Energy for Enhanced Performance Support our podcast here! https://podcasters.spotify.com/pod/show/mentalkingmindfulness/support --- Support this podcast: https://podcasters.spotify.com/pod/show/mentalkingmindfulness/support

Language Input Podcast
Full S4x08 - Bridging the INPUT-OUTPUT Gap, an Honest Conversation about Language Learning - with @joreneelanguages

May 28, 2024 · 63:03


From her experience learning Spanish to honest advice on how to learn any foreign language and much more, today I welcome Jo to the show❗ Welcome to season 4 of the Language Input Podcast, in which I'll continue to have many interviews with language teachers, polyglots, language learners... to help you better understand the language learning process and try to convince you that we can ALL learn ANY language while enjoying the process. And I'll add live Q&A sessions and shorter rapid-fire questions to the mix. Follow me on all my social media for new daily content related to language learning, especially if you're looking to learn my native language Spanish. 🎬 Youtube: https://www.youtube.com/channel/UC5VQO82Gf2c-bmiTPI2h7fA 💻Twitch: ⁠⁠⁠⁠⁠⁠https://www.twitch.tv/spanishnaturalanguages⁠⁠⁠⁠⁠⁠ 📹 Instagram: ⁠⁠⁠⁠⁠⁠https://www.instagram.com/spanishnaturalanguages/⁠⁠⁠⁠⁠⁠ 📱 Tik Tok: ⁠⁠⁠⁠⁠⁠https://www.tiktok.com/@naturalanguagesspanish⁠⁠⁠⁠⁠⁠ ✍️Twitter: ⁠⁠⁠⁠⁠⁠https://twitter.com/NaturaLanguages⁠

Aja's & Claire Simone's Ketch A Vibe Show
Episode 208: Ketch A Vibe 730 With Jazzy The Gee.

Apr 13, 2024 · 55:25


Massive thanks to the incredible "Jazzy The Gee" for stepping in and covering for us two. Aja & Claire

1. Jack Kerouac - Bottom Of My Shoe
2. Mark Murphy - The Bad And The Beautiful
3. Mario Biondi - Give Our Love Another Chance
4. Kurt Elling - Can't Make It With Your Brain
5. Iain Mackenzie - Frankie
6. Papik - You're The First My Last My Everything
7. Black Legacy Project - 41 Shots
8. Maniko Yoshida - Moment Of Twilight
9. Irina Pavlovic Ft Dean Bowman - The Soulful Heritage
10. Tony Finch Marino - Comes For Real
11. Emily Saunders - Sideways
12. Lea Mondo - Vices
13. Input Output - Eye To Eye
14. Superbad - Beyond
15. Super Db & Jean-Michel Sutcliffe - Side By Side
16. Harvey Mason Ft George Benson - What's Going On
17. Alessandro Brunetta - Wasting The Night
18. Papik & Frankie Lovecchio - When Everything Is Falling
19. Roy Ayers - What's The T (Delfonic Edit)
20. NY Hustlers - Fly Island
21. Bernard Pretty Purdie & The Playboys - Artificialness
22. Milton Wright - Po' Man
23. Lonnie Liston Smith - We Can Dream

Greg Durante Podcast
Truth Inside Out Input Output

Apr 6, 2024 · 55:02


What's inside of you will eventually show on the outside. No matter how we try to cover it up, it will find its way out. So fill your life with God's Words, so that when tragedy or trial or problems seem to be knocking you off your feet, even if you stumble and fall, you'll spill what you have inside: God's Words, His promises, His life, His goodness and faithfulness. You spill it because you are full and overflowing. You cannot spill anything from an empty cup. So fill it up. Read the Bible and pray for understanding.

Diversilingua
D#14 - Input & output

Mar 23, 2024 · 16:24


Do you want to know about Stephen Krashen's theories? Have you heard about Input & Output in the language learning process? In this video, we aim to discuss Stephen Krashen's work and how you can leverage it to acquire fluency in your target language. Diversilíngua's Telegram group for English practice meetups: https://t.me/diversilinguapod Follow us on social media: Youtube: https://www.youtube.com/@Diversilingua Facebook: https://www.facebook.com/diversilingua/ Instagram: https://www.instagram.com/diversilingua/ --- Send in a voice message: https://podcasters.spotify.com/pod/show/diversilingua/message

podKASt - Der Kaindl Athletic System Podcast
3 Dinge die Du über Trainings glaubst (und die falsch sind) - podKASt 116

Feb 4, 2024 · 44:54


Welcome back to Kaindl Athletic System! In today's episode, podKASt 116, we debunk three widespread training myths that you probably believe to be true.

Mansoor Danish
Ep 7 Inside the Mind The Input - Output Symphony

Feb 3, 2024 · 4:35


Manifest It A.L.L. Podcast
The Greatest Way to Be of Service to the Planet | Ep. 229

Dec 6, 2023 · 17:19


Have a big heart + a big mission? Do you want to effect massive change on the planet? Then this is the episode for you! How do you actually be of greatest service to the planet? Hint: it starts with you and who you are BEING and the amount of personal transformational work you've done. Input = Output. Whatever energetic state you are in when you take action is in direct correlation to the results you experience. Which means, the more work you do to move through your own trauma, fears + insecurities, the more you love and believe in yourself, and the more love and light you will be able to pour out into the planet. ❤️ “The most service you could ever be of for yourself and others is to decide you are good enough. And from that place of abundance you will have/be/do more than you ever imagined because you cannot give from an empty cup.” Tune into this week's episode filled with powerful reminders of the truth of who you really are. Mentioned in the Episode: Transforming Fear to Faith > https://members.emyraldsinclaire.com/product/transforming-fear-to-faith-make-decisions-aligned-with-love/ Embodied - it's not what you learn but who you become in the process > https://emyraldsinclaire.com/embodied Connect with Emyrald: www.facebook.com/groups/becometheempress www.instagram.com/manifestwithemyrald www.emyraldsinclaire.com

經理人
EP254【經理人讀書會】如何從小養成閱讀習慣、有系統拆解一本書、寫作更有邏輯?《經理人月刊》副總編輯張玉琦分享她的閱讀體悟!

Dec 5, 2023 · 46:58


Can't find time to read, but want to absorb new knowledge quickly? 經理人 has launched a brand-new product, the 「好書快讀」(Quick Book Reads) podcast! Subscribe via Apple Podcasts for NT$150 per month to get the paid content of two shows: the 經理人 podcast 「好書快讀」 plus the daily management podcast series 「原子閱讀」(Atomic Reading). 好書快讀: one episode per day, Monday through Friday; each book is covered in 3 episodes of about 8 minutes each, distilling the highlights of a good book, so you can get through about 6-7 books a month, with complex content turned into practical, easy-to-absorb techniques you can apply right away. 原子閱讀: a new episode every day; each 3-minute episode teaches a practical workplace skill and introduces a good book. Struggling to build a reading habit? How do you accumulate knowledge in your professional field? How do you apply what you've read? Listen in as 《經理人月刊》 deputy editor-in-chief 張玉琦 and senior editor 邵蓓宣 share their approach to reading.

Digital Freethought Radio Hour
#347 – Input-Output & Who are you and why

Dec 3, 2023 · 59:20


Atheists talk about information bubbles, the societies that shape who you are, the ideas you are (or are not) allowed to be exposed to within that bubble, and how all of that forms the "you" that you turn out to be.

Seismic Soundoff
199: How geophysics keeps people safe

Sep 21, 2023 · 17:21


Steve Roche discusses his current Geoscientists without Borders project addressing volcano preparedness in Guatemala. Steve's GWB project addresses the geohazard resiliency and safety of the communities in Guatemala. His project implements community-based educational workshops about earthquake and volcanic hazards. Steve's project also works to increase Guatemala's seismic and volcanic monitoring capacity while reducing disaster response time. In this inspiring conversation with host Andrew Geary, Steve provides his on-the-ground perspective of the project. He offers what has been accomplished and his vision for the project's future. This podcast takes joy in highlighting the humanitarian work of geophysicists worldwide. And this conversation is no exception in sharing how geophysics can impact communities through using the tools and knowledge that geophysicists have to give. And Steve is the perfect guest to showcase all that can be accomplished. RELATED LINKS * Read more about Steve's project, Increasing Natural Hazard Resiliency in Guatemala - https://seg.org/gwb_projects/guatemala-2/ * Explore the seismic monitors placed in Guatemala (and all around the world) - https://stationview.raspberryshake.org/ * Listen to our previous episode on Silvio De Angelis's project in Guatemala - https://seg.org/podcasts/episode-112-international-partnership-for-volcano-early-warning-a-gwb-story/ CALL FOR SUBMISSIONS The Early Career Subcommittee of the SEG Research Committee is receiving nominations of new members to serve the term 2023-2025. This subcommittee is open to graduate students active in research or early-career professionals up to three years post-graduation. As part of the SEG Research Committee, the Early-Career Subcommittee provides their opinion, advice, and vision to the research direction and goals of SEG from the perspective of career starters. If you are passionate about contributing to shaping the future of applied geophysics, please indicate your interest by sending a resume and cover letter to Xiaolei Tu at tuxl2009@hotmail.com before 30 September. SEISMIC SOUNDOFF WANTS TO HEAR FROM YOU! The podcast will celebrate 200 episodes on 5 October, and we want to hear from our listeners on this special milestone. * What's the most valuable thing you've learned from the show? * What surprised you? * What episode do you most share with others? Record your message today at https://www.speakpipe.com/SeismicSoundoff if you have answers to these questions and want to be showcased. BIOGRAPHY Steven L. Roche received his BSc in Geophysics from the University of California, Riverside, in June 1978. He worked for Geophysical Service, Inc. (GSI and HGS) as an Area Geophysicist for the Permian Basin Region of West Texas / Southeastern New Mexico. In January 1994, Steve returned to school, attending the Colorado School of Mines as a member of the Reservoir Characterization Project (RCP), studying multicomponent seismology and 4D applications. After receiving his Ph.D. in 1997, Steve joined Output Exploration, the oil and gas exploration division of Input/Output, working on exploration projects and multicomponent seismic applications within I/O. In 1999, Output Exploration, LLC (OPEX) became an independent oil and gas exploration company, and Steve participated in OPEX exploration efforts. Steve joined Veritas DGC in 2003, specializing in multicomponent applications in the position of Principal Geophysicist – Multicomponent Applications Group. 
Steve joined Cimarex Energy in Tulsa, Oklahoma, in 2011 as Manager of Geophysics for Cimarex until August 2017, when he joined the faculty within the Geoscience Department at The University of Tulsa. CREDITS Zach Bridges created original music for this show. Andrew Geary hosted, edited, and produced this episode at TreasureMint. The SEG podcast team is Jennifer Cobb, Kathy Gamble, and Ally McGinnis.

Human B Gon
On Humans 4: Input/Output

Sep 19, 2023 · 7:32


On Humans, Module 4: Input/Output and Gender Discussed: Placement, function and usage of human input/output slots, and gender theory Warning: Human excretions are extremely damaging to machine components. Direct contact with humans or its droppings is strongly discouraged. Courtesy of Droidston Research Institute Voice: Natalie Antaya Words: Drew Frohmann + Jake Bogoch Mix: Adam Ive Recorded, mixed and produced at TA2 Sound + Music HUMAN-B-GON is a TA2 Original Production Help preserve the humans' habitat. Donate at SaveGarbageIsland.org Find and support our sponsors at fableandfolly.com/partners Learn more about your ad choices. Visit megaphone.fm/adchoices

Mission To The Moon Podcast
Input ดีแค่ไหนก็ไม่เท่า Output ที่ทรงพลัง! สื่อสารอย่างไรให้ได้ผลลัพธ์ดังใจ (Part 2) | Remaster EP.134

Sep 16, 2023 · 20:39


No matter how well you've rehearsed, the moment you pick up the microphone you get nervous and forget everything you were going to say. So how should you practice to communicate impressively and professionally enough to win your audience over? Communication skills are essential for working people, but confidence alone isn't enough. In this MM Remaster episode, we continue to dig into the components of communication and practice powerful communication skills that deliver the results you intend, together with Kru Green of The Modern Melody Studio. Listen in this MM Remaster episode. #missiontothemoon #missiontothemoonpodcast #theremasterproject

The Rhythms Podcast
10. Episode 10 Q&A Special

Sep 10, 2023 · 63:01


It's time to hear Kris and Hannah answer YOUR questions about rhythms of life, about making the podcast, and about how personalised rhythms could help you, our listeners, to embrace and enjoy the season you're in! Find out more about your hosts, and discover how far and wide the impact that familiar loveliness could have in your life and the lives of fellow listeners! Links + resources from this episode: Check out Hannah & Kris' first podcast 'Family Time', which is now the first three episodes of the MORE Leadership Podcast. The Lazy Genius is one of our faves, always - check out her book and her podcast where she reminds us often to check our expectations If you're in the area, check out Hannah's family's favourite way to start the Christmas season: Cross Hills Garden Country Fair Listen to Episode 8 of our podcast, 'Input/Output' (if you haven't already) to hear about how you are always becoming you. James Clear's Atomic Habits Find us on Instagram at ⁠@itsrhythmspodcast⁠ Read a ⁠transcript⁠ of this episode --- Send in a voice message: https://podcasters.spotify.com/pod/show/itsrhythms/message

Mission To The Moon Podcast
Input ดีแค่ไหนก็ไม่เท่า Output ที่ทรงพลัง! สื่อสารอย่างไรให้ได้ผลลัพธ์ดังใจ | Remaster EP.133

Sep 9, 2023 · 24:28


No matter how brilliant our ideas are, without powerful communication they may go to waste. Communication skills are essential for working people, but confidence alone isn't enough. In this MM Remaster episode, we dig into the components of communication with Kru Green of The Modern Melody Studio, and practice delivering a powerful message that achieves the results you intend. Listen in this MM Remaster episode. #missiontothemoon #missiontothemoonpodcast #theremasterproject

The Rhythms Podcast
8. Input/Output

Aug 13, 2023 · 42:41


You are already you. But guess what.. You are also continually becoming you. You are being made. Everyday! So, what's making you? Join Kris and Hannah as they reflect on the ways in which their lives' inputs and influences are impacting their daily choices, habits and rhythms. Links + resources from this episode: Korie Robertson's Facebook post on influences and entertainment (that inspired this episode) @thatsmybookshelf on Instagram: recovering their books to look like Penguin Classics Watch one of Hannah's faves, SWAT, and then DM us with your hot takes! We want to hear from you! Do you have a question for Kris or Hannah about how a rhythm could support your daily life? An area of your life for them to troubleshoot with a RHYTHMS perspective? A follow up question from a previous episode? Or just curious to get to know Kris and Hannah more? We'll be answering your questions on Episode 10 of the pod, so share your question with us by emailing therhythmspodcast@gmail.com Read a transcript of this episode. Find us on Instagram at @itsrhythmspodcast --- Send in a voice message: https://podcasters.spotify.com/pod/show/itsrhythms/message

The Impossible Network
Dave Birss - Supercharging Our Creative Potential Through Effective Use Of AI

Aug 7, 2023 · 62:29


My guest this week is Dave Birss. With over 30 years of experience in creativity, technology, and innovation, Dave has become a highly respected and sought-after public speaker and trainer of creative minds. Dave has taken everything he has learned over the past three decades to become a polymath who shares insights into the worlds of creativity, innovation, and now AI. Currently, Dave is the most popular AI and prompt engineering instructor on LinkedIn Learning. His forward-thinking approach has put him at the forefront of the AI wave, allowing him to balance practical approaches with advancing technology. As he says, "In a world of myopic evangelists and apocalyptic naysayers," his guided, sensible approach helps embrace the AI future with our eyes wide open. By stripping away the BS of AI, he's an expert in explaining how it can really add value to an organization and to individuals. His award-winning books, including "How To Get To Great Ideas," "A User Guide To The Creative Mind," "Friction," and "Iconic Advantage," give readers insight into how great ideas can transform business. That is why the next book he is currently working on will explore the best ways to use AI in business. Now over to Dave.

Timecodes:
00:00 Intro
01:54 Dave's harmonica intro
03:58 Discovering his love of innovation and creativity
06:14 Becoming an accidental author
08:28 Overcoming barriers to creativity
10:14 Scratching the creative itch
11:40 Dave discusses his books
14:10 The IKEA effect
16:26 Becoming aware of AI's power
18:35 Dave's role in helping others learn AI
20:22 Why AI is different from other technologies
23:50 Giving AI a creative brief
26:18 Input Output, curiosity, and creativity
33:02 The parallels to the invention of photography
37:26 Dave's CREATE framework
42:00 Achieving more with GPT-4
43:54 MAD framework
46:30 The AI education imperative
53:38 Writing a textbook for teachers
57:52 What excites him today

Social Links: Dave Birss site, Dave Birss LinkedIn. Show notes: Logeriithms, Conor Grennan, Runway AI.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
FlashAttention 2: making Transformers 800% faster w/o approximation - with Tri Dao of Together AI

Jul 26, 2023 · 54:31


FlashAttention was first published by Tri Dao in May 2022 and it had a deep impact on the large language model space. Most open models you've heard of (RedPajama, MPT, LLaMA, Falcon, etc.) all leverage it for faster inference. Tri came on the podcast to chat about FlashAttention, the newly released FlashAttention-2, the research process at Hazy Lab, and more. This is the first episode of our “Papers Explained” series, which will cover some of the foundational research in this space. Our Discord also hosts a weekly Paper Club, which you can sign up for here.

How does FlashAttention work? The paper is titled “FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness”. There are a couple of keywords to call out:
  • “Memory-Efficient”: standard attention memory usage is quadratic with sequence length (i.e. O(N^2)); FlashAttention is sub-quadratic at O(N).
  • “Exact”: the opposite of “exact” in this case is “sparse”, as in “sparse networks” (see our episode with Jonathan Frankle for more). This means that you're not giving up any precision.
  • The “IO” in “IO-Awareness” stands for “Input/Output” and hints at a read/write-related bottleneck.

Before we dive in, consider the GPU architecture: the GPU has access to three memory stores at runtime:
  • SRAM: on-chip memory co-located with the actual execution core. It's limited in size (~20MB on an A100 card) but extremely fast (19TB/s total bandwidth).
  • HBM: off-chip but on-card memory, meaning it's in the GPU but not co-located with the core itself. An A100 has 40GB of HBM, but only 1.5TB/s of bandwidth.
  • DRAM: your traditional CPU RAM. You can have TBs of this, but you can only get ~12.8GB/s of bandwidth, which is way too slow.

Now that you know what HBM is, look at how the standard Attention algorithm is implemented: all 3 steps include a “write X to HBM” step and a “read from HBM” step. The core idea behind FlashAttention boils down to this: instead of storing each intermediate result, why don't we use kernel fusion and run every operation in a single kernel in order to avoid memory read/write overhead? (We also talked about kernel fusion in our episode with George Hotz and how PyTorch / tinygrad take different approaches here.) The result is much faster, but much harder to read: FlashAttention is a very meaningful speed improvement over traditional Attention, and it's easy to understand why it's becoming the standard for most models. This should be enough of a primer before you dive into our episode!
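To make the kernel-fusion idea above a bit more concrete, here is a minimal, illustrative sketch of the tiling and online-softmax trick that FlashAttention builds on. This is not the actual fused CUDA kernel; it is a plain NumPy rendering with assumed function names and block sizes, and it only demonstrates how exact attention can be computed block by block without ever materializing the full N x N score matrix:

```python
# Minimal NumPy sketch of the tiled "online softmax" attention idea described
# in the primer above. Illustration only: the real FlashAttention is a fused
# CUDA kernel; names, block size, and the omitted 1/sqrt(d) scaling are
# simplifications made here for clarity.
import numpy as np

def naive_attention(Q, K, V):
    """Standard attention: materializes the full N x N score matrix."""
    S = Q @ K.T                                   # (N, N) scores -- O(N^2) memory
    P = np.exp(S - S.max(axis=1, keepdims=True))  # numerically stable softmax
    P /= P.sum(axis=1, keepdims=True)
    return P @ V

def tiled_attention(Q, K, V, block=64):
    """Exact attention computed one key/value block at a time.

    Only running statistics (row max `m`, normalizer `l`) and a partial
    output are kept, so no N x N matrix is ever stored.
    """
    N, d = Q.shape
    O = np.zeros((N, d))
    m = np.full(N, -np.inf)   # running row-wise max of scores
    l = np.zeros(N)           # running softmax normalizer

    for start in range(0, N, block):
        Kb, Vb = K[start:start + block], V[start:start + block]
        Sb = Q @ Kb.T                          # (N, block) scores for this block
        m_new = np.maximum(m, Sb.max(axis=1))  # updated row max
        scale = np.exp(m - m_new)              # rescale previously accumulated terms
        Pb = np.exp(Sb - m_new[:, None])       # this block's unnormalized weights
        l = l * scale + Pb.sum(axis=1)
        O = O * scale[:, None] + Pb @ Vb
        m = m_new

    return O / l[:, None]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Q, K, V = (rng.standard_normal((256, 32)) for _ in range(3))
    assert np.allclose(naive_attention(Q, K, V), tiled_attention(Q, K, V))
    print("tiled result matches naive attention")
```

The real kernel additionally fuses these steps so each block's scores stay in fast on-chip SRAM instead of being written to and re-read from HBM, which is where the wall-clock speedup comes from.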
We talked about FlashAttention-2, how Hazy Research Group works, and some of the research being done in Transformer alternatives.Show Notes:* FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness (arXiv)* FlashAttention-2* Together AI* From Deep Learning to Long Learning* The Hardware Lottery by Sara Hooker* Hazy Research* Is Attention All You Need?* Nvidia CUTLASS 3* SRAM scaling slows* Transformer alternatives:* S4* Hyena* Recurrent Neural Networks (RNNs)Timestamps:* Tri's background [00:00:00]* FlashAttention's deep dive [00:02:18]* How the Hazy Research group collaborates across theory, systems, and applications [00:17:21]* Evaluating models beyond raw performance [00:25:00]* FlashAttention-2 [00:27:00]* CUDA and The Hardware Lottery [00:30:00]* Researching in a fast-changing market [00:35:00]* Promising transformer alternatives like state space models and RNNs [00:37:30]* The spectrum of openness in AI models [00:43:00]* Practical impact of models like LLAMA2 despite restrictions [00:47:12]* Incentives for releasing open training datasets [00:49:43]* Lightning Round [00:53:22]Transcript:Alessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, Partner and CTO-in-Residence at Decibel Partners. Today we have no Swyx, because he's in Singapore, so it's a one-on-one discussion with Tri Dao. Welcome! [00:00:24]Tri: Hi everyone. I'm Tri Dao, excited to be here. [00:00:27]Alessio: Tri just completed his PhD at Stanford a month ago. You might not remember his name, but he's one of the main authors in the FlashAttention paper, which is one of the seminal work in the Transformers era. He's got a lot of interest from efficient transformer training and inference, long range sequence model, a lot of interesting stuff. And now you're going to be an assistant professor in CS at Princeton next year. [00:00:51]Tri: Yeah, that's right. [00:00:52]Alessio: Yeah. And in the meantime, just to get, you know, a low pressure thing, you're Chief Scientist at Together as well, which is the company behind RedPajama. [00:01:01]Tri: Yeah. So I just joined this week actually, and it's been really exciting. [00:01:04]Alessio: So what's something that is not on the internet that people should know about you? [00:01:09]Tri: Let's see. When I started college, I was going to be an economist, so I was fully on board. I was going to major in economics, but the first week I was at Stanford undergrad, I took a few math classes and I immediately decided that I was going to be a math major. And that kind of changed the course of my career. So now I'm doing math, computer science, AI research. [00:01:32]Alessio: I had a similar thing. I started with physics and then I took like a programming course and I was like, I got to do computer science. I don't want to do physics. So FlashAttention is definitely, everybody's using this. Everybody loves it. You just released FlashAttention 2 last week. [00:01:48]Tri: Yeah. Early this week on Monday. Yeah. [00:01:53]Alessio: You know, AI time. Things move fast. So maybe let's run through some of the FlashAttention highlights, some of the innovation there, and then we can dive into FlashAttention 2. So the core improvement in FlashAttention is that traditional attention is a quadratic sequence length. And to the two, FlashAttention is linear, which obviously helps with scaling some of these models. [00:02:18]Tri: There are two factors there. So of course the goal has been to make attention go faster or more memory efficient. 
And ever since attention became popular in 2017 with the Transformer paper, lots and lots of folks have been working on this. And a lot of approaches has been focusing on approximating attention. The goal is you want to scale to longer sequences. There are tons of applications where you want to do that. But scaling to longer sequences is difficult because attention scales quadratically in sequence length on both runtime and memory, as you mentioned. So instead of trying to approximate attention, we were trying to figure out, can we do the same computation and maybe be more memory efficient? So in the end, we ended up being the memory is linear in sequence length. In terms of computation, it's still quadratic, but we managed to make it much more hardware friendly. And as a result, we do get wall clock speed up on the order of 2 to 4x, which really helps because that just means that you'll be able to train with 2 to 4x longer sequence length for the same cost without doing any approximations. As a result, lots of folks have been using this. The thing is available in a lot of libraries that do language model training or fine tuning. [00:03:32]Alessio: And the approximation thing is important because this is an exact thing versus a sparse. So maybe explain a little bit the difference there. [00:03:40]Tri: For sure. So in addition, essentially you compute pairwise similarity between every single element in a sequence against each other. So there's been other approaches where instead of doing all that pairwise computation, you only compute similarity for some pairs of elements in the sequence. So you don't do quadratic number of comparison. And this can be seen as some form of sparsity. Essentially you're ignoring some of the elements. When you write down the matrix, you essentially say, OK, I'm going to pretend there's zero. So that has some benefits in terms of runtime and memory. But the trade-off is that it tends to do worse in terms of quality because you're essentially approximating or ignoring some elements. And I personally have worked on this as well for a few years. But when we talk to practitioners who actually train models, especially at large scale, they say, tend not to use these approximate attention methods. Because it turns out, this was surprising to me at the time, was that these approximation methods, even though they perform fewer computation, they tend to not be faster in walk-on time. So this was pretty surprising because back then, I think my background was more on the theoretical side. So I was thinking of, oh, how many flops or floating point operations are you performing? And hopefully that correlates well with walk-on time. But I realized that I was missing a bunch of ideas from the system side where flops or floating point operations don't necessarily correlate with runtime. There are other factors like memory reading and writing, parallelism, and so on. So I learned a ton from just talking to systems people because they kind of figured this stuff out a while ago. So that was really eye-opening. And then we ended up focusing a lot more on memory reading and writing because that turned out to be the majority of the time when you're doing attention is reading and writing memory. [00:05:34]Alessio: Yeah, the I.O. awareness is probably one of the biggest innovations here. And the idea behind it is, like you mentioned, the FLOPS growth of the cards have been going up, but the memory bandwidth, not as much. 
So I think maybe that was one of the assumptions that the original attention paper had. So talk a bit about how that came to be as an idea. It's one of those things that like in insight, it's like, obviously, why are we like rewriting to like HBM every time, you know, and like once you change it, it's clear. But what was that discovery process? [00:06:08]Tri: Yeah, in hindsight, a lot of the ideas have already been there in the literature. And I would say is it was somehow at the intersection of both machine learning and systems. And you kind of needed ideas from both sides. So on one hand, on the system side, so lots of systems folks have known that, oh, you know, kernel fusion is great. Kernel fusion just means that instead of performing, you know, loading the same element, instead of performing an operation, write it down, load it back up and perform the second operation, you just load it once, perform two operations and then write it down again. So that saves you kind of memory read and write in the middle there. So kernel fusion has been a classic. There's been other techniques from the system side, like tiling, where you perform things in the form of computations in block, again, so that you can load it into a really fast memory. Think of it as a cache. And this is, again, classical computer science ideas, right? You want to use the cache. So the system folks have been thinking about these ideas for a long time, and they apply to attention as well. But there were certain things in attention that made it difficult to do a complete kernel fusion. One of which is there is this softmax operation in the middle, which requires you to essentially sum across the row of the attention matrix. So it makes it difficult to kind of break it, because there's this dependency. So it makes it difficult to break things into a block. So on the system side, people have been thinking about these ideas, but it's been difficult to kind of do kernel fusion for the entire operation. On the machine learning side, people have been thinking more algorithmically. They say, okay, either we can approximate attention, or there's this trick called the online softmax trick, which says that because of softmax, the way it's written mathematically, you can actually break it up into smaller pieces, do some rescaling, and still get the right answer. So this online softmax trick has been around for a while. I think there was a paper from NVIDIA folks back in 2018 about this. And then there was a paper from Google. So Marcus, Rob, and Stats wrote a paper late 2021 on using this online softmax trick to break attention up into smaller pieces. So a lot of the ideas were already there. But it turns out, you kind of need to combine ideas from both sides. So you need to understand that, hey, we want to do kernel fusion to reduce memory written writes. But we also need this online softmax trick to be able to break the softmax into smaller pieces so that a lot of the systems tricks kind of carry through. We saw that, and it was kind of a natural idea that we ended up using ideas from both sides, and it ended up working pretty well. Yeah. [00:08:57]Alessio: Are there any downsides to kernel fusion? If I think about databases and the reasons why we have atomic operations, you know, it's like, you have observability and fallback in between them. How does that work with attention? Is there anything that we lose by fusing the operations? 
[00:09:13]Tri: Yeah, I think mostly on the practical side is that you lose a little bit of flexibility in the sense that, hey, now you have, for example, faster attention, it's just a subroutine that you would call to do attention. But as a researcher, let's say you don't want that exact thing, right? You don't want just attention, let's say you want some modification to attention. You want to do, hey, I'm going to multiply the query and key, but then I'm going to do this extra thing before I carry on. So kernel fusion just means that, okay, we have a subroutine that does the entire thing. But if you want to experiment with things, you won't be able to use that fused kernel. And the answer is, can we have a compiler that then automatically does a lot of this kernel fusion? Lots of compiler folks are thinking about this, either with a new language or you can embed it in PyTorch. PyTorch folks have been working on this as well. So if you write just your code in PyTorch and they can capture the graph, can they generate code that will fuse everything together? That's still ongoing, and it works for some cases. But for attention, because of this kind of softmax rewriting stuff, it's been a little bit more difficult. So maybe in a year or two, we'll have compilers that are able to do a lot of these optimizations for you. And you don't have to, for example, spend a couple months writing CUDA to get this stuff to work. Awesome. [00:10:41]Alessio: And just to make it clear for listeners, when we say we're not writing it to memory, we are storing it, but just in a faster memory. So instead of the HBM, we're putting it in the SRAM. Yeah. [00:10:53]Tri: Yeah. [00:10:54]Alessio: Maybe explain just a little bit the difference there. [00:10:56]Tri: Yeah, for sure. This is kind of a caricature of how you think about accelerators or GPUs in particular, is that they have a large pool of memory, usually called HBM, or high bandwidth memory. So this is what you think of as GPU memory. So if you're using A100 and you list the GPU memory, it's like 40 gigs or 80 gigs. So that's the HBM. And then when you perform any operation, you need to move data from the HBM to the compute unit. So the actual hardware unit that does the computation. And next to these compute units, there are on-chip memory or SRAM, which are much, much smaller than HBM, but much faster. So the analogy there is if you're familiar with, say, CPU and RAM and so on. So you have a large pool of RAM, and then you have the CPU performing the computation. But next to the CPU, you have L1 cache and L2 cache, which are much smaller than DRAM, but much faster. So you can think of SRAM as the small, fast cache that stays close to the compute unit. Physically, it's closer. There is some kind of asymmetry here. So HBM is much larger, and SRAM is much smaller, but much faster. One way of thinking about it is, how can we design algorithms that take advantage of this asymmetric memory hierarchy? And of course, lots of folks have been thinking about this. These ideas are pretty old. I think back in the 1980s, the primary concerns were sorting. How can we sort numbers as efficiently as possible? And the motivating example was banks were trying to sort their transactions, and that needs to happen overnight so that the next day they can be ready. And so the same idea applies, which is that they have slow memory, which was hard disk, and they have fast memory, which was DRAM. And people had to design sorting algorithms that take advantage of this asymmetry. 
And it turns out, these same ideas can apply today, which is different kinds of memory. [00:13:00]Alessio: In your paper, you have the pyramid of memory. Just to give people an idea, when he says smaller, it's like HBM is like 40 gig, and then SRAM is like 20 megabytes. So it's not a little smaller, it's much smaller. But the throughput on card is like 1.5 terabytes a second for HBM and like 19 terabytes a second for SRAM, which is a lot larger. How do you think that evolves? So TSMC said they hit the scaling limits for SRAM, they just cannot grow that much more. HBM keeps growing, HBM3 is going to be 2x faster than HBM2, I think the latest NVIDIA thing has HBM3. How do you think about the future of FlashAttention? Do you think HBM is going to get fast enough when maybe it's not as useful to use the SRAM? [00:13:49]Tri: That's right. I think it comes down to physics. When you design hardware, literally SRAM stays very close to compute units. And so you don't have that much area to essentially put the transistors. And you can't shrink these things too much. So just physics, in terms of area, you don't have that much area for the SRAM. HBM is off-chip, so there is some kind of bus that essentially transfers data from HBM to the compute unit. So you have more area to essentially put these memory units. And so yeah, I think in the future SRAM probably won't get that much larger, because you don't have that much area. HBM will get larger and faster. And so I think it becomes more important to design algorithms that take advantage of this memory asymmetry. It's the same thing in CPU, where the cache is really small, the DRAM is growing larger and larger. DRAM could get to, I don't know, two terabytes, six terabytes, or something, whereas the cache stays at, I don't know, 15 megabytes or something like that. I think maybe the algorithm design becomes more and more important. There's still ways to take advantage of this, I think. So in the future, I think flash attention right now is being used. I don't know if in the next couple of years, some new architecture will come in and whatnot, but attention seems to be still important. For the next couple of years, I still expect some of these ideas to be useful. Not necessarily the exact code that's out there, but I think these ideas have kind of stood the test of time. New ideas like IO awareness from back in the 1980s, ideas like kernel fusions, tiling. These are classical ideas that have stood the test of time. So I think in the future, these ideas will become more and more important as we scale models to be larger, as we have more kinds of devices, where performance and efficiency become much, much more important. [00:15:40]Alessio: Yeah, and we had Jonathan Frankle on the podcast, and if you go to issattentionallyouneed.com, he has an outstanding bet, and he does believe that attention will be the state of the art architecture still in a few years. Did you think flash attention would be this popular? I'm always curious on the research side, you publish a paper, and obviously you know it's great work, but sometimes it just kind of falls flat in the industry. Could you see everybody just starting to use this, or was that a surprise to you? [00:16:11]Tri: Certainly, I didn't anticipate the level of popularity. Of course, we were extremely happy to have people using this stuff and giving us feedback and so on, and help us improve things. 
I think when we were writing the paper, I remember sending an email to one of my advisors, and like, hey, I'm excited about this paper, but I think the most important thing will be the artifact, which is the code. So I knew that the code will be valuable. So we kind of focus a lot on the code and make sure that the code is usable and as fast as can be. Of course, the idea, the paper presents the ideas and explain it and have experiments that validate the idea, but I knew that the artifact or the code was also pretty important. And that turned out to be the right focus, which is, you know, we put out the paper, we release the code and continue working on the code. So it's a team effort with my co-authors as well. [00:17:07]Alessio: We mentioned Hazy Research a bunch of times on the podcast before. I would love for you to spend five minutes just talking about how does the group work? How do people get together? How do you bounce ideas off of each other? Yeah. [00:17:21]Tri: So Hazy Research is a research group at Stanford led by one of my advisors, Chris Re. I love the people there. It was one of the best experiences I had. They've made my PhD so much more enjoyable. And I think there are a couple of ways that the group has been working pretty well. So one is, I think there's a diverse pool of people who either, you know, some of them focus on algorithms and theory, some of them focus on building systems, some of them focus on applications. And as a result, there is this flow of idea. So as an example, some of us were working on like more algorithms and theory, and then we can talk to the folks building systems and say, hey, let's try it out and let's put it in the systems and see how it is. And there you will get feedback from systems folks. They will say, hey, we implemented this, or we tried this and this is where it doesn't work, something like that. And once we put it in the systems, the application folks can use the algorithm or new methods or new models. And we again get great feedback from them because the application folks, for example, some of my good friends, they focus on medical imaging or seizure detection. And that is the problem they care about. And if your method doesn't work on the task they care about, they will tell you. Whereas I think a lot of people in machine learning, they're a little bit more flexible. So they will be like, hey, it doesn't work on seizure detection. Let's try some other task, right? But having that direct feedback of like, hey, it doesn't work there, let's figure out why. I think that that feedback allows us to do better work. And I think that kind of process of exchanging ideas, validating it in a real system so that applications folks can try it out and give you feedback. That cycle has been very, very useful. And so that's one, having a diverse group of people. The other one is, and this is something I really appreciate from advice from Chris was try to understand the fundamental, right? And he's happy letting me go off and read some textbooks and playing with things because I think a lot of research ideas come from understanding the old literature and see how it fits with the new landscape. And so if you just new archive papers every day, that's great, but you also need to read textbooks. And that's one advice I got from Chris, which is understand the fundamentals. And I think that allows us to do more impactful work. [00:19:46]Alessio: How do you think about academia versus industry? 
I feel like AI / Machine Learning has been an area where up until three, four years ago, most of the cutting edge work was being done in academia. And now there's all these big industry research labs. You're obviously going to Princeton, so you're an academia believer. How should people think about where to go? Say I'm doing my master's, I have to decide between doing a PhD and going into OpenAI Anthropic. How should I decide? [00:20:15]Tri: I think they kind of play a complementary role, in my opinion. Of course, I also was considering different paths as well. So I think right now, scaling matters a lot, especially when you talk about language models and AI and so on. Scaling matters a lot. And that means that you need compute resources and you need infrastructure and you need engineers time. And so industry tends to have an advantage when it comes to scaling things. But a lot of the ideas actually came from academia. So let's take Attention, which got popular with the Transformer in 2017. Attention actually has been around for a while. So I think the first mention was in 2014, a paper from Bernadot and others and Yoshua Bengio, which is coming from academia. A lot of ideas did come from academia. And scaling things up, of course, I think OpenAI has been great at scaling things up. That was the bet that they made after, I think, GPT-2. So they saw that scaling these things up to back then was 1.5 billion parameter seemed to give you amazing capabilities. So they really committed to that. They really committed to scaling things. And that turned out to be, it's been a pretty successful bet. I think for academia, we're still trying to figure out exactly what we're doing in this shifting landscape. And so lots of folks have been focusing on, for example, evaluation. So I know the Stanford Center for Foundation Model led by Percy, they have this benchmark called HELM, which is this holistic benchmark. So trying to figure out, okay, characterizing the landscape of different kinds of models, what people should evaluate, what people should measure, and things like that. So evaluation is one role. The other one is understanding. So this has happened historically where there's been some development in the industry and academia can play a role in explaining, understanding. They have the luxury to slow down trying to understand stuff, right? So lots of paper on understanding what's really going on, probing these models, and so on. I think I'm not as familiar with the NLP literature, but my impression is there's a lot of that going on in the NLP conferences, which is understanding what these models are doing, what capabilities they have, and so on. And the third one I could see is that the academia can take more risky bets in the sense that we can work on stuff that is quite different from industry. I think industry, my impression is you have some objective. You're trying to say, hey, for this quarter, we want to scale the model in this particular way. Next quarter, we want the model to have these capabilities. You're trying to get objectives that maybe, I don't know, 70% that will work out because it's important for the company's direction. I think for academia, the way things work is you have many, many researchers or PhD students, and they're kind of pursuing independent directions. And they have a little bit more flexibility on, hey, I'm going to try out this seemingly crazy idea and see, let's say there's a 30% chance of success or something. 
And however you define success, for academia, a lot of the time, success just means like, hey, we found something interesting. That could eventually go into industry through collaboration and so on. So I do see academia and industry kind of playing complementary roles. And as for someone choosing a career, I think just more generally, industry would probably be better in terms of compensation and probably in terms of work-life balance. But my biased perspective is that maybe academia gives you a little bit more freedom to think and understand things. So it probably comes down to personal choice. I ended up choosing to be a professor next year at Princeton. But of course, I want to maintain a relationship with industry folks. I think industry folks can provide very valuable feedback on what we're doing in academia so that we understand where the field is moving, because some of the directions are very much influenced by what, for example, OpenAI or Google is doing. So we want to understand where the field is moving. What are some promising applications? And try to anticipate, okay, if the field is moving like this, these applications are going to be popular. What problems will be important in two, three years? And then we try to start thinking about those problems so that hopefully in two, three years, we have some of the answers to some of these problems. Sometimes it works out, sometimes it doesn't. But as long as we do interesting things in academia, that's the goal. [00:25:03]Alessio: And you mentioned the eval side. So we did a Benchmarks 101 episode. And one of the things we were seeing is sometimes the benchmarks really influence the model development. Because obviously, if you don't score well on the benchmarks, you're not going to get published and you're not going to get funded. How do you think about that? How do you think that's going to change now that a lot of the applications of these models, again, are in more narrow industry use cases? Do you think the goal of the academic eval system is to be very broad and then industry can do their own evals? Or what's the relationship there? [00:25:40]Tri: Yeah, so I think evaluation is important and often a little bit underrated. So it's not as flashy as, oh, we have a new model that can do such and such. But what you don't measure, you can't make progress on, essentially. So industry folks, of course, they have specific use cases that their models need to do well on. And that's what they care about. Not just academia, but other groups as well, people do understand what are some of the emerging use cases. So for example, now one of the most popular use cases is chatbots. And then I think folks from Berkeley, some of them are from Berkeley, call them MLCs. They set up this kind of Chatbot Arena to essentially benchmark different models. So people do understand what are some of the emerging use cases. People do contribute to evaluation and measurement. And as a whole, I think people try to contribute to the field and move the field forward, albeit maybe in slightly different directions. But we're making progress, and definitely evaluation and measurement is one of the ways you make progress. So I think going forward, there's still going to be just more models, more evaluation. We'll just have a better understanding of what these models are doing and what capabilities they have. 
[00:26:56]Alessio: I like that your work has been focused on not making benchmarks better, but it's like, let's just make everything faster. So it's very horizontal. So FlashAttention 2, you just released that on Monday. I read in the blog post that a lot of the work was also related to some of the NVIDIA library updates. Yeah, maybe run us through some of those changes and some of the innovations there. Yeah, for sure. [00:27:19]Tri: So FlashAttention 2 is something I've been working on for the past couple of months. So the story is the NVIDIA CUTLASS team released a new version of their library, which contains all these primitives to allow you to do matrix multiply or memory loading on GPU efficiently. So it's a great library and I built on that. So they released their version 3 back in January and I got really excited and I wanted to play with that library. So as an excuse, I was just like, okay, I'm going to refactor my code and use this library. So that was kind of the start of the project. By the end, I just ended up working with the code a whole lot more and I realized that, hey, there are these inefficiencies still in Flash Attention. We could change this way or that way and make it, in the end, twice as fast. But of course, building on the library that the NVIDIA folks released. So that was kind of a really fun exercise. Starting out, it was just an excuse for myself to play with the new library. What it ended up being was several months of improving Flash Attention and discovering new ideas. And in the end, we managed to make it 2x faster, and now it's pretty close to the efficiency of things like matrix multiply, which is probably the most optimized subroutine on the planet. So we're really happy about it. The NVIDIA CUTLASS team has been very supportive, and hopefully in the future we're going to collaborate more. [00:28:46]Alessio: And since it's an NVIDIA library, can you only run this on CUDA runtimes? Or could you use this and then run it on an AMD GPU? [00:28:56]Tri: Yeah, so it's an NVIDIA library. So right now, the code we released runs on NVIDIA GPUs, which is what most people are using to train models. Of course, there is other emerging hardware as well. So the AMD folks did implement a version of Flash Attention, I think last year as well, and that's also available. I think there's some implementation on CPU as well. For example, there's this library, ggml, where they implemented the same idea running on Mac and CPU. So broadly, the idea would apply. The current implementation ended up using NVIDIA's library or primitives, but I expect these ideas to be broadly applicable to different hardware. I think the main idea is you have asymmetry in the memory hierarchy, which tends to be everywhere in a lot of accelerators. [00:29:46]Alessio: Yeah, it kind of reminds me of Sara Hooker's post, like the hardware lottery. There could be all these things that are much better, like architectures that are better, but they're not better on NVIDIA. So we're never going to know if they're actually improvements. How does that play into some of the research that you all do too? [00:30:04]Tri: Yeah, so absolutely. Yeah, I think Sara Hooker, she wrote this piece on the hardware lottery, and I think she captured really well what a lot of people have been thinking about. 
And I certainly think about the hardware lottery quite a bit, given that I do some of the work that's really low level, at the level of, hey, we're optimizing for GPUs or NVIDIA GPUs and optimizing for attention itself. And at the same time, I also work on algorithms and methods and Transformer alternatives. And we do see this effect in play, not just the hardware lottery, but also kind of a software framework lottery. You know, attention has been popular for six years now. And so many engineer hours have been spent on making it as easy and efficient as possible to run Transformers, right? And there are libraries to do all kinds of tensor parallelism and pipeline parallelism if you use a Transformer. Let's say someone else developed alternatives, or let's just take recurrent neural nets, like LSTM or GRU. If we want to run those efficiently on current hardware with current software frameworks, that's quite a bit harder. So in some sense, there is this feedback loop where somehow the model architectures that take advantage of hardware become popular. And the hardware will also kind of evolve to optimize a little bit for that kind of architecture, and software frameworks will also evolve to optimize for that particular architecture. Right now, Transformer is the dominant architecture. So yeah, I'm not sure if there is a good way out of this. Of course, there's a lot of development. I think compilers will play a role, because compilers allow you to maybe still be much more efficient across different kinds of hardware, because essentially you write the same code and the compiler will be able to make it run efficiently on different kinds of hardware. So for example, there's this language Mojo, they're compiler experts, right? And their bet is AI models will be running on different kinds of devices. So let's make sure that we have really good compilers with a good language, so that the compiler can do a good job optimizing for all kinds of devices. So that's maybe one way that you can get out of this cycle. But yeah, I'm not sure of a good way. In my own research, I have to think about both the algorithm, the new model, and how it maps to hardware. So there are crazy ideas that seem really good but will be really, really difficult to run efficiently. And so as a result, for example, we can't really scale some of the architectures up simply because they're not hardware friendly. I have to think about both sides when I'm working on new models. [00:32:50]Alessio: Yeah. Have you spent any time looking at some of the new kind of AI chip companies, so to speak, like the Cerebras of the world? Like one of their innovations is co-locating everything on the chip. So you remove some of this memory bandwidth issue. How do you think about that? [00:33:07]Tri: Yeah, I think that's an interesting bet. I think Tesla also has this Dojo supercomputer where they try to have essentially as fast on-chip memory as possible and remove some of this data transfer back and forth. I think that's a promising direction. There are issues I could see, though, you know, I'm definitely not a hardware expert. One issue is that on-chip memory tends to be really expensive to manufacture, much more expensive per gigabyte compared to off-chip memory. So I talked to, you know, some of my friends at Cerebras and, you know, they have their own stack and compiler and so on, and they can make it work. The other kind of obstacle is, again, with the compiler and software framework and so on. 
For example, if you can run PyTorch on this stuff, lots of people will be using it. But supporting all the operations in PyTorch will take a long time to implement. Of course, people are working on this. So I think, yeah, we kind of need these different bets on the hardware side as well. Hardware, my understanding is, has a kind of longer time scale. So you need to design hardware, you need to manufacture it, you know, maybe on the order of three to five years or something like that. So people are taking different bets, but the AI landscape is changing so fast that it's hard to predict, okay, what kind of models will be dominant in, let's say, three or five years. Or thinking back five years ago, would we have known that Transformer would have been the dominant architecture? Maybe, maybe not, right? And so different people will make different bets on the hardware side. [00:34:39]Alessio: Does the pace of the industry and the research also influence the PhD research itself? For example, in your case, you're working on improving attention. It probably took you quite a while to write the paper and everything, but in the meantime, you could have had a new model architecture come out and then it's like nobody cares about attention anymore. How do people balance that? [00:35:02]Tri: Yeah, so I think it's tough. It's definitely tough for PhD students, for researchers. Given that the field is moving really, really fast, I think it comes down to understanding fundamentals. Because that's essentially what the PhD allows you to do, for example. It's a couple of years spent understanding the fundamentals. So for example, when I started my PhD, I was working on understanding matrix-vector multiply, which is a concept that's been around for hundreds of years. We were trying to characterize what kinds of matrices would have theoretically fast multiplication algorithms. That seems to have nothing to do with AI or anything. But I think that was a time when I developed mathematical maturity and research taste and research skill. The research topic at that point didn't have to be super trendy or anything; as long as I was developing skills as a researcher, I was making progress. And eventually, I've gotten quite a bit better in terms of research skills. And that allows, for example, PhD students later in their career to quickly develop solutions to whatever problems they're facing. So I think that's just the natural arc of how you're being trained as a researcher. For a lot of PhD students, I think given the pace is so fast, maybe it's harder to justify spending a lot of time on the fundamentals. And it's tough. It's this kind of explore-exploit dilemma. And I don't think there's a universal answer. So I personally spend some time doing this kind of exploration, reading random textbooks or lecture notes. And I spend some time keeping up with the latest architectures or methods and so on. I don't know if there's a right balance. It varies from person to person. But if you only spend 100% on one, either you only do exploration or you only do exploitation, I think it probably won't work in the long term. It's probably going to have to be a mix, and you have to just experiment and kind of be introspective and say, hey, I tried this kind of mixture of, I don't know, one exploration paper and one exploitation paper. How did that work out for me? And, you know, have a conversation with, for example, my advisor about, hey, did that work out? Should I shift? 
Should I focus more on one or the other? I think quickly adjusting and focusing on the process is probably the right way. I don't have a specific recommendation like, hey, you focus, I don't know, 60% on lecture notes and 40% on arXiv papers or anything like that. [00:37:35]Alessio: Let's talk about some Transformer alternatives. You know, say Jonathan Frankle loses his bet and Transformer is not the state of the art architecture. What are some of the candidates to take over? [00:37:49]Tri: Yeah, so this bet is quite fun. So my understanding is this bet is between Jonathan Frankle and Sasha Rush, right? I've talked to Sasha a bunch and I think he recently gave an excellent tutorial on Transformer alternatives as well. So I would recommend that. So just to quickly recap, I think there's been quite a bit of development more recently on Transformer alternatives. So architectures that are not Transformers, right? And the question is, can they do well on, for example, language modeling, which is kind of the application that a lot of people care about these days. So there are methods based on state space models that came out in 2021 from Albert Gu, Karan Goel, and Chris Re that presumably could do much better in terms of capturing long range information while not scaling quadratically. They scale sub-quadratically in terms of sequence length. So potentially you could have a much more efficient architecture when sequence length gets really long. The other ones have been focusing more on recurrent neural nets, which is, again, an old idea, but adapted to the new landscape. So things like RWKV, and I've also personally worked in this space as well. So there have been some promising results. There have been some results here and there that show that, hey, these alternatives, either RNNs or state space methods, can match the performance of Transformers on language modeling. So that's really exciting. And on the academic research side, we want to understand, do we really need attention? I think that's a valuable kind of intellectual thing to understand. And maybe we do, maybe we don't. If we want to know, we need to spend serious effort on trying the alternatives. And there have been folks pushing in this direction. I think RWKV has scaled up to, they have a model at 14 billion parameters that seems pretty competitive with Transformers. So that's really exciting. That's kind of the intellectual thing: we want to figure out if attention is necessary. So that's one motivation. The other motivation is that Transformer alternatives could have an advantage in practice in some of the use cases. So one use case is really long sequences. The other is really high throughput generation. So for really long sequences, when you train with Transformers, with flash attention and so on, the computation is still quadratic in the sequence length. So if your sequence length is on the order of, I don't know, 16K, 32K, 100K or something, which some of these models have, sequence length 100K, then you do get significantly slower in terms of training, also in terms of inference. So maybe these alternative architectures could scale better in terms of sequence length. I haven't seen actual validation of this. Let's say an RNN model released with a context length of, I don't know, 100K or something. I haven't really seen that. But the hope could be that as we scale to long sequences, these alternative architectures could be more well-suited. 
Not just for text, but for things like high resolution images, audio, video, and so on, which are emerging applications. So that's one, long sequences. Number two is high throughput generation, where I can imagine scenarios where the application isn't an interactive chatbot, but let's say a company wants to batch as many requests as possible on their server, or they're doing offline processing, generating stuff based on their internal documents that they need to process in batch. And the issue with Transformers is that during generation, they essentially need to keep around all the previous history. It's called the KV cache. And that can take a significant amount of memory, so you can't really batch too much because you run out of memory. I am personally bullish on RNNs. RNNs essentially summarize the past into a state vector that has a fixed size, so the size doesn't grow with the history. So that means that you don't need as much memory to keep around all the previous tokens. And as a result, I think you can scale to much higher batch sizes. And as a result, you can make much more efficient use of the GPUs or the accelerator, and you could have much higher generation throughput. Now, this, I don't think, has been validated at scale. So as a researcher, I'm bullish on this stuff because I think in the next couple of years, these are use cases where these alternatives could have an advantage. We'll just kind of have to wait and see if these things happen. I am personally bullish on this stuff. At the same time, I also spend a bunch of time making attention as fast as possible. So maybe hedging and playing both sides. Ultimately, as researchers, we want to understand what works, why do the models have these capabilities? And one way is, let's push attention to be as efficient as possible. On the other hand, let's push the other alternatives to be as efficient and at as big a scale as possible, so that we can kind of compare them and understand. Yeah, awesome. [00:43:01]Alessio: And I think as long as all of this work happens in the open, it's a net positive for everybody to explore all the paths. Yeah, let's talk about open-source AI. Obviously, Together, when RedPajama came out, which was an open clone of the LLAMA1 pre-training dataset, it was a big thing in the industry. LLAMA2 came out on Tuesday, I forget. And this week, there's been a lot of things going on, which they call open-source, but it's not really open-source. Actually, we wrote a post about it that was on the front page of Hacker News before this podcast, so I was frantically responding. How do you think about what open-source AI really is? In my mind, in open-source software, we have different levels of open. So there's free software, that's like the GPL license. There's open-source, which is Apache, MIT. And then there's kind of restricted open-source, which is the SSPL and some of these other licenses. In AI, you have the open models. So RedPajama is an open model because you have the pre-training dataset, you have the training runs and everything. And then there's obviously randomness that doesn't make it one-to-one if you retrain it. Then you have the open-weights model that's kind of like StableLM, where the weights are open, but the dataset is not open. And then you have LLAMA2, where the dataset is not open and the weights are restricted. It's kind of like not really open-source, but open enough. 
I think it's net positive because it's like $3 million of flops donated to the public. How do you think about that? [00:44:34]Alessio: And also, as you work at Together, what is your philosophy with open-source AI? [00:44:40]Tri: Right, right. Yeah, I think that's a great question. And I think about it in maybe more practical terms. So of course, Meta has done an amazing job training LLAMA1 and LLAMA2. And for LLAMA2, they made it much less restrictive compared to LLAMA1. Now you can use it for business, unless you have more than 700 million monthly active users or something like that. I think just this change will have a very significant impact on the landscape of open-source AI, where now lots of businesses, lots of companies will be using, I expect, things like LLAMA2. They will fine-tune on their own datasets. They will be serving variants or derivatives of LLAMA2. Whereas before, with LLAMA1, it was also a really good model, but businesses weren't allowed to do that. So I think in more practical terms, it's kind of shifting the balance between closed-source models, like those from OpenAI and Anthropic and Google, where you're making API calls, right? And maybe you don't understand as much of what the model is doing, how the model is changing, and so on. Versus now, we have a model with open weights that is pretty competitive, from what I've seen in terms of benchmarks, pretty competitive with GPT 3.5, right? And if you fine-tune it on your own data, maybe it's more well-suited for your own data. And I do see that's going to shift the balance. More and more folks are going to be using, let's say, derivatives of LLAMA2. More and more folks are going to fine-tune and serve their own model instead of calling an API. That shifting of balance is important because, in one way, we don't want just a concentration of decision-making power in the hands of a few companies. So I think that's a really positive development from Meta. Of course, training the model takes a couple of million dollars, and the engineers there, I'm sure, have spent tons of time trying many, many different things. So the actual cost is probably way more than that. And they make the weights available, and probably a lot of companies are going to be using this. So I think that's a really positive development. And we've also seen amazing progress from the open source community, where they take these models and either fine-tune them on different kinds of datasets or even make changes to the model. So as an example, I think for LLAMA1, the context length was limited to 2K, and a bunch of folks figured out some really simple methods to scale up to like 8K. [00:47:12]Alessio: Like the RoPE. [00:47:13]Tri: Yes. I think the open source community is very creative, right? And lots of people. LLAMA2 will, again, kind of accelerate this, where more people will try it out, more people will make tweaks to it and make a contribution, and so on. So overall, I think I see that as still a very positive development for the field. And there have been lots of libraries that will allow you to host or fine-tune these models, even with quantization and so on. Just a couple of hours after LLAMA2 was released, tons of companies were announcing that, hey, it's on our API or hosting and so on, and Together did the same. So it's a very fast-paced development, and just kind of a model with available weights that businesses are allowed to use, I think that alone is already a very positive development. 
At the same time, yeah, we can do much better in terms of releasing datasets. Datasets tend to be... Somehow people are not incentivized to release datasets. So philosophically, yeah, you want to be as open as possible. But in practical terms, I think it's a little bit harder for companies to release datasets. Legal issues. Dataset releases tend to be not as eye-catching as model releases. So maybe people are less incentivized to do that. We've seen quite a few companies releasing datasets. Together released the RedPajama dataset. I think Cerebras then worked on that, deduplicated and cleaned it up, and released SlimPajama, and so on. So we're also seeing positive development on that front, on the pre-training dataset side. So I do expect that to continue. And then on the fine-tuning or instruction tuning datasets, I think we now have quite a few open datasets for instruction tuning and fine-tuning. But these companies do pay for human labelers to annotate these instruction tuning datasets. And that is expensive. And maybe they will see that as their competitive advantage. And so it's harder to incentivize these companies to release these datasets. So I think in practical terms, we're still going to make a lot of progress on open source AI, on model development, on model hosting, on pre-training datasets, and on fine-tuning datasets. Right now, maybe we don't have the perfect open source model, where all the datasets are available. Maybe we don't have such a thing yet, but we've seen very fast development on the open source side. I think just maybe this time last year, there weren't as many models that are competitive with, let's say, ChatGPT. [00:49:43]Alessio: Yeah, I think the open datasets have so much more impact than open models. If you think about EleutherAI and the work that they've done, GPT-J was great, and the Pythia models are great, but the Pile and the Stack, everybody uses them. So hopefully we get more people to contribute time to work on datasets instead of doing the 100th open model that performs worse than all the other ones, but they want to say they released the model. [00:50:14]Tri: Yeah, maybe the question is, how do we figure out an incentive structure so that companies are willing to release open datasets? And for example, I think some of the organizations are now doing this where they are asking volunteers to annotate and so on. And maybe the Wikipedia model of datasets, especially for instruction tuning, could be interesting, where people actually volunteer their time and, instead of editing Wikipedia, add annotations. And somehow they are acknowledged and feel incentivized to do so. Hopefully we get to that kind of level where, in terms of data, it would be kind of like Wikipedia. And in terms of model development, it's kind of like Linux, where people are contributing patches and improving the model in some way. I don't know exactly how that's going to happen, but based on history, I think there is a way to get there. [00:51:05]Alessio: Yeah, I think the Dolly-15K dataset is a good example of a company saying, let's do this smaller thing, just make sure we make it open. We had Mike Conover from Databricks on the podcast, and he was like, people just bought into it and leadership was bought into it. You have companies out there with 200,000, 300,000 employees. It's like, just put some of them to label some data. It's going to be helpful. So I'm curious to see how that evolves. What made you decide to join Together? 
[00:51:35]Tri: For Together, the focus has been a lot on open source models. And I think that aligns quite well with what I care about, of course. I also know a bunch of people there that I trust, and I'm excited to work with them. Philosophically, the way they've been really open with dataset and model releases, I like that a lot. Personally, for the research that I've developed, for example, we also try to make the code available, free to use and modify and so on, contributing to the community. That has given us really valuable feedback from the community and improved our work. So philosophically, I like the way Together has been focusing on open source models. And the nice thing is we're also going to be at the forefront of research, and the kind of research areas that I'm really excited about, things like efficient training and inference, align quite well with what the company is doing. We'll try our best to make things open and available to everyone. Yeah, it's going to be fun being at the company, leading a team, doing research on the topics that I really care about, and hopefully we'll make things open to benefit the community. [00:52:45]Alessio: Awesome. Let's jump into the lightning round. Usually, I have two questions. So one is on acceleration, one on exploration, and then a takeaway. So the first one is, what's something that already happened in AI / machine learning that you thought would take much longer than it has? [00:53:01]Tri: I think understanding jokes. I didn't expect that to happen, but it turns out that by scaling models up and training on lots of data, the model can now understand jokes. Maybe it's a small thing, but that was amazing to me. [00:53:16]Alessio: What about the exploration side? What are some of the most interesting unsolved questions in the space? [00:53:22]Tri: I would say reasoning, in the broad sense. We don't really know how these models do it. Essentially, they do something that looks like reasoning. We don't know how they're doing it. We have some ideas. And in the future, I think we will need to design architectures that explicitly have some kind of reasoning module in them if we want to have much more capable models. [00:53:43]Alessio: What's one message you want everyone to remember today? [00:53:47]Tri: I would say try to understand both the algorithms and the systems that these algorithms run on. I think the intersection of machine learning and systems has been really exciting, and there have been a lot of amazing results at this intersection. And then when you scale models to large scale, both the machine learning side and the systems side really matter. [00:54:06]Alessio: Awesome. Well, thank you so much for coming on, Tri. [00:54:09]Tri: This was great. Yeah, this has been really fun. [00:54:11] Get full access to Latent Space at www.latent.space/subscribe
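
To make the FlashAttention part of the conversation concrete: the optimization lives at the kernel level, so model code mostly just swaps in a fused attention call. Below is a minimal sketch, not from the episode, assuming PyTorch 2.x and illustrative tensor sizes; it compares a naive attention implementation, which materializes the full seq_len-by-seq_len score matrix, with PyTorch's fused scaled_dot_product_attention, which can dispatch to a FlashAttention-style kernel on supported CUDA GPUs.

```python
# Minimal sketch (assumes PyTorch 2.x; shapes are illustrative, not from the episode).
import math
import torch
import torch.nn.functional as F

def naive_attention(q, k, v):
    # Materializes an explicit (batch, heads, seq_len, seq_len) score matrix,
    # which is what makes long sequences expensive in time and memory.
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    return torch.softmax(scores, dim=-1) @ v

if __name__ == "__main__":
    batch, heads, seq_len, head_dim = 2, 8, 1024, 64
    q, k, v = (torch.randn(batch, heads, seq_len, head_dim) for _ in range(3))

    out_naive = naive_attention(q, k, v)
    # Fused kernel: on supported CUDA GPUs PyTorch can dispatch this to a
    # FlashAttention-style implementation that avoids writing the full
    # score matrix out to off-chip memory.
    out_fused = F.scaled_dot_product_attention(q, k, v)

    print("max abs difference:", (out_naive - out_fused).abs().max().item())
```

The two calls compute the same attention; the difference Tri describes is that the fused kernel tiles the computation so the large intermediate stays in fast on-chip memory, which is the memory-hierarchy asymmetry he mentions.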
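
Tri's point about generation throughput, the Transformer KV cache versus an RNN's fixed-size state, comes down to simple arithmetic. The sketch below is a rough back-of-the-envelope estimate; the model dimensions are hypothetical, chosen only to illustrate the scaling, not taken from the episode.

```python
# Back-of-the-envelope sketch with hypothetical 7B-class model dimensions.

def kv_cache_bytes(batch, seq_len, n_layers, n_heads, head_dim, bytes_per_elem=2):
    # Each layer stores keys and values for every past token: two tensors of
    # shape (batch, n_heads, seq_len, head_dim), assumed here to be fp16.
    return 2 * n_layers * batch * n_heads * seq_len * head_dim * bytes_per_elem

def rnn_state_bytes(batch, n_layers, d_state, bytes_per_elem=2):
    # A recurrent model keeps one fixed-size state per layer per sequence,
    # independent of how many tokens have already been generated.
    return n_layers * batch * d_state * bytes_per_elem

if __name__ == "__main__":
    n_layers, n_heads, head_dim = 32, 32, 128   # hypothetical model
    d_state = n_heads * head_dim                # assume state matches hidden size

    for batch, seq_len in [(1, 4096), (32, 4096), (32, 32768)]:
        kv = kv_cache_bytes(batch, seq_len, n_layers, n_heads, head_dim)
        rnn = rnn_state_bytes(batch, n_layers, d_state)
        print(f"batch={batch:3d} seq_len={seq_len:6d}  "
              f"KV cache={kv / 2**30:7.1f} GiB  RNN state={rnn / 2**20:6.1f} MiB")
```

With these assumed dimensions the KV cache grows from a couple of GiB at batch 1 to hundreds of GiB at batch 32 with a 32K context, while the recurrent state stays in the MiB range, which is the batching and throughput argument made in the conversation.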

School of Embodied Arts Podcast with Jenna Ward
S8E6 - Embodied Flow with Tara Judelle

School of Embodied Arts Podcast with Jenna Ward

Play Episode Listen Later May 28, 2023 51:46


One would assume that yoga classes (where we do asanas, or postures) should be embodied. Given the deep spiritual roots of yoga from various Indian philosophies & teachers, you'd expect yoga to be something that brought you deeper into inhabiting and expressing yourself fully (my definition of embodiment). Except I frequently find myself in yoga classes lacking that resonance. Perhaps they are more focused on precise asanas with increasing complexity, or a workout to sweat. Either way, modern western yoga (asana) classes as I experience them sometimes ≠ embodied. In 2020 (or thereabouts) one of our graduate Feminine Embodiment Coaches, Megan Hart, mentioned to me she'd been studying Embodied Flow, which was “so much like Feminine Embodiment Coaching, but in yoga form”. I was VERY interested…. And today, finally, I've spoken with the founder of this beautiful embodied movement method. Meet Tara Judelle, the creator of Embodied Flow Yoga - a school of somatics, movement, and yoga based in non-dual tantric philosophy and humanistic psychology. She has facilitated yoga spaces internationally for over 20 years and is dedicated to facilitating journeys that bring people into freedom and agency in the body-mind. Today's podcast is my favourite so far in our Season 8 Embodied Movement Series. While I've relished every conversation so far, this conversation with Tara holds a special place in my heart. Perhaps it's because we're both in cross-cultural relationships (Tara splits her time between the US/Greece & travel for her teaching/retreats), or perhaps it's because Tara weaves so much deep knowledge & diverse philosophy into all she shares. Either way, Tara is a woman after my own heart (albeit doing it a very different way) and it's a joy to bring this conversation about Embodied Flow to you today! In this episode, we discuss: How Tara's traditional yoga-asana practice evolved by following the question “how do I become a liberated, embodied being?” Shouldn't all yoga (asana) classes be embodied? Tara shares with us some historical context of asana's evolution & migration to the West, and how that's shaped the practice. Tara's journey from traditional yoga teacher to creator of Embodied Flow, a meditation in motion, on & off the mat. Input/Output balances: we speak about balancing the vast quantities of data we receive in a day & shifting that data from a vast imaginal field through the body as a way to reduce anxiety. Resources mentioned in this podcast: Embodied Flow - Find a class, teacher or training; Body Mind Centering, Bonnie Bainbridge Cohen; The Rise of Superman: Decoding the Science of Ultimate Human Performance, Steven Kotler; Feminine Embodiment Coaching - an emotional embodiment & vulnerability-based professional training for coaches; Find a Feminine Embodiment Coach in our Professional Directory; School of Embodied Arts; Leave a podcast review on iTunes here. Thought or reflection to share? Leave a comment on Instagram here

東區德喜劇宅
東區德 Live 0505, Part 2 | What Does Passion Have to Do with Picking Up Girls | How to Keep Conversations Interesting Without Awkward Silences | Success Is Defined by Yourself | Learning with Only INPUT and No OUTPUT Is Inefficient

東區德喜劇宅

Play Episode Listen Later May 11, 2023 20:32


東區德's dating and relationships online course survey (be the first to receive the early-bird offer): https://www.surveycake.com/s/wWe0X 5/14 "Improv Big Shot Show: 賀瓏" ticket link: https://bit.ly/3LJBqjF 7/15-16 東區德 improv class Level A (12th cohort): https://bit.ly/42s7q3a For consulting on dating, comedy writing, or building your own media channel, please DM 東區德 on IG or FB. Highlights of this episode: What does passion have to do with picking up girls; How to keep conversations interesting without awkward silences; Success is defined by yourself; Learning with only INPUT and no OUTPUT is inefficient. Support the show with a small donation: https://open.firstory.me/user/ckcz4nq0nu8ik08708rjk24en ------ For collaboration inquiries, email: dio3212@gmail.com 東區德 IG https://www.instagram.com/dio3212/ 東區德 FB https://www.facebook.com/DongQuDer 東區德 link hub https://linktr.ee/dio3212 Powered by Firstory Hosting

Health Comm Central
McGuire's Input-Output Matrix | Ep #23

Health Comm Central

Play Episode Listen Later Feb 8, 2023 26:21


McGuire's Input-Output Matrix, also called the Communication-Persuasion Matrix (1999), is a simple but powerful communication framework you can use to methodically analyze key elements in a message or campaign and track how effective they are at persuading your audience. In this episode we unpack both the inputs and the outputs so you can use it to add or adjust the elements you need for success. Resources: Note: Much of McGuire's work is available in libraries but not available to link online except behind a paywall. McGuire's revised, comprehensive matrix (use this one – same as above!): McGuire, W. J. (1999), Constructing Social Psychology: Creative and Critical Processes, Cambridge: Cambridge University Press. McGuire's earlier, less-complete matrix (seen widely online, but not as good as the revision): McGuire, William J. (1978), “An information processing model of advertising effectiveness” in Behavioral and Management Science in Marketing, ed. Harry J. Davis and Alvin J. Silk. New York: Ronald Press, 156-180. A few studies using McGuire's matrix: Lessons Learned From Community Workers Beat the Virus, a Multimedia Campaign Cocreated With Trusted Community Leaders | AJPH | Vol. 112 Issue S9 (aphapublications.org); Larry Chiagouris PhD & Iris Mohr PhD (2004) An Evaluation of the Effectiveness of Internet Advertising Tools, Journal of Internet Commerce, 3:3, 41-61, DOI: 10.1300/J179v03n03_03; An Analysis of Tzu Chi's Public Communication Campaign on Body Donation (uri.edu). Bonus link: “More cowbell” When you need to add or adjust your inputs, think of this SNL sketch! https://www.youtube.com/watch?v=cVsQLlk-T0s Please click the button to subscribe so you don't miss any episodes and leave a review if your favorite podcast app has that ability. Thank you! For more information, visit the Health Comm Central website at: http://www.HealthCommCentral.com © 2022 - 2023 Karen Hilyard, Ph.D. Connect with me on: LinkedIn: https://www.linkedin.com/company/health-comm-central/ Twitter: @HealthCommCtrl Instagram: @health.comm.central

The Lavender Menace
online lesbian drama & discourse: wrapping up our 2022's media input & output

The Lavender Menace

Play Episode Listen Later Jan 9, 2023 116:05


Happy new year and welcome to episode 14 of season 4!!! It's the beginning of 2023, which means that it's time to reminisce on all the movies, books, shows, albums, and LGBT internet drama we discussed and reviewed in the previous year. But first, we answer a listener submitted hot-take from Clair, who tells us about their experiences with being accused of transphobia on the basis of being a lesbian while bisexuals are presumed to be trans allies by nature of their sexual identities. In discussing the sexualization of butches and mascs as it relates to issues of trans allyship, Sunny brings up our friend and oomf of the pod, @gabbyisbutch. We reflect on our album rankings of 2022 releases and whether our opinions have changed since recording our reviews, and add up the different forms of media we've consumed and discussed in previous episodes. For 2023, Renaissance pitches reading more bad books for our shared media portions of the pod, and Sunny pitches a dating app match-making situation for our listeners via Google Form. Let us know your thoughts by connecting with us on socials or emailing us at thelavendermenacepodcast@gmail.com. We go over our respective 2022 media consumption and reading/movie watching goals for 2023, with Renaissance logging 200+ films on Letterboxd and Sunny not hitting their Goodreads reading goal of 250 books. (They read 235.) Here's to another slayful media consumption and production year for The Lavender Menace! Thank you for joining us

Ben Barker Fitness
Find Friends to Elevate Your Fitness, Finances, Faith, Food, and Family

Ben Barker Fitness

Play Episode Listen Later Dec 20, 2022 10:24


Input = Output. If you are constantly on a diet of negativity and mediocrity, that's what your life will start to look like. I am on a mission to surround myself with people looking to push the limits and see just how good we can get! Get my "Just Start" 4 week workout plan ebook for FREE when you sign up for a free 7-day trial to my workout subscription here: https://www.benbarkerfitness.com

The Walk
Input, Output and ‘Noput'

The Walk

Play Episode Listen Later Dec 14, 2022 66:49


We often measure our life in economic terms of consumption and production. But we forget that besides input and output, we also need ‘noput': time to process, to think and to rest. I explain how this insight helps me to seek new balance. Like this show and want to help me with my mission?

Connect The Dots B*tch
Family Obligations + Holiday Challenges

Connect The Dots B*tch

Play Episode Listen Later Nov 30, 2022 49:38


Amy dishes on her holiday celebrations and educates you on the tips and mindset shifts she uses to navigate the challenging environments and relationships during them. Episode discusses: Trauma, Staying Balanced, Input/Output of Energy, Self-Care, Difficult People, Family, Challenging Relationships. If you enjoyed this episode, leave a review on Apple or Spotify! New Connect The Dots B*tch Merch is available at amyfiedler.com

Born to Succeed with Michael Merritt: "Growing or Wilting" - Input = Output

"BORN TO SUCCEED" with Michael & Alyssa Merritt

Play Episode Listen Later Aug 22, 2022 7:33


In this episode I discuss how we feed, water, nourish, plant in the right soil, pray and have faith, and bam, growth happens!!! We then have to prune to get rid of the excess and junk, so we can grow even stronger and taller and more beautiful!!!

Ringside: An American Dairy Goat Podcast
Input/Output: Get Results By What You Put Into Your Goats

Ringside: An American Dairy Goat Podcast

Play Episode Listen Later Jul 14, 2022 80:37


This week Danielle and Jon discuss ADGA news.  Once past the craziness they focus on the main topic which is Input/Output.  What does that mean?  Well tune in and find out!

Nurture The Mind
Episode 42 - Input Equals Output: The Universe Gives What's Been Earned

Nurture The Mind

Play Episode Listen Later Jul 9, 2022 21:42


In this week's chat I speak on the idea of Input = Output. Essentially, what you put into the universe is what you will ultimately receive. I've experienced this reality many times over in the last year and a half. In this video I go over at least three different examples where my effort eventually paid off. I think this is such an underrated topic that more people should be talking about and applying to their lives. ----- CHECKLIST ----- Instagram: https://www.instagram.com/pootsiemama/ Patreon: https://www.patreon.com/colepoots Nurture the Mind podcast on Apple Podcasts: http://feeds.buzzsprout.com/1020460.rss Nurture the Mind podcast on Spotify: https://open.spotify.com/episode/2YKpUDga00ynF09ENXB8cM?si=bc349cf69a6c4072 #selfimprovement #motivation #dothework #lawofattraction

Whiskey Hue
Episode 83: Fordham Edition |

Whiskey Hue

Play Episode Listen Later Jul 5, 2022 76:59


Fordham Edition: Atul Prashar welcomes 2 of his FINANCING NEW MEDIA VENTURES students, Milica Jojic and Maher Alsakaff for a no-holds-barred conversation. 00:00 Intro 09:00 Elon Musk 22:30 Twitter 25:00 AOC 31:30 Socially Lib'd / Fiscally Conserv'd 35:30 Input = Output 39:00 Andrew Yang & RBG 44:30 Politics need a youthful infusion 49:50 TikTok 54:15 MIAMI 56:36 Pop Culture / BeReal App, What color is Pete Davidson?, Recent Album Drops, Un-Cancelled, Kardashians, Julia Fox 1:13:30 *Ish You Should Know

Micro Investor
Cardano Vasil Hard Fork Testnet Launch | What's Next For ADA?

Micro Investor

Play Episode Listen Later Jul 3, 2022 9:16


#cardano's long-awaited Vasil Hard Fork has just launched on the Cardano testnet. According to Input-Output, it will take 4 weeks of testing until it is ready for the mainnet launch. The Vasil upgrade will bring significant performance and capability upgrades. This is the Cardano news we have been waiting for! I will continue to do ADA price analysis and Cardano updates. 0:00 Overview 1:09 Vasil Hard Fork 2:44 Why Cardano Runs Great 4:11 Cardano Ecosystem 5:22 Cardano Price Analysis #cardanoada #adacrypto #cryptonews You can access my buys and sells on Patreon/Discord or via membership on YouTube

Scilence
Innervation: S3 Ep 1 Input / Output

Scilence

Play Episode Listen Later Jun 28, 2022 49:50


She is an engineer through and through. She sees the world through an engineering lens. Totally head-led, totally rational, and loving protocol, because it allows her to think outside of the box and solve problems - like all great engineers do. She's a mechanical engineer and more, who believes in the power of repeatability. Practice is key, which she applies to all multi-dimensions of herself. Nika is such an interesting person and is a great speaker - I highly recommend this conversation for a boost towards your own goals and dreams in life. Nika is a truly motivating force.

Emmanuel Tuscaloosa
Input/Output // Philippians 4:8-9 // 5.22.22

Emmanuel Tuscaloosa

Play Episode Listen Later May 22, 2022 43:18


Philippians 4:8-9

York College Chapel Talks
Input, Output - Sarah Van Gomple

York College Chapel Talks

Play Episode Listen Later Feb 10, 2022 14:12


Sarah Van Gomple, education professor, challenges listeners to evaluate what they are being influenced by. Scripture: Proverbs 4:23, Romans 12:2

Cardano Live
Milkomeda Protocol for Cardano with Nicolas Di Prima | Cardano Live #37

Cardano Live

Play Episode Listen Later Sep 3, 2021 48:01


Nicolas DiPrima is now Lead Engineer at dcSpark and former developer of the Jormungandr node and Fenrir interface at Input Output. Today he chats with us to discuss the Milkomeda protocol and bootstrapping an EVM-based sidechain with native wrapped smart contract support, timed perfectly with the coming launch of smart contracts on Cardano. View links mentioned in the podcast and check out more information in the description on Youtube: https://youtu.be/K7JYQLWwcqc

Cardano Live
Vukasin Vukoje, Smart Contracts and dApps on Cardano | Cardano Live #19

Cardano Live

Play Episode Listen Later May 15, 2021 50:57


Vukašin Vukoje is the Product Manager at Input Output for native assets and smart contracts, developed the ERC20 converter, and he previously worked on Ethereum. Today on Cardano Live we will talk about the Cardano smart contracts, dApps, native tokens and how they are different on Cardano compared to Ethereum. Watch Episode 19, view links, and check out more information on Youtube: https://youtu.be/2V47TMM9Ls8

From the Ground Up Athletic Performance Podcast
From the Ground Up Athletic Performance Podcast Episode 4: Shawn Sherman of Square 1 Systems, "Input/Output and Movement Solutions by Square 1"

From the Ground Up Athletic Performance Podcast

Play Episode Listen Later Apr 27, 2021 57:50


Welcome to episode 4 of the From the Ground Up Athletic Performance podcast. On this episode I sit down to discuss neural aspects with Shawn Sherman of Square 1 Systems. We discuss a variety of topics. We begin the podcast by focusing the discussion on the neural lens and discuss rudimentary patterns and how they pertain to locomotion and his system. We define motor control and discuss how it can erode over time. We discuss how compensation creeps into movement patterns and the effect it can have on healthy and optimal movement patterns. We discuss the role of isometrics in increasing neural drive and effectively restoring movement competency in Square 1. We examine how his system differs from other systems on the market, and we end by examining the roles the visual, somatosensory, and vestibular systems play in posture and movement. I hope you enjoy the episode. Shawn provides a lot of great perspectives, and hopefully it encourages you to examine the role of the nervous system in locomotion, sporting events, and overall effective lifestyles. Don't forget to check out Square 1 Systems' Instagram page to keep up with all their content as well.

MSP 1337
Defining The Why...

MSP 1337

Play Episode Listen Later Mar 2, 2021 35:05


We have talked about compliance and frameworks. We have talked about products and services and the gaps they fill to improve our security posture. We have even spent time mapping those products and services to the controls they satisfy, but have we defined the why (knowing the risks), which is where it all needs to start? Join me as I sit down with James Bowers of Input Output as we talk through the why.

pivot parenting
Input = Output

pivot parenting

Play Episode Listen Later Feb 16, 2021 16:36


If you put dough into a pasta maker, you get noodles on the other side. Plant a tomato seed, and you can expect to harvest tomatoes. But when parents apply the principle of "input = output" to their kids, like thinking that if we teach our kids to be honest they will then always be honest, disappointment is never far behind. In this episode I cameo a few of my clients' situations to show you how this line of thinking is not helpful, and I'll give you practical tools to drop the unnecessary suffering. To learn more about working with me, please visit heatherfrazier.com

The Deep End
We're a Bit Input Overloaded, You?

The Deep End

Play Episode Listen Later Aug 17, 2020


News, media, social media, family, conferences, services, TED Talks: it seems like everyone has fit themselves into a tiny little screen. We're tired, drained, and feel selfish for turning it off. So these are some questions we're asking ourselves today: +Input / Output: what are we consuming versus exhaling? +Energy that goes in has to come out in one way or another; is it useful for this particular time? +How do we know what we're putting in is affecting us negatively or positively? +How do you know when you're on an input overload, or you don't have enough? +How are you choosing what goes in right now?

RD3
Engage in the Now: An Input, Output Activate Meditative Exercise

RD3

Play Episode Listen Later May 6, 2020 5:32


News cycle fatigue, Pat Croce, saying when, staying in the now, informal meditation, single tasking, and leveraging exercise.