POPULARITY
This episode, Kirk Windstein joins Jamey to talk about his new album Ethereal Waves, remembering amps of the past, working a shared stage, and more!
Support Our Sponsors! Big thanks to the following folks:
www.martyrstore.net (use code Sacred25 for 10% OFF)
www.deathwishinc.com
www.metalfest.com
www.therave.com/metalfest
SumerianRecords.com - Official Milwaukee Metal Fest & Jasta Show sponsor
www.bluegrapemusic.com - Official Milwaukee Metal Fest & Jasta Show sponsor! Check out bands like Grid Iron, Heavy/Hitter & many more!
www.nuclearblast.com
reigningphoenixmusic.com - Check out the new INCITE album coming August 25th!
https://www.depop.com/jasta
CopyCatsMedia.com
Subscribe to the all new GaSDigital.com
Follow Jamey On Patreon: https://www.patreon.com/jasta
Follow The Show On Social Media: https://twitter.com/jameyjasta and https://www.instagram.com/jameyjasta/
Musician, former television host, and podcaster Jamey Jasta (Hatebreed, Kingdom of Sorrow, Jasta, and former host of MTV's Headbanger's Ball) interviews your heroes every Monday and Thursday. The newest 20 episodes are always free, but if you want access to all the archives, watch live, chat live, access the forums, and get the show a week before it comes out everywhere else, sign up at GaSDigital.com.
'Freedom of speech is really important; they're entitled to that, but they're not entitled to incite terrorism or break laws around hate speech' - DUP MLA Peter Martin and Malachi O'Doherty spoke to Peter after police said they were reviewing footage from a Kneecap gig.
This week Jamey sits with Mark Hunter from Chimaira to talk about seeing your band's merch in movies, discovering treasures on Tubi, the stress of guest-fronting a band, future tour plans, and MORE!
Support Our Sponsors! Big thanks to the following folks:
www.martyrstore.net (use code MMF10)
www.deathwishinc.com
www.metalfest.com
www.therave.com/metalfest
www.bluegrapemusic.com - Official Milwaukee Metal Fest & Jasta Show sponsor! Check out bands like Grid Iron, Heavy/Hitter & many more!
www.nuclearblast.com
reigningphoenixmusic.com - Check out the new INCITE album coming August 25th!
https://www.depop.com/jasta
Subscribe to the all new GaSDigital.com
Follow Jamey On Patreon: https://www.patreon.com/jasta
Follow The Show On Social Media: https://twitter.com/jameyjasta and https://www.instagram.com/jameyjasta/
Musician, former television host, and podcaster Jamey Jasta (Hatebreed, Kingdom of Sorrow, Jasta, and former host of MTV's Headbanger's Ball) interviews your heroes every Monday and Thursday. The newest 20 episodes are always free, but if you want access to all the archives, watch live, chat live, access the forums, and get the show a week before it comes out everywhere else, sign up at GaSDigital.com.
Guest: Michelle Shephard, former Toronto Star reporter and co-author of “Code Name: Pale Horse: How I Went Undercover to Expose America's Nazis” Scott Payne spent his career deep undercover in the drug trade, criminal networks and among racist terrorists — and despite some harrowing close calls that saw him stripped naked and almost exposed, he lived to tell the tale. The woman he told it to — former Star reporter Michelle Shephard, who co-wrote a new book with Payne — sits down with This Matters to talk about what he went through and what he learned. This includes a time when he found himself in a field in the American south under a literal burning cross at a KKK rally, and how he learned that famous racist organization is now “your grandfather's white supremacists” given the threat posed by The Base, a newer racist terrorist group bent on spurring a race war to bring about the collapse of society. Shephard discusses how Payne tried to stay safe while putting criminals in jail, and the toll it took on his mental health and family life. PLUS: the time Payne had a hood pulled off his head and found himself being unexpectedly “knighted” into the Ku Klux Klan. This episode was produced by Julia De Laurentiis Johnston, Ed Keenan and Paulo Marques.
Duration: 00:02:25 - Bientôt chez vous - The Japanese government wants to encourage Tokyo residents to settle in rural areas. To that end, it is setting up a subsidy program worth up to 6,200 euros per person. The goal: to fight overcrowding in the capital.
Send us a text! What is the Post-War Consensus, and why is it causing such heated debate? What does it have to do with the Antioch Declaration? And speaking of that, why has the tide in Reformed circles shifted so rapidly since 2020? In this episode, I'll talk with J.D. Hall, who writes at Insight to Incite and has been following the evangelical subversion for over a decade. We'll also talk about Al Mohler, Russell Moore, and the intentional effort to push the SBC left.
https://insighttoincite.substack.com/
2025 New Christendom Press Conference: https://www.newchristendompress.com/2025
Check out Farmer Bill's Provisions' products. Click the link below to claim 20% off.
https://farmerbillsprovisions.com/discount/HARD
Visit KeepwisePartners.com or call Derrick Taylor at 781-680-8000 to schedule a free consultation.
https://keepwise.partners/
Talk to Joe Garrisi about managing your wealth with Backwards Planning Financial.
https://www.northwesternmutual.com/financial/advisor/joe-garrisi/planning
10 Ways to Make Money with Your MAXX-D Trailer.
https://maxxdtrailers.com/10-ways-to-make-money-with-your-trailer
Support the show
At present, Ireland is the only country in Europe which doesn't acknowledge FASD as a disability. The disorder is the most prevalent neurodevelopmental disability and affects a greater number of people than autism, cerebral palsy and Down syndrome combined. Four Clare councillors currently sit on the HSE Regional Health Forum West, while a number of others hold positions on school boards. Shannon Banks Sinn Féin councillor James Ryan says Clare holds a unique position from which it can incite change.
Fifteen years after the outbreak of the Boko Haram insurgency, the situation around the Lake Chad basin has improved noticeably, with millions of displaced people and refugees returning to their home communities thanks to the combined efforts of the armies of the countries bordering Lake Chad. Some local observers estimate that since late 2020, Maiduguri has been becoming an economic hub again. Young middle-class people are choosing to settle and build their lives in the region, despite the trauma tied to Boko Haram. That is the case for Hajara and Saleh, two thirty-something entrepreneurs our correspondent met. Saleh Babagana answers a customer on his smartwatch. A reseller of second-hand phones on the streets of Maiduguri only four years ago, this wholesaler now distributes premium smartphone brands from an air-conditioned shop. And at 28, Saleh has no intention of leaving Nigeria: "It all started as a casual bet. One or two customers trusted us, and now we are well established as a business; many of our customers come from neighbouring countries that use the CFA franc. They buy our products at wholesale prices, between 1,000 and 2,000 units of our smartphones. And sometimes up to 5,000 devices of a particular phone brand have been bought here." Like Saleh, Hajara spent her whole adolescence in Maiduguri. She saw a classmate die before her eyes, then escaped a bomb attack. Despite these traumas tied to Boko Haram, this SME owner never considered leaving her city: "Before opening my shop, I ran my business from home. Because of the bomb blasts and Boko Haram, there was panic. People were fleeing their homes. It wasn't easy, but thank God, we managed well enough to survive."
"Migrating is not an option for me"
Today, Hajara employs about ten people. And the home fragrances she creates are exported to Niger and Cameroon, but also to Ghana and as far as Canada. Hajara feels fulfilled living and working in Maiduguri: "I chose to stay to inspire the young women here. Many feel that you can't achieve anything and that we can't make it in life. But that's not the case. Whoever we are, we can succeed at home. Whatever we want in life, we can achieve it. So migrating is not an option for me." Hajara doesn't understand why, in Nigeria, tens of thousands of people of her generation risk sacrificing everything for a visa to America or Europe.
Read also: Nigeria: in Maiduguri and around the Lake Chad basin, the return of security revives the economy [1/2]
The Jepson Leadership Forum presents Peter Bearman, founding director of the interdisciplinary institute Incite at Columbia University and Jonathan R. Cole Professor of Social Science at Columbia University, for a discussion on "Class, Status, and Party: Polarization Dynamics and Our Fragile Democratic Experiment." March 4, 2025 The 2024-25 Jepson Leadership Forum invites scholars and experts to discuss how division and polarization affect American democracy. We will explore how and why divisions have manifested historically and currently in the United States, focusing on their impact on justice, education, politics, culture, technology, and class. Are division and the struggle to find common ground making us stronger or tearing us apart?
Send us a text. This week Greg sat down with JD Hall. JD is the author of the Substack Insight to Incite as well as the founder of Protestia and Pulpit & Pen. They discussed his 15+ year journey in the polemics arena, as well as the rising thoughts on the post-war consensus within the younger generation. They also talked about where we see the church and large ministries going in the next five years as generational and social shifts begin to happen. Enjoy!
Book your next church conference HERE!
Dominion Wealth Strategists: Full Service Financial Planning!
Covenant Real Estate: "Confidence from Contract to Close"
Facebook: Dead Men Walking Podcast
Youtube: Dead Men Walking Podcast
Instagram: @DeadMenWalkingPodcast
Twitter X: @RealDMWPodcast
Exclusive Content: PubTV App
Support the show
Get your tickets for the Dangerous Friends Conference here! Enter "DMW25" at checkout to get a $25 discount on your ticket price!
Send us a text. This week Greg sat down with JD Hall. JD is the author of the Substack Insight to Incite as well as the founder of Protestia and Pulpit & Pen. They discussed his 15+ year journey in the polemics arena, as well as the rising thoughts on the post-war consensus within the younger generation. They also talked about where we see the church and large ministries going in the next five years as generational and social shifts begin to happen. Enjoy!
Private Family Banking: Protect your wealth!
Book your next church conference HERE!
Dominion Wealth Strategists: Full Service Financial Planning!
For more updates, visit: http://www.brighteon.com/channel/hrreport NaturalNews videos would not be possible without you, as always we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency. ▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/ ▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html ▶️ Brighteon: https://www.brighteon.com/channels/hrreport ▶️ Join Our Social Network: https://brighteon.social/@HealthRanger ▶️ Check In Stock Products at: https://PrepWithMike.com
It was only a matter of time before the democratization of all things came to evangelicalism. Whether it was Elon Musk buying Twitter, "entrepreneurs" getting rich off meme coins, or AI giving anyone supposed expertise in whatever field they wanted, we are experiencing a technological revolution that is as disruptive as the printing press. Information, platforms, access, and ideas have been thrown wide open to anyone with the shrewdness, tact, and ambition to wield them. I'm not saying this is a bad or a good thing; it's just the way things are moving.
No one is taking this harder than the evangelical institutions of the past. Seminaries, universities, para-church ministries, and church networks that were massively influential in the 1990s and early 2000s still have lots of land, money, and reach. But ultimately, despite their millions of dollars and decades-long track record, they are rapidly losing influence. Men with no degrees, no pedigree, and minimal funds are capturing the hearts of the next generation of right-wing and Christian men because they have what the institutions lack: courage. The gatekeepers shriek and wail, but they are gatekeepers without a gate.
This episode is brought to you by our premier sponsors, Armored Republic and Reece Fund, as well as our Patreon members and donors. You can join our Patreon at https://patreon.com/rightresponseministries or you can donate at https://rightresponseministries.com/donate.
Join us today with a very special guest, JD Hall of Insight to Incite and Protestia, to talk about how the populist social revival has come to evangelicalism and how its once narrow doors for credibility have been blown off the hinges.
*MINISTRY SPONSORS:*
*Private Family Banking*
How to Connect with Private Family Banking:
1. FREE 20-MINUTE COURSE HERE: https://www.canva.com/design/DAF2TQVcA10/WrG1FmoJYp9o9oUcAwKUdA/view
2. Send an email inquiry to chuck@privatefamilybanking.com
3. Receive a FREE e-book entitled "How to Build Multi-Generational Wealth Outside of Wall Street and Avoid the Coming Banking Meltdown" by going to https://www.protectyourmoneynow.net
4. Set up a FREE Private Family Banking Discovery call using this link: https://calendly.com/familybankingnow/30min
5. For a Multi-Generational Wealth Planning Guide Book for only $4.99, use this link for my affiliate relationship with "Seven Generations Legacy": https://themoneyadvantage.idevaffiliate.com/13.html
*Reece Fund*
https://www.reecefund.com/
*Dominion Wealth Strategists* is a full-service financial planning and wealth management firm dedicated to putting more money in the hands of the church. With an education-focused approach, they will help you take dominion over your finances.
https://reformed.money/
This week on Marketing O'Clock: Google is making headlines (AKA sitelinks) once again with some new developments. A recent study shows where AI is grabbing its citations from, and what they're made of. Meanwhile, Meta's got a brand-new ad type and is turning comments into cash. Plus, all the digital marketing news you missed this week!Visit us at - https://marketingoclock.com/
The Potter Discussion: Harry Potter, Fantastic Beasts and the Wizarding World Fandom
Send us a text. In this episode, we discuss how I would take over the world as Voldemort. Enjoy!
Topics/Summary:
· 1:59 Start with Hogwarts. Voldemort should have focused on Hogwarts more intensely early on, because most of his greatest enemies are there. He has to start planting spies much earlier if he wants to turn it into a more valuable asset of information.
· 9:50 Take control of Gringotts and Ollivander's. These two are the most important shops in the wizarding world. If Voldemort could take control of wand production, he could influence who gets to learn magic at Hogwarts. If he controlled the bank, he could hire an army large enough to defeat Dumbledore ten times over.
· 15:59 Infiltrate the Ministry. Voldemort does this, but it is too late. If I were him, I would start planting spies in the Ministry earlier, and hopefully work up to installing a puppet minister. How hard could it be to get Fudge out of office and support someone else for the position?
· 20:51 Incite more killings and disappearances. The thing that scared the public the most was all the disappearances and killings of civilians. He does some of this, but if he could ramp it up, he might be able to drum up enough fear to influence the war.
· 24:49 Protect the Horcruxes. To be fair, he does this well. However, it obviously wasn't good enough. He should think of a way to better protect his Horcruxes, which would guarantee his success.
Have anything you want to hear or say? Click here for a voice submission or here for text.
ThePotterDiscussion@gmail.com
thepotterdiscussion.com
Nox
In this episode of Kari's Confessions, Kari Anderson, a seasoned nonprofit executive and principal at Incite! Consulting, delves into the value of using contractors and partners to enhance nonprofit capacity. With over 25 years of experience, Kari offers insights on how to leverage contractors effectively, especially for small and mid-sized organizations facing challenges like limited staff, budgets, and time. Key Episode Takeaways: Benefits of Utilizing Contractors: Contractors can fill crucial gaps in your organization, especially for roles like marketing, HR, or fundraising. They provide expertise for specific projects without the long-term expense of hiring full-time staff and can help alleviate burnout during busy seasons. Tips for Finding the Right Contractor: Learn how to identify the right fit for your organization by clearly defining the scope of work, gathering project bids, and ensuring cultural alignment. Kari emphasizes the importance of crafting detailed contracts and understanding the legal distinctions between contractors and employees. Roles and Responsibilities: Kari explains the obligations of both contractors and clients, including adhering to agreements, meeting deadlines, and providing the tools and access needed to complete tasks. She highlights the importance of following employment laws to maintain a compliant and ethical relationship. Kari encourages nonprofits to embrace contractors as a creative way to build capacity, while ensuring compliance with legal and ethical standards. Whether it's for accounting, seasonal fundraising, or administrative support, contractors can be a game-changer for nonprofits seeking to optimize their operations. Follow Kari on your favorite streaming platforms, including YouTube, Apple, and Spotify. For additional resources on organizational storytelling and tools, follow Kari on LinkedIn. Visit her website at Incite Consulting Group.
In this final episode of "Kari's Confessions," Kari Anderson, a seasoned nonprofit executive and principal at Incite! Consulting, explores the power of strategic planning in driving nonprofit success. With 25 years of experience in leadership roles, Kari shares insights on why strategic planning matters, how to overcome common barriers, and tips for creating a plan that serves as a dynamic tool for organizational growth. Kari discusses the significant role strategic planning plays in increasing fundraising success, improving board-staff relationships, and fostering accountability within nonprofits. Key Episode Takeaways: What Strategic Planning Is: Learn how strategic planning provides a guiding framework for setting short- and long-term goals, aligning board and staff efforts, and charting a clear path forward for your organization. Benefits and Barriers: Understand the positive impact of strategic planning, from increased revenue to stronger donor confidence, and learn how to address barriers such as time constraints, funding concerns, and scheduling challenges. Tips for Success: Kari offers practical advice on how to make the strategic planning process engaging and effective. From involving stakeholders to creating actionable steps and reviewing progress regularly, she outlines how to ensure the plan remains a living document. Kari highlights the importance of celebrating wins, maintaining momentum, and involving stakeholders in the planning process. She stresses that while strategic planning may seem daunting, it is an essential investment in the sustainability and growth of any nonprofit. Follow Kari on your favorite streaming platforms, including YouTube, Apple, and Spotify. For additional resources on organizational storytelling and tools, follow Kari on LinkedIn. Visit her website at Incite Consulting Group.
Sam Beskind is an Investor and Chief of Staff at Floodgate, a VC firm backing Prime Movers across numerous different industries. Before this role, Sam completed his undergraduate and first master's degree at Stanford and followed this with a master's in Advanced Energy Systems at the Colorado School of Mines. He also previously interned at VDE Americas, a renewable energy consultancy, and Incite.org, an organization supporting mission-driven enterprises. In this episode, Sam talks to us about his decision-making process for degrees and opportunities, why he started his podcast, Built for Earth, and advice for students interested in the VC space. Sam finishes the episode by emphasizing the importance of building relationships and adding value to companies or organizations with your "superpower".
Keynotes:
- How to approach picking your degree programs
- The value of soft skills like communication and relationship building
- Adding value wherever you go
And follow us on:
Newsletter: https://www.energy-terminal.com/newsletter-signup
LinkedIn: https://www.linkedin.com/company/energy-terminal
Instagram: https://www.instagram.com/energyterminal/
Duration: 00:03:30 - Un monde connecté - by François Saltiel - "Ressources, un défi pour l'humanité" is a comic book by Philippe Bihouix and Vincent Perriot, an album that deconstructs the cornucopia myth in which the tech giants keep us confined and invites us to use gentle, repairable, adaptable technologies.
Story #1: World Wars have been set off by assassins' bullets. We are living in a tinderbox, and former Secretary of State Hillary Clinton continues to feed the fire after the second assassination attempt on former President Donald Trump. Story #2: Checking in on the 'Friends Of The Will Cain Show' fantasy football league, and how the "Will versus the Experts" challenge is going. Story #3: A conversation with Tony Robbins and Mo Ramchandani on their new movie exposing sex trafficking in America, "City of Dreams." Tell Will what you thought about this podcast by emailing WillCainShow@fox.com. Subscribe to The Will Cain Show on YouTube here: Watch The Will Cain Show! Follow Will on Twitter: @WillCain
Billy Morrison is a musician and artist who currently plays guitar with Billy Idol. He recently released a new solo record called "The Morrison Project." In this episode he discussed his journey from drug addiction to achieving success with the number one hit "Crack Cocaine." He emphasized the importance of making honest music and the strategic inclusion of guest stars like Billy Idol and Corey Taylor. Morrison recounted his friendship with Ozzy Osbourne, formed at a Christmas party. He highlighted his involvement with the charity Above Ground and his commitment to helping others. Morrison also shared his experiences with sobriety, the impact of the pandemic, and his ongoing art and music projects, including an upcoming video for "Incite the Watch" featuring Steve Vai & Corey Taylor.
00:00 - Intro
00:15 - Becoming a Raider Fan
01:45 - Doing Drugs
03:25 - Having a #1 Song & Guests on Record
07:47 - Becoming Friends with Ozzy
10:45 - Meeting Celebrities
11:45 - Fan of Artists & Doing Paintings
13:08 - Sobriety, Art & Meaning
15:45 - New Solo Songs & Artistic Meaning
17:00 - It's Come to This & Pandemic
19:55 - Gratitude, Acceptance, Slumps & Helping Others
31:10 - New Song "Drowning" & Album Credits
36:15 - Corey Taylor & "Incite the Watch"
41:00 - Happier As Older & Learning Lessons Sooner
45:12 - Outro
Billy Morrison website: https://billymorrison.com/
Chuck Shute link tree: https://linktr.ee/chuck_shute
Support the show
Thanks for Listening & Shute for the Moon!
ਬਿਲਾਵਲੁ ਮਹਲਾ ੫ ॥ ਐਸੇ ਕਾਹੇ ਭੂਲਿ ਪਰੇ ॥ ਕਰਹਿ ਕਰਾਵਹਿ ਮੂਕਰਿ ਪਾਵਹਿ ਪੇਖਤ ਸੁਨਤ ਸਦਾ ਸੰਗਿ ਹਰੇ ॥੧॥ ਰਹਾਉ ॥ ਕਾਚ ਬਿਹਾਝਨ ਕੰਚਨ ਛਾਡਨ ਬੈਰੀ ਸੰਗਿ ਹੇਤੁ ਸਾਜਨ ਤਿਆਗਿ ਖਰੇ ॥ ਹੋਵਨੁ ਕਉਰਾ ਅਨਹੋਵਨੁ ਮੀਠਾ ਬਿਖਿਆ ਮਹਿ ਲਪਟਾਇ ਜਰੇ ॥੧॥ ਅੰਧ ਕੂਪ ਮਹਿ ਪਰਿਓ ਪਰਾਨੀ ਭਰਮ ਗੁਬਾਰ ਮੋਹ ਬੰਧਿ ਪਰੇ ॥ ਕਹੁ ਨਾਨਕ ਪ੍ਰਭ ਹੋਤ ਦਇਆਰਾ ਗੁਰੁ ਭੇਟੈ ਕਾਢੈ ਬਾਹ ਫਰੇ ॥੨॥੧੦॥੯੬॥
Translation: Bilawal Fifth Mehal. Why do you wander in delusion like this? You act, and incite others to act, and then deny it. The Lord is always with you; He sees and hears everything. || 1 || Pause || You purchase glass, and discard gold; you are in love with your enemy, while you renounce your true friend. That which exists seems bitter; that which does not exist seems sweet to you. Engrossed in corruption, you are burning away. || 1 || The mortal has fallen into the deep, dark pit, and is entangled in the darkness of doubt, and the bondage of emotional attachment. Says Nanak, when God becomes merciful, one meets with the Guru, who takes him by the arm, and lifts him out. || 2 || 10 || 96 ||
Exposition: Bilawal, Fifth Mehl. (O brother! Who knows why) mortals remain astray like this. They commit and cause misdeeds, and then deny them, yet the Lord always dwells with all beings, watching and hearing their doings. ||1|| Pause || O brother! Trading in glass while abandoning gold, loving the enemy while forsaking true friends: such are the doings of mortals. The Lord's Name tastes bitter to them and the love of Maya tastes sweet; such is their constant nature, and entangled in Maya's attachment they keep burning in vexation. ||1|| O brother! Mortals lie fallen in the blind, dark well of attachment; wandering clings to them and they remain bound in the darkness of emotional attachment (who knows why they stay astray like this). Says Nanak: when God becomes merciful to someone, that person meets the Guru, and the Guru, taking him by the arm, pulls him out (of the dark well). ||2||10||96||
After Trump's attempts to distance himself from the Heritage Foundation's Project 2025, his running mate has written that "it's time to circle the wagons and load the muskets" in the foreword to a new book by the project's leading voice.
On Today's Show:
Alex Shephard, senior editor of The New Republic, talks about the latest national political news, including Trump's VP pick JD Vance's media rounds and more on Harris's VP pick Tim Walz.
Dogs Are Smarter Than People: Writing Life, Marriage and Motivation
Brainstorming . . . Even the word sounds a little creepy. Like there is a storm inside your brain. It sounds... It sounds sort of violent and hazardous and windy. In this podcast, we talk about the storms inside our brain and how those storms can become story ideas.
Five Ways To Get Story Ideas
Some authors have a really hard time just getting an idea for a new story. They burn out. They can't find anything that they think is 'good enough.' They just don't know where to start, and that lack of a start makes them blocked. This is so sad! There are ways to fight it.
One Way To Storm is BY Admiring Others' Work
Think about ways that other people's stories influence you. If you're an Outlander fan, think about why. If you were to write your own kind of time travel story, would it be like that? With a lot of spanking and stuff? Or something totally different. How would it be different?
Another Way to Incite a Hailstorm of Questions
Ask yourself questions. It's all about 'What if?' What if Trump wasn't president in 2018? What if everyone had blue hair? What if the earth had two moons? What if dogs were really space aliens? Pogie the Dog: Wait. You mean they aren't? Carrie the Human: No, buddy... I mean... I don't think so?
Third Way Where the Wind Is So Strong It Pushes Images into You
Some of my best ideas have come on a treadmill watching the country music network or MTV or some random YouTube channel with the sound off and just seeing images. Eventually, an image will hit me so hard that I have to write a story about it. This happened with my story, Love (and Other Uses for Duct Tape).
Fourth Way Of Icy Understanding
Figuring things out. This is sort of like Another Way, but instead of deliberately asking yourself off-the-wall questions, ask questions about things that matter to you. A lot of my stories are because I don't understand something. Tips on Having a Gay (Ex) Boyfriend was because I couldn't understand a hate crime that had happened. I mean, you can never understand that kind of hate, but this one incident was so bizarre that the only way I could deal with it was to write my way through it.
Fifth Way—An Emotional Blizzard
Get emotional. What is it that always makes you laugh, cry with joy, weep with anger? What are the situations that pull at your heartstrings? Think about that as story. Write.
Dog Tip for Life
Inspiration is just attention. Notice what's around you. Then ideas will come.
Writing Tip of the Pod
Once you have your seed of information and your brain has successfully stormed, don't second-guess your idea. Write it down. If you are a plot-first writer, think up the questions to flesh out your idea - who is the protagonist? What is she up against? What's her goal? How is she going to get it? Write it down. Do it. Don't block yourself.
RANDOM THOUGHT LINKS
APA article
NPR
SHOUT OUT!
The music we've clipped and shortened in this podcast is awesome and is made available through the Creative Commons License. Here's a link to that and the artist's website. Who is this artist and what is this song? It's "Summer Spliff" by Broke For Free.
WE HAVE EXTRA CONTENT ALL ABOUT LIVING HAPPY OVER HERE! It's pretty awesome. We have a podcast, LOVING THE STRANGE, which we stream biweekly live on Carrie's Facebook and Twitter and YouTube on Fridays. Her Facebook and Twitter handles are all carriejonesbooks or carriejonesbook. But she also has extra cool content focused on writing tips here. Carrie is reading one of her raw poems every once in a while on CARRIE DOES POEMS. And there you go!
Whew! That's a lot! Subscribe
When Renting Real Estate to Your S-Corp Leads to a Deficiency of $500,000+. The Gregory and Laura Schnackel Tax Court Saga: A Tale of Extravagance and Deception.
Taxnotes Case Review: https://www.taxnotes.com/research/federal/court-documents/court-opinions-and-orders/no-deductions-28000-month-condo-no-business-purpose/7kjhl
Are you a tax professional looking for an online community focused on growing technical knowledge? Where all responses require a citation? Check out Incite!
InCite Tax Community: https://www.incite.tax/
In this episode, we delve into the dramatic and intriguing tax court case of Gregory and Laura Schnackel as detailed in Tax Court Memo 2024-76. Gregory, the owner of an engineering and design S-Corp, purchases a lavish $3 million New York City condo, furnishes it with $300,000 worth of high-end items, and tries to pass off much of these costs as business expenses. Amidst extramarital affairs and questionable spending, Gregory fails to maintain proper records, resulting in significant penalties and tax deficiencies. Laura, with minimal involvement in the business, successfully applies for innocent spouse relief, while the courts determine the substantial amounts owed. This tale is a striking example of the potential fallout from attempting to misuse business write-offs, and of the responsibilities that come with tax reporting, even when using a tax professional. The episode concludes with the fallout from Gregory and Laura's divorce and the consequential financial and personal unraveling.
00:00 Introduction to the Case
00:27 Background of Gregory and Laura
00:38 Gregory's Business Ventures
02:57 The New York Condo Purchase
05:32 Questionable Business Practices
09:02 Luxury Furnishings and Personal Use
12:49 The Range Rover Purchase
15:41 The Affair and Financial Misconduct
17:56 Innocent Spouse Relief
19:07 IRS Penalties and Court Rulings
25:46 Lessons and Final Thoughts
Jesus, incite my heart in places where discouragement wants to live.
Scripture: Hebrews 10:24
Keywords: Motivation; encouragement; mentoring; love; kindness; making a difference.
Best of, Summer.
Join John Briggs and Mathew Park as they dive into the world of creating a successful business in the gym industry! Mathew is the founder and CEO of Trainer Revenue Multiplier with a background in physical fitness training and bodybuilding. His company helps professionals in fitness scale smarter and retain clients to optimize success. In this interview Mathew lets John in on some of those proven strategies that can be applied to businesses of all kinds!
Mathew Park:
Website: https://trainerrevenuemultiplier.com/
Socials: @trainerrevenuemultiplier
Podcast: "Trainer Revenue Multiplier Show" on Spotify and Apple Podcasts
John Briggs | Tax Genius
info@incitetax.com
801-999-8295
Visit our website @ Incite Tax
Schedule A Call
IRS Sucks T-shirts now for sale @ Incite Shop
Follow us on… Facebook, LinkedIn, Instagram, Twitter, YouTube
Join Incite CEO John Briggs on the InciteFUL Profit podcast for a compelling conversation with special guest April Porter. An attorney turned franchisee, April shares the story of leaving her career in the courthouse to open kickboxing gyms. She then discovered how her legal experience and ability to navigate high-pressure environments could drive the success of her franchise and other small businesses. Tune in for valuable tips and strategies on effectively managing and growing a franchise. Don't miss this opportunity to learn from April's unique journey and gain actionable insights for your business ventures.
April Porter:
Website: askaprilporter.com
Socials: @askaprilporter
Podcast: The Infinite Franchisee Show
#Incitetax #askaprilporter #InciteFULProfit #Podcast #JohnBriggs #AprilPorter #FranchiseSuccess #BusinessJourney #KickboxingGyms #SmallBusiness #Entrepreneurship #BusinessTips #FranchiseManagement #SuccessStrategies #BusinessInsights #CareerTransition #franchisee #businessgrowth #IRSsucks
John Briggs | Tax Genius
info@incitetax.com
801-999-8295
Visit our website @ Incite Tax
Schedule A Call
IRS Sucks T-shirts now for sale @ Incite Shop
Follow us on… Facebook, LinkedIn, Instagram, Twitter, YouTube
News
Today, Barbara Lefebvre, Antoine Diers, and Jérôme Marty debate the day's news alongside Alain Marschall and Olivier Truchot.
Three members of Congress and now the President have stated that Trump supporters are "erectionists" and engaged in "erection" against the United States. Although funny, it could be understood in context with the words "incited an erection," since the "in" and "an" trick the brain into shortening the last word. But something else is going on too. Sheila Jackson Lee, who was on the Science, Space and Technology Committee, commented during the eclipse that the moon was "made up mostly of gases." The US military has posted a picture and video in the last month showing the improper assembly, holding, and firing of a firearm. The View's Sunny Hostin went further by claiming cicadas, earthquakes and the eclipse were caused by Climate Change. The question is becoming not so much the rhetorical "how ignorant are these people?" but instead whether some of this is on purpose, to demoralize and to change both history and science.
FREE ARCHIVE & RSS: https://www.spreaker.com/show/the-secret-teachings
Twitter: https://twitter.com/TST___Radio
Facebook: https://www.facebook.com/thesecretteachings
WEBSITE (BOOKS, RESUBSCRIBE for early show access): http://thesecretteachings.info
Paypal: rdgable@yahoo.com
CashApp: $rdgable
Buy Me a Coffee: https://www.buymeacoffee.com/tstradio
SUBSCRIBE TO NETWORK: http://aftermath.media
EMAIL: rdgable@yahoo.com / TSTRadio@protonmail.com
Merriam-Webster's Word of the Day for May 13, 2024 is: instigate IN-stuh-gayt verb To instigate something is to cause it to happen or begin. // The pair was accused of instigating a plot to oust the newly elected mayor. See the entry > Examples: "The image of John, Paul, George and Ringo waving from the top steps of Pan Am Yankee Clipper flight 101 at 1.20pm on 7 February 1964 is among the most iconic in rock'n'roll history. … That aeroplane steps photo was pivotal in instigating a dynamic in rock music whereby boys played guitars to the wild adulation of girls, a misguided social 'norm' that became so deeply embedded in the music industry that we're only now beginning to untangle it." — Mark Beaumont, The Independent (London), 7 Feb. 2024 Did you know? It's time to investigate the true meaning of instigate. Instigate is often used as a synonym of incite (as in "siblings instigating a fight"), but the two words differ slightly in their overall usage. Incite usually stresses an act of stirring something up that one did not necessarily initiate ("the court's decision incited riots"), while instigate implies responsibility for initiating or encouraging someone else's action, and usually suggests dubious or underhanded intent ("he was charged with instigating a conspiracy"). Coming from a form of the Latin verb instigare ("to urge on or provoke"), instigate stepped into English in the 1500s, roughly a century after incite.
Merriam-Webster's Word of the Day for May 13, 2024 is: instigate IN-stuh-gayt verb To instigate something is to cause it to happen or begin by urging or goading others. Instigate is a synonym of provoke. // The pair was accused of instigating a plot to oust the newly elected mayor. See the entry > Examples: "The image of John, Paul, George and Ringo waving from the top steps of Pan Am Yankee Clipper flight 101 at 1.20pm on 7 February 1964 is among the most iconic in rock'n'roll history. … That aeroplane steps photo was pivotal in instigating a dynamic in rock music whereby boys played guitars to the wild adulation of girls, a misguided social 'norm' that became so deeply embedded in the music industry that we're only now beginning to untangle it." — Mark Beaumont, The Independent (London), 7 Feb. 2024 Did you know? It's time to investigate the true meaning of instigate. Instigate is often used as a synonym of incite (as in "siblings instigating a fight"), but the two words differ slightly in their overall usage. Incite usually stresses an act of stirring something up that one did not necessarily initiate ("the court's decision incited riots"), while instigate implies responsibility for initiating or encouraging someone else's action, and usually suggests dubious or underhanded intent ("he was charged with instigating a conspiracy"). Coming from a form of the Latin verb instigare ("to urge on or provoke"), instigate stepped into English in the 1500s, roughly a century after incite.
This week on Homeless in San Diego, we welcome filmmaker Jason Ritchie, a storyteller driven by the struggles of his own life. Ritchie was incarcerated for over 17 years in the California prison system and has overcome drug and alcohol addiction. He used education for his personal rehabilitation and believes it to be a crucial contributing factor for success in re-entry and recovery. Through Project Rebound, Ritchie has gone on to graduate from SDSU with an Interdisciplinary BA in Film, Journalism, and Art, and was accepted into a Master's program in Learning Design and Technology. His goal is to help others tell their stories of incarceration and addiction in the hopes of bringing awareness that these issues do not discriminate. Ritchie recently received a fellowship through Columbia University's INCITE program, an interdisciplinary institute that produces knowledge for public action at Columbia University. There, he is creating an experiential theatrical installation piece titled "Street Seen." The play is expected to debut in November and invites audiences to view life from the perspective of people who live on the streets. Don't miss this inspiring episode that challenges perceptions and showcases the strength found in second chances. https://www.theunhousedcollective.org/ Instagram: https://www.instagram.com/sandiegounhousedcollective/ https://www.voicesofourcity.org/ https://www.housingsandiego.org/heal-network https://www.housingca.org/our-work/run/ https://incite.columbia.edu/mission https://www.calstate.edu/impact-of-the-csu/student-success/project-rebound
"Union days that incite racial hatred will end" Myanmar Nway Oo Chronicle 12th Feb 2024 (Moemaka Article) Nway Oo Naing.This item has files of the following types: Archive BitTorrent, Item Tile, Metadata, PNG, Spectrogram, VBR MP3
Swati Mylavarapu is founder and managing partner of Incite.org, where she invests catalytic capital and provides guidance to start-ups and non-profits developing world-positive innovation across climate tech, health tech, and social impact. Prior to founding Incite, Swati spent years working in the early-stage technology industry as both an investor and an entrepreneur. She has been a Partner at Kleiner Perkins, served as National Investment Chair for Pete For America, and also co-founded Arena, which is building a new generation of leaders in American public service.
In this episode we cover...
[1:45] The inspiration for Incite
[4:15] How Swati has seen impact investing change
[6:44] How Incite's flexible investment model has manifested
[10:36] What Incite is spending its time on now
[17:10] What Swati looks for in a founder
[20:30] What makes a good VC investor
[23:28] Advice for people looking to break into venture
[27:50] How Swati builds mission-oriented teams
[29:32] What drives Swati personally and professionally
[32:17] A fun fact about Swati
Our first ever demo day aimed for 15-20 people and ended up ballooning to >200 and being covered in the news. We are now running the 2024 edition in SF on Feb 23: Latent Space Final Frontiers, a startup and research competition in "The Autonomous Workforce", "Beyond Transformers & GPUs", and "Embodied AI". RSVP here! You can find all LS online/IRL events on our new calendar. Super Early Bird tickets have just gone on sale for AI Engineer World's Fair, June 25-27!

Today we have the honor of hosting two of Together AI's co-founders: Ce Zhang (CTO) and Vipul Ved Prakash (CEO). This is a rare opportunity to recap the history of the company since our last check-in with Tri Dao (Chief Scientist), cover some of their big releases, and do a deep dive into the state of the AI inference market. Together has emerged as one of the most consequential new startups in the new AI summer, last announcing a ~$100m Series A raise in November (at a ~$360-565m valuation). But there are at least three Togethers - Together the Research Lab, Together the Fine-Tuning & Inference platform, and Together the custom models service. As we clarify on the pod, the overarching philosophy of Together is the ability to improve on all these fronts simultaneously by being "full stack", from the lowest-level kernel and systems programming to the highest-level mathematical abstractions driving new model architectures and inference algorithms.

Bringing Research and Industry Together

In just one year, Together has been behind some of the most exciting research in AI:

* RedPajama, a fully open source dataset for model pre-training which mirrored the Llama1 recipe, followed by RedPajama2, a 30T-token dataset of filtered and de-duplicated tokens.
* RedPajama-INCITE-3B and 7B, which were SOTA in a few benchmarks at the time of release.
* FlashAttention-2, developed by Together's Chief Scientist Tri Dao. We covered FA-2 in a previous episode with him.
* Mamba-3B, the most promising transformer-alternative model, which they released in collaboration with Cartesia.
* StripedHyena, a SOTA graft of Hyena state space models and transformer models together.
* Medusa, an alternative to speculative decoding that lets you use multiple decoding heads instead of a draft model.
* MonarchMixer, which was one of the most popular orals at NeurIPS 2023. It's an approach to transformers that replaces many of their core parts with Monarch matrices for better computational efficiency.

And I'm sure we missed something! As Vipul reveals, almost 50% of Together staff are researchers, and two of their co-founders (Chris Ré and Percy Liang) are professors at Stanford, so we can expect a lot more here.

Bringing "Disaggregated" GPUs Together

On their cloud, they offer inference as a service, fine-tuning, pre-training, etc., but unlike other providers they think of themselves as a disaggregated cloud. Today, they have ~8,000 A100 and H100 GPUs on their platform (an exclusive revealed on the pod!) totaling over 20 exaflops of compute, but instead of just buying more, putting them in a cluster, and then exposing a `us-east-1` option for customers, they are taking heterogeneous compute sources and adding a unified layer on top of them for developers to consume. Building on Ce's research, Together's GPU Clusters are taking on comparable AWS and GCP offerings in both cost and speed.

Take the Hessian AI center in Germany or the DoE's INCITE; they have GPUs that they want to share with researchers, but they lack the cloud layer over it.
Similarly, there's starting to be more and more differentiation amongst types of GPUs: H100s, A100s, MI300s, etc. Each of them has different availability and performance based on task, and the end user shouldn't have to be a hardware expert to run inference on a model, so Together abstracts a lot of that away.

A big theme of the Together inference stack, a "bag of 50 tricks" that we discuss on the pod, is also "hardware-aware" algorithms like FlashAttention and Mamba, which further emphasize the benefits of co-developing everything together.

Special Focus: Transformer Alternatives

As we mentioned above, they are also funding a lot of research in Transformer alternatives. To reiterate a few points on why they matter:

* Longer context is not the motivation for sub-quadratic architectures: Transformers don't inherently have hard limitations on context size, they just get extremely expensive. When developing sub-quadratic alternatives, you easily enable very long context, but that's not how you should compare them. Even at the same context size, inference and training are much cheaper on sub-quadratic architectures like Hyena.
* Emergence of hybrid architectures: a lot of early conversations have been around the "post-Transformers" era, but it might be more like "half-Transformers". Hybrid architectures could have split layers, with some transformer-based and some state-space ones. One of the challenges is that a lot of hardware kernels are optimized for transformer operations, so you'd lose a lot by moving away completely (see the toy sketch at the end of this section).
* Higher speed = higher GPU throughput: if we could reach the same benchmark performance on sub-quadratic architectures, it'd solve a lot of the GPU crunch. Today we peak at ~170 tok/s on inference in some open models; if we could reach 5,000 tok/s on the same card, you'd be able to serve 30x more customers on the same hardware. As a cloud provider, you're obviously incentivized to get there.

We had a lot of fun chatting with the Together guys and we covered a lot of ground, so enjoy the conversation!

Note: This is the first episode of a "cloud providers mini-series". We have Erik from Modal and Ben from Replicate coming up next!
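To make the hybrid-architecture bullet above concrete, here is a toy, hedged sketch of the layer-splitting idea: full attention kept in a minority of layers, with a simple diagonal linear recurrence doing the token mixing everywhere else. This is our illustration, not Together's (or anyone's) production architecture; the 1-in-4 attention ratio, the layer sizes, and the SimpleSSMBlock are arbitrary choices, and real state-space layers like Mamba or Hyena are far more sophisticated.

```python
import torch
import torch.nn as nn

class SimpleSSMBlock(nn.Module):
    """Toy diagonal linear recurrence: h_t = a*h_{t-1} + b*x_t, y_t = c*h_t.
    Only meant to show that token mixing here is recurrent, not quadratic attention."""
    def __init__(self, dim: int):
        super().__init__()
        self.log_a = nn.Parameter(torch.full((dim,), -1.0))  # per-channel decay (starts ~0.37)
        self.b = nn.Parameter(torch.ones(dim))
        self.c = nn.Parameter(torch.ones(dim))
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, dim)
        a = torch.exp(self.log_a)
        h = torch.zeros_like(x[:, 0])
        ys = []
        for t in range(x.size(1)):                        # O(seq) sequential scan, slow but clear
            h = a * h + self.b * x[:, t]
            ys.append(self.c * h)
        return x + self.norm(torch.stack(ys, dim=1))      # residual connection

class AttentionBlock(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y, _ = self.attn(x, x, x, need_weights=False)
        return x + self.norm(y)

class ToyHybrid(nn.Module):
    """Mostly SSM-style blocks, with full attention kept in every 4th layer."""
    def __init__(self, dim: int = 64, depth: int = 8):
        super().__init__()
        self.blocks = nn.ModuleList(
            [AttentionBlock(dim) if i % 4 == 3 else SimpleSSMBlock(dim)
             for i in range(depth)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for blk in self.blocks:
            x = blk(x)
        return x

if __name__ == "__main__":
    out = ToyHybrid()(torch.randn(2, 16, 64))
    print(out.shape)  # torch.Size([2, 16, 64])
```

The only point of the sketch is the shape of the stack: most layers mix tokens in linear time, while occasional attention layers remain where transformer-optimized kernels still pay off.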
Video Podcast

Join us to watch the video version of this pod on our snazzy YouTube!

Show Notes

* Together AI
* RedPajama Dataset v1 Announcement
* RedPajama Models v1 Announcement
* Together Embeddings
* StripedHyena-7B
* Mamba-3B-SlimPJ
* Vipul's X thread on Anyscale
* Vipul's Razor
* SemiAnalysis' "Inference Race to the Bottom" post
* Chris Ré
* Mike Conover's episode
* Slim Pajama by Cerebras
* Dolma by AI2
* Jina AI
* Tengyu's Voyage AI

Timestamps

* [00:00:00] Introductions
* [00:00:43] Origin and current state of Together.ai
* [00:02:15] Transition from Apple to Together and the vision for open AI
* [00:04:54] How Chris Ré introduced Ce and Vipul
* [00:08:43] How RedPajama came to be
* [00:13:34] Model training and Transformer alternatives
* [00:15:37] DSIR and the importance of data in LLMs
* [00:21:19] Inference vs Fine-tuning vs Pre-training usage on Together
* [00:23:20] Together's GPU stash
* [00:27:02] Why standardization of inference metrics is important
* [00:29:26] Building moats in AI inference
* [00:31:49] Federated vs disaggregated cloud computing
* [00:34:57] Opportunities for improvement in the inference stack
* [00:36:13] Anyscale benchmarking drama
* [00:41:27] Not just an inference platform
* [00:43:50] Together Embeddings and the future of embedding models
* [00:45:53] State space models and hybrid architectures
* [00:53:52] The need for 5,000 tokens/s speed in AI inference
* [01:00:23] What's the most interesting unsolved question in AI?

Transcript

Alessio [00:00:00]: Hey, everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.

Swyx [00:00:14]: Hey, and today we're together with Together. Welcome to the studio, guys.

Ce / Vipul [00:00:20]: Thank you.

Swyx [00:00:21]: I don't know how you typically give self intros, but does anyone want to go first? How do we get our audience acquainted, especially to who's speaking, because it's unusual for us to do a four-person pod. Yeah.

Ce [00:00:33]: Hi, everyone. I'm Ce. I'm one of the co-founders of Together and the CTO, working with the team on technical things.

Vipul [00:00:40]: I'm Vipul Ved Prakash, co-founder and CEO of Together.

Swyx [00:00:43]: I always consider you guys as one of the sort of all-in-one companies. I always want to say labs, but I feel like you're not a lab. What is the sort of origin of Together, and then what is it today? I feel like it used to be Together.xyz, and then now you're Together.ai.

Vipul [00:01:00]: I think fundamentally, Together is about open and independent AI systems. We think this is one of the most consequential technologies of our time, and when we started the company in June 2022, our focus was to build a platform for open source, independent, user-owned AI systems. One way to think about it is big labs, frontier model labs, have built their own developer platforms for their models. We think of Together as a platform for everything else, whether these are open models, whether these are models being built by companies that are owned by them. Our sort of XYZ roots, we have a fairly deep decentralization and open ethos that kind of reflects in all our platform and strategy and business.
And we also, the way we structure our cloud is by combining data centers around the world instead of, you know, we are today not located in hyperscalers, we have built a footprint of AI supercomputers in this sort of very disaggregated, decentralized manner.

Alessio [00:02:15]: I know before Together, you were at Apple, so you go from like the most walled-garden, private, we-don't-say-anything company, to we want everything to be open and everybody to know somebody. What maybe did you learn from like the Apple way of being super closed and polished, and maybe what are you taking now to Together to make it open, but also a very nice developer experience?

Vipul [00:02:37]: Yeah, I would say, you know, one sort of my, you know, background has been in open source for a long time. One of the first things I created was a collaborative spam filter, you know, this was back in the day. It's called Vipul's Razor. And it became quite popular. And the first company I founded, called Cloudmark, was built around, you know, taking open source and building both an open side of it and a commercial product around it. I think Apple is sort of very focused on providing this amazing experience to its customers with, you know, most of the technology sort of hidden behind the product. And certainly the focus on fluidity and applying complex technology to make everyday things simple is something that Apple does really well. And, you know, that's been a sort of big part of how we think about our developer platforms. I think it informs it. The other thing is that during my years at Apple, we, you know, worked a lot on deep learning. And one of the things that was sort of very viscerally accessible to me was how well these systems worked. We, you know, we built an open domain Q&A system. This was based on Facebook's LSTM paper in 2016. And it was remarkable because we had a parallel system based on sort of information retrieval techniques, which was extremely complicated and didn't work that well. And you know, this thing we wrote in a week had just incredible performance. So I think some of those experiences, at least for me personally, sort of were creating this roadmap of how important and powerful this technology is. And you know, when the scaling laws paper was published, it was very clear to me, like it was in some ways something very profound. We've never had algorithms that improve in capabilities with scale out. So this is almost a new era of computing. So that's been, I think, the influence of Apple, my years at Apple, really for me, like crystallized the value of what we are doing together.

Alessio [00:04:54]: And how did you decide to join forces? Because you did a postdoc with Chris Ré at Stanford. You know, we already had Tri Dao from Together and we talked about Hazy. What was like the meeting of the minds of, hey, I come from like the more technical postdoc assistant professor background and we've got yet a more product thing. What got you excited to like build this now?

Ce [00:05:15]: So we have been working on this together, Chris and I, in essentially the last like 10 years, right? So a machine learning system 10 years ago was like probabilistic graphical models, right? And then convolutional neural networks, and then all the foundation models that we see today. But if you look at this, I think that fundamentally the thing we are actually optimizing is actually not that different. It's always about data movement across essentially all the stacks, right?
So when you do distributed like computing, it's about communication across different machines. When you do, for example, flash attention, it's about data movement at a different, essentially, memory hierarchy, right? So we have been doing this in the last 10 years and seeing the field start to grow, grow, grow. So we kind of feel the current kind of this like wave of technology is actually the perfect time to actually bring all the research essentially into something real. And we are super lucky that we got introduced to Vipul, right? And then we hope to join forces and bring this to the real world.

Swyx [00:06:10]: It's an unusual team of like sort of research and industry. Like you've been like a third or fourth time founder now. Third time founder, yeah. And so like what is your first order of business when you like set up together? Like how do you sort of put something like this together? Oh my God, I'm going to use this word so much.

Vipul [00:06:27]: I feel AI companies are really kind of driven by research. And Chris and I had been talking about how to reduce the cost of building models. We felt that there aren't really big data moats around foundation models. They are built from a subset of the web. What is difficult is the cost of capital to build these. And one of the ways in which you can reduce this cost is by making more efficient systems. With that, it was really about finding the right set of co-founders and team. In fact, when Chris introduced me to Ce, and I think within the first five minutes of talking to Ce, I was like, we are starting this company. And our early focus was thinking about this more sort of disparate set of resources, you know, GPUs around the internet. Can we use those to build? And we really have to compress communication for, you know, when we do gradient averaging, there's just a lot of traffic. And if you can reduce that somehow, you sort of open up the possibility of using cheaper compute, you know, across the network. And Ce's research for a decade has been in that subject. You know, and from there, finding, you know, other folks in the network, I think there is generally a lot of excitement and philosophical alignment around what we are doing, which, you know, we publish papers, we publish open source libraries and code, we build open models. And I think the people in academia in, you know, machine learning and NLP, that's really what they want to do. So I think that's been really a kind of kernel for, you know, composition of the company. And we're lucky to have, you know, at this point, attracted some of the best researchers in the field. So I think that's the most important thing. And, you know, the rest of it is sort of driven by us. A couple of these philosophies around independent systems and decentralization and good developer interfaces, you want to make it accessible. That's, you know, just as important. And the rest follows from there, I think.

Alessio [00:08:43]: I want to try and fill in some of the blanks in the history of Together. I think people come on your website today and they say, you raised a hundred million dollars Series A. They're like, wow, these guys are like a super legit company. But it feels like Red Pajama just came out a year ago. I remember we had Mike Conover in the studio, who had built Dolly at Databricks. And you announced it literally the morning we were recording. So we're like in the studio on our phones, looking at it.
And it's like, wow, this is like the first time now there's like a good curated dataset to do open pre-training. So maybe let's start from there. Like, what was the motivation behind it? Why did you decide to do that? Datasets are one of the things that most people don't want to work on. They just want to do models, not datasets.Ce [00:09:27]: Yeah. So, yeah, first of all, it's not the first, right? So I think it's actually built on a whole bunch of amazing effort the community already had. For example, EleutherAI has the Pile, right? There's a whole bunch of amazing datasets they have, like C4, right, from Google, right? So we really got inspired by the impact those like datasets have on the community, right? So I think when we did Red Pajama, it was a time that people were really fascinated by Llama, the model, like Llama 1, right? Which feels like decades ago, right? But it's kind of, people were really excited about the quality, right? So that's really like a big shift in how people think about open models. People started to see hope, right? So, but the one problem of Llama is the data recipe is described in a pretty detailed way in the paper, but the data is actually not there. So, and our original thinking is how about we take the recipe and we try to do our best effort reproduction and try to put it out, such that we can learn from our mistakes in the reproduction together, right? So that's essentially the original thinking behind Red Pajama. And we have been pretty happy and excited about what the community has been kind of building on it. For example, there's a dataset called SlimPajama, right? Which does deduplication over our data, right?Swyx [00:10:38]: From Cerebras, did they talk to you before?Ce [00:10:39]: Oh, yeah, yeah, yeah, yeah. So, yeah, so we are very good friends so we can discuss about technical perspectives. We are pretty excited because I think it's kind of why we did Red Pajama in the first place, that people can actually build not only models, but also datasets essentially over that piece of artifact, right? So that's actually what inspired us to do the first version of the Red Pajama dataset.Swyx [00:11:01]: Yeah, and then you released V2 maybe two months ago.Ce [00:11:04]: Yeah.Swyx [00:11:05]: 30 trillion tokens.Ce [00:11:06]: Yeah, 30 trillion tokens. So I think what's exciting about Red Pajama V2 is not only the number of tokens, but we start to kind of learn from Red Pajama V1. So one thing that we learned was that data quality is really the core, right? So you want to take this couple trillion token dataset and try to bring them down maybe to one trillion or two trillion, right? The way that you actually filter them, deduplicate them is not something that can be pre-decided before you see the application, right? So you kind of want to have a modular framework to think about data quality, right? So like given an application, let's automatically or maybe semi-automatically try to come up with a way to filter it down. So that's why in Red Pajama V2, we kind of overlay the dataset with like 40 different pre-computed quality signals, right? If you want to reproduce your best-effort C4 filter, it's kind of like 20 lines of code, right? And this opens up this opportunity where you can actually put different filters together, learn the combination of filters. We are very excited to see what the community actually comes up with using Red Pajama V2.Swyx [00:12:11]: It was retrospectively so obvious that this is a good idea that I wonder how come more datasets don't do this. 
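To make the "20 lines of code" point concrete: once every document carries pre-computed quality signals, a filter really is just a predicate over per-document scores. A self-contained toy version follows; the signal names, thresholds, and documents are invented for illustration and are not the actual Red Pajama V2 schema.

```python
# Toy documents standing in for Red Pajama V2 rows: raw text plus a dict of
# pre-computed quality signals. Signal names and thresholds are invented here
# purely for illustration; the real dataset defines its own set.
docs = [
    {"text": "A long, well-formed article about model training ...",
     "signals": {"word_count": 850, "frac_non_alpha": 0.08, "dup_fraction": 0.02}},
    {"text": "click here click here click here",
     "signals": {"word_count": 6, "frac_non_alpha": 0.02, "dup_fraction": 0.9}},
]

def c4_like_filter(sig: dict) -> bool:
    # One possible "filter as a predicate over signals" -- swap the thresholds
    # (or learn them) per application, as described in the conversation.
    return (sig["word_count"] > 100
            and sig["frac_non_alpha"] < 0.3
            and sig["dup_fraction"] < 0.5)

kept = [d for d in docs if c4_like_filter(d["signals"])]
print(f"kept {len(kept)} of {len(docs)} documents")
```

Changing the predicate, or learning its thresholds for a target application, is exactly the "combination of filters" idea Ce describes.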
You release the dataset with all these toggles that you can turn on and off, right? And you can sort of tune up and down the quality in ways that you believe are important to you. Yeah, I just, it makes so much sense now in retrospect. Because everyone just publishes like their pipeline and then the end result. But what about all the intermediate stages? Yeah.Ce [00:12:35]: Yeah, so I think, so there are multiple things there. I don't think we are the only one like doing that. For example, like Dolma from AI2, right? They have this very flexible format to actually put in those quality signals, right? I think they are actually calling them something similar, right? So you can actually load Red Pajama using their tool. That whole thing should work, right? So I think one fundamental thing that changed in the last year, essentially, in the beginning when people think about data, it's always like a byproduct of the model, right? You release the model, you also release the data, right? The dataset is there essentially to show people, ah, if you train on this data, you'll get a good model. But I think what started to change is when people started building more and more of those models, people started to realize like different subsets of the dataset are kind of valuable for different applications, right? The data becomes something to play with, right? So I think we are kind of lucky that we happened to release Red Pajama right at that point that we got this opportunity to actually learn from that.Alessio [00:13:34]: And you guys have a custom model training platform on Together too. You have a bunch of stuff in there for data selection, like DSIR and things like that. How did you decide to work on that versus, because you first started with like some of the fine-tunes on Llama. Do you see a lot of interest there? And I know you've been doing a lot of research on state space models and other transformer alternatives. Like, do you also see that as something you'll keep working on this year and push more people towards?Vipul [00:14:02]: Yeah, I mean, we, you know, we think of how to make training more efficient and building models more efficient. Part of that is being able to select the right dataset. This is why you have signals, DSIR. You can start with a small dataset and find similar documents, build models with that. So we think it's an important part of the kind of model build tooling that is, you know, sort of widely useful for people building different kinds of models. Similarly, you know, we are running into the limits of how fast you can make transformers. And we want inference at 5,000 tokens per second. I don't think we will get there with transformers and we need to learn longer sequences. Data, again, becomes very, very expensive with transformers. So we work on state space models and all the research that we are doing there. And hopefully other labs will pick up on this and make it a kind of important target for optimization. But we think that, you know, open source is a great place for this. We can provide these recipes for data and for training to our customers who are building, you know, custom models themselves. And, you know, we are quite excited about the sort of progress we are seeing there.Alessio [00:15:18]: Do you have some of these models available for inference on Together? 
Can people play around with them directly, you know?Swyx [00:15:25]: Yeah.Vipul [00:15:25]: Yeah, they're available for inference on our serverless platform.Swyx [00:15:29]: I always try to be the person who asks about acronyms in case, you know, people want to understand. Should we explain importance resampling, you know, that kind of stuff?Ce [00:15:37]: Oh, yeah. So DSIR essentially, it's a fundamental idea. So it's one of the papers from Percy, right? So essentially, if you know what you are doing, you can actually use that as a very strong signal about what data to put into the training process, right? So that's essentially the fundamental idea, right? So, and then more concretely, right? So there are actually different versions of DSIR, right? So one version is like if you have a validation set, right? You can actually somehow measure the similarity between the validation set and also your pre-training corpus and essentially take a subset, like select the subset. And often there's actually like a less targeted version of DSIR where you'll say, yeah, maybe Wikipedia is actually a very good corpus. Let's try to find more Wikipedia, right? And you can think about it in two ways, either as a way to come up with different weights for different data slices. Yeah, so as like a filter type of step. Yeah, for a dataset, or think about that as like data augmentation. So that's how, yeah, that's how we think about DSIR.Swyx [00:16:33]: That makes sense. I will have to read the paper to understand a little bit more. Because when you say things like, we have to know in advance what we were trying to do with the model, then we do importance resampling. That is against the principle of general intelligence, right? Like the point is to train AGI.Ce [00:16:48]: Yeah, so it depends on what you mean by being general or generic, right? So I think, I mean, you can always take a meta-learning perspective that we know the distribution of tasks that we care about, right? So you can always go kind of up in the ladder of how general the whole thing is, right? But also for many of the customers that we are actually talking to, right, they have kind of very targeted applications, right? The benefit you can get out of that is you could build a better open model, often smaller, often easier to do inference, if you know what you want, right? So I think the whole trade-off would be, and the x-axis would be how generic the whole thing will be. The y-axis would be not only the top accuracy, but also a whole bunch of the deployment cost, right? The size of the model, right? The robustness of the model. So I think different people will navigate the space in different ways. And we want to be the platform, essentially, whatever point that you want, we have a solution for you.Swyx [00:17:43]: One more thing on data before we go deeper on state-space models. Are we running out of data? Can we go an order of magnitude? Can we go five orders of magnitude? How do both of you think about how much data we have and how much we need?Ce [00:17:55]: Yeah, so I think that's a very, very good question. So I don't think we are running out of data on Earth.Swyx [00:18:02]: Right, so think about it globally. Training data, training-class data.Ce [00:18:05]: Yeah, yeah, so I think, I mean, some of them are not accessible, right? But I do think there are many organizations in the world that have enough data to actually train very, very good models, right? So, I mean, they are not publicly available, right? 
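A rough, self-contained sketch of the importance-resampling idea Ce outlines above: score each document in the raw pool by how much more likely it looks under a cheap model of the target data than under a model of the pool itself, then resample by that weight. This is the spirit of DSIR, not the actual library; the hashed-bigram features and the toy documents are stand-ins.

```python
import numpy as np

def ngram_counts(text: str, buckets: int = 4096) -> np.ndarray:
    """Hashed bigram features, the cheap representation DSIR-style methods use."""
    toks = text.lower().split()
    vec = np.zeros(buckets)
    for a, b in zip(toks, toks[1:]):
        vec[hash((a, b)) % buckets] += 1
    return vec

def fit_bucket_distribution(texts, buckets=4096, smoothing=1.0):
    counts = sum(ngram_counts(t, buckets) for t in texts) + smoothing
    return counts / counts.sum()

# `target` stands in for "data I want more of" (a validation set, or Wikipedia-
# like text in the less targeted variant); `raw_pool` for the big corpus.
target = ["the encyclopedia article describes the history of the city",
          "the theorem was proved in the nineteenth century"]
raw_pool = ["buy cheap tickets now limited offer",
            "the city has a long documented history",
            "subscribe and smash that like button",
            "the proof of the theorem uses induction"]

p_tgt = fit_bucket_distribution(target)
p_raw = fit_bucket_distribution(raw_pool)

def importance_logweight(text: str) -> float:
    feats = ngram_counts(text)
    return float(feats @ (np.log(p_tgt) - np.log(p_raw)))

# Resample the pool in proportion to how target-like each document looks.
w = np.array([importance_logweight(t) for t in raw_pool])
probs = np.exp(w - w.max()); probs /= probs.sum()
picked = np.random.default_rng(0).choice(len(raw_pool), size=2, replace=False, p=probs)
print([raw_pool[i] for i in picked])
```

Using a validation set as the target gives the targeted variant Ce mentions; using Wikipedia-like text gives the looser "find me more of this" version, and the weights can equally be used for slice reweighting instead of hard selection.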
But there are people who actually have access to those, right? So I think in general, right? So if you think about the data in the open space, right? So I guess that's specifically what you actually mean, whether we are running out of data. I do think there needs to be some way, right? That people who are training open models get connected with essentially data that's not internet data. So I think that channel needs to be opened up for open models to get more data, right? But I'm kind of on the optimistic side that society will figure out a way that we can train open models beyond this internet data.Swyx [00:18:57]: Beyond internet, meaning books?Ce [00:19:00]: I mean, there are a lot of those, right?Swyx [00:19:02]: Books, right?Ce [00:19:02]: Transcripts, right? Videos, audios, right? So there are a whole bunch of data sources that we are not integrating into the open data side, right? So, and maybe they shouldn't be open, right? So I think the community needs to figure out a way, yeah, like the best balance, yeah? Such that we can have open models, but on the other hand, also have a reasonable collection of data that we can actually use.Swyx [00:19:29]: I think a lot of people think that, there's a theory that Whisper was released so that you could transcribe YouTube and then use that as a source of tokens. Then I talked to other researchers who are like, you know, YouTube has very low quality tokens. You know, do you want your model to talk like a live streamer from YouTube? Because that's what they're going to do. So it's not clear, like what the quality of this data could be.Ce [00:19:53]: Yeah, I guess that depends on your application, right? So I think as a platform, right? So our goal is whatever application that you have, yeah, so we have a platform that you can actually achieve your goal, right? So there are definitely applications where it kind of makes sense to speak like YouTube, right? So, but there are probably also other applications that are kind of more on the formal side, right? So I think there are going to be a diverse collection of models, both open and closed, right? So, and we kind of want to be the engine that powers that.Swyx [00:20:21]: There's a lot of people who own data sources who are doing the locally optimal thing and humanity as a whole is losing out. So like the New York Times is suing OpenAI, you know, Stack Overflow shut down their API, Reddit shut down their API, X, you know, made their own model, right? On Twitter data. We're just going to have all these like tiny little gardens of data that would be useful in a general model, but everyone's just trying to make their own model. And it seems like globally suboptimal.Vipul [00:20:47]: I think you need to have some kind of a marketplace for figuring out how to get this, you know, data into models and have, I think we'll increasingly see more of that. You know, I think there's a positive aspect to it too. There is an incentive for creators to participate in a system, which is sort of more fair relative to, you know, the capture of value by an AI company that's taking their data. But I agree. I think this is a big open problem that needs to be solved. And I hope there will be, you know, serious efforts around it.Alessio [00:21:19]: Let's talk about the most precious resource on planet earth, GPUs. You have a lot of compute obviously, but you also have a lot of product pieces. You have inference, you have fine tuning, you have pre-training. What's the split in terms of usage? 
Do you see most people are just running inference on off the shelf models? Do you see maybe some last mile fine tuning?Vipul [00:21:40]: I would say right now, the top five models on our inference stack are probably all fine-tuned versions of open models. And we've seen- Who fine-tuned them?Swyx [00:21:51]: You fine-tuned them?Vipul [00:21:52]: They were fine-tuned by our customers.Swyx [00:21:54]: By your customers.Vipul [00:21:55]: You know, either on our platform or off our platform. And we are generally seeing that, you know, that is the sort of trend where you can get better quality on your task by sort of now easily adapting these models to your data. We also have, I would say, over 20 big model builds happening on the platform, which are customer builds. We see a lot of training and it's also somewhat surprisingly a more continuous kind of workload. We sort of imagined that this would be more episodic. You train a model and then you do inference. But what we find is, you know, we train a model and then they train the next version and then the next version, which sort of grows in scale. I would say training is still the bigger portion. In some ways inference is superlinear to model quality. And as the models are getting better, there's more and more inference.Swyx [00:22:48]: Oh, because they're more useful. Yeah, they're more useful, yeah. So, okay, so training is bigger. This is actually consistent with what we've heard from Mosaic, that, you know, people think that training is sort of like a one-time deal. You do one big run and then you're done. It's never true. And so I'm interested in, like, putting some numbers and I don't know what you have disclosed or what you want to disclose, but, like, how many GPUs do you have? What is the equivalent amount of compute that you have? Because I understand that your GPU setup is different than what people typically think of, like, a giant data center somewhere, right?Vipul [00:23:20]: I don't think we have shared this number publicly. It's, you know, so this will be the first time, I guess. Like, we have close to 7,000 to 8,000 GPUs today. It's growing monthly.Swyx [00:23:31]: What class of GPU are they?Vipul [00:23:32]: They're mostly A100s and H100s.Swyx [00:23:35]: Okay.Vipul [00:23:36]: And probably more, I think, split towards H100s now. You know, we'll be sort of building this best-of-class hardware. So as there are other versions of these coming out later this year, we plan to have those in the fleet as well.Alessio [00:23:53]: I know when we talked last year, you were also using some of the supercomputers by the Department of Energy. There was kind of like a lot of random GPU compute in the world. Have you seen that kind of getting timed out? I think maybe a year ago, people were like, oh, yeah, you can use this GPU compute that is going to be end-of-life. Has the bar changed to give access to those resources?Ce [00:24:13]: From our perspective, it's actually getting better. Yeah, so from the community perspective, because many of the institutions in the world, they're actually investing in hardware, right? So for example, we are working with one of the institutes in Germany called Hessian AI, right, which gives us a lot of help on the compute side. So they start to have this very big GPU cluster, and they're actually sharing that with the community, right? And it's not super big, right, but also not a small one, right? So you start to see these, like, different sites that start to pop up, right? 
And because of the power of the community, they start to actually share that. So we actually find as a researcher today, it's probably easier for them to actually get a GPU than last year.Swyx [00:24:56]: Interesting.Alessio [00:24:56]: And then for you to buy them, what's the state of the market right now? Is it still extremely hard to get any? Do you have Jensen's phone number? Do you have like the GM's phone number? Do you guys get like the SDR because you're like under 10,000?Vipul [00:25:12]: NVIDIA is obviously motivated to help us, both as an investor and we are their customers. I would say the market is very tight still, and it's likely going to be this way for a while. My sense is that the demand for AI computing has just kind of ramped up very, very quickly, and it will take a while for supply to catch up.Swyx [00:25:37]: So how tight is it, let's say compared to like a year ago, two years ago? What do you mean when you say tight? The things you want, you can't get?Vipul [00:25:42]: You can't get them immediately. They're sort of, you know, minimally like two to three months out. Any inventory that shows up tends to clear very, very rapidly. And, you know, we obviously sort of look at this in a very detailed and analytical way. There are four to five million GPUs that will be sold this year by NVIDIA and others. And if you think about a 512 to 1,000 GPU cluster for a company, that's 4,000 to 8,000 companies, right? So it's in some ways a very small number. In other ways, the cost of GPUs will be, you know, $80 to $100 billion, and then you layer servers and data center space and electricity on top of that, and that's, you know, close to $250 billion worth of kind of compute, which when you compare it to the cloud computing of today, you know, AWS last year was $88 billion in revenue. So this is really kind of a build-out happening of AI hyperscalers. It is much more disaggregated, and it's very, very global. So, you know, we think that GPUs are going to be sort of a precious resource for a long time, and using them optimally is very valuable.Swyx [00:27:02]: Yeah.Alessio [00:27:02]: Our friend, Dylan Patel from Semianalysis, he wrote a post about the inference market recently and obviously mentioned you guys. In his post, he said, our model indicates that Together is better off using two A100 80-gig systems rather than an H100-based system. The temperature and performance testing also point to Together utilizing speculative decoding. Any thoughts? Is Dylan right? I don't know, what's-Swyx [00:27:26]: What is his model, man? What does he know that they don't know? Yeah, exactly.Alessio [00:27:30]: I wanna know, I guess like from the outside, and sometimes we even do it, we try and speculate on what people are actually doing. So for the first time, now we have a former guest writing about a current guest. So we wanna know what you guys thought and maybe what are some of the misconceptions that people from the outside have on what it takes to run like a GPU cloud today?Vipul [00:27:50]: Yeah, big fan of Dylan's, by the way. I religiously read Semianalysis. I think there were some errors in that analysis. In particular, we were trying to decode it and one of the things we noticed is that it assumed that input tokens weren't being priced. So I think that may have been an error in the model. I also don't agree with this assumption that people are running this at a loss. I think it's very expensive. You can't do that for very long. 
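Just to pin down the back-of-envelope math Vipul runs through a moment earlier, here it is as a tiny calculation; every input is his rough estimate from the conversation, not a measured number.

```python
# All inputs are rough estimates quoted in the conversation above.
gpus_sold = (4_000_000, 5_000_000)        # GPUs shipped industry-wide this year
cluster_size = (512, 1_000)               # GPUs in a "serious" training cluster

low = gpus_sold[0] // cluster_size[1]     # 4M GPUs carved into 1,000-GPU clusters
high = gpus_sold[1] // cluster_size[0]    # 5M GPUs carved into 512-GPU clusters
print(f"{low:,} to {high:,} companies")   # same ballpark as the 4,000-8,000 quoted

gpu_capex = (80e9, 100e9)                 # dollars spent on GPUs alone
total_buildout = 250e9                    # plus servers, data centers, electricity
aws_revenue_last_year = 88e9
print(f"implied price per GPU: ${gpu_capex[0] / gpus_sold[1]:,.0f}"
      f" to ${gpu_capex[1] / gpus_sold[0]:,.0f}")
print(f"build-out vs AWS annual revenue: {total_buildout / aws_revenue_last_year:.1f}x")
```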
And there are trade-offs in terms of batch sizes you use and the kind of tokens per second performance that are kind of system trade-offs. We've done a lot of work. This is one of the key areas of research for us. So our inference stack is a combination of 50 different sort of tricks and techniques and we think there's a lot of room for optimization here. So whichever hardware provides better performance, whether it's H100 or A100s or L40s, we can sort of measure price performance on particular hardware and we tend to use that for that model or in some cases, certain customers have data streams which can be then optimized for a particular configuration regime. So we do fairly detailed work on how to make this more efficient and so it's hard, from the outside, to look at memory bandwidth and estimate what's actually happening.Alessio [00:29:26]: How many of these 50 tricks are you keeping to yourself and how many are you gonna open up? Because we had Tri on, obviously Flash Attention 2 is open source. He mentioned he'd love to come work at Together because of how much you care about open source. Yeah, how do you weigh that as a CEO and CTO?Vipul [00:29:43]: A lot of it is open, right? Flash Attention, Flash Decoding, et cetera. And when we publish something that's very generally, universally useful and is going to produce better open source AI, we tend to publish it as open source. I think on the inference stack, there are open source inference stacks which are pretty good and definitely today, it gives us a competitive advantage to have the best one. So we are not sort of rushing out to release everything about it. It's not overall that additive to open source out there and it is particularly useful as a business for us to provide best price performance. Yeah, we make these decisions. We have discussions. Anything that we keep closed, we generally talk about it quite a bit and decide like this is the piece that is closed for today and it may not be the case six months from now. It may not matter as much.Ce [00:30:40]: Yeah, so I think being open is kind of very important, right? So I think the whole company is actually built on this idea that there's going to be an ecosystem built on our open models, right? And that's also how we are really lucky to attract this top group of talents to actually join us because of the dream and the mission that we have on our side to really facilitate the open ecosystem, right? So I think in general, it's like I think all the ideas should be open. So that's why we publish papers, right? We actually talk about ideas, right? So I don't think it makes any sense to keep ideas closed, right? So there are some software artifacts that are kind of really deeply embedded into our own stack. It's kind of only useful when you're trying to build a disaggregated cloud, right? Maybe at some point we're going to open it up, as people have said, right? But at this moment, right? So we are kind of busy actually building it, right? So that's probably kind of getting to the picture about when that piece is going to be open, right? But I think on the research side, the ideas and for our people to publish things, I think that's really, really important, right? So I think that's how we get talent. That's how I think we as a company are going to move the field forward.Swyx [00:31:49]: I noticed that you never used the words federated learning or inference. 
Is there a distinction that you draw?Ce [00:31:55]: So, I mean, it's definitely not intentional, but I think federated learning has been used in so many different ways by so many different people. It starts to lose a very precise meaning about what that really means, right? If you go back to the original Google paper of federated learning, I think that's very different from what people are talking about today when they say federated. Yeah, we kind of want to be really precise about it.Swyx [00:32:18]: And so your term is disaggregated.Ce [00:32:19]: Yeah, so as an infrastructure, right? So that's disaggregated.Swyx [00:32:22]: Aren't most clouds disaggregated? Like what's different about it?Ce [00:32:27]: So one way is that most of the clouds are disaggregated, but some of that is actually exposed to the user, right? If you go to AWS, you do know which region you are in, right? So I think one thing that we are trying to do is you have this disaggregated cloud, not only about location or geographically where they are, but about this reliability and also this diversity of this infrastructure. So, and if we want to build a reliable, high-quality layer over that, the user actually doesn't know, right? What's actually happening under the covers, right? So I think that's one of the differences of the way that we are thinking about infrastructure.Swyx [00:33:06]: Yeah, a bit closer to Cloudflare than AWS. Yeah. Yeah. We have one question here, which we'll just throw out, it's kind of fun. So going back to this sort of inference stack piece, maybe if you had to pull out like a call for researchers or just like point out interesting areas of work that you're interested in, what pieces of the stack have the most opportunity for improvement?Ce [00:33:27]: Yeah, so I think the way we are thinking about the inference stack is, so there are multiple things that can happen, right? So you can do better algorithms, like speculative decoding, you can change the model architecture, you can go really crazy on the system side, right? And you can also co-design it with the hardware, right? So it's not really clear innovation on a single dimension will get you there. So the key thesis on our side is, if you only push on one direction, you are going to reach diminishing returns really, really quickly. Yeah, there's only that much you can do on the system side, only that much you can do on the algorithm side. I think the only big thing that's going to happen is when you get all those dimensions to actually compound, right? So to have algorithm, model, and system all come together, so I think that's how we reach the next 10 times improvement on inference, right? So I don't think there's a single dimension that is particularly important, but looking at this space in a joint way, right? Try to co-optimize jointly multiple dimensions, I think that's going to be really important for the community to look at.Vipul [00:34:28]: Yeah, we often see, I see numbers from the team and you have these multiple methods, not all of them compound. So you mix these together, it's still similar results and some combination of them will have this incredible effect that is really, really super interesting. So it's very systems, you know, a kind of broad systems approach to it that's the most effective.Swyx [00:34:51]: I think I finally get the name of the company, like- Bring it together, yeah. 
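Since speculative decoding keeps coming up as one of the algorithmic levers, here is a toy version of its control flow: a cheap draft model proposes a few tokens, the expensive target model checks them, and agreement lets you emit several tokens for roughly one target-model step. This is the greedy variant with fake dictionary "models"; the production algorithm verifies with rejection sampling over real token probabilities.

```python
import random

def target_next(ctx):                     # stand-in for the big, trusted model
    table = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}
    return table.get(ctx[-1], "the")

def draft_next(ctx):                      # stand-in for the small draft model
    return target_next(ctx) if random.random() < 0.8 else "dog"

def speculative_step(ctx, k=4):
    # 1. Draft proposes k tokens autoregressively (cheap).
    proposal = []
    for _ in range(k):
        proposal.append(draft_next(ctx + proposal))
    # 2. Target verifies the proposals (in a real system, one batched forward).
    accepted = []
    for tok in proposal:
        expected = target_next(ctx + accepted)
        if tok == expected:
            accepted.append(tok)          # agreement: keep the drafted token
        else:
            accepted.append(expected)     # disagreement: take target's token, stop
            break
    return accepted

random.seed(0)
ctx = ["the"]
for _ in range(4):
    ctx += speculative_step(ctx)
print(" ".join(ctx))
```

When the draft agrees often, each expensive verification step yields several tokens, which is why a decode loop that is limited by memory bandwidth gets faster.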
Everything needs to be optimized together.Alessio [00:34:57]: All right, just quickly, how does all this work change as some of the architectures change? I know with mixture of experts, speculative decoding is a little less efficient because of memory bandwidth. How much do you invest when it's maybe a model-specific improvement versus a more horizontal thing? Also, you're researching different architectures, so how much do you want to spend time optimizing what's state of the art today versus what's coming next?Vipul [00:35:24]: We do spend time on what's state of the art today as well as what's next. You know, the value we get from doing specific optimization, even for, you know, what works well for a particular model on A100s with a particular bus versus H100s, it's a worthwhile investment for us. So we will go down fairly deep into a specific architecture and specific hardware. It does also inform what works better where, and you don't have to take the same approach for, you know, every model and every sort of hardware setup. We can take these different approaches and we do have these multiple systems now. We know that this, you know, system B is better for Mixtral and system C is going to be better for StripedHyena or Mamba.Alessio [00:36:13]: Before we move on from inference, we need to talk about the AnyScale drama. So we're actually having Sumit on the podcast tomorrow, who also talked about, kind of came to your guys' support about how, yeah, how important it is that it's not just like, oh, Together saying this benchmark's not good because they look bad in it. How, I guess like, it's a hard question to ask, but like, why did you decide to just come out and say it? And how maybe does that also reflect the values that you guys have about open source and openness and kind of like being transparent about what's real and maybe hopes for standardizing some of these benchmarks to make it more clear?Ce [00:36:56]: So it's a great service AnyScale is doing for the community, right? I mean, it's very hard to do benchmarks. The moment you do a benchmark comparing N players, right, N minus one will be unhappy. If you have two tables, then maybe N of them will be unhappy, right? So it's a very great thing that they're doing. And in some of the work that we are doing, we actually use LLMPerf, right? So it's a great thing that they're actually doing. So I think one thing about benchmarks is, and probably the professor part of me is talking, a good benchmark should think about how it's going to incentivize the field to actually move forward, right? So if the benchmark really becomes a kind of standard, are people going to over-optimize to the benchmark if you are going to do that? And when people are doing that, what are we actually trying to incentivize, right? Will that move the world to a better place? Or will that essentially have every single player focus on marketing or spending time or money on something that actually does not matter on the technical side, right? It's very hard to actually strike a balance, right? So I think the reason we kind of try to give feedback on the benchmark is we kind of want to open up the discussion about how the industry should come together and define maybe a common way that we compare with each other, right? So like how database people do TPC, right? Maybe we should have something actually similar, right? So we are trying to start some of the conversation. 
So it's not really that we jumped out to say it's not good, because there's no way we can have a perfect benchmark. That doesn't really exist, right? So just try to kickstart a conversation that maybe we should come together and do something that the community agrees on and that aligns with the benefit a user is going to get, right? So just get the conversation started.Vipul [00:38:42]: I've spoken to the AnyScale team after that, and I think they had really great intentions. And partly, I think it felt very objective and everyone sort of had a reaction to it because it just didn't match their benchmarks that we've all run internally against different services. I think a common industry benchmark run by an independent party, versus one of the vendors, is what's needed.Swyx [00:39:04]: Is there one that you'd point to?Vipul [00:39:06]: I don't think one exists today. I think there should be. We're having some conversations about someone setting one up. And there's lots of interesting aspects of this. Time to first token is a function of where the test was run from. There is different load on these services at different times of the day and weekday or weekend. So you have to measure that well. And I think if all of that were done very well by an independent source, that would be a very useful service to customers and to the services themselves.Swyx [00:39:39]: Yeah, I'll point people to artificialanalysis.ai, which is a new one that recently emerged. I don't know if they've done it right. It looks like a side project of a couple people. But I think it's in all the providers' interest to work with them. And ensure that there's an independent third party that's measuring these things, right? At least on the baseline. For me, what's worrying is more about what Ce was saying, which is, do these benchmarks skew things in ways that customers might not be mindful of? Like, what are these things overemphasizing that we might be missing? And I don't really know. It seems like a lot of these services bundle in a version of quantization as well. So that means there's performance trade-offs, right? You're not comparing apples to apples, the same model itself, even though it's like a Llama variant or whatever. So what do people trade off? They trade off latency, they trade off price. Obviously, those are the first two. But what else, right? What factors matter in an inference business?Ce [00:40:33]: Yeah, so I think there's also the throughput, right? So there's the time to first token, right? So, and then there are things that users do not often see, for example, the reliability, right? The capacity, right? So that also has an impact on user experience at a global scale. Maybe not a single query, right? But in aggregation, you can also see a whole bunch of, like, whether you are emphasizing P50, P95, right? So there's a whole bunch of things that you can actually play with. And of course, there's also quality. So there are different ways to actually make the whole thing faster, speculation, quantization, or a combination of those, right? So yeah, so there are so many things to actually play with. So we probably need a benchmark where the protocol is transparent, to make sure, like, it's very clear what we are doing, and a whole bunch of checks on the quality to make sure we are putting the right group of models in the same table. So I think then essentially the user can actually navigate the space. So I think that's going to be good for everyone.Swyx [00:41:27]: Yeah, makes sense. 
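The metrics Ce lists (time to first token, throughput, and tail percentiles across many requests) are easy to pin down in a few lines. A toy harness follows, with a simulated token stream standing in for a real inference endpoint, so only the measurement shape matters, not the numbers.

```python
import time, random, statistics

def fake_stream(n_tokens=32):
    """Stand-in for a streaming inference API: yields tokens with some latency."""
    time.sleep(random.uniform(0.05, 0.25))        # queueing + prefill delay
    for _ in range(n_tokens):
        time.sleep(random.uniform(0.005, 0.02))   # per-token decode delay
        yield "tok"

def measure(stream):
    start = time.perf_counter()
    ttft, count = None, 0
    for _ in stream:
        if ttft is None:
            ttft = time.perf_counter() - start    # time to first token
        count += 1
    return ttft, count / (time.perf_counter() - start)   # (TTFT, tokens/sec)

random.seed(0)
runs = [measure(fake_stream()) for _ in range(20)]
ttfts = [t for t, _ in runs]
throughputs = [r for _, r in runs]
print(f"TTFT p50 = {statistics.median(ttfts)*1000:.0f} ms, "
      f"p95 = {statistics.quantiles(ttfts, n=100)[94]*1000:.0f} ms")
print(f"mean throughput = {statistics.mean(throughputs):.0f} tokens/sec")
```

Where the test runs from, and at what time of day, shifts all of these numbers, which is exactly Vipul's argument for having an independent party run them.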
It's a very important field and I think hopefully there's a good third party that emerges from this. So I just want to touch on one more piece, which is I think I'm appreciating from this discussion that fine tuning is a bigger part of your business than I thought. The other big player in fine tuning is Mosaic. Well, Mosaic is more training, but like there's a bunch of other players in the fine tuning space. If I was a prospective fine tuning customer, what do I come to you with? Do I come to you with my custom data and that's it? Do I also have to write the fine tuning code? What level of engagement do you do with your customers?Vipul [00:42:01]: I think across the spectrum, our customers are training models, pre-training models from scratch and many of them will bring their data sets and, you know, use our infrastructure and training stack to train their models. There are others who have trained smaller models and want to scale up, scale up across infrastructure, scale up across data. So we'll sort of help them do that. We will have customers where it sort of initially starts a little bit more consultative. They have a particular task and idea in mind and we will help them get from there to the data set and the right model to achieve that task. So it's a spectrum and, you know, our goal is to, we're trying to productize as much of this as possible. So that the whole process can be fast and scalable. I would say there is a lot more understanding around fine tuning now, like even the last six months, there are, you know, open source tools, recipes, literature, podcasts, Discord channels where people are figuring it out and it really is in many ways, one of the successes of open source is you have small collectives of, you know, engineers who have created, who are now creating the top models on open source leaderboards. And they have tried out all sorts of different sort of, you know, data recipes, creating synthetic data. Merging models. Merging models. So it's, that's really fun to see. And I think that sort of agency that exists now is exciting. And that is, we see a lot of that sort of being applied into products and, you know, more commercial models that people are deploying in their applications.Alessio [00:43:50]: And then just to, I guess, wrap up on Together, it's almost becoming like a platform as a service, because now you've released Together Embeddings. How did you get 92.5 accuracy on 32K retrieval? And do you think we're kind of like done with embeddings, like we did everything that we could, you know, we're getting to like the most optimized it's gonna get, and then we should just focus on models and inference, or do you think there's still room there to improve?Ce [00:44:17]: Oh, I don't think so. We haven't even gotten started on embeddings. Yeah. So I think there are so many things. So like embeddings are really fundamental for many things, for example, RAG, right? So in RAG applications, that's how people bring knowledge in. That's also the fundamental piece when you want to build a better model, right? So that gives you this understanding about what actually gets into the model. You can actually use that to actually build a better data set, get a better model, then get better embeddings, and you start this loop, right? Without good embeddings, the loop is not closed, right? So I think both on the quality side, how to embed more like dedicated semantics, like into those vectors, how to deal with negation, for example, right? So, and how can you make the whole thing really, really fast? 
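Ce frames embeddings as the piece that closes the RAG loop, so here is that loop at its smallest: embed documents and a query, rank by cosine similarity, hand the top hits to the model as context. The "embedding model" below is a throwaway random projection of word counts, purely a placeholder for a real retrieval embedding model.

```python
import numpy as np

# Toy retrieval loop behind RAG: embed documents and a query, rank by cosine
# similarity. The embedding function here is a fake random projection of a
# bag-of-words vector; in practice a trained embedding model does this step.
VOCAB = sorted(set(
    "the gpu cluster trains models embeddings retrieve relevant context for a query".split()))
rng = np.random.default_rng(0)
PROJ = rng.normal(size=(len(VOCAB), 64))

def embed(text: str) -> np.ndarray:
    bow = np.array([text.lower().split().count(w) for w in VOCAB], dtype=float)
    v = bow @ PROJ
    return v / (np.linalg.norm(v) + 1e-9)

docs = ["the gpu cluster trains models",
        "embeddings retrieve relevant context for a query",
        "the query context is embeddings"]
doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(query: str, k: int = 2):
    scores = doc_vecs @ embed(query)          # cosine similarity on unit vectors
    return [docs[i] for i in np.argsort(-scores)[:k]]

print(retrieve("which documents talk about embeddings and retrieval?"))
```

Everything Ce lists as open problems, better semantics, negation handling, and raw speed, lives inside that embed() step.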
So I think for the next couple years, yeah, we will see a whole bunch of new embedding models, maybe of different sizes and much, much faster than today. Yeah, so I think it's a very active research area. I think people should invest more, yeah.Swyx [00:45:14]: I was surprised to see, I think Jina or, yeah, there's Jina AI, and then there's another guy, Tengyu's Voyage. They are coming out as startups purely focused on embeddings.Ce [00:45:25]: Yeah. Yeah, so I think it's a very, very important piece of the system, right? So people haven't focused a lot on them before, and they should definitely start to do that.Swyx [00:45:36]: Yeah. Why are the Chinese universities so good at embeddings? You know what I mean, right? Like the BGE and- Yeah, yeah, yeah.Ce [00:45:44]: So I don't know. We just released our first embedding model, so we are still trying to learn how to build an embedding model. Yeah, so ask me again in six months. I'll probably have more insight about how to build a better one.Swyx [00:45:53]: I just noticed that ada-002 used to be at the top of the MTEB chart, and then it's just like sliding down and down and down, and all the new models are coming out of China for some reason. And I'm like, I don't know what's going on there. So we cannot leave this discussion without talking about state space models. But first of all, how much of the company is dedicated to research? Like it's obviously like not production quality yet, but-Vipul [00:46:17]: I would say it's like 40, 45%. I was counting this morning. That's huge.Swyx [00:46:22]: Yeah, so that's the biggest- It's a big investment. Yeah. Okay, well, I mean, it looks like it's paying off, so. And then high level, I will confess or admit or mention for the listeners who are also similarly skeptical, I did not use to care about long context because I was like, you know, 30K is enough, 100K is enough, right? I'm not, you know, modeling DNA sequences or anything like that. Why do I need long context? And I mean, first of all, I'll throw that open to you. But second of all, I think what Mamba did for me was change that perception, that it's only about long context. The only reason you want sub-quadratic architectures is for long context. Actually, that's not true. And it's also just more efficient to train, period. Right? I'll just leave that open to you. Like what's the motivation that people should keep in their heads? There are multiple things, right?Ce [00:47:09]: So one thing is that, I mean, the moment a model can do long context well, it often means that it's kind of cheaper. Yeah, so I mean, that's why it can be long. I mean, in principle, transformers can do long context. It's just very expensive. So I think what those state-based models are trying to do is push the size of the state, right, to be as small as possible. That's why they can do long context, right? And try to kind of like decouple this quadratic dependency, right? To make sure you can have a much better execution pattern. One direct consequence of those is you can do long context really cheaply, but on the other hand, it also introduces a whole bunch of benefits even when you are not doing long context. Right? So I think that's actually probably equally important. Because the state gets smaller, you can do a really large batch size, right? You can actually be much faster. Right? So yeah. And another thing is like, one of the hypotheses that we have is, like in StripedHyena, it starts to have a hybrid architecture, right? 
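The "push the state to be as small as possible" point can be read straight off the recurrence a linear state space layer computes: memory is the fixed-size state, and work grows linearly with sequence length rather than quadratically. A toy scan with made-up dimensions and random weights, just to show the shape of the computation:

```python
import numpy as np

# A linear state space layer processes a sequence with a recurrence
#   h_t = A h_{t-1} + B x_t,   y_t = C h_t
# so memory is O(state_size) and compute is O(sequence_length), instead of the
# O(L^2) attention matrix. Dimensions and weights below are arbitrary.
d_in, d_state, L = 8, 16, 10_000

rng = np.random.default_rng(0)
A = 0.99 * np.eye(d_state)            # stable, slowly decaying memory
B = rng.normal(size=(d_state, d_in)) * 0.01
C = rng.normal(size=(d_in, d_state))

x = rng.normal(size=(L, d_in))
h = np.zeros(d_state)
y = np.empty_like(x)
for t in range(L):                    # one pass over the sequence;
    h = A @ h + B @ x[t]              # the state never grows with context length
    y[t] = C @ h

print(y.shape, "state size:", h.shape)  # (10000, 8) state size: (16,)
```

A transformer layer over the same sequence would materialize attention scores that grow with the square of the length, which is the cost the hybrid designs discussed here try to pay in only some layers.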
Part of it is like a state-based model and part of it is still the transformer. So different components probably deal with different things kind of better. So maybe by putting them together, by thinking about how information propagates over this whole horizon of the context, you can probably get an even better quality model than a transformer. Right? So I think that's why we are kind of investing a lot in those models. Not only for the context, which is very important, but also for a whole bunch of benefits it could get.Swyx [00:48:42]: Yeah. How should people treat the distinction between Mamba and StripedHyena? Like what's the point of releasing these two as separate models? Is one like sort of the Together proprietary one and then the other is like the more open research one?Ce [00:48:53]: Yeah. So I think it's pretty much a different stage of exploration. So they kind of have different hypotheses when we tried to build those. Yeah. Like for instance, there are different views about state-based models. One is Hyena, another is like Mamba, right? They're actually different architectures. So when we built StripedHyena, right, the curiosity that we had is how good can we... So what is the highest quality non-transformer model we can ever build? The goal of StripedHyena is to try to see whether we can match Mistral. And by fine-tuning well, whether we can outperform that in some way, right? So it has a very, very strong baseline that we are trying to beat. So that's why the hybrid comes into the picture, right? And for Mamba, it's kind of more... The curiosity was how far can we push a pure architecture? Then we started very systematically from small to large, right? All the way to 3 billion, right? So the baseline was essentially the best 3 billion model. So I guess they are at different stages of exploration. At some point, I think they are going to converge. We actually learn different things, like when building different models. I think they are just like this intermediate stage in the exploration at different points.Alessio [00:50:02]: You mentioned the hybrid architecture. Is that the model grafting that you mentioned in the StripedHyena post, where you mentioned you can have transformers and non-transformers together? Like this is a concept that I hadn't heard before reading about this. So I think most people's mental model is like transformers OR something else, it's not transformers AND something else. How do you train a model that is hybrid? Is there any difference in like how you construct your datasets? Is there any difference in then how you run inference on it? How should people think about starting research in this field?Ce [00:50:36]: Yeah, so we were also very surprised. Yeah, so when we came up with this hybrid architecture. So the way to think about it is like you have different layers in the neural network, right? So like the state space model for some layers will already give you the benefit. For the other layers, they could be transformers, right? They could give you this more global view of the sequence, but for other layers, you don't have to have that, right? I still can have all the other things that kick in, right? So we don't know what is the optimal mixture between different architectures. I mean, in principle, we can have Mamba, Hyena, and transformer, all those things coming together, right? And then you can see what makes sense. We have no idea what is optimal doing that. 
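The mixing question Ce just raised (which block type sits at which depth) is literally the "building blocks like Lego" framing he uses next. A throwaway sketch of that interleaving, with the blocks reduced to labels, since the real layer implementations are the hard part:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Block:
    kind: str                              # "attention", "hyena", "mamba", ...

def striped(pattern: List[str], depth: int) -> List[Block]:
    """Repeat a short pattern of block kinds up to the desired depth."""
    return [Block(pattern[i % len(pattern)]) for i in range(depth)]

# e.g. mostly convolutional/SSM-style blocks with periodic attention mixed in;
# the ratio and the ordering are exactly the open design question discussed here.
hybrid = striped(["hyena", "hyena", "hyena", "attention"], depth=12)
print([b.kind for b in hybrid])
```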
So what we are excited about is now the community has a whole bunch of building blocks that they can actually play with like Lego, right? So just put them together and see what happens, right? So we are kind of very excited about that. Yeah, we are in the process of trying to learn more like about this architecture. And when we know what we are talking about, we will definitely share with the community about how to do that in a systematic way.Swyx [00:51:41]: Cool. What are we still unsure about? Like, why don't we just, you know, put all the money in the world into training these things now? Like what is left to figure out before we scale this thing?Ce [00:51:53]: So like if you look at how the transformer has been developed, right? In the last like five to 10 years, right? So people didn't start from like, you have this 'Attention Is All You Need' paper and then let's put all the money in, right? They always started from this very systematic understanding about the scaling, about data quality, about essentially the limits, right? I think for a state-based model to go from the labs to the real world, you kind of need to go through the same process. But of course, the second time doing that is kind of easier, right? But I think there's no way we can get rid of this systematic step of studying scaling laws, studying what data to put in, right? So what's the impact of different data slices, yeah, on the final model quality.Swyx [00:52:33]: Do you expect that the data inputs will be different?Ce [00:52:37]: I don't know, but I wouldn't take that for granted that they should be the same, right? So that's one of the hypotheses that, so we have no opinion on that because I think that's the result of the study, not the assumption. Yeah, we do not need to assume that.Swyx [00:52:51]: Okay, scaling laws and data, anything else like architectural that we are not sure about? Because now you have this selection mechanism that you're pretty happy with.Ce [00:52:59]: Yeah, so, I mean, first of all, how to mix them, right? So, and second is what is the architecture? So if you look at the transformer, right? So one very interesting piece there is people optimize also the hardware, yeah, to make sure that things run very fast, right? There are very efficient kernels, there's very efficient hardware. And then that adds another boost, right, for the transformer architecture, right? So that's something that should happen for state-based models. Which architecture is kind of easier to run on the hardware, right? So things go kind of faster, you can put in more data, and it adds another dimension to the scaling law. So I think we just need to plow through the whole space and just be really systematic, from small models to 1 billion, 3 billion, 7 billion, just go all the way up, right? So I wouldn't jump around in the space. I would just like be patient and just like be systematic. Yeah, I think we'll get there, yeah.Swyx [00:53:52]: Yeah, well, I'm looking forward to more research from you guys to figure that out. So one dimension, which we didn't talk about, we talked about long context, we talked about efficiency, but speed is very, speed is also very important. A good inference provider provides, let's say 70 tokens per second, and then maybe that's faster than less good inference providers that are more like 30 tokens per second. But that's the rough range, right? State-of-the-art today. That's around human speaking speed; human reading speed is about 200 words per minute. 
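The "be systematic from small to large" workflow Ce describes usually boils down to fitting a power law to a ladder of small runs and extrapolating before committing to a big one. A sketch with invented numbers (the losses below are made up, and the irreducible-loss constant is assumed rather than fitted):

```python
import numpy as np

# Hypothetical ladder of small runs: (parameter count, eval loss). These numbers
# are invented for illustration; a real study would measure them.
params = np.array([1e8, 3e8, 1e9, 3e9])
losses = np.array([3.10, 2.85, 2.62, 2.45])
c = 2.0                                    # assumed irreducible loss

# In log space, the excess loss a * N^(-alpha) is a straight line.
slope, log_a = np.polyfit(np.log(params), np.log(losses - c), deg=1)
alpha = -slope

def predict(n):
    return np.exp(log_a) * n ** (-alpha) + c

print(f"fitted exponent alpha = {alpha:.3f}")
print(f"extrapolated loss at 7B params = {predict(7e9):.2f}")
```

The same fit-and-extrapolate step is what he argues still has to be redone for state-based models, with hardware efficiency as one more axis of the study.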
Why do we need 5,000 tokens per second is my question back to Vipul. And maybe is this something that is an emphasis for research as well, or is this more just an inference only thing?Vipul [00:54:29]: There are applications that are consuming the tokens that are produced from a model, so they're not necessarily being read or heard by humans. That's a place where we see that level of requirement today that really nobody can quite satisfy. There is, if I think about it, as intelligence grows, how do you sort of increase the bandwidth of it, you know, how do you reduce the latency of it? If we can do 5,000 tokens a second on the same card, the throughput of that card goes up significantly and it can support more applications. So I think it's important from that perspective. And then there are, it opens up new UX possibilities. Once you can get sort of an immediate answer
Matt Rogers is the co-founder and CEO of Mill, a new way to keep food waste in the food system. Reducing food waste from households presents one of the greatest opportunities for individuals to reduce their climate impact. Matt started his career at Apple where he was one of the first engineers on the iPhone. He then co-founded Nest, the first learning thermostat to help households reduce their energy usage, which was later acquired by Google. Matt is also a co-founder of Incite.org, which provides catalytic capital and guidance to start-ups and non-profits developing world-positive innovation. In this episode we cover: [1:15] Matt's Inspiration for Mill [2:56] How building Mill has differed from other products Matt has built [6:15] Why Matt chose the home as the best place to reduce food waste [8:35] Why Mill is turning food waste into chicken feed [10:32] How Mill is emphasizing communities and local food loops [12:50] Mill's ambitions and the importance of focus [14:52] Lessons from Nest that Matt is applying to Mill [17:50] Matt's perspective on effective program management and board meetings [21:00] Matt's lessons on fundraising [22:08] What Matt looks for as an investor [23:12] The differences between leading an early and growth stage company [25:14] How Matt takes care of himself as a founder [27:34] Matt's advice to aspiring climate leaders [29:18] A fun fact about Matt
John Briggs is not just known for his expressive t-shirts, but for going against the big 4 Accounting firms with his “new workday normal” Rockstar business culture and jaw-dropping quarterly revenue increases of 492%!But how did he do it? Many people have been dying to know John's secrets. Fear not, because he recently composed all his stellar business advice, from epic failures to his greatest successes, into his new book, The 3.3 Rule – A New Workday Normal.Through his 12+ years of Entrepreneurship he has learned that achieving a highly profitable business can be overwhelming, particularly with all the barriers and unknowns that new business owners are forced to deal with. Entrepreneurs can quickly become stressed and often, sadly, give up on the mission that they set out for because of burnout. In much the same way that the body needs blood to survive, our economy needs small businesses, the economy's lifeblood, to stay healthy and grow more resilient.John Briggs is a man on a mission! A mission to give entrepreneurs a new workday standard, so they avoid burnout and stay in business. The 3.3 Rule book showcases how to make more, by working less. He “walks the walk” by battling against the traditional CPA culture of “overwork, underpaid, pay your dues and suffer while you're at it” mentality by providing his team with this same new workday normal, even during busy tax season!Learn more:https://incitetax.com/Influential Entrepreneurs with Mike Saundershttps://businessinnovatorsradio.com/influential-entrepreneurs-with-mike-saunders/Source: https://businessinnovatorsradio.com/interview-with-john-briggs-founder-and-ceo-of-incite-tax
Alix Winter is the Chief Data Scientist for the Racial Profiling and Bias-Based Policing Investigations Unit at New York City's Civilian Complaint Review Board. She is also an Affiliated Research Scholar at Columbia University's Incite. We discuss how court actors justify bail decisions and the outcomes of these decisions.
Did The FBI and their "Bad Actors" Incite The Violence At The Capitol Building?Call Your Representatives and demand an investigation into this.Good Americans may wrongfully be rotting in prison because of this.
Founder of Incite Tax and the author of Profit First For Microgyms, John Briggs has a theme: "The IRS Sucks". Business owners should keep more of their money to build long-term wealth. John joins the podcast today to share how he has built an amazing team of nontraditional accountants to help combat IRS bullying. John is a man on a mission to give entrepreneurs an actual work-life balance so they don't get burned out. He helps them increase their cash immediately so they can have confidence in their choices and become financially resilient. Key Takeaways: [2:19] - John shares his background and why he founded Incite Tax. [3:40] - He is passionate about helping entrepreneurs stay in business. [4:41] - Not enough capital and mismanaging capital are the number one pain points for franchisees. [5:33] - The demand for something expands to match its supply. [7:23] - John shares a way to manage money through multiple types of accounts. [8:53] - The first step of the Profit First method is to do a cash flow analysis. [12:03] - It is never too late to start thinking about a cash flow system. Start now. [14:06] - John explains why gyms and fitness businesses are a niche for Incite Tax. [17:23] - Because they are so experienced with the business model, Incite Tax can see the challenges before they do. [20:18] - You can negotiate lease agreements. [22:45] - John explains the systems he has built that help people avoid burnout. [26:37] - When is it time to think about adding another location to a business? [30:16] - Think about how to maximize the space you are in. [35:02] - It's not always the best idea to open a second location. Mentioned in This Episode: Profit First For Microgyms by John Briggs Incite Tax Home Page Incite Tax on Facebook Incite Tax on Instagram
Join Critical Resistance and abolitionists for a critical discussion on the ongoing war on Palestine. Prison industrial complex (PIC) abolitionists have always understood the work to dismantle the PIC to be connected to global movements against war, militarism, and colonialism. In the past few weeks, we've seen mass mobilization in solidarity with the Palestinian people as they face one of the deadliest assaults by the Israeli military in its history. On Wednesday, Nov 1, join us for a critical discussion on the ongoing war on Palestine. Dr. Angela Y Davis, Lara Kiswani (Executive Director of the Arab Resource and Organizing Center), Stefanie Fox (Executive Director of Jewish Voice for Peace), and Nadine Naber (INCITE! National) will join us in a discussion moderated by Mohamed Shehk (Campaigns Director of Critical Resistance) to help us understand the situation on the ground in Palestine, how our organizations and people everywhere can mount effective resistance to the genocidal war against Palestinians, and how we can use abolitionist strategies such as Dismantle-Change-Build, Divest/Invest & “Defund,” and “shrink and starve” to do so. Organized by Critical Resistance. This event is also a fundraiser for Middle East Children's Alliance (MECA), who are providing much needed aid to the people of Gaza. All funds will go to MECA after accessibility costs for this event. --------------------------------------------------------------- Speakers: Angela Y. Davis is Professor Emerita of History of Consciousness and Feminist Studies at UC Santa Cruz. An activist, writer, and lecturer, her work focuses on prisons, police, abolition, and the related intersections of race, gender, and class. She is the author of many books, from Angela Davis: An Autobiography (now available in a new edition from Haymarket Books) to Freedom Is a Constant Struggle. Lara Kiswani is the Executive Director of Arab Resource & Organizing Center (AROC), serving poor and working class Arabs and Muslims across the San Francisco Bay Area, and organizing to overturn racism, forced migration, and militarism. Stefanie Fox is the Executive Director of Jewish Voice for Peace. Nadine Naber is a scholar-activist and co-founder of organizations and programs such as: Mamas Activating Movements for Abolition and Solidarity; the Arab Women's Solidarity Association; Arab Movement of Women arising for Justice; the Arab American Cultural Center (UIC); and Arab and Muslim American Studies at UM, Ann Arbor. She is founder of Liberate Your Research Workshops. She has been a board member of groups like INCITE! Feminists of Color against Violence; the Women of Color Resource Center; the Arab American Action Network; Al-Shabaka; and the National Council of Arab Americans. She is Professor in the Gender and Women's Studies Program and the Global Asian Studies Program at the University of Illinois at Chicago. She is the author of many books focusing on Arabs, Arab Americans, and feminism within these communities. Moderator: Mohamed Shehk is the Campaigns Director of Critical Resistance --------------------------------------------------------------- This event is sponsored by Critical Resistance and Haymarket Books and is part of Until Liberation: A Series for Palestine by Haymarket Books, cosponsored by Palestinian American Organizations Network, Mondoweiss, Spectre, Dissenters, Tempest, Palestine Deep Dive, The New Arab, and more. 
Watch the live event recording: https://youtube.com/live/g9GjTMP9qZs Buy books from Haymarket: www.haymarketbooks.org Follow us on Soundcloud: soundcloud.com/haymarketbooks
The 1ME crew welcomes the homies from Just Practice Collaborative, a training and mentoring group focused on sustaining a community of practitioners that provide community-based accountability and support structures for all parties involved with incidents and patterns of sexual, domestic, relationship, and intimate community violence. Collaborative members Shira Hassan and Deana Lewis talk through the intentionality of their design, what Transformative Justice should and shouldn't be used for, and the importance of relationship to political experimentation. SHOW NOTES Critical Resistance - https://criticalresistance.org/ INCITE - https://incite-national.org/ Fumbling Towards Repair - https://www.akpress.org/fumbling-towards-repair.html Creative Interventions Toolkit - https://www.creative-interventions.org/toolkit/ Rachel Caidor - https://just-practice.org/rachel-caidor Saving Our Own Lives by Shira Hassan - https://www.haymarketbooks.org/books/1938-saving-our-own-lives Combahee River Collective - https://www.blackpast.org/african-american-history/combahee-river-collective-statement-1977/ Are the Cops in our Heads and Hearts By Paula X. Rojas - https://sfonline.barnard.edu/paula-rojas-are-the-cops-in-our-heads-and-hearts/ Pods and Pod Mapping Worksheet by BAY AREA TRANSFORMATIVE JUSTICE COLLECTIVE - https://batjc.wordpress.com/resources/pods-and-pod-mapping-worksheet/ NAVIGATING CONFLICT IN MOVEMENT ORGANIZATIONS by AORTA - https://static1.squarespace.com/static/5e9ddc272ee6fa03a5f1ccbe/t/606249bda86e8a2d9f9902bc/1617054141617/CONFLICT+IN+MOVEMENT+ORGANIZATIONS_handout.pdf Healing Justice Lineages by Cara Page & Erica Woodland - https://www.penguinrandomhouse.com/books/710523/healing-justice-lineages-by-cara-page/ Let This Radicalize You by Kelly Hayes and Mariame Kaba - https://www.haymarketbooks.org/books/1922-let-this-radicalize-you Practicing New Worlds by Andrea Ritchie - https://www.akpress.org/practicing-new-worlds.html
In Episode 1489, Sean Woodley is joined by Katie Heindl (DIME, Basketball Feelings) to discuss some of the biggest and most interesting questions facing the Toronto Raptors going into the 2023-24 NBA season. Off the top, they examine the chicken-or-egg conundrum of whether a team needs good vibes in order to win, or whether winning is what brings good vibes, and how the coaching staff led by Darko Rajaković plays a role in all of it. Plus, how much can a coaching staff overcome a roster's limitations in order to fully revamp the team's style of play? Next, Sean and Katie discuss whether increased ball movement alone can turn around the moribund half-court offense the Raptors have had over the last couple of seasons. And to close, Sean and Katie examine just how much the Pascal Siakam and O.G. Anunoby free agency situations might bleed into the overall health and wellness of the team, and what the impact of another stressful deadline might be on the overall vibes and culture around the squad.Come and be part of our growing Locked On Raptors community over on Discord! https://discord.gg/vm6bW3pTSupport Us By Supporting Our Sponsors!Jase MedicalSave more than $360 by getting these lifesaving antibiotics with Jase Medical plus an additional $20 off by using code LOCKEDON at checkout on jasemedical.com.GametimeDownload the Gametime app, create an account, and use code LOCKEDONNBA for $20 off your first purchase.FanDuelThis episode is brought to you by FanDuel Sportsbook, Official Sportsbook of Locked On. Right now, NEW customers can bet FIVE DOLLARS and get TWO HUNDRED in BONUS BETS - GUARANTEED. Visit FanDuel.com/LOCKEDON to get started.FANDUEL DISCLAIMER: 21+ in select states. First online real money wager only. Bonus issued as nonwithdrawable free bets that expire in 14 days. Restrictions apply. See terms at sportsbook.fanduel.com. Gambling Problem? Call 1-800-GAMBLER or visit FanDuel.com/RG (CO, IA, MD, MI, NJ, PA, IL, VA, WV), 1-800-NEXT-STEP or text NEXTSTEP to 53342 (AZ), 1-888-789-7777 or visit ccpg.org/chat (CT), 1-800-9-WITH-IT (IN), 1-800-522-4700 (WY, KS) or visit ksgamblinghelp.com (KS), 1-877-770-STOP (LA), 1-877-8-HOPENY or text HOPENY (467369) (NY), TN REDLINE 1-800-889-9789 (TN) Learn more about your ad choices. Visit podcastchoices.com/adchoices
In This Episode, You'll Learn:
- How his "delight" project was about learning to pay attention
- Discovering that joy and delight are abundant if you are looking and paying attention
- Defining joy and all that accompanies this profound emotion
- How talking about joy and gratitude helps with the difficulties in life
- The importance of laughter and how it connects us to joy
- Why we need to slow down and listen to our bodies
- How busyness is a way of avoiding or escaping
- How writing can be a tool for self-discovery and gaining a deeper understanding of ourselves
To learn more, click here
See omnystudio.com/listener for privacy information.