Podcasts about Abhinav

  • 275 PODCASTS
  • 418 EPISODES
  • 41m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • LATEST: May 15, 2025

Best podcasts about Abhinav

Latest podcast episodes about Abhinav

Stumped
What is Virat Kohli's legacy in Test cricket?

Stumped

May 15, 2025 · 34:21


Alison Mitchell, Jim Maxwell and Charu Sharma speak to former India Test opener Abhinav Mukund about Virat Kohli's impact on Test cricket following his shock retirement. Abhinav and Kohli made their Test debuts together in 2011. We look at Kohli's legacy in the longer form of the game and ask what his stepping down means for India and for Test cricket as a whole. We also discuss the return of the IPL and how the new schedule may affect players and series going forward.

IMAGE: India's Virat Kohli gestures towards his wife Anushka Sharma in the stands as he celebrates reaching his century (100 runs) during day three of the first Test cricket match between Australia and India at Optus Stadium in Perth on November 24, 2024. (Photo by SAEED KHAN / AFP via Getty Images)

Vaad
संवाद # 249: Operation Sindoor implications - What will happen NOW? | Dr Abhinav Pandya

Vaad

May 8, 2025 · 65:46


Dr Abhinav Pandya, a Cornell University graduate in public affairs with a bachelor's degree from St. Stephen's College, Delhi, is the founder and CEO of Usanas Foundation, an India-based foreign policy and security think tank. He has authored two books, Radicalization in India: An Exploration (2019) and Terror Financing in Kashmir (2023). He previously advised the former governor of Jammu and Kashmir on security issues during the critical period when Kashmir's special status under Article 370 was revoked. He has written extensively for several national and international newspapers and has worked with the International Labour Organization and the United Nations. His latest book is 'Inside the Terrifying World of Jaish-e-Mohammad'.

ThyGap Podcast (Telugu)
186. Open Bathroom with TG #23

ThyGap Podcast (Telugu)

Apr 29, 2025 · 73:17


Featuring ThyGappers - Nuthan, Srinivas, Saaketh, Memento, Abhinav, three43, Girija, Jyothi, Rafi, Amrutha, Praneeth S, Prateek, Anonymous, Gautham, Shaik!

In conversation - Muthyala Muggu, Aha Naa Pellanta, Athadu, Mathu Vadalara 2, Double iSmart, April 1 Vidudala, Daaku Maharaj, Thandel, Love Story, 35 Chinna katha kaadu, Khaidi, Godavari, Does Karma exist?, GameChanger, Appudo Ippudo Eppudo, Zebra, Animal, Agent.

Subscribe, and Share!
Patreon: patreon.com/ThyGap | Instagram: @_ThyGap | Twitter: @ThyGap | Email: mindthygap@gmail.com | Discord: https://discord.gg/mPS4aNWa94 | All Links: https://linktr.ee/thygap

The Barron Report
From Manual Notes to AI: How Bikky is Changing the Restaurant Game

The Barron Report

Mar 6, 2025 · 26:25


In this eye-opening episode of The Restaurant Report, Abhinav Kapur, founder and CEO of Bikky, reveals how AI and data analytics are reshaping the restaurant industry. From voice ordering systems to customer retention strategies, Kapur shares insights gained from analyzing 350 million guest profiles across major brands like Bojangles and Dave's Hot Chicken. Learn why 80% of guests never return after their first visit, how the remaining 20% drive up to 75% of revenue, and why automation concepts like Sweetgreen's Infinite Kitchen are delivering dramatically improved profit margins. Whether you're running a QSR, fast casual, or traditional restaurant, this conversation offers practical guidance on using data to thrive in today's challenging market.

This episode is sponsored by Gusto → https://gusto.pxf.io/PBN
The #1 rated HR platform for payroll, benefits, and more. With Gusto's easy-to-use platform, you can empower your people and push your business forward. See why over 400,000 businesses choose Gusto.
#RestaurantTech #AIinHospitality #CustomerRetention

Get Your Podcast Now! Are you a hospitality or restaurant industry leader looking to amplify your voice and establish yourself as a thought leader? Look no further than SavorFM, the premier podcast platform designed exclusively for hospitality visionaries like you. Take the next step in your industry leadership journey: visit https://www.savor.fm/

Capital & Advisory: Are you a fast-casual restaurant startup or a technology innovator in the food service industry? Don't miss out on the opportunity to tap into decades of expertise. Reach out to Savor Capital & Advisory now to explore how their seasoned professionals can propel your business forward. Discover if you're eligible to leverage our unparalleled knowledge in food service branding and technology and take your venture to new heights.

Don't wait: amplify your voice or supercharge your startup's growth today with Savor's ecosystem of industry-leading platforms and advisory services. Visit https://www.savor.fm/capital-advisory

ThyGap Podcast (Telugu)
180. Open Bathroom with TG #22

ThyGap Podcast (Telugu)

Feb 20, 2025 · 48:26


Featuring ThyGappers - Nuthan, Dinesh, Srinivas, Abhinav, Memento, MK Royce, three43, Shaik, Creative Kaptures, Krish & Praneeth S!

In conversation - Appudo Ippudo Eppudo, RudraVeena, Kshana Kshanam, Anukokunda Oka Roju, Muthyala Muggu, Swag, Columbo, Zebra, Committee Kurrollu, Pekamedalu, Republic, Masooda, Pareshan, Month of Madhu, TFI Writers!

Subscribe, and Share!
Patreon: patreon.com/ThyGap | Instagram: @_ThyGap | Twitter: @ThyGap | Email: mindthygap@gmail.com | Discord: https://discord.gg/mPS4aNWa94 | All Links: https://linktr.ee/thygap

The Simmer
Abhinav Kapur Co-founder and CEO, Bikky

The Simmer

Feb 11, 2025 · 38:59


We're back on the restaurant loyalty beat this week with Abhinav Kapur, who's led Bikky for nearly a decade. In this episode, we talk about developing loyal diners, email (and text!) marketing, and “tendies” from a new KFC spinoff, whatever those are. 

Vaad
संवाद # 231: Bitter truth of Islamic radicalization in India | Dr Abhinav Pandya

Vaad

Jan 27, 2025 · 60:08


Dr Abhinav Pandya, a Cornell University graduate in public affairs with a bachelor's degree from St. Stephen's College, Delhi, is the founder and CEO of Usanas Foundation, an India-based foreign policy and security think tank. He has authored two books, Radicalization in India: An Exploration (2019) and Terror Financing in Kashmir (2023). He previously advised the former governor of Jammu and Kashmir on security issues during the critical period when Kashmir's special status under Article 370 was revoked. He has written extensively for several national and international newspapers and has worked with the International Labour Organization and the United Nations.

🏏Armchair Cricket Podcast 🎧
Armchair Cricket Podcast - Episode 277

🏏Armchair Cricket Podcast 🎧

Jan 20, 2025 · 102:43


Welcome to a new episode of the podcast! We are happy to have a new guest co-host, Abhinav, joining us.

Games covered: PAK v WI: 1st Test. Women's Ashes: ODI series and 1st T20i. INDw v IREw: 3rd ODI and series wrap-up. VHT 2025 finals.

Other news: IND's Varun Aaron retires from all cricket. BAN's Shakib Al Hasan fails a second bowling action test in Chennai and remains suspended. BCCI lays down a ten-point guideline for the men's national team. CT 2025 preview: AUS, ENG, SA, IND. NIG's U-19 women's team win their first ever fully completed game, against NZ in the T20i WC.

Listen to us and get in touch: on Spotify, Apple Podcasts, Podbean, Pocket Casts, RadioPublic, via Twitter, or via e-mail. Please do subscribe to our podcast and let us know what you think in the comments section of the podcasting app, via mail or on social media. Leave us a 5-star rating on any platform or app (like Apple Podcasts) you use to listen to us. Thanks! Learn more about your ad choices. Visit podcastchoices.com/adchoices

Data Protection Gumbo
282: Shocking New Tactics Hackers Are Using to Steal Your Identity - Breez Security

Data Protection Gumbo

Jan 14, 2025 · 19:40


Abhinav Srivastava, Founder and CEO of Breez Security, delves into the rising importance of identity security in combating credential compromises and safeguarding cloud and SaaS platforms. Abhinav explains how to use behavioral identity telemetry and AI to enhance detection and response capabilities. The conversation explores persistent security challenges like phishing, credential stuffing, and the complexities of implementing least-privilege principles. Abhinav underscores the need for a balanced approach across prevention, detection, and response, emphasizing real-time identity monitoring and anomaly detection.

TheFutureEconomy.ca Podcast
How Connected Care is Transforming Modern Healthcare | Carlene Todd and Abhinav Kalra

TheFutureEconomy.ca Podcast

Dec 9, 2024 · 41:25


Canada's healthcare system stands at a transformative crossroads. Explore how connected care, modernized digital infrastructure, and unified health data strategies can unlock the value of health data and improve outcomes for all Canadians. Discover the challenges of data fragmentation, rising healthcare costs, trust-building in a decentralized system, and more. Don't miss this compelling interview, hosted by Carlene Todd, Vice President of Access at Roche Canada, with guest Abhinav Kalra, Executive Vice-President at Canada Health Infoway.

Read the full interview and key takeaways: https://thefutureeconomy.ca/interviews/advancing-connected-care-transform-healthcare/
Subscribe for exclusive previews of upcoming episodes and updates on new releases: https://bit.ly/3ri2IUu
Follow us on social media: https://linkin.bio/thefutureeconomy.ca

=====About TheFutureEconomy.ca=====
TheFutureEconomy.ca is a Canadian online media outlet and thought leadership platform that produces interviews, panels and op-eds featuring leaders from industry, government, academia and more to define a strong vision for our future economy. Our content emphasizes our interviewees' insights and calls-to-action on what we must do now to improve the competitiveness and sustainability of Canada's future economy. Check out our website: https://thefutureeconomy.ca/

#FutureOfHealth #HealthcareInnovation #ConnectedCare #HealthCanada #CanadianHealthcare

Vaad
संवाद # 220: Pakistan planning Hamas style attacks in India? | Dr Abhinav Pandya's analysis on J&K

Vaad

Dec 6, 2024 · 62:07


Dr Abhinav Pandya, a Cornell University graduate in public affairs with a bachelor's degree from St. Stephen's College, Delhi, is the founder and CEO of Usanas Foundation, an India-based foreign policy and security think tank. He has authored two books, Radicalization in India: An Exploration (2019) and Terror Financing in Kashmir (2023). He previously advised the former governor of Jammu and Kashmir on security issues during the critical period when Kashmir's special status under Article 370 was revoked. He has written extensively for several national and international newspapers and has worked with the International Labour Organization and the United Nations.

The Restaurant Growth Show
Fueling Restaurant Growth Through Data: A Chat with Bikky CEO Abhinav Kapur

The Restaurant Growth Show

Nov 12, 2024 · 38:17


In this episode, we are joined by Abhinav Kapur, co-founder and CEO of Bikky. Abhinav discusses his transition from finance to the tech startup scene, inspired by the challenges he witnessed in his mother-in-law's restaurant. He delves into how Bikky helps restaurants transform raw customer data into actionable insights to drive revenue growth. Abhinav explains the importance of understanding customer behavior, the impact of third-party data, and how AI is shaping the future of restaurant analytics. Don't miss this insightful conversation packed with valuable advice for restaurateurs looking to thrive in a digital-first world.

00:33 Abhinav's Background and Startup Journey
01:22 The Birth of Bikky
03:44 Impact of the Pandemic on Restaurants
05:01 Transition from Finance to Tech
10:18 How Bikky Helps Restaurants
12:47 Customer Data Insights and Analytics
16:55 Challenges and Opportunities in Restaurant Tech
31:35 Future Developments and AI in Bikky

Audio Podcast Links: Spotify | Apple | Google | Amazon | RSS | Download

Leave your suggestions for new topics in the comments! We read every single one.

The Thriving Immigrant
Things to know about Home & Auto/Car Insurance in Canada.

The Thriving Immigrant

Sep 10, 2024 · 48:05


In today's episode, I'm excited to be joined by Abhinav Bajaj, an experienced insurance broker who moved to Canada over five years ago with one goal in mind: starting his own business. Although he didn't know what type of business at first, Abhinav found his niche in the world of insurance, building a successful brokerage. We dive into Abhinav's entrepreneurial journey and his insights into the insurance industry.

We discuss:
  • His experience navigating entrepreneurship as an immigrant.
  • Why auto insurance tends to be expensive in Canada and what factors drive those costs.
  • The critical role of business insurance for entrepreneurs and small business owners.
  • Key things to consider when shopping for auto insurance.
  • How to secure the best deals on auto insurance in Canada, and much more!

Stay tuned for a conversation packed with expert advice for business owners, drivers, and anyone interested in understanding insurance in Canada.

Connect with Abhinav on LinkedIn

Disclaimer: The views and opinions shared on this channel are for informational and educational purposes only. This is NOT financial advice. Always do your own research and due diligence before investing.

#immigrant #entrepreneurship #insurance

Restaurant LATE Night Show
CEO Abhinav Kapur from Bikky

Restaurant LATE Night Show

Aug 25, 2024 · 45:20


We're back with another exciting episode, live from New York! This week, we sit down with Abhinav Kapur, CEO of Bikky, to dive deep into how data is transforming the restaurant industry. If you're passionate about hospitality, food, and all things restaurant-related, you won't want to miss this one!

Sysco Canada Podcasts Wednesdays
CEO Abhinav Kapur from Bikky

Sysco Canada Podcasts Wednesdays

Aug 25, 2024 · 45:20


We're back with another exciting episode, live from New York! This week, we sit down with Abhinav Kapur, CEO of Bikky, to dive deep into how data is transforming the restaurant industry. If you're passionate about hospitality, food, and all things restaurant-related, you won't want to miss this one!

Nashville Restaurant Radio
Abhinav Kapur- CEO and Co-Founder-Bikky

Nashville Restaurant Radio

Aug 23, 2024 · 78:58


This episode is VERY important if you are a restaurant Owner or Manager!! Abhinav founded a company called Bikky, built around his mother-in-law's (she owns a restaurant) need to market to and thank the right people for eating at her restaurant. What he created was a solution to identify who your guest is and what their behaviors will tell you. I would guess 99% of local, independent restaurants do not use this and would benefit greatly. The problem is, there is SO MUCH DATA that only chains really take full advantage, as they have the staff to take the information and put it to use. I contend that this is why chains flourish while local restaurants close at an alarming rate. Watch as Crystal De Luna-Bogan and Brandon Styll ask the questions that matter most to small restaurant Owners. We always think it's a compliment when the guest says, "That was fun, I haven't been asked those questions before." The actual interview with Abhinav starts at the 19-minute mark.

SBS Hindi - SBS हिंदी
India Report : Indian shooter Abhinav Bindra awarded prestigious 'Olympic Order' in Paris

SBS Hindi - SBS हिंदी

Aug 12, 2024 · 9:22


Listen to the latest SBS Hindi news from India. 12/08/2024

A Century Of Stories
E50 : Abhinav Bindra | India's First Individual Gold Medal

A Century Of Stories

Jul 29, 2024 · 6:58


Welcome to another episode of A Century Of Stories, presented by IDFC FIRST Bank! Today, we dive into the extraordinary journey of Abhinav Bindra, who took the shot that changed our nation's sporting landscape forever. From being the youngest participant at the 1998 Commonwealth Games to clinching India's first-ever individual Olympic gold medal at the 2008 Beijing Games, Abhinav Bindra etched his name in Olympic history. We'll explore how this shooting prodigy rose through the ranks and overcame mental barriers to become a Khel Ratna awardee and an Olympic legend. As India's torchbearer at the Paris 2024 Olympics and an awardee of the Olympic Order, let's reflect on Bindra's impact on Indian sports and his legacy for the current generation of shooters competing at Paris.

Subscribe for more such inspiring stories! New episodes out every Monday!
#AbhinavBindra #IndianSports #Olympics #Paris2024 #ACenturyOfStories

Open an IDFC FIRST Bank savings account: https://www.idfcfirstbank.com/personal-banking/accounts/savings-account?utm_source=ig&utm_medium=content&utm_campaign=June&utm_content=COS
Know more about Zero Fee Banking: https://www.idfcfirstbank.com/getmorefromyourbank?utm_source=youtube&utm_medium=centuryofstories&utm_campaign=cosepi1&utm_term=Aug23

Follow the 'A Century of Stories' official Instagram handle at @acenturyofstories
Subscribe to the A Century of Stories YT channel
Listen to A Century of Stories across audio platforms: Apple Podcasts | Spotify | Google Podcasts | Gaana | Amazon Music | Jio Saavn
Follow our host Kunal on Instagram at @kunalvijayakar
And don't forget to rate us!

See omnystudio.com/listener for privacy information.

Coffee, Cricket Aani Barach Kaahi
Abhinav Bindra, the friend; Nana Patekar, the mentor: Anjali Bhagwat exclusive

Coffee, Cricket Aani Barach Kaahi

Jul 13, 2024 · 32:29


Abhinav Bindra and Anjali Bhagwat are the first names that come to mind when one thinks about shooting as a sport in India. In the early 2000s, Anjali dominated the sport and was the World Champion, having taken up shooting only during her NCC (National Cadet Corps) days. Her journey was far from easy. At the 2000 Sydney Olympics, Anjali reached Australia to compete, but her ammunition did not get to Sydney due to bureaucratic issues. She also sheds light on why Abhinav Bindra deserved to win the gold medal at the 2008 Beijing Olympics and on how shooting has changed over the last few decades. On Kattyawarchya Gappa, Anjali hits the bull's eye while taking us down memory lane and taking a deep dive into the world of shooting.

Speak of the Olympics and shooting in India, and alongside Abhinav Bindra the name of Maharashtra's own Anjali Bhagwat comes straight to mind. Anjali began shooting while in the NCC and drew everyone's attention with her performances, conquering the world in the 2000s. Just how hard was that journey? She reached Australia for the 2000 Sydney Olympics, but her rifle's ammunition never arrived. At that same Olympics she met tennis star Monica Seles, though she had to boss Abhinav Bindra around to make it happen... Anjali also explains why Abhinav won his gold medal. From shooting in the 2000s to the fierce competition for a place in the Indian team in 2024 and the sport's changing face, Olympian shooter Anjali Bhagwat unpacks the whole journey with Amol Gokhale on Kattyawarchya Gappa.

News and Views
Stories Behind Neeraj Chopra & Abhinav Bindra's Olympic Gold (ft. Manisha Malhotra)

News and Views

Jun 27, 2024 · 37:29


Do Olympic champions have a different mindset? In this podcast, Prateek Lidhoo asks this question to sports administrator Manisha Malhotra, who was a part of Abhinav Bindra and Neeraj Chopra's Olympic gold campaigns. She also recounts some stories behind their preparation, and what we can look forward to in the Paris and LA games.

The Feed
117- Converting traffic into customers with Abhinav Kapur of Bikky

The Feed

May 27, 2024 · 59:14


Abhinav Kapur is the Co-Founder & CEO of Bikky, a customer data analytics platform that helps multi-unit restaurants understand the frequency and lifetime value of their guests. In this episode, we'll talk about how Bikky is threading the needle between various data silos, how it helps operators understand guest retention, and the future of personalized loyalty.

Whiskey Hue
WH115 Abhinav Sathish, UVA Grad, Citadel bound joins to discuss Hedge Funds and a Deep Dive on Autonomous Driving.

Whiskey Hue

May 15, 2024 · 65:04


Abhinav Sathish, UVA grad and Citadel bound, joins Atul to discuss hedge fund trading strategies, levered beta, idiosyncratic insights, and momentum stocks. Plus, the dynamics of working in person, and then a deep dive on FSD/autonomous driving. It's Tesla's tech vs. the world: who wins? If finance and tech are your thing, this podcast should be your thing too. Brown on brown crime as Johnny Walker Black returns as the WOTD. Part of the Prof P Series.

00:00 Intro
02:33 Citadel, Ken Griffin, Superstar Investor
11:15 24/7 Trading, Non-Competes, Hedge Funds
26:08 WOTD: Johnny Walker Black is Back. Mad Dog 20/20.
30:00 Mobileye
37:10 Autonomous Driving: Deep Dive. LiDAR vs. Dojo, L4, Mapping for Full Self Driving
56:20 WOTD: Review
57:30 SYSK - AI Go-To Sources

Abhinav and I co-wrote the following article describing Elon Musk's desire to push the Dojo camera technology as the go-to standard for autonomous driving: https://tinyurl.com/ElonDojo

Concrete Conversations - The Indian Real Estate Podcast
How to deal with bad Air Quality in Cities with Abhinav Gupta, CEO of ActiveBuildings

Concrete Conversations - The Indian Real Estate Podcast

May 2, 2024 · 36:05


Welcome back for an episode with a recurring guest! Over the past few years, air quality in urban India has gone from a seasonal problem in certain cities to a growing concern for the entire urban population. This can be seen in the AQI (Air Quality Index) of most Indian cities, with more days of the year falling in the "severe" and harmful categories. Air pollution, driven in large part by particulate matter, has many health effects, both direct and indirect. So what steps can be taken to mitigate the impact of poor air quality, both outdoors and in the spaces where we spend the most time, our homes and workplaces?

To answer this, we thought it would be the perfect time to reconnect with one of our first-ever guests on this podcast and get an update on the issue of air quality in India. Joining us is returning guest Mr. Abhinav Gupta, CEO of Active Buildings. With a background in engineering, Mr. Gupta started Active Buildings in 2016 to work on smart solutions that increase both the productivity and wellness of building occupants. In this episode, Mr. Gupta revisits the current state of air quality in urban India and talks about some interventions Active Buildings has deployed in commercial, residential and even public spaces.

So get ready for some Concrete Conversations!

Follow the hosts on Instagram:
Yash's Instagram: https://www.instagram.com/jumpform/
Akshay's: https://www.instagram.com/kapz_99/
Background score by Flowerbrain: https://linktr.ee/flowerbrain

#TheIndianRealEstatePodcast #RealEstate #Podcast

Have questions about Real Estate? Or a topic you would love to hear more about on the Podcast? Connect with Concrete Conversations - The Indian Real Estate Podcast through the links below!
Instagram: https://www.instagram.com/theindianrealestatepodcast/
LinkedIn: https://www.linkedin.com/company/concrete-conversations
YouTube: https://www.youtube.com/channel/UCXn-Aw24pqfmULyym7hCi6Q

Positioning with April Dunford
What Customer-Centric Positioning Looks Like

Positioning with April Dunford

Apr 18, 2024 · 33:39


In today's episode, I talk with a technology entrepreneur about lessons related to his product initially being mispositioned in the market, and also the challenges of positioning both free and paid versions of software solutions. Joining me is Abhinav Asthana, founder and CEO of Postman, an API platform for software developers to build and use APIs.

In today's episode:
* How Postman's positioning evolved from a product-focused approach to a platform-focused approach, with different positioning for end-users and executives.
* Building Postman with the conviction that modern software is built on APIs, and evangelizing (positioning) this idea to investors.
* Why top-down positioning needs to align with company strategy.
* The value of working with analysts to understand customer needs and improve products.
* Positioning software in terms of satisfying customer needs.
* The importance of staying ahead of competitors in the enterprise software market.
* The challenges of competing with free products and prioritizing user experience.
* Abhinav advises talking to 50-100 customers per year to gain insights and build conviction in product decisions.

If you want to skip ahead:
(2:24) Origin story of Postman.
(7:19) Postman's growth, fundraising, and positioning challenges.
(12:57) API platform growth and top-down sales strategies.
(18:03) Taking an API-first approach in software development and marketing strategies.
(28:19) Scaling velocity, competitors, and customer engagement in the enterprise software space.

Connect with Abhinav Asthana on LinkedIn: https://www.linkedin.com/in/abhinavasthana/
Learn about Postman's API platform: https://www.postman.com/

Connect with April Dunford and learn about practical positioning that accelerates marketing and sales:
Work with April: https://www.aprildunford.com/contact
April's newsletter: https://aprildunford.substack.com/
April's LinkedIn: https://www.linkedin.com/in/aprildunford/
April's Instagram: https://www.instagram.com/aprildunford/
April's Twitter/X: https://twitter.com/aprildunford
April's TikTok: https://www.tiktok.com/@positioningshow

Mentioned in this episode:
"Breaking Changes," a podcast where Postman Chief Evangelist Kin Lane and guests talk about the API universe: https://www.postman.com/events/breaking-changes/
"The API-First Transformation," a book by Kin Lane with a foreword by Abhinav Asthana: https://amzn.to/3PZzEOy

Get April Dunford's books and audiobooks:
"Obviously Awesome: How to Nail Product Positioning so Customers Get It, Buy It, Love It."
"Sales Pitch: How to Craft a Story to Stand Out and Win."
Amazon US:

Slice of Healthcare
#437 - Abhinav Shashank, Co-Founder & CEO at Innovaccer

Slice of Healthcare

Mar 20, 2024 · 40:50


Join us on the latest episode, hosted by Jared S. Taylor! Our guest: Abhinav Shashank, Co-Founder & CEO at Innovaccer.

What you'll get out of this episode:
Origins and Growth of Innovaccer: Abhinav Shashank, CEO of Innovaccer, shares his journey of building a successful healthcare tech company, discussing the challenges and triumphs of transitioning from a data platform for universities to a healthcare-focused entity.
Pivotal Healthcare Shift with Mercy ACO: Innovaccer's shift to healthcare was catalyzed by its project with the Mercy Accountable Care Organization, leading to a focus on integrating various healthcare data for comprehensive patient views and proactive care.
Advancements in AI and Healthcare Efficiency: Shashank highlights Innovaccer's expansion into AI and how it significantly aids in reducing administrative burdens for healthcare professionals, improving patient care and physician well-being.
Navigating Personal Challenges and Cultural Change: Despite experiencing a challenging period of personal depression, Shashank's return led to a greater emphasis on company culture, focusing on long-term sustainability and employee well-being.
Embracing Purposeful Capitalism for Sustainable Growth: Innovaccer, valued at over $3 billion, navigates the complex healthcare tech industry by prioritizing purposeful capitalism and sustainable growth, aiming for long-term impact over quick profits.

To learn more about Innovaccer:
Website: https://innovaccer.com/
LinkedIn: https://www.linkedin.com/company/innovaccer/

Guest's socials:
LinkedIn: https://www.linkedin.com/in/abhinavshashank/

Our sponsors for this episode are:
Sage Growth Partners https://www.sage-growth.com/
Quantum Health https://www.quantum-health.com/

Show and host's socials:
Slice of Healthcare LinkedIn: https://www.linkedin.com/company/sliceofhealthcare/
Jared S Taylor LinkedIn: https://www.linkedin.com/in/jaredstaylor/

WHAT IS SLICE OF HEALTHCARE? The go-to site for digital health executive/provider interviews, technology updates, and industry news. Listened to in 65+ countries.

The Startup Operator
234: Bringing Robots to Life | Abhinav Das (CEO, Orangewood Labs)

The Startup Operator

Mar 8, 2024 · 55:44


In this captivating interview, Abhinav Das, the Co-founder and CEO of Orangewood Labs, shares his entrepreneurial journey, spanning multiple startups and groundbreaking innovations. From collaborating with co-founders to revolutionizing Indian manufacturing, Abhinav's insights inspire and inform. He discusses commercialization strategies and staying innovative, and imparts valuable advice for new founders. Join us as we delve into the challenges and triumphs of building startups and navigating the complexities of building in the robotics space.

Topics:
00:00 Intro
01:45 The start of Abhinav's journey
15:14 Post Ahmedabad
20:54 Meeting his co-founders
23:40 Complexities of working with CNC machines
25:40 Building robotic arms
28:05 The 2nd startup
33:12 Going from idea to product to company
39:02 Improving India's manufacturing
42:04 Commercialising your products
44:29 Staying innovative
49:44 What's next for Orangewood?
51:31 Advice for new founders
52:40 Books and podcast recommendations

Click here to get regular WhatsApp updates: https://wa.me/message/ZUZQQGKCZTADL1

Connect with us:
LinkedIn: https://www.linkedin.com/company/startup-operator
Twitter: https://twitter.com/OperatorStartup

If you liked this episode, let us know by hitting the like button and sharing with your friends and family. Please also remember to subscribe to our channel and switch on notifications to never miss an episode!

Business Podcast by Roohi | VC, Startups
Made in India Robots Ft. Abhinav Das (CEO of Orangewood Labs)

Business Podcast by Roohi | VC, Startups

Jan 27, 2024 · 13:50


This week I spoke with Abhinav Das (CEO of Orangewood Labs). We discuss:
1. Made in India robots
2. Sam Altman's advice
3. Being bullish on hardware
4. YC advice

Sponsor corner: A big shoutout to this month's sponsor, The Undefeated Underdogs Podcast by Sharath Kuruganty, for making this episode happen. The Undefeated Underdogs is an excellent podcast (as a listener I can vouch for this). It's well-researched, and Sharath's curiosity shines through as he explores the VC, startup, and creator landscape. Find the Undefeated Underdogs Podcast here: https://undefeatedunderdogs.com/ - or wherever you get your podcasts.

Connect with Abhinav here:
X: https://twitter.com/Abhindas1

Connect with the host Roohi Kazi on the below platforms:
Instagram: roohik2
LinkedIn: Roohi Kazi
Twitter: https://twitter.com/roohi_kr

Visit this link for more listening options/platforms for the Business Podcast by Roohi, and next step groups: https://bop.me/roohikaz
Business Podcast by Roohi website: https://businesspodcast.transistor.fm/
Subscribe to the Business Podcast by Roohi newsletter here: https://businesspodcastbyroohi.substack.com/
Subscribe to the YouTube channel here: https://youtube.com/@bizpodroohi?si=V4VxfkFEO4AlEiIz

If you enjoyed this episode, please leave a review of the podcast here on Apple Podcasts: https://podcasts.apple.com/us/podcast/business-podcast-by-roohi-vc-startups/id1516165457?uo=4
Or rate it on Spotify. Would really appreciate it!

Follow Business Podcast by Roohi on LinkedIn: https://www.linkedin.com/company/business-podcast-by-roohi/

DM me on X (formerly Twitter) or LinkedIn if you are interested in sponsoring episodes of the podcast.
Email me at bizpodroohi@gmail.com if you have any feedback or requests for guests.

The BarberShop with Shantanu
Winners of India's Largest Sales Challenge Reveal How They Sold THOUSANDS of Razors | Grand Finale

The BarberShop with Shantanu

Dec 15, 2023 · 72:24


Welcome to the finale episode of Bombay Shaving Company's Razorpreneur Challenge, a two-month-long journey in which participants from across India competed to sell the maximum number of razors and become winners of India's largest sales and entrepreneurship challenge.

Meet our winners: Thomas, Dipti, Rahull, Santhosh, Puneeth, Mahesh, Harsh, Abhinav, Dharampal, and Krishna! They represent the main idea of the challenge: that the sales skillset is the most important one to cultivate if you want to become an entrepreneur. More than a fancy business degree, our winners showed us that sometimes all you need is the charisma and enthusiasm to take to the streets.

Join us for an insightful discussion as we delve deep into the core of selling fundamentals, exploring what it truly means to be a salesperson in today's dynamic business landscape, and get to know these unique individuals and their unique stories. With Shantanu Deshpande (Founder, Bombay Shaving Company) and Aditya Sehgal (Ex-Global COO, Reckitt) interacting with our winners, get ready to uncover invaluable insights, strategies, and anecdotes from these trailblazing entrepreneurs who have mastered the art of persuasion and drive in the world of sales.

Introduction: 0:00 - 0:58
Shantanu and Adi discuss the Razorpreneur Challenge: 0:59 - 18:30
Welcome Razorpreneurs!: 18:31 - 21:59
Razorpreneurs look back on their journeys: 22:00 - 55:15
Understanding credit, wholesale markets, & other business essentials: 55:16 - 1:07:54
Shantanu speaks about his on-ground sales experience: 1:07:55 - 1:08:52
Concluding advice: 1:08:53 - 1:12:24

Express Conversations
Ep. 74: Abhinav Bindra

Express Conversations

Nov 3, 2023 · 64:27


In this episode we hear from IOC Athletes Commission member Abhinav Bindra on what hosting the Olympics could mean for India, engaging with the younger demographic, grooming talent through robust organisations and the growing self-belief of Indian athletes. The session was moderated by Senior Assistant Editor Mihir Vasavda.

Never on the Backfoot: A Podcast
231. CWC 23: #2-IND v AFG: Review + Analysis

Never on the Backfoot: A Podcast

Oct 13, 2023 · 42:36


Hi there! Welcome to Episode 231 of the Never on the Backfoot Podcast. Rohit Sharma made a mockery of the famed Afghanistan attack as India cruised to an eight-wicket win in their World Cup match at the Arun Jaitley Stadium in New Delhi yesterday. Chasing a below-par 273 on a perfect batting wicket, the Indian skipper was at his belligerent best, just as in the previous World Cup, when he scored four consecutive centuries. In this episode, we discuss that Afghanistan match and the major takeaways. Joining us for the discussion today is Abhinav. The podcast is also available on Apple Podcasts, Spotify for Podcasters, Google Podcasts, Spotify, and many other platforms; please spread the word. Check out @neveronthebackfoot on Instagram & Threads and @neverontheback1 on X for the latest facts, trivia, quizzes, and terminology.

ThyGap Podcast (Telugu)
104. Open Bathroom with TG #8

ThyGap Podcast (Telugu)

Aug 28, 2023 · 34:03


This week, featuring ThyGappers - Mr.Prudhviraj, Creative Kaptures, Praneeth S, Abhinav, ShutUp Vinay, and Prashanthi!

Subscribe, and Share!
Patreon: patreon.com/ThyGap | Instagram: @_ThyGap | Twitter: @ThyGap | Email: mindthygap@gmail.com | Discord: https://discord.gg/mPS4aNWa94 | All Links: https://linktr.ee/thygap

Food on Demand
The Domino's are falling for 3rd party delivery (+ Abhinav Kapur of Bikky)

Food on Demand

Aug 11, 2023 · 44:39


In the 30th episode of the Food On Demand Podcast, hosts Tom and Bernadette interview Bikky co-founder and CEO Abhinav Kapur about the future of customer relationship management and loyalty in restaurants. They also cover MrBeast filing suit against Virtual Dining Concepts and Domino's signing with its first third-party delivery provider—Uber Eats.

Action and Ambition
How Abhinav Swarup is Creating Change in Businesses as a Finance Leader

Action and Ambition

Aug 1, 2023 · 20:52


In this episode, we are joined by Abhinav Swarup, the Vice President of Finance at Zeus Living, a new, modern mode of housing, making it easy to live well, wherever you go. He started his career at PricewaterhouseCoopers and has since worked as a finance leader at Amazon, Netflix, PayPal, and Patreon. He has an MBA from Yale University, a Post Graduate Diploma from IIM Lucknow, and an undergraduate degree in Electrical Engineering from IIT Roorkee. Tune in to learn more!

The India Energy Hour
State of the Indian Power Sector | ft. Abhinav Jindal

The India Energy Hour

Jul 1, 2023 · 50:47


In India, coal-based electricity dominates the power sector, but the country has set ambitious renewable power targets. This will bring about many techno-economic and socio-political changes. For example, Indian state-owned enterprises in the power sector may have to redefine themselves, and there will be a need for new financial mechanisms and policy design to give renewable energy a boost. To understand the state of play in the power sector, we interviewed Abhinav Jindal, a senior researcher and energy economist who has over two decades of experience working in the power sector. Abhinav is one of the most balanced voices on the subject of energy security and electricity transition.

A full transcript of the episode is available in English and Hindi.

Presented by 101Reporters.
Follow the TIEH podcast on Twitter, LinkedIn & YouTube.
Abhinav Jindal is on LinkedIn.
Our hosts: Shreya Jai on Twitter & LinkedIn, and Dr. Sandeep Pai on Twitter & LinkedIn.
Podcast producer: Tejas Dayananda Sagar on Twitter & LinkedIn.

The Parenting Reset Show
116. Sleep to Heal: Empowering Ourselves and Our Tweens and Teens with the Secret of Optimal Rest and Rejuvenation

The Parenting Reset Show

Jun 26, 2023 · 77:18


Tess Connolly LCSW talks with Dr Abhinav Singh, a physician with board certifications in sleep medicine and internal medicine. Dr. Singh serves as medical director at the Indiana Sleep Center, accredited by the American Academy of Sleep Medicine. He is also a clinical assistant professor at Marian University College of Osteopathic Medicine, where he developed and teaches a sleep medicine rotation to medical students. He is a fellow of the American Academy of Sleep Medicine and has received a Top Doctor award in sleep medicine for the last four years. Dr. Singh is a peer reviewer for the Journal of Clinical Sleep Medicine and Sleep Health (a journal of the National Sleep Foundation). A sleep physician for the NBA's Indiana Pacers, Dr. Singh also serves on the medical review panel of SleepFoundation.org.

Charlotte Jensen co-authored Sleep to Heal: 7 Simple Steps to Better Sleep with Dr Singh, coming out in June 2023. Charlotte is a writer and editor specializing in technology, marketing, business, and the arts. For more than a decade, Jensen worked as senior writer, articles editor, and executive editor for Entrepreneur magazine, where she took a leading role in shaping editorial content and direction for the award-winning national consumer magazine and its readership of 1.2 million. Jensen's work has been featured in HuffPost, AOL Small Business, and a variety of small-business websites, and she is currently a copy editor for luxe lifestyle brand RH (Restoration Hardware). She has also provided first-draft edits for several nonfiction books, including Fight Cancer With Vitamins and Supplements: A Guide to Prevention and Treatment (Healing Arts Press). Jensen has a B.A. in Journalism from California State University, Long Beach. She lives and works in the San Francisco metro area.

In this episode:
  • Learn about Dr. Abhinav Singh and Charlotte Jensen, co-authors of the book 'Sleep to Heal', and how they came to work together.
  • Dr. Singh discusses his work helping patients with sleep-related issues and his professional background.
  • Dr. Singh shares what he considers the greatest enemy of quality sleep.
  • Dr. Singh emphasizes the significance of sleep and explains how a lack of sleep affects overall well-being.
  • Tess, Charlotte and Dr. Singh discuss 'The Sleep Elevator', a section of the book that gave Tess valuable insights into understanding sleep.
  • Tess and Dr. Singh delve into the habits and practices necessary for good sleep, both for adults and children.
  • Tess and Dr. Singh explore the topic of sleep supplements, and Dr. Singh shares his perspective on their use.
  • Tess and Dr. Singh discuss strategies for dealing with sleep disturbances while traveling across different time zones.
  • Tess and Dr. Singh explore the concept of the glymphatic system and its connection to brain health during sleep.
  • Tess and Dr. Singh talk about the benefits of napping and the positive impact of 900 seconds.
  • Dr. Singh concludes by sharing his final thoughts and tips for families to improve their sleep habits, highlighting the six 'S' factors for quality sleep.

Remember – tomorrow starts tonight!

Get the book here

Hotboxin' With Mike Tyson
Dr. Abhinav Gautam, Connective Tissue Restoration Expert

Hotboxin' With Mike Tyson

Jun 22, 2023 · 65:03


Dr. Abhinav Gautam joins a special Hotboxin' episode to talk about helping The Champ during trying times in his boxing career, explain the benefits of the services he provides for many professional athletes, discuss his company RELIEF, and more!

Go to ForThePeople.com/HOTBOXIN

Subscribe to Hotboxin' with Mike Tyson - http://bit.ly/38GAYR5
Subscribe to Hotboxin' Clips Channel - http://rb.gy/6pa3ef

https://twitter.com/hotboxinpodcast
https://www.instagram.com/hotboxinpodcast
https://www.facebook.com/hotboxinpodcast

Learn more about your ad choices. Visit megaphone.fm/adchoices

WAGMI Ventures Podcast
Getting Web3 Social Right, with Abhinav Gaur (urFeed)

WAGMI Ventures Podcast

May 23, 2023 · 18:55


Abhinav Gaur is the Founder & COO @ urFeed (https://urfeed.xyz). urFeed is a social video & streaming platform, co-owned by creators and their community. In this episode we discuss problems with traditional web2 social platforms, why urFeed's feed experience is better than the usual algorithmic feeds, migrating users from web2, the road of progressive decentralization, and much more.

Recorded Tuesday, May 16th, 2023.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
MPT-7B and The Beginning of Context=Infinity — with Jonathan Frankle and Abhinav Venigalla of MosaicML

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

May 20, 2023 · 66:43


We are excited to be the first podcast in the world to release an in-depth interview on the new SOTA in commercially licensed open source models - MosaicML MPT-7B!

The Latent Space crew will be at the NYC Lux AI Summit next week, and have two meetups in June. As usual, all events are on the Community page! We are also inviting beta testers for the upcoming AI for Engineers course. See you soon!

One of GPT-3's biggest limitations is context length - you can only send it up to 4,000 tokens (3k words, 6 pages) before it throws a hard error, requiring you to bring in LangChain and other retrieval techniques to process long documents and prompts. But MosaicML recently open sourced MPT-7B, the newest addition to their Foundation Series, with context length going up to 84,000 tokens (63k words, 126 pages).

This transformer model, trained from scratch on 1 trillion tokens of text and code (compared to 300B for Pythia and OpenLLaMA, and 800B for StableLM), matches the quality of LLaMA-7B. It was trained on the MosaicML platform in 9.5 days on 440 GPUs with no human intervention, costing approximately $200,000. Unlike many open models, MPT-7B is licensed for commercial use and is optimized for fast training and inference through FlashAttention and FasterTransformer.

They also released 3 finetuned models starting from the base MPT-7B:
* MPT-7B-Instruct: finetuned on dolly_hhrlhf, a dataset built on top of dolly-5k (see our Dolly episode for more details).
* MPT-7B-Chat: finetuned on the ShareGPT-Vicuna, HC3, Alpaca, Helpful and Harmless, and Evol-Instruct datasets.
* MPT-7B-StoryWriter-65k+: finetuned with a context length of 65k tokens on a filtered fiction subset of the books3 dataset. While 65k is the advertised size, the team has gotten up to 84k tokens in response when running on a single node of A100-80GB GPUs. ALiBi is the dark magic that makes this possible (a rough sketch of the idea appears after the transcript below). It turns out The Great Gatsby is only about 68k tokens, so the team used the model to create new epilogues for it!

On top of the model checkpoints, the team also open-sourced the entire codebase for pretraining, finetuning, and evaluating MPT via their new MosaicML LLM Foundry. The table we showed above was created using LLM Foundry's in-context-learning eval framework itself!

In this episode, we chatted with the leads of MPT-7B at Mosaic: Jonathan Frankle, Chief Scientist, and Abhinav Venigalla, Research Scientist, who spearheaded the MPT-7B training run. We talked about some of the innovations they've brought into the training process to remove the need for 2am on-call PagerDutys, why the LLM dataset mix is such an important yet dark art, and why some of the traditional multiple-choice benchmarks might not be very helpful for the type of technology we are building.

Show Notes
* Introducing MPT-7B
* Cerebras
* Lottery Ticket Hypothesis
* Hazy Research
* ALiBi
* Flash Attention
* FasterTransformer
* List of naughty words for C4: https://twitter.com/code_star/status/1661386844250963972
* What is Sparsity?
* Hungry Hungry Hippos
* BF16 FP

p.s. yes, MPT-7B really is codenamed LLongboi!

Timestamps
* Introductions [00:00:00]
* Intro to Mosaic [00:03:20]
* Training and Creating the Models [00:05:45]
* Data Choices and the Importance of Repetition [00:08:45]
* The Central Question: What Mix of Data Sets Should You Use? [00:10:00]
* Evaluation Challenges of LLMs [00:13:00]
* Flash Attention [00:16:00]
* Fine-tuning for Creativity [00:19:50]
* Open Source Licenses and Ethical Considerations [00:23:00]
* Training Stability Enhancement [00:25:15]
* Data Readiness & Training Preparation [00:30:00]
* Dynamic Real-time Model Evaluation [00:34:00]
* Open Science for Affordable AI Research [00:36:00]
* The Open Approach [00:40:15]
* The Future of Mosaic [00:44:11]
* Speed and Efficiency [00:48:01]
* Trends and Transformers [00:54:00]
* Lightning Round and Closing [1:00:55]

Transcript

Alessio: [00:00:00] Hey everyone. Welcome to the Latent Space podcast. This is Alessio, Partner and CTO-in-Residence at Decibel Partners. I'm joined by my co-host, Swyx, writer and editor of Latent Space.

Swyx: Hey, and today we have Jonathan and Abhi from MosaicML. Welcome to our studio.

Jonathan: Guys, thank you so much for having us. Thanks so much.

Swyx: How's it feel?

Jonathan: Honestly, I've been doing a lot of podcasts during the pandemic, and it has not been the same.

Swyx: No, not the same actually. So you have on your bio that you're primarily based in Boston,

Jonathan: New York. New York, yeah. My Twitter bio was a probability distribution over locations.

Swyx: Exactly, exactly. So I DMd you because I was obviously very interested in MPT-7B, and I was like, for the 0.2% of the time that you're in San Francisco, can you please come to a podcast studio? And you're like, I'm there next week.

Jonathan: Yeah, it worked out perfectly.

Swyx: We're really lucky to have you. I'll read off a few intros that people should know about you and then you can fill in the blanks. So Jonathan, you did your BS and MS at Princeton in programming languages and then found your way into ML for your PhD at MIT, where you made a real splash with the lottery ticket hypothesis in 2018, which people can check up on. I think you've done a few podcasts about it over the years. It has been highly influential, and we'll talk about sparse models at Mosaic. You have also had some side [00:01:30] quests. You taught programming for lawyers and you did some law and privacy stuff in DC and also did some cryptography stuff. Um, and you've been an assistant professor at Harvard before earning your PhD.

Jonathan: I've yet to start.

Swyx: You, you yet to start. Okay. But you just got your PhD.

Jonathan: I technically just got my PhD. I was at Mosaic, which delayed my defense by about two years. I was at 99% done for two years. Got the job at Harvard, Mosaic started, and I had better things to do than write my dissertation for two years.

Swyx: You know, you know, this is very out of order.

Jonathan: Like, oh, completely out of order, completely backwards. Go talk to my advisor about that. He's also an advisor at Mosaic and has been from the beginning. And, you know, go talk to him about finishing on time.

Swyx: Great, great, great. And just to fill it out, Abhi, you did your BS and MS at MIT, you were a researcher at Cerebras, and you're now a research scientist at Mosaic. Just before we go into Mosaic stuff, I'm actually very curious about Cerebras and, uh, just that space in general. Um, what are they doing that people should know about?

Abhinav: Yeah, absolutely.
Um, I think the biggest thing about Cerebras is that they're really building, you know, kind of the next-gen computing platform beyond, like, GPUs. Um, they're trying to build a system that uses an entire wafer, you know, rather than cutting up a wafer into smaller chips, and trying to train a model on that entire system, or actually more recently on many such wafers. Um, so it's, and it's really extraordinary. I think it's like the first time ever that kind of wafer-scale computing has ever really worked. And so it's a really exciting time to be there, trying to figure out how we can map ML workloads to work, um, on a much, much bigger chip.

Swyx: And do you use like [00:03:00] a different programming language or framework to do that? Or is that like..

Abhinav: Yeah, so I mean, things have changed a bit since I was there. I think, um, you can actually run just normal TensorFlow and PyTorch on there. Um, so they've built a kind of software stack that compiles it down. So it actually just kind of works naturally. But yeah.

Jonathan: Compiled versions of Python is a hot topic at the moment, with Mojo as well.

Swyx: And then Mosaic, you, you spearheaded the MPT-7B effort.

INTRO TO MOSAIC [00:03:20]

Abhinav: Uh, yeah. Yeah, so it's kind of like, it's been maybe six months, 12 months in the making. We kind of started working on LLMs sort of back in the summer of last year. Um, and then we came out with this blog post where we kind of profiled a lot of LLMs and saw, hey, the cost of training is actually a lot lower than what people might think. Um, and then since then, you know, being inspired by kind of, you know, Meta's release of the LLaMA models and lots of other open source work, we kind of started working towards, well, what if we were to release a really good kind of 7 billion parameter model? And that's what MPT is.

Alessio: You know, we mentioned some of the podcasts you had done, Jonathan. I think in one of them you mentioned Mosaic was not planning on building a model and releasing it, and obviously you eventually did. So what are some of the things that got you there? Obviously LLaMA, you mentioned, was an inspiration. You now have both the training and inference products that you offer. Was this more of a research challenge, in a way, that you wanted to do? Or how did the idea come to be?

Jonathan: I think there were a couple of things. So we still don't have a first-class model. We're not an OpenAI where, you know, businesses come to use our one great model. Our business is built around customers creating their own models. But at the end of the day, if customers are gonna create their own models, we have to have the tools to help them do that, and to have the tools to help them do that and know that they work, we have to create our own models to start. We have to know that we can do something great if customers are gonna do something great. And one too many people may have challenged me on Twitter about the fact that, you know, Mosaic claims all these amazing numbers, but, you know, I believe, not to, you know, call out Ross Wightman here, but, you know, I believe he said at some point, you know, show us the pudding. Um, and so Ross, you know, please let me know how the pudding tastes. But in all seriousness, like, I think there is something, this is a demo in some sense. This is to say we did this in 9.5 days for a really reasonable cost, straight through, no intervention, $200K.
Um, you can do this too.

Swyx: Uh, and just to reference the numbers that you're putting out: this last year you were making a lot of noise for training GPT-3 for under $450K, which was your, your initial estimate. Um, and then it went down to $100K, and Stable Diffusion, $160K going down to less than $50K as well.

Jonathan: So I will be careful about that $100K number. That's certainly the challenge I've given Abhi to hit. Oh, I wouldn't make the promise that we've hit it yet, but you know, it's certainly a target that we have. And I, you know, Abhi may kill me for saying this, I don't think it's crazy.

TRAINING AND CREATING THE MODELS [00:05:45]

Swyx: So we definitely want to get into like estimation math, right? Like what, what needs to happen for those big order-of-magnitude changes in infrastructure costs. But, uh, let's kind of stick to the MPT-7B story. Yeah. Tell us everything. Like you have, uh, three different models, one of them state of the art, essentially, on context length. Let's talk about the process of training them, the, uh, the decisions that you made. Um, I can go into, you know, individual details, but I just wanna let you rip.

Abhinav: Yeah, so I mean, I think, uh, we started off with the base model, which is kind of, for all practical purposes, a recreation of LLaMA-7B. Um, so it's a 7 billion parameter model trained on a trillion tokens. Um, and our goal was like, you know, we should do it efficiently. We should be able to do it kind of hands-free, so we don't have to babysit the runs as we're doing them. And it could be kind of a, a launching point for these fine-tuned models, and those fine-tuned models, you know, on the one hand they're kind of really fun for the community, like the StoryWriter model, which has like a 65,000-length context window, and you can even kind of extrapolate beyond that. Um, but they're, they're also kind of just inspirations, really. So you could kind of start with an MPT-7B base and then build your own custom model downstream. If you want a long-context code model, you could do that with our platform. If you wanted one that was for a particular language, you could do that too. But yeah, so we picked kind of the three variants, chat and instruct and story writer, just kind of like inspirations, looking at what people were doing in the community today. Yeah.

Alessio: And what's the beginning of the math to come up with, you know, how many tokens you wanna train it on? How many parameters do you want in a model? 7 billion and 30 billion seem to be kind of like two of the magic numbers going around right now.

Abhinav: Yeah, definitely. Definitely. Yeah, I think like there's sort of these scaling laws which kind of tell you how to best spend your training compute if that's all you cared about. So if you wanna spend $200,000 exactly in the most efficient way, there'd be a recipe for doing that. Um, and there we usually go by the Chinchilla laws. Now for these models, we actually didn't quite do that, because we wanted to make sure that people could actually run these at home and that they [00:07:30] were good for inference. So we trained them kind of beyond those Chinchilla points, so that we're almost over-training them. I think there's like a joke going on online that they're like "long boi", and that came up internally because we were training them for really, really long durations. So for that 7B model, the Chinchilla point might be 140 billion tokens.
Swyx: So longboi was the code name? So is it the training method, is it the scaling law that you're trying to coin, or is it the code name for the 64K model?

Jonathan: It was just an internal joke for training on way more tokens than you would via Chinchilla. We can coin it "longboi," and it really stuck. But just so you know, LLongboi is spelled with two Ls at the beginning — cause we wanted the LLaMA thing in there as well. Our darn CEO — we have to rein that guy in. I'm gonna take away his Twitter password at some point. But he had to let that one out publicly. And then I believe there was a YouTube video where someone happened to see it mentioned before the model came out and called it the "long G boy" or something like that. So now it's out there in the world. It's out there — like Sydney, you can't put it back in.

Swyx: There's a beautiful picture, which I think Naveen tweeted out, that shows a longboi on a whiteboard.

Jonathan: That was the origin of longboi. In fact, the legs of the llama were the two Ls of LLongboi.

DATA CHOICES AND THE IMPORTANCE OF REPETITION [00:08:45]

Swyx: Well, talk to me about your data choices. This was your passion project — what can you tell us about it?

Jonathan: Yeah, I think Abhi wanted to kill me by the end for trying to use all the GPUs on data and none of them on actually training the model. At the end of the day, we know that you need to train these models on [00:09:00] lots of data, but there are a bunch of things we don't know. Number one is what kinds of different data sources matter. The other is how much repetition really matters — and repetition can be broken down into how much quality versus quantity matter. Suppose I had the world's best 10 billion tokens of data: would it be better to train on that a hundred times, or better to train on a trillion tokens of low-quality, fresh data? Obviously there's a middle point in between that's probably the sweet spot. But how do you even know what good-quality data is? So, yeah — nobody knows this. And the more time I spent on it — we have a whole data team, so me and several other people — the more time we spent on this, I came away thinking, gosh, we know nothing. Gosh, if I were back in academia right now, I would definitely go and write a paper about this, because I have no idea what's going on.

Swyx: You would write a paper about it — I'm interested in such a paper; I haven't come across any that exists. Could you frame the central question of such a paper?

THE CENTRAL QUESTION: WHAT MIX OF DATA SETS SHOULD YOU USE? [00:10:00]

Jonathan: Yeah. The central question is: what mix of data sets should you use? Actually — you had mentioned my law school stuff — I went back to Georgetown Law, where I used to teach, in the midst of creating this model, and I sat down with a class of law students and asked them: I gave them our exact data sets, our data mixes, how many tokens we had, and I said, create the best data set for your model. They knew nothing about large language models — they just knew that data goes in and it's going to affect the behavior. And I said, create a mix — and they basically covered all the different trade-offs.
You probably want a lot of English-language [00:10:30] text to start with — you get that from the web. But do you want it to be multilingual? If so, you're gonna have a lot less English text, and maybe it'll be worse. Do you wanna have code in there? There are all these beliefs that code leads to models being better at logical reasoning, of which I've seen zero evidence. Replit really did make a great code model, but code being in the training set leading to better chain-of-thought reasoning on the language side — people claim this all the time, and I've still never seen any real evidence beyond the fact that one of the generations of the GPT-3 models supposedly started from code-davinci.

Swyx: Yes.

Jonathan: And so there's a belief that maybe that helped — but again, no evidence. There's a belief that spending a lot of time on good sources like Wikipedia is good for the model. Again, no evidence. At the end of the day, we tried a bunch of different data mixes, and the answer was that some are better or worse than others. We did find that the Pile, for example, was a really solid data mix, but there were stronger data mixes by our evaluation metrics — and I'll get back to the evaluation question in a minute, cuz that's a really important one. This data set called C4, which is what the original T5 model was trained on, is weirdly good. And when I posted about this on Twitter, Stella Biderman from EleutherAI mentioned it, and I think someone else did as well: C4 does really well in the metrics, and we have no idea why. We de-duplicated it against our evaluation set, so it's not like it memorized the data — it is just one web scrape from 2019. If you actually look at the T5 paper and see how it was pre-processed, it looks very silly. They removed anything that had the word "JavaScript" in it, because they didn't want to get those "enable JavaScript" [00:12:00] warnings. They removed anything with curly braces, cuz they didn't wanna get JavaScript in it. They looked at a list of bad words and removed anything containing those bad words. If you actually look at that list, words like "gay" are on it — so it is a very problematic list of words. But that was the cleaning that leads to a data set that seems to be unbeatable. So that, to me, says we know nothing about data. We in fact used a data set called mC4 as well, where they supposedly did the same pre-processing as C4, just on more web crawls — and the English portion is much worse than C4, for reasons that completely escape us. So in the midst of all that, I basically set two criteria. One was that I wanted to be at least as good as mC4 English — make sure we're not making things actively worse; mC4 English is a nice step up over other stuff that's out there. And two was to go all in on diversity after that: making sure we had some code, some scientific papers, Wikipedia — because people are gonna use this model for all sorts of different purposes. But I think the most important thing — and I'm guessing Abhi has a million opinions on this — is that you're only as good as your evaluation. And we don't know how to evaluate models for the kind of generation we ask them to do. So past a certain point, you have to kinda shrug and say, well, my evaluation's not even measuring what I care about. So let me just make reasonable choices.
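For readers who haven't looked at the T5 paper, the C4 cleaning Jonathan describes really is a handful of blunt heuristics. A minimal sketch of that style of filter — the rules below paraphrase the paper's, and `BAD_WORDS` is a stand-in for the blocklist it used:

```python
# C4-style heuristic document filter, as described above. This is a
# simplified paraphrase of the T5 preprocessing, not the exact pipeline.
BAD_WORDS: set[str] = set()  # stand-in: fill from a blocklist of "bad" words

def keep_document(text: str) -> bool:
    lower = text.lower()
    if "javascript" in lower:        # drop pages with JS-warning boilerplate
        return False
    if "{" in text or "}" in text:   # drop anything containing curly braces
        return False
    if any(word in lower for word in BAD_WORDS):
        return False                 # drop docs containing listed words
    return True

docs = ["Plain prose survives the filter.",
        "function f() { return 1; }",
        "Please enable JavaScript to view this page."]
print([d for d in docs if keep_document(d)])
# ['Plain prose survives the filter.']
```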
EVALUATION CHALLENGES OF LLMs [00:13:00]

Swyx: So you're saying MMLU, BIG-bench, that kind of stuff is not convincing for you?

Jonathan: A lot of this stuff is — you've got two kinds of tasks. Some of these are more multiple-choice-style tasks, where there is a right answer: either you ask the model to spit out A, B, C, or D, or, if you're more [00:13:30] sophisticated, you look at the perplexity of each possible answer and pick the one that the model is most likely to generate. But we don't ask these models to do multiple-choice questions — we ask them to do open-ended generation. There are also open-ended generation tasks, like summarization, where you compare using things like a BLEU score or a ROUGE score, which are known to be very bad ways of comparing text. At the end of the day, there are a lot of great summaries of a paper; there are a lot of great ways to do open-form generation. And so humans are, to some extent, the gold standard — but humans are very expensive. It turns out we can't put them into our eval pipeline and just have them look at our model every 10 minutes. Not yet, anyway. Maybe soon. Are you volunteering, Abhi?

Abhinav: I just know we have a great eval team who's helping us build new metrics — so, if they're listening...

Jonathan: But evaluation of large language models is incredibly hard, and I don't think any of these metrics really truly capture what we expect from the models in practice.
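To make the multiple-choice scoring Jonathan mentions concrete, here's a minimal sketch of "pick the answer the model finds most likely," scored as total log-probability of the answer tokens given the question. GPT-2 is used purely as a small example model, and the tokenization-boundary handling is simplified compared to a real eval harness:

```python
# Minimal multiple-choice eval via answer log-probability.
# (Sketch only: real harnesses handle tokenization boundaries,
# length normalization, batching, etc.)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def answer_logprob(question: str, answer: str) -> float:
    q_len = tok(question, return_tensors="pt").input_ids.shape[1]
    full = tok(question + " " + answer, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full).logits
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)  # row i predicts token i+1
    targets = full[0, 1:]
    # sum log-probs over just the answer tokens
    return sum(logprobs[i, targets[i]].item()
               for i in range(q_len - 1, full.shape[1] - 1))

choices = ["Paris", "London", "Berlin"]
question = "Question: What is the capital of France? Answer:"
print(max(choices, key=lambda a: answer_logprob(question, a)))  # expected: Paris
```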
Swyx: Yeah. And we might draw wrong conclusions. There's been a debate recently about the emergence phenomenon — whether or not it's a mirage, right? I don't know if you guys have opinions about that.

Abhinav: Yeah, I've seen that paper, and plots from different people suggesting, well, maybe it's just an artifact of log scaling, or of the metrics — we're measuring accuracy, which is this very harsh zero-one thing, rather than something more continuous. But similar to what Jonathan was saying about evals, there's one issue of just the diversity of eval metrics: when we put these models up — even the chat ones, the instruct ones — people are using them for such a variety of tasks that there's almost no way to measure every individual dimension ahead of time. And then also, particularly at the 7B scale, [00:15:00] these models still are not super great at the really hard tasks — some of the hardest tasks in MMLU and such. Sometimes they're barely scoring above random chance on the really, really hard tasks. So potentially, as we aim for higher- and higher-quality models, some of these metrics will become more useful to us. But we kind of had to develop MPT-7B flying a little bit blind on how it would come out — just going off of a small set of common-sense reasoning tasks, and of course comparing those metrics against other open source models.

Alessio: I think fast training and inference was one of the goals, right? So there's always the trade-off between doing the hardest thing and doing all the other things quickly.

Abhinav: Yeah, absolutely. I mean, even at the 7B scale, people are trying to run these things on CPUs at home, people are trying to port these to their phones. Basically prioritizing the fact that the small scale would lead to more adoption — that was a big thing going on.

Alessio: Yeah, and you mentioned FlashAttention and FasterTransformer as two of the core things. Can you maybe explain some of the benefits, and maybe why other models don't use them?

FLASH ATTENTION [00:16:00]

Abhinav: Yeah, absolutely. So FlashAttention is basically a faster implementation of full attention — a mathematical equivalent — developed by some of our collaborators at Stanford, at Hazy Research.

Jonathan: What does the name Hazy Research mean?

Abhinav: I actually have no idea.

Swyx: I have no clue. All these labs have fun names; I always like the stories behind them.

Abhinav: Yeah. We really, really liked FlashAttention — we had it integrated into our repo as [00:16:30] early as September of last year. It really helps with training speed and also inference speed, and we baked it into the model architecture. This is unique among the other Hugging Face models you see out there: with ours, you can toggle between normal torch attention, which will work anywhere, and FlashAttention, which will work on GPUs, right out of the box. That way, you get almost a 2x speedup at training time, and somewhere between a 50% and 100% speedup at inference time as well. So again, we really, really wanted people to use these and feel an improvement, and we have the team to deliver that.

Swyx: Another of your choices was ALiBi position encodings, which people are very interested in. A lot of people just take position encodings as a given, but there's actually a lot of active research there, and honestly it's quite opaque — people don't know how to evaluate encodings, including position encodings. Could you explain ALiBi and your choice of it?

Abhinav: Yeah, for sure. ALiBi and the FlashAttention thing all kind of go together in interesting ways — even with training stability, too. What ALiBi does, really, is eliminate the need to have positional embeddings in your model. Previously, if you were the token at position one, you had a particular embedding that you'd add, and you couldn't really go beyond your max position, which usually was about 2,000. With ALiBi, you get rid of that: instead, you just add a bias to the attention map itself, which is like a slope. And if at inference time you wanna go much, much larger, you just stretch that slope out over a longer number of positions. And because the slope is continuous and interpretable, it all works out. Now, one of [00:18:00] the funny things we found is that FlashAttention saved so much memory and improved performance so much that, even as early as last year, we were profiling models with very long context lengths — up to the 65K that you've seen in the release — but we never really got around to using it, cuz we didn't really know what we might use it for, and also it was very hard to train stably. So we started experimenting with ALiBi integration, and then we suddenly found that, oh wow, stability improves dramatically, and we can actually train with ALiBi at long context lengths. That's how we got to our StoryWriter model, where we can stably train these models out to very, very long context lengths and use them performantly.

Jonathan: Yeah.
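For the curious, a minimal sketch of the ALiBi idea described above — no positional embeddings, just a per-head linear distance penalty added to the attention scores. The slope schedule follows the geometric one from the ALiBi paper for power-of-two head counts; the head count and sequence length here are purely illustrative:

```python
# ALiBi-style attention bias: score(i, j) gets -slope_h * (i - j)
# for keys j at or before query i. To extend context at inference,
# you just build a bigger bias matrix with the same slopes.
import torch

def alibi_bias(seq_len: int, num_heads: int) -> torch.Tensor:
    # geometric slopes: 1/2^(8/H), 1/2^(16/H), ... (power-of-two H)
    slopes = torch.tensor([2 ** (-8.0 * (h + 1) / num_heads)
                           for h in range(num_heads)])
    pos = torch.arange(seq_len)
    distance = (pos[:, None] - pos[None, :]).clamp(min=0)  # causal distance
    return -slopes[:, None, None] * distance   # shape: (heads, seq, seq)

bias = alibi_bias(seq_len=5, num_heads=2)
# usage: attn = softmax(q @ k.transpose(-2, -1) / d**0.5 + bias + causal_mask)
print(bias[0])
```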
Swyx: And it's also why you don't have a firm number. Most people have a firm number on the context length; you're just like, eh, 65 to 85?

Abhinav: Oh yeah, there's a big debate: is it 64K or 65K? 65K plus.

Swyx: Just do powers of two. So 64 isn't, you know...

Jonathan: Right, right. Yeah. But we could — I mean, technically the context length is infinite. If you give me enough memory, we can just keep going forever. We had a debate over what number to say was the longest we could handle. We picked 84K as the longest I'd expect people to see easily in practice. But we played around with even longer than that, and I don't see why we couldn't go longer.

Swyx: Yeah. And for those who haven't read the blog post: you put The Great Gatsby in there and asked it to write an epilogue, which seemed pretty impressive.

Jonathan: Yeah, there are a bunch of epilogues floating around internally at Mosaic. That one wasn't my favorite — I think we all have our own favorites — but there are a bunch of really, really good ones. There was one where it's Gatsby's funeral, and then Nick starts talking to Gatsby's ghost, and Gatsby's father shows up, and then he's [00:19:30] at the police station with Tom. It was very plot-heavy — this is what comes next. And a bunch of them were just very Fitzgerald-esque — beautiful writing. It was cool to see that, wow, the model seemed to actually be working with all this input. It's exciting; you can think of a lot of things you could do with that kind of context length.

FINE-TUNING FOR CREATIVITY [00:19:50]

Swyx: Is there a trick to fine-tuning for a creative task rather than a factual task?

Jonathan: I don't know what it is, but probably. I think the person who did this, Alex, did fine-tune the model explicitly on books — the goal was to try to get a model that was really a story writer. But beyond that, I'm not entirely sure. Actually, it's a great question — I'll ask you back: how would you measure that?

Swyx: Uh, God — human feedback is the solve to all things. I think there is a labeling question, right? In computer vision, we had a really, really good episode with Roboflow on the Segment Anything Model, where you actually start with human feedback on something like 0.5% of the overall final labels, but then you sort of augment them and then fully automate them — which I think could be applied to text. It seems intuitive, and probably people like Snorkel have already raced ahead on this stuff, but I just haven't seen it applied in the language domain yet.

Jonathan: I mean, there are a lot of things that seem like they make a lot of sense in machine learning that never work, and a lot of things that make zero sense that seem to work. So I've given up trying to even predict. Until I see the data or try it, I just kind of shrug my shoulders and hope for the best. Bring data or else, right?
Yeah, [00:21:00] exactly.

Alessio: On the fine-tuning on books: Books3 is one of the big data sets, and there was the whole Twitter thing about takedown comments. You know, I used to be a community moderator at Genius.com, and we ran into a lot of this — like, if you're explaining lyrics, do you have the right to redistribute the lyrics? I know you ended up changing the license on the model from commercial-use-permitted.

Swyx: I'm not sure they did, in the end.

Jonathan: So we flipped it for about a couple hours.

Swyx: Okay — can we introduce the story from the start, just for people who are out of the loop?

Jonathan: Yeah, I can tell the story very simply. The Books3 data set does contain a lot of books, and it is, as I discovered, a data set that provokes very strong feelings from a lot of folks — well, from one person in particular, in fact. And that's about it. But it turns out one person who wants a lot of attention can get enough attention that we're talking about it now. So we had a discussion internally after that conversation, and we talked about flipping the license. Very late at night, I thought, you know, maybe it's a good thing to do — and then decided, actually, probably better to just stand pat; the license is still Apache 2.0. And one of the conversations we had was — we hadn't thought about this, cuz we had our heads down — the Hollywood writers' strike took place basically the moment we released the model. We were releasing a model that could do AI-generated creative content, and that is one of the big sticking points during the strike.

Swyx: Oh, the optics are not good.

Jonathan: The optics aren't good, and that's not what we want to convey. This is really a demo of the ability to do really long sequence lengths, and — boy, [00:22:30] that's not timing that we appreciated. So we talked a lot internally that night: we've had time to read the news, we've had time to take a breath, and we don't really love this. We came to the conclusion that it was better to just leave it as it is and learn the lesson for the future. But certainly one of my takeaways is that there's a societal context around this stuff that's easy to forget when you're in the trenches just trying to get the model to train. And in hindsight, I might've gone with a different thing than a story writer. I might've gone with a coder, because we seem to have no problem putting programmers out of work with these models.

Swyx: Oh yeah. Please, please — take away this stuff from me.

OPEN SOURCE LICENSES AND ETHICAL CONSIDERATIONS [00:23:00]

Jonathan: Right. Really, the copyright concerns I leave to the lawyers. If I learned one thing teaching at a law school, it was that I'm not a lawyer, and all this stuff is complicated — especially because open source licenses were not designed for this kind of world. They were designed for a world of forcing people to be more open, not forcing people to be more closed. And I think that was part of the impetus here: trying to use licenses to make things more closed, which is against the grain of the open source ethos.
So that struck me as a little bit strange. But I think the most important part is that we wanna be thoughtful and we wanna do the right thing. In that case — with all that interesting licensing fun you saw — I hope it's clear we were trying to be really thoughtful about this, and it's hard. I learned a lot from that experience.

Swyx: There's also, I think, an open question of fair use, right? Is training on words fair use? Because you don't have a monopoly on words — but on certain arrangements of words you do. And who is to say how much is memorization by a model, versus actually learning and internalizing, and then [00:24:00] sometimes happening to land at the same result?

Jonathan: If I've learned one lesson, it's that I'm not gonna be the person to answer that question. And so my position is: we will try to make this stuff open and available, and let the community make decisions about what they are or aren't comfortable using. At the end of the day, it still strikes me as a little bit weird that someone is trying to use these open source licenses to close the ecosystem, not to make things more open. That's very much against the ethos of why these licenses were created.

Swyx: So the official Mosaic position, I guess, is: before you use MPT-7B for anything commercial, check with your own lawyers — trust your own lawyers, not Mosaic's lawyers.

Jonathan: Yeah. Our lawyers are not your lawyers. Make the best decision for yourself. We've tried to be respectful of the content creators, and at the end of the day, this is complicated — it's new law that hasn't been established yet. But it's a place where we're gonna continue to try to do the right thing. And one of the commenters — I really appreciated this — said, you know, they're trying to do the right thing, but nobody knows what the right thing even is. I guess the most "right" thing would've been to literally not release a model at all, but I don't think that would've been the best thing for the community either.

Swyx: Cool. Well, thanks — well handled. We had to cover it, just cause...

Jonathan: Oh yes, no worries. It's a big piece of news, and it's been on my mind a lot.

TRAINING STABILITY ENHANCEMENT [00:25:15]

Swyx: Yeah, well, you've been very thoughtful about it. Okay — so a lot of these other ideas, in terms of architecture, FlashAttention, ALiBi, and the data sets, were contributions from the rest of the — let's call it the open community of [00:25:30] machine learning advancements. But Mosaic in particular had some stability improvements to mitigate loss spikes, quote-unquote, which I took to mean your existing set of tools. I don't wanna put words in your mouth, but when you say things like "please enjoy my empty logbook" — how much of an oversell is that? How much is marketing versus reality?

Abhinav: Oh yeah, that one's real. It's fully end-to-end.

Swyx: So what specific features of MosaicML make that possible?

Abhinav: Totally. Yeah, I'll break it into two parts. One is training stability, right? Knowing that your model is basically gonna get to the end of the training without loss spikes.
At the 7B scale, for some models, it's not that big of a deal. But as you train for longer and longer durations, we found it's trickier and trickier to avoid these loss spikes. So we actually spent a long time figuring out what we could do about our initialization, our optimizers, and the architecture to basically prevent these loss spikes. Even in our training run, if you zoom in, you'll see small intermittent spikes, but they recover within a few hundred steps — and that's the magical bit. Our line-one defense is that we recover from loss spikes just naturally, right? Our line-two defense was that we used determinism and really smart resumption strategies, so that if something catastrophic happened, we could resume very quickly — a few batches before — and apply some interventions. So we had those preparations as a plan B, but we didn't have to use them at all for the MPT-7B training. That was a lucky break. And the third part of getting all the way to the empty logbook is having the right training infrastructure. [00:27:00] This is basically one of the big selling points of the platform: when you try to train these models on hundreds of GPUs — not many people outside of deep industry research know this, but — the GPUs fail a lot. I would say almost once every thousand A100-days. So for us, on a big 512-GPU cluster, the run will fail basically every two days. This is either due to GPUs falling off the bus — that's a real error we see — or networking failures, or something like that. In those situations, what people have normally done is have an on-call team sitting round the clock, 24/7, on Slack for when something goes wrong. They'll try to inspect the cluster, take out the nodes that are broken, and restart the run — and it's a huge pain. We ourselves did this for a few months. And as a result, because we're building such a platform, we step by step automated every single one of those processes. So now, when a run fails, we have this automatic watchdog that's watching: it'll stop the job, test the nodes, cordon any that are broken, and relaunch. And because our software is all deterministic and has fast resumption, it just continues on gracefully. So within that logbook, you can see that sometimes — I think maybe at 2:00 AM or something — the run failed, and within a few minutes it was back up and running, and all of us were just sleeping peacefully.

Jonathan: I do wanna say that was hard-won. Certainly this is not how things were going many months ago. For hardware failures, we had on-calls who were getting up at two in the morning to figure out which node had died for what reason, restart the job, and cordon the node. [00:28:30] We were seeing catastrophic loss spikes really frequently, even at the 7B scale, that were just completely derailing runs. And so this was step by step ratcheting our way there — as Abhi said, to the point where many models are training at the moment, and I'm sitting here in the studio not worrying one bit about whether the runs are gonna continue. Yeah.
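A minimal sketch of the automated recovery loop Abhinav describes — watch the run, and on failure stop the job, cordon unhealthy nodes, and relaunch from the latest checkpoint with deterministic resumption. Every object and method here is a hypothetical stand-in for a cluster manager, not MosaicML's actual platform API:

```python
# Hypothetical watchdog for a multi-node training job. The job/cluster
# interfaces are illustrative stand-ins; the point is the control flow.
import time

def watchdog(job, cluster, poll_seconds=60):
    while not job.finished():
        if job.failed():                         # e.g. a GPU "fell off the bus"
            job.stop()
            for node in cluster.nodes(job):
                if not node.passes_diagnostics():
                    cluster.cordon(node)         # remove broken node from pool
            # Deterministic resumption: same data order and RNG state, so
            # the relaunched run picks up a few batches back, exactly.
            job = cluster.relaunch(job, from_checkpoint="latest")
        time.sleep(poll_seconds)
    return job
```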
Swyx: I'm not so much of a data-center hardware kind of guy, but isn't there existing software to do this for CPUs? What's different about this domain — does that question make sense at all?

Jonathan: Yeah. When I think back to all the Google fault-tolerance papers I read as an undergrad or grad student about building distributed systems, a lot of it is that each CPU is doing, say, an individual unit of work. You've got a database that's distributed across your cluster; you wanna make sure that one CPU failing — or one machine failing — can't delete data, so you replicate it. You have protocols like Paxos, where you've literally got state machines that are replicated, with leaders and backups and things like that. In this case, you're performing one giant computation where you cannot afford to lose any node. If you lose a node, you lose model state; if you lose a node, you can't continue. It may be that in the future we create new versions of a lot of our distributed training libraries that do have backups, where data is replicated, so that if you lose a node, you can detect which node you've lost and just continue training without having to stop the run, pull from a checkpoint, and restart on different hardware. But for now, we're certainly in a world where if anything dies, that's the end of the run, and you have to go back and recover from it. [00:30:00]

DATA READINESS & TRAINING PREPARATION [00:30:00]

Abhinav: Yeah, a big phrase there is "synchronous data parallelism." We're basically saying that on every step, every GPU is gonna do some work; they stay in sync with each other, average their gradients, and continue. Now, there are algorithmic techniques to get around this — you could say, oh, if a GPU dies, just forget about it; all the data it was gonna see, we'll just forget about and not train on. But we don't like to do that currently, because it makes us give up determinism and things like that. Maybe in the future, as we go to extreme scales, we'll start looking at some of those methods. But at the current time, we want determinism — we wanted a run that we could perfectly replicate if we needed to. And the goal was to figure out how to run it on a big cluster without humans having to babysit it.
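To make "synchronous data parallelism" concrete, here is a minimal plain-Python sketch of one step: each worker computes gradients on its shard of the batch, the gradients are averaged (the all-reduce), and every worker applies the identical update. The worker interface is a hypothetical stand-in — real systems do this with torch.distributed and NCCL:

```python
# One synchronous data-parallel step, in miniature. Because every
# worker must contribute before the average, losing one node stalls
# the whole step -- which is why a node failure ends the run.
def sync_data_parallel_step(workers, global_batch, lr=0.1):
    shards = [global_batch[i::len(workers)] for i in range(len(workers))]
    grads = [w.gradient(shard) for w, shard in zip(workers, shards)]
    # "all-reduce": average gradients element-wise across workers
    avg_grad = [sum(g) / len(grads) for g in zip(*grads)]
    for w in workers:
        w.apply_update(avg_grad, lr)  # identical update on every worker
```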
Alessio: So, as you mentioned, these models are kind of the starting point for a lot of your customers. You have an inference product, you have a training product, and you previously had a Composer product that's now rolled into a superset of it, the LLM Foundry. How are you seeing that change from the usual MLOps stack — how people trained things before, versus now starting from one of these MPT models? What should teams think about as they come to you and start their journey?

Jonathan: So I think there's a key distinction to make here, which is that when you say "starting from MPT models," you can mean two things. One is actually starting from one of our checkpoints, which I think very few of our customers are actually going to do; and one is starting from our configuration. You can look at our friends at Replit for that: MPT was in progress when Replit [00:31:30] came to us and said, hey, we need a 3-billion-parameter model by next week on all of our data. We're like, well, here you go — this is what we're doing, and if it's good enough for us, hopefully it's good enough for you. And that's basically the message we wanna send to our customers. MPT is basically clearing a path all the way through, where they know they can come bring their data, they can use our training infrastructure, they can use all of our orchestration and other tools that Abhi just mentioned for fault tolerance, they can use Composer — which is still at the heart of our stack — and then the LLM Foundry is really the specific model configuration. They can come in and know that thing is gonna train well, because we've already done it multiple times.

Swyx: Let's dig in a little bit more on what people should have ready before they come talk to you — data, architecture, the evals they're looking at, et cetera.

Abhinav: Yeah, I mean, we'll accept customers at any stage in their pipeline. There's an archetype of people who have built products around some of these API companies and reach a maturity level where it's like, we want our own custom models now — either for the purpose of reducing cost (our inference service is quite a bit cheaper than using APIs) or because they want some kind of customization that you can't really get from the other API providers. I'd say the most important thing to have before training a big model is good eval metrics — some kind of score that you can track as you're training your models and scaling up, that tells you you're progressing. And it's really funny: a lot of times, customers will be really excited about training the models, right? It's really fun to launch jobs on hundreds of GPUs — it's super fun. But then they'll be like, wait, what are we gonna measure? Not just the training loss, right? It's gotta be more than that. [00:33:00] So eval metrics are a good prerequisite. Also your data: coming with your own pre-training or fine-tuning data, and having a strategy to clean it — or we can help clean it too; we're building a lot of tooling around that. And once you have those two kinds of inputs, and the budget that you want, we can pretty much walk you through the rest of it — that's kind of what we do. A while back, for example, we helped build CRFM's model for biomedical language.

Jonathan: That's the Center for Research on Foundation Models — spelling it out for people.

Abhinav: No, absolutely — you've done more of these than I have. Basically, we can help you figure out what models to train as you scale up, so that when you go for your big run — your hero run — it's predictable. You can feel confident that it's gonna work, and you'll know what quality you're gonna get out before you have to spend a few hundred thousand dollars.

DYNAMIC REAL-TIME MODEL EVALUATION [00:34:00]

Alessio: Reza from Replit was on the podcast last week, and they had HumanEval and then AmjadEval, which is, like, vibe-based.
Jonathan: And I do think the vibe-based eval cannot be underrated. I mean, at the end of the day, we did stop our models and do vibe checks. As we monitored our models, one of our evals was just a bunch of prompts: we would watch the answers as the model trained and see if they changed — cuz honestly, I don't really believe that any of these eval metrics capture what we care about. I think one of our prompts was to suggest games for a three-year-old and a seven-year-old that would be fun to play. That was a lot more [00:34:30] valuable to me, personally: to see how that answer evolved and changed over the course of training. And HumanEval, just to clarify for folks, is an automated evaluation metric — there are no humans in it at all. It's really badly named. I got so confused the first time someone brought it to me; I was like, no, we're not bringing humans in. It's automated — they just gave it a bad name — and there are only a hundred-some problems in it or something.

Abhinav: Yeah, and it's for code specifically, right?

Jonathan: Yeah. It's a weird, confusing name that I hate. But when other metrics are called HellaSwag, you just gotta roll with it at this point.

Swyx: You're doing live evals now. One of the tweets I saw from you was that it's important that you do it parallelized. Maybe explain what you guys did?

Abhinav: Yeah, for sure. So with LLM Foundry, there are many pieces to it. There's obviously the core training piece, but there are also tools for evaluating models, and we have one of — I think it's the fastest — evaluation frameworks. It's multi-GPU compatible, it runs with Composer, and it can support really, really big models. Basically, our framework runs so fast that even as our models are training, we can run these metrics live during the training. So if you have a dashboard like Weights & Biases, you can watch all these eval metrics — we have like 15 or 20 of them, honestly, that we track during the run — and they add negligible overhead. So we can actually watch as our models train and feel confident. It's not like we wait until the very last day to test whether the model is good or not.

Jonathan: That's amazing. Yeah. And I love that we've gotten this far into the conversation and we still haven't talked about efficiency and speed — those are usually our two watchwords at Mosaic. That says we're doing a lot of other cool stuff, but at the end of the day, cost comes first. If you can't afford it, it doesn't matter. And so getting things down cheap enough that we can monitor in real time, getting things down cheap enough that we can even do it in the first place — that's the basis for everything we do.
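A minimal sketch of the "eval while you train" loop Abhinav describes: every N steps, score a handful of metrics and push them to a dashboard, so quality is visible throughout the run. The logger and metric callables are illustrative stand-ins, not LLM Foundry's actual interface:

```python
# In-training evaluation callback, in miniature. `model`, the metric
# functions, and `log` are hypothetical stand-ins (e.g. `log` could be
# a Weights & Biases run's log method).
def train_with_live_eval(model, batches, eval_metrics, log, eval_every=1000):
    for step, batch in enumerate(batches):
        train_loss = model.step(batch)
        if step % eval_every == 0:
            # cheap enough to run live; each fn scores the current model
            scores = {name: fn(model) for name, fn in eval_metrics.items()}
            log({"step": step, "train_loss": train_loss, **scores})
```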
OPEN SCIENCE FOR AFFORDABLE AI RESEARCH [00:36:00]

Alessio: Do you think a lot of the questions we have around which data sets we should use, and things like that, exist just because training was so expensive before, so we haven't run enough experiments to figure them out? And is that one of your goals — making it cheaper so that we can actually get the answers?

Jonathan: Yeah, that's a big part of my personal conviction for being here. I think I'm still, in my heart, the second-year grad student who was jealous of all his friends who had GPUs when he didn't, and who couldn't train any models except on his laptop. I mean, the lottery ticket experiments began on my laptop — I had to beg for one K80 so that I could run MNIST. And I'm still that person deep down in my heart. I'm a believer that if we wanna do science and really understand these systems — understand how to make them work well, how they behave, what makes them safe and reliable — we need to make it cheap enough that we can actually do science. And science involves running dozens of experiments. When I finally cleaned out my GCS bucket from my PhD, I deleted a million model checkpoints. I'm not kidding — there were over a million model checkpoints. That is the kind of science we need; that's just what it takes. In the same way that if you're in a biology lab, you don't just grow one cell and say, eh, the drug seems to work on that cell. There's a lot more science you have to do before you really know.

Abhinav: Yeah, and I think one of the special things about Mosaic's [00:37:30] position is that we have so many customers all trying to train models that we have the incentive to devote all these resources and time to doing this science. Because when we learn which pieces actually work and which ones don't, we get to help many, many people, right? That kind of aggregation process is really important for us. I remember way back there was a paper from Google that investigated batch sizes or something like that — a paper that must have cost a few million dollars in experiments — and it was just like, wow, what a benefit to the whole community. Now we all get to learn from that, and we don't have to spend those millions of dollars anymore. So I think Mosaic's science — the insights we get on data, on pre-training, on architecture, on all these different things — that's why customers come to us.

Swyx: Yeah, you guys did some really good stuff on PubMedGPT as well — that's the first time I heard of you. And that was also published to the community.

Abhinav: Yeah, that one was really fun. We were like, well, no one's really trained fully-from-scratch domain-specific models before — what if we just did a biomed one, would it still work? And yeah, it did. We'll probably have some follow-up soon, I think, later this summer.

Jonathan: Yes, stay tuned on that. But I will say, just in general, it's a really important value for us to be open. In some sense, we have no incentive not to be open: we make our money off of helping people train better, and there's no cost to us in sharing what we learn with the community. Cuz really, at the end of the day, we make our money off of those custom models, great infrastructure, and putting all the pieces together. That's honestly where the Mosaic name came from.
Not off of, like, oh, we've got this one cool secret trick [00:39:00] that we won't tell you, or closing up. In the past couple weeks I've talked to my friends at places like Brain — or what used to be Brain, now Google DeepMind. RIP, Brain. I spent a lot of time there, and it was a really formative time for me, so I miss it. But I kind of feel like we're one of the biggest open research labs left in industry, which is a very sad state of affairs, because we're not very big.

Swyx: Can you say how big the team is, actually?

Jonathan: Yeah, we're about 15 researchers. So we're tiny compared to the huge army of researchers I remember at Brain, or FAIR, or DeepMind back in their heydays. But everybody else has kind of closed up and isn't saying very much anymore. And we're gonna keep talking, we're gonna keep sharing, and we will try to be that vanguard to the best of our ability. We're very small, and I can't promise we're gonna do what those labs used to do in terms of scale or quantity of research, but we will share what we learn, and we will try to create resources for the community. I dunno — I believe in openness fundamentally. I'm an academic at heart, and it's sad to me to watch that go away from a lot of the big labs.

THE OPEN APPROACH [00:40:15]

Alessio: We just had a live pod about the "no moat" memo that came out, and it was one of the first times I really dove into LoRA and some of these new techniques. How are you thinking about what it's gonna take for the open approach to really work? Obviously, today, GPT-4 is still the state-of-the-art model for a [00:40:30] lot of tasks.
Do you think some of the innovation and training methods we have today are enough, if enough people like you are running these open research groups? Or do you think we still need a step-function improvement there?

Jonathan: I think one important point here is the idea of coexistence. When you look at — I don't know — who won, Linux or Windows? The answer is yes. Microsoft bought GitHub and has a Windows Subsystem for Linux. Linux runs a huge number of our servers, and Microsoft is still a wildly profitable company — probably the most successful tech company right now. So who won, open source or closed source? Yes. And I think that's a similar world we're gonna be in here, where it's gonna be different things for different purposes. I would not run Linux on my laptop, personally, cuz I like connecting to wifi and printing things — but I wouldn't run Windows on one of my servers. And so what we're seeing with a lot of our customers is: do they choose OpenAI or Mosaic? Yes. There's a purpose for each of these. You have to send your data off to somebody else with OpenAI's models — that's a risk. GPT-4 is amazing, and I would never promise someone that if they come to Mosaic, they're gonna get a GPT-4-quality model; that's way beyond our means, and not what we're trying to do anyway. But there's also a whole world of domain-specific models, context-specific models, that are really specialized, proprietary, trained on your own data, that can do things you could never do with one of these big models. You can customize in crazy ways. GPT-4 is not gonna hit a 65K context length for a very long time, cuz they've already trained that [00:42:00] model — and they haven't even released the 32K version yet. So we can do things differently by being flexible. I think the answer to all of this is yes — but we can't let the open source ecosystem disappear, and that's the scariest thing for me. I hear a lot of talk in academia about, you know, whatever happened to that academic research field called information retrieval? Well, in 1999 it disappeared. Why? Because Google came along, and who cares about information retrieval research when you have a Google-scale, web-scale database? So there's a balance here — we need to have both.

Swyx: I wanna applaud you loudly here — we'll maybe edit in a little crowd applause. Cuz I think that, as a research community, as people interested in progress, we need to hear these things, instead of just seeing marketing papers advertising GPT-4.

Jonathan: Yeah — to get on my soapbox for 10 more seconds...

Swyx: Go ahead.

Jonathan: When I talk to policymakers about the AI ecosystem, the usual fear I bring up is that innovation will slow because of a lack of openness. I've been complaining about this for years, and it's finally happened. Why was Google sharing these papers? Why was OpenAI sharing these papers? There are a lot of reasons — I have my own beliefs — but it's not something we should have taken for granted that everybody was sharing the work they did. I think we took it for granted for a while, and now it's gone, and I think it's gonna slow down the pace of progress. In a lot of cases, each of these labs has a bit of a monoculture, and being able to pass ideas [00:43:30] back and forth was a lot of what kept scientific progress moving. So it's imperative — not just for the open source community and for academia, but for the progress of technology — that we have a vibrant open source research community.

THE FUTURE OF MOSAIC [00:44:11]

Swyx: That's a preview of the ecosystem commentary that we're gonna do, but I wanna close out some stuff on Mosaic. You launched a bunch of stuff this month — a lot of stuff. I was listening to you on Gradient Dissent and other podcasts we know and love, and you said you were not gonna do inference. And then last week you were like, here's MosaicML Inference. Oops. So maybe, at a high level: what was MosaicML, and what is it growing into? How do you conceptualize this?

Jonathan: Yeah — and I will say, when Gradient Dissent was recorded, we weren't doing inference and had no plans to do it; it took a little while for the podcast to get out. In the meantime — basically, one thing I've learned at a startup, and I'm sure Abhi can comment on this as well: focus is the most important thing. We have done our best work when we've been focused on doing one thing really well, and our worst work when we've tried to do lots of things. So we didn't want to do inference — we didn't want to have had to do inference. And at the end of the day, our customers were begging us to do it, because they wanted a good way to serve the models, and they liked our ecosystem. And so, in some sense, we got dragged into it kicking and screaming.
We're very excited to have a product; we're going to put our best foot forward and make something truly amazing. But that's something we were reluctant to do. Our customers convinced us it would be good for our business — it's been wonderful for business, and we're gonna put everything into this. But back when Gradient Dissent came out — or when we recorded it — I was thinking, oh God, focus is the most important thing. I've learned that the hard way multiple times at Mosaic; Abhi can tell you, I've made a lot of mistakes by not focusing enough. And boy, inference is a whole second thing — a whole different animal from training. At the end of the day, when we founded the company, our belief was that inference was relatively well served at the time: there were a lot of great inference companies out there. Training was not well served, especially efficient training, and we had something to add there. I think we've discovered that, as the nature of the models changed, the nature of what we had to add to inference changed a lot, and there became an opportunity for us to contribute something. But that was not the plan. Now we do wanna be the place people come when they wanna train these big, complex, difficult models and know that it's gonna go right the first time, and that they'll have something they can serve right away. Really, the Replit example: with 10 days to go, saying, hey, can you please train that model? And three or four days later the model was trained, and we were just having fun doing interesting fine-tuning work on it for the rest of the 10 days. That also requires good inference.

Swyx: That's true — running evals and fine-tuning. I'm just putting my business hat on — and Alessio as well — I've actually had fights with potential co-founders about this: about the primary business being training, right? Like, essentially a one-time cost.

Jonathan: Who told you it was a one-time cost? Who told you that?

Swyx: No, no, no. Correct me.

Jonathan: Yeah, let me correct you in two ways. As our CEO Naveen would say if he were here: when you create version 1.0 of your software, do you then fire all the engineers? Of course not. MPT has a thousand different things we wanted to do that we never got to. So, you know, there will be future models.

Abhinav: And the data it's been trained on is also changing over time, right? If you wanna ask anything about, say, May of 2023, we'll have to retrain it further, and so on. And I think this is especially true for customers who run the kinds of things that need to be up to date on world knowledge. The other thing I'd say is that the models we have today are certainly not the best models we'll ever produce. They're gonna get smaller, they're gonna get faster, they're gonna get cheaper, they're gonna get lower-latency, they're gonna get higher-quality. So you always want the next-gen version of MPT, and the one after that, and the one after that. There's a reason the GPT series goes three, four — and we know there's gonna be a five. So I also don't see it as a one-time cost.

Jonathan: Yeah. And I, if you wanna cite a stat on this, there are very, very

The Mindful Marketer
From Global to Local: The Art of Listening (Episode 72)

The Mindful Marketer

Play Episode Listen Later May 13, 2023 39:48


“From Global to Local: The Art of Listening” with TATA Consulting CMO Abhinav Kumar
As we launch Season Two, we explore the role of marketing in cultivating conscious and mindful communication. Abhinav Kumar, the CMO of TATA Consulting and FC Barcelona fan, joins us from Brussels to share his “global to local” communications playbook.
Abhinav faces a daunting task. He oversees communications to engage and educate over 630,000 employees across 46 countries. Balance this goal with a charter to continue building a brand valued at $17.2B, and TATA's commitment to tackling grand global challenges: climate change and energy transition.
Here's what we covered...
● The biggest technological obstacles and opportunities to being a “Chief Listener” for your organization.
● The benefits and guardrails needed to achieve hyper-localization.
● Abhinav's recommended resources to help you turn listening into loyalty.
Don't miss a future show – register for our private “know ahead” list at themindfulmarketer.com

SportsTech Allstars: Startups & Key Initiatives
#162 Da One Global Ventures - A multi-stage celeb-led sportstech fund

SportsTech Allstars: Startups & Key Initiatives

Play Episode Listen Later Apr 29, 2023 38:20


Interviewing Abhinav Tandon, Founding Member and Global Spokesperson of Da One Global Ventures. In this conversation, Abhinav talks about how they decided to headquarter in the Middle East because of all the interest around sport, the connectivity of the location, the availability of funds, and the hunger for innovation — sport is also one of the GCC's ten pillars. They also saw how warmly Abu Dhabi welcomed sports, tech, and funds, which is what led them to operate out of the region. Da One leverages its robust network of VCs and sports industry leaders to empower startups to soar and scale, driving their operations to new heights. Da One also has an e-sports studio that nurtures early-stage ventures, fostering innovative virtual sports experiences and collaborative opportunities, and a Web 3.0 studio that champions startups aiming to launch and evolve blockchain projects, NFTs, and more, providing essential support for growth. To learn more, visit http://daoneglobal.vc
Hosted by Rohn Malhotra from SportsTechX - Data & Insights about SportsTech startups and the surrounding ecosystem.

Weave Your Bliss
100: Using ChatGPT for Your Cosmic Business with Abhinav Chetan

Weave Your Bliss

Play Episode Listen Later Apr 24, 2023 54:59


The last time Saturn was in Aquarius, we got the internet, which was a huge game changer for the world; I feel like the phenomenal advances in artificial intelligence are the next big thing for this time when Saturn is in Aquarius. You may have heard of ChatGPT, but I guarantee that you'll learn more from my guest today! Join us!
I'm joined by Abhinav Chetan, a digital marketing expert with 15 years of experience. He spent 12 years working for Google, working with large brands, startups, and agencies in the US, UK, and India. As an entrepreneur, Abhinav is using tools like ChatGPT in the field of “generative AI,” and the possibilities are endless. He was recently recognized as one of the “Top 40 Under 40” marketers in India by Business World, and he runs a personal engagement platform at AbhinavChetan.com. In his free time, Abhinav is a certified yoga teacher and avid reader whom I am privileged to call a personal friend.
Show Highlights:
A look at generative AI: what it is and what it can do
Why these new tools cannot be ignored in the world of digital marketing
Why today's AI technology gives more opportunity and power for leverage in marketing than ever before
Examples, applications, and common fears of generative AI
An overview of Abhinav's career at Google and what he did there
Why Abhinav became an entrepreneur
What it means for non-profits and small businesses to “scale” (Abhinav's recipe for growth)
Three AI tools that small businesses should start using right now
How ChatGPT has changed Abhinav's work and impacted his business
How AI helps entrepreneurs fulfill their purpose and create profit in less time than before
How Abhinav helps others learn about ChatGPT by offering workshops
Three pro tips in crafting a prompt for ChatGPT: give the prompt a persona, define a task, and add constraints (listen to hear Abhinav's example!)
Hear Abhinav's answers to rapid-fire questions about helpful advice, morning routine, and what he's reading/recommending.
Parting words from Abhinav about the benefits and advantages of AI
Resources and Links:
Connect with Abhinav Chetan
Personal site: https://abhinavchetan.com/
Newsletter: https://abhinavchetan.beehiiv.com/
Workshop: https://abhinavchetan.com/context-x-chatgpt-powering-content-with-ai/
Books mentioned:
The Secret Life of Plants by Peter Tompkins and Christopher Bird
The Hidden Life of Trees by Peter Wohlleben
I Am That by Sri Nisargadatta Maharaj
The Man Who Knew Infinity by Robert Kanigel
Today is the last opportunity to get $500 off! There are very limited spots available for my eight-week program, the Cosmic Business Incubator. Learn how to go from side hustle to six figures and beyond in your spirit-led business.

GRTiQ Podcast
DappLooker - Blockchain Analytics & Indexer at The Graph

GRTiQ Podcast

Play Episode Listen Later Mar 31, 2023 35:41


Today I am speaking with two members of the DappLooker team, Abhinav Singh and Vikash Choubey. You may have already heard about DappLooker; they continue to receive a lot of attention for their contributions both within The Graph and the broader Web3 community.

DappLooker started as a blockchain analytics solution that enables users to explore blockchain data and create charts and visualizations, all without needing to be technically oriented or knowing how to code. You've probably seen some of the charts and visualizations about The Graph that the DappLooker Twitter account consistently shares. In addition to providing these types of analytics, DappLooker recently launched an Indexer on The Graph (a minimal query sketch follows these show notes).

During this interview, Abhinav and Vikash talk about the origins of DappLooker, how it works, and how you can use it. Then we discuss when they became interested in The Graph and why they decided to join the community and launch an Indexer. Similar to other interviews we've had on the podcast, the DappLooker story begins with someone simply using The Graph to get started building in Web3, and then developing the conviction to go deeper and join the community as a contributor.

Show Notes:
The GRTiQ Podcast takes listeners inside Web3 and The Graph (GRT) by interviewing members of the ecosystem. Please help support this project and build the community by subscribing and leaving a review.
Twitter: GRT_iQ
www.GRTiQ.com
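For listeners curious what "using The Graph" looks like in practice, here is a minimal sketch of querying a subgraph's GraphQL endpoint with Python's requests library. The endpoint URL and the entity and field names are hypothetical placeholders; every subgraph publishes its own schema.

```python
# A minimal sketch of querying a Graph subgraph over GraphQL,
# using only the requests library (pip install requests).
# The URL and the `tokens` entity/fields are hypothetical placeholders.
import requests

SUBGRAPH_URL = "https://api.thegraph.com/subgraphs/name/example-org/example-subgraph"

query = """
{
  tokens(first: 5, orderBy: volume, orderDirection: desc) {
    id
    symbol
    volume
  }
}
"""

resp = requests.post(SUBGRAPH_URL, json={"query": query}, timeout=30)
resp.raise_for_status()
for token in resp.json()["data"]["tokens"]:
    print(token["id"], token["symbol"], token["volume"])
```

Tools like DappLooker sit on top of queries like this one, turning the returned JSON into charts and dashboards without the user writing any code.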

81 All Out
What we talk about when we talk about pressure

81 All Out

Play Episode Listen Later Mar 21, 2023 100:12


In episode 156 of the 81allout podcast we are joined by former India Test cricketer and Ranji Trophy colossus Abhinav Mukund, who has now turned into an astute analyst on TV. Abhinav is piqued by Kartikeya Date's latest article in ESPNcricinfo (Do India choke in high-profile ODIs?) and shares his perspective on how a player approaches (and talks about) pressure, and weighs in on the question: is pressure a good parameter to explain the result of a match?

Talking Points:
- The times when a player feels he is in the 'middle of a volcano'
- How players talk about the game and their part in it
- What players mean when they talk about 'pressure'
- Does pressure affect performance in a sustained manner?
- Talking about skill v talking about mental faculties
- Is it fair for strangers to make assumptions about players' mental abilities?
- The problem with attributing the result to mental strengths/weaknesses
- What happened to South Africa in the 2015 World Cup semi-final?
- The value of finding a mindspace when 'nothing matters'

Participants: Abhinav Mukund (@mukundabhinav), Siddhartha Vaidyanathan (@sidvee), Kartikeya Date (@cricketingview)

Buy War Minus Shooting - Mike Marqusee | Buy Cricket Beyond the Bazaar - Mike Coward

Related:
- 'I have found my love for the game in the last couple of years' - 81allout podcast speaks to Abhinav Mukund on his career and challenges
- Do India choke in high-profile ODIs? Here's what the numbers say - Kartikeya Date - ESPNcricinfo
- Why there's no such thing as a finisher in ODI cricket - Kartikeya Date - ESPNcricinfo
- What pressure does to cricketers - Aakash Chopra - ESPNcricinfo
- The mother of all myths - Tom Eaton on the narrative of South Africa choking in World Cups - The Cricket Monthly
- What Is Clutch? A Look at the Most Overused Term in Sports - Rob Goldberg - Bleacher Report
- Cognitive Biases in Sports: The Irrationality of Coaches, Commentators and Fans - Samuel McNerney - Scientific American

The Parenting Reset Show
99. Sleep! Why it's so Important - from a Sleep Doctor

The Parenting Reset Show

Play Episode Listen Later Feb 7, 2023 59:13


Tess Connolly LCSW talks with Dr. Abhinav Singh, a physician with board certifications in sleep medicine and internal medicine. Dr. Singh serves as medical director at the Indiana Sleep Center, accredited by the American Academy of Sleep Medicine. He is also a clinical assistant professor at Marian University College of Osteopathic Medicine, where he developed and teaches a sleep medicine rotation for medical students. He is a fellow of the American Academy of Sleep Medicine and has received a Top Doctor award in sleep medicine for the last four years. Dr. Singh is a peer reviewer for the Journal of Clinical Sleep Medicine and Sleep Health (a journal of the National Sleep Foundation). A sleep physician for the NBA's Indiana Pacers, he also serves on the medical review panel of SleepFoundation.org.

Charlotte Jensen co-authored Sleep to Heal: 7 Simple Steps to Better Sleep with Dr. Singh, coming out in June 2023. Charlotte is a writer and editor specializing in technology, marketing, business, and the arts. For more than a decade, Jensen worked as senior writer, articles editor, and executive editor for Entrepreneur magazine, where she took a leading role in shaping editorial content and direction for the award-winning national consumer magazine and its readership of 1.2 million. Jensen's work has been featured in HuffPost, AOL Small Business, and a variety of small-business websites, and she is currently a copy editor for luxe lifestyle brand RH (Restoration Hardware). She has also provided first-draft edits for several nonfiction books, including Fight Cancer With Vitamins and Supplements: A Guide to Prevention and Treatment (Healing Arts Press). Jensen has a B.A. in Journalism from California State University, Long Beach. She lives and works in the San Francisco metro area.

In this episode:
- Dr. Singh shares how the book came about and how he came to specialize in sleep
- The importance of sleep, how it works, and how it helps our health
- Night waking: how to deal with it and how it can affect us
- Sleep trackers and what Dr. Singh thinks about them
- How much sleep our teens require
- Devices at bedtime and their effect on sleep for tweens and teens
- The best ambient temperature for sleeping
- Dreaming and what dreams can mean
- A little insight into what the book includes
- Dr. Singh's experience giving his TEDx-style talk 'What would you do for $13.5 million?'
- Dr. Singh is most grateful to be able to help people regain their health, for a wonderful supportive family, and for co-author Charlotte Jensen

Get the book here

Get in Loser, We’re Doing Witchcraft
Episode 45: Nightmares & Sleep Paralysis

Get in Loser, We’re Doing Witchcraft

Play Episode Listen Later Jan 30, 2023 49:54


Welcome back Witches! This week's episode takes a look at nightmares and sleep paralysis! We're going to discuss the science, the supernatural, and ways to help prevent them from occurring. So get in losers, and let's gain some knowledge on nightmares and sleep paralysis.

We would be forever thankful if you leave our podcast a 5-Star review. If you really loved the show and want more Get in Loser content, check out our Supercast link below, or search the Supercast website for Get in Loser, We're Doing Witchcraft. You can also find us at our Buy Me a Coffee link below. There you can purchase a membership to our podcast and obtain exclusives like getting episodes early, shout outs on the show, access to our "Ask me anything" forum, our monthly newsletter, a promo code for merchandise, and more. You can also find us on Facebook, Twitter and Instagram @GetinWitches, on TikTok @weredoingwitchcraft, or email us at weredoingwitchcraft@gmail.com.

You can support our show through our Supercast: https://getinloserweredoingwitchcraft.supercast.com/
Buy Me a Coffee: https://www.buymeacoffee.com/getinwitches

Music by Karl Casey @ White Bat Audio - The Witch

References:
- Hershner, Shelley, MD & Morse, Anne M., DO (2020). What is Sleep Paralysis? AASM Sleep Education. https://sleepeducation.org/sleep-disorders/sleep-paralysis/
- Newsom, Rob (2022). Sleep Demon. Sleep Foundation. https://www.sleepfoundation.org/parasomnias/sleep-demon
- Suni, Eric & Singh, Abhinav, Dr. (2022). Nightmares. Sleep Foundation. https://www.sleepfoundation.org/nightmares
- Davies, Owen (2010). The Nightmare Experience, Sleep Paralysis, and Witchcraft Accusations. https://uhra.herts.ac.uk/bitstream/handle/2299/2342/103471.pdf?sequence=1&isAllowed=n
- Farooq M, Anjum F. Sleep Paralysis. [Updated 2022 Sep 5]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2022 Jan-. Available from: https://www.ncbi.nlm.nih.gov/books/NBK562322/
- Roybel, Beth (2022). Sleep Paralysis. WebMD. https://www.webmd.com/sleep-disorders/sleep-paralysis
- Stanborough, Rebecca Joy (2020). Understanding 'Old Hag' Syndrome: What it Means When You're Paralyzed in Your Sleep. Healthline. https://www.healthline.com/health/sleep/old-hag-syndrome

The Wellness Revolution Podcast with Amber Shaw
TWR 118: A Mind-Body Connection Approach to Injury with Dr. Abhinav Gautam

The Wellness Revolution Podcast with Amber Shaw

Play Episode Listen Later Nov 10, 2022 52:13


Having physical discomfort and not knowing what your condition is after several test results can be frustrating. Our guest focuses on treating people with physical conditions that may not be immediately obvious. Rather than treating symptoms with medications, he takes a holistic approach and addresses the root of the problem.

Dr. Abhi Gautam, an anesthesiologist and entrepreneur, joins Amber on the podcast today. Dr. Gautam is co-founder of Vitruvia™ and pioneer of the revolutionary RELIEF® procedure, which has helped top performers and athletes increase their mobility and restore their bodies to their pre-injury state with little to no recovery time. He has worked with many high-profile clients, such as Tony Robbins, Miguel Cabrera, and Boston Red Sox Hall of Famer David Ortiz.

To prevent pain from occurring in the first place, Dr. Gautam believes that a strong mind-body connection is essential. His analogy is that our bodies are like cars: if you ignore your car's check engine light long enough, something really bad will eventually happen. The same thing happens to our bodies when we do not take a moment to breathe, close our eyes, and find quiet time to be introspective. Dr. Gautam argues that despite the divine and mysterious nature of our bodies, they work within a logical framework or schema that identifies us as one human species. According to Dr. Gautam, it's important to trust your body more than what you read or hear about health, but we must first learn to investigate what's happening internally.

During this interview, Dr. Gautam discusses human anatomy, connective tissue restoration, and how to connect to our bodies. He also talks about the latest injury prevention techniques, as well as the technologies Vitruvia has developed to help those suffering from existing conditions recover and return to their favorite activities as quickly as possible.

Key Highlights:
- The best way to prevent injury and care for our bodies before and after exercise
- The effects of pregnancy on joints and existing injuries over the long term
- The importance of listening to what your body is telling you
- How non-surgical treatment for joint and tissue problems can relieve pain without downtime
- How we can prevent the need for pain treatment
- Dr. Gautam's thoughts on foam rolling, yoga, pilates, stretching, and active release
- Dr. Gautam's observations of what causes pain and misalignment
- The importance of the mind-body connection, and how to begin noticing your body's small signals and symptoms
- Making time for self-love and taking care of yourself as much as you care for others
- How RELIEF® works and how to determine whether you qualify

Episode resources:
Click here to join the FREE 2-day event on November 15th & 16th: ambershaw.com/wellness-revolution-event

Connect with Dr. Abhi:
Website: vitruvia.co
Instagram: @vitruvialife

Connect with Amber:
Instagram: @msambershaw
TikTok: @msambershaw
Website: ambershaw.com

The Tony Robbins Podcast
Your Success is 80% Psychology | How to Use Mindfulness and Meditation to Prime Your Nervous System for Peak Performance

The Tony Robbins Podcast

Play Episode Listen Later May 24, 2022 77:02 Very Popular


Mindfulness and meditation aren't just about spirituality or stress relief. In today's world of constant demands and digital distractions, mindfulness is about making your moments truly matter and living up to your full potential. It's about your power to put your attention where you want it to be, and not on what someone else dangles on a screen in front of you. This is just one reason why Tony Robbins incorporates a gratitude meditation into his daily morning routine to prime his mind every single day.

If you're familiar with any of Tony Robbins' peak performance strategies, you know that your ability to maintain a peak state is a key component of living the life you desire. Your state is affected by the way you use your physical body, your ability to direct your mind's focus, and the language or meaning you give to your experiences. Knowing this, it's no wonder that we aren't able to be the best version of ourselves when we are in a stressed state. The incoherence of our head and heart, and the jitters in our nervous system and body, make us less effective and less resourceful.

In this 1-hour episode you'll hear from Tony Robbins, his wife Sage Robbins, and podcast host Mary B. as they discuss one of the most valuable skills to maximize your mindset and prime yourself for power, clarity, concentration, and focused attention.

SHOW NOTES
[0:00] Hi, it's Tony Robbins…
[1:15] I used to think meditation was a waste of time
[2:00] We don't experience life, we experience the life we pay attention to
[2:39] Tony and his wife Sage have traveled the globe together for 22 years
[3:30] A podcast on meditation? I'd be outta here. But…
[4:03] Twin hearts Sage and Tony Robbins
[5:30] Why does time go so fast?
[6:20] One of us is a meditator and one of us is not
[7:12] Meditation definition | What is meditation, really?
[8:20] Styles of meditation
[9:57] What's your outcome?
[10:30] The science of meditation
[11:24] Be here now and notice
[12:24] Our attention is the most important commodity
[13:15] Self-judgement and imposter syndrome
[15:50] Ellen Langer, Harvard mindfulness
[16:37] Just close your eyes for a minute
[17:28] I'm thinking about a grilled cheese sandwich
[18:23] Meditation is not something to achieve, it's something to experience
[19:04] The throughline is gratitude
[19:30] Thoughts are like clouds floating by
[20:22] How do I stop the thoughts in my mind?
[21:25] There's an entire universe inside of us
[22:22] Collective mind
[23:19] Sage and her baby in the rain
[24:25] In the busy-ness we miss the nuances
[25:00] Mary B: Meditation has gotten really trendy
[26:25] Story: Full Moon Meditation with Tony, Sage, Billy Beck and Mary B.
[29:13] We will never experience this moment of life again
[29:43] Profound knowledge

*Show notes continue on website page

Original Music by Abhinav and Sage Robbins