Podcasts about elicit

  • 167 podcasts
  • 235 episodes
  • 39m average duration
  • 1 episode every other week
  • Latest episode: May 2, 2025

POPULARITY (2017–2024)



Latest podcast episodes about elicit

Dementia Researcher Blogs
Rebecca Williams - AI and BlueSky: Embracing the Everyday Tech of Academia

May 2, 2025 · 6:39


Rebecca Williams narrates her blog, written for the Dementia Researcher website. In this blog, Rebecca offers a pragmatic guide to everyday tech that can support academic work. From AI tools like ChatGPT and Elicit to reference managers and visual design hacks in PowerPoint, she explores how to navigate, adopt, and balance new digital tools. Rebecca also champions the use of social media platforms like BlueSky to connect and amplify research, encouraging researchers to make technology work for them: efficiently, thoughtfully, and creatively. Find the original text and narration on our website: https://www.dementiaresearcher.nihr.ac.uk/blog-ai-and-bluesky-embracing-the-everyday-tech-of-academia/

Rebecca Williams is a PhD student at the University of Cambridge. Though originally from ‘up North' in a small town called Leigh, she did her undergraduate and master's degrees at the University of Oxford before defecting to Cambridge for her doctorate researching frontotemporal dementia and apathy. She now spends her days collecting data from wonderful volunteers, and coding. Outside work, she plays board games and is very crafty. @beccasue99

Enjoy listening? We're always looking for new bloggers, so drop us a line. http://www.dementiaresearcher.nihr.ac.uk

This podcast is brought to you in association with the Alzheimer's Association, Alzheimer's Research UK, Alzheimer's Society and Race Against Dementia, whom we thank for their ongoing support.

Follow us on social media:
https://www.instagram.com/dementia_researcher/
https://www.facebook.com/Dementia.Researcher/
https://twitter.com/demrescommunity
https://www.linkedin.com/company/dementia-researcher
https://bsky.app/profile/dementiaresearcher.bsky.social

Room to Grow - a Math Podcast
Elicit and Use Evidence of Student Thinking

Apr 15, 2025 · 42:19


In this episode of Room to Grow, Joanie and Curtis continue the season 5 series on the Mathematics Teaching Practices from NCTM's Principles to Actions, celebrating its 10th anniversary. This month's practice is “Elicit and Use Evidence of Student Thinking.” In Principles to Actions, NCTM describes this teaching practice in this way: “Effective teaching of mathematics uses evidence of student thinking to assess progress toward mathematical understanding and to adjust instruction continually in ways that support and extend learning.” This meaty description provides the fodder for today's conversation. Our hosts consider what is meant by “effective teaching,” “assessing progress,” and “adjusting instruction continually,” and tie these ideas back to the important work of classroom educators.

Additional referenced content includes:
  • NCTM's Principles to Actions
  • NCTM's Taking Action series for grades K-5, grades 6-8, and grades 9-12
  • Want more ideas for eliciting student thinking in your classroom? Check these out:
    • Descriptors of teacher and student behaviors for this practice
    • Thoughts and linked resources from the Colorado Department of Education
    • A classroom observation tool focused on this practice from the Minnesota Department of Education

Did you enjoy this episode of Room to Grow? Please leave a review and share the episode with others. Share your feedback, comments, and suggestions for future episode topics by emailing roomtogrowmath@gmail.com. Be sure to connect with your hosts on X and Instagram: @JoanieFun and @cbmathguy.

Fertility Wellness with The Wholesome Fertility Podcast
Ep 331 Unlocking Conscious Fertility: The Mind-Body Connection with Lorne Brown

Apr 8, 2025 · 61:21


On today's episode of The Wholesome Fertility Podcast, I am joined by fertility expert, acupuncturist, and conscious work practitioner Lorne Brown @lorne_brown_official. Originally a CPA, Lorne discovered the transformative power of Chinese medicine through a personal health journey, ultimately changing his career path. Now, as a leader in integrative fertility care and the host of The Conscious Fertility Podcast, Lorne bridges the gap between science and spirituality to help individuals optimize their fertility and overall well-being. In this episode, Lorne shares how conscious work plays a powerful role in fertility, explaining how subconscious beliefs and emotional resistance can impact reproductive health. He discusses the mind-body connection, the importance of inner healing, and how shifting from stress to flow can create profound changes. Whether you're on a fertility journey or simply looking to align with your highest self, this conversation is packed with insights on conscious transformation, holistic healing, and the power of perception.

Key Takeaways:
  • Lorne's personal journey from accountant to acupuncturist and fertility expert.
  • How Chinese medicine and holistic healing transformed his health and career.
  • The mind-body connection and how stress impacts fertility.
  • How subconscious beliefs shape our reality and can either block or support conception.
  • The power of inner work and emotional healing in reproductive health.
  • How shifting from resistance to receptivity can improve fertility outcomes.
  • The role of consciousness in creating meaningful change in health and life.
  • Insights from The Conscious Fertility Podcast and how Lorne helps patients find balance through a holistic and energetic approach.

Guest Bio: Dr. Lorne Brown @lorne_brown_official is a leader in integrative fertility care, blending Chinese medicine, mind-body healing, and cutting-edge therapies. A former Chartered Professional Accountant (CPA), he was led by his own health journey to acupuncture, herbal medicine, and holistic fertility support. As the founder of Acubalance Wellness Centre, he introduced low-level laser therapy (LLLT) for fertility and pioneered IVF acupuncture in Vancouver. He also created Healthy Seminars, an online education platform, and hosts The Conscious Fertility Podcast, where he explores the intersection of science, consciousness, and reproductive health.

Websites / Social Media Links:
  • Learn more about Lorne Brown: visit his website here
  • Follow Lorne Brown on Instagram
  • Listen to The Conscious Fertility Podcast

For more information about Michelle, visit www.michelleoravitz.com

To learn more about ancient wisdom and fertility, you can get Michelle's book at: https://www.michelleoravitz.com/thewayoffertility

The Wholesome Fertility Facebook group is where you can find free resources and support: https://www.facebook.com/groups/2149554308396504/

Instagram: @thewholesomelotusfertility

--------

Disclaimer: The information shared on this podcast is for educational and informational purposes only and is not intended as medical advice. Please consult with your healthcare provider before making any changes to your health or fertility care.

-----

Transcript:

[00:00:00] Welcome to the Wholesome Fertility [00:01:00] Podcast. I'm Michelle, a fertility acupuncturist here to provide you with resources on how to create a wholesome approach to your fertility journey.

**Michelle Oravitz:** Welcome to the podcast, Lorne.
**Lorne Brown:** Hey, Michelle, glad to be together with you over whatever we call this technology. I think yours is the Riverside. Yeah, I had a good time interviewing you for my Conscious Fertility podcast, so I'm looking forward to having more conversations with you because that was a lot of fun for me. **Michelle Oravitz:** It was a lot of fun for me too. And I actually it was really, really nice. And to see that we have very similar views just on reality and health and fertility, **Lorne Brown:** Yeah. **Michelle Oravitz:** it was a lot of fun. And so last week actually for everybody's listening, that was the first time we actually officially met via zoom. **Lorne Brown:** Yeah. But we know each other. We're part of the, the ABORM, right? The Acupuncture TCM Reproductive Board of Medicine but yeah, [00:02:00] like the first time you and I had real conversation rather than chat conversation. **Michelle Oravitz:** Which is awesome. I **Lorne Brown:** Yeah. **Michelle Oravitz:** it. And I think that we're so aligned in so many ways. I think that we both love the whole bridging of science and spirituality. We're kind of nerds in that department. **Lorne Brown:** Yeah. **Michelle Oravitz:** for people listening, I would love if you can introduce yourself. I know we also have, we started out with very different backgrounds. And went into acupuncture, you have like kind of a similar cause you started in accounting, right? **Lorne Brown:** Yeah, so, I am a CPA, so a Certified Professional Accountant back in the day they were called Chartered Accountants in Canada and because of health issues and having such a a response to Chinese medicine in particular eventually I, I was the, one of the controllers and tax guys at this time with ocean spray growers here in B. C. and I left that position so I could go back to school and study Chinese medicine as my second career. So that's kind of a little bit about my background. And then eventually **Michelle Oravitz:** [00:03:00] Like what made you think about doing Chinese medicine? **Lorne Brown:** I was ill. I had um, you know, back in the day, this is in the eighties and early nineties. So this Chinese medicine wasn't as available. This was before websites, right? Where you could really see what other people were doing and learning. And so I had severe gut issues, you know, diagnosis IBS, chronic fatigue, candida and you know, I got scoped through all each end and eventually and I tried different Western approaches and eventually it was the herb, Chinese herbal medicine actually that dramatically changed it so much. So, I mean, I have some memories. I did a bachelor of science first in math. That was my first thing. Then I went and did accounting in McGill. And and then I went and became a CPA, back then CA. They changed the letters for the designation. And I remember when I was at McGill I was already seeing alternative medicine doctors, in particular Chinese medicine. And I remember [00:04:00] s for the first time, how much clarity, because I had, I didn't realize how much brain fog I had. And so the clarity I had, I was in the classroom, I just realized how easy things were going in, and I was just remembering things, and I just felt like things were almost in slow motion in a good way, like a professional athlete when they can see the court. And physically, I just felt I had so much endurance, so much energy. I was just I felt great. And you know, when you've been feeling poorly for so long, That I thought that was normal. 
And then I got, you know, the illness was so bad while I was early days in my accounting studies at McGill. it interfered with my, my studies. It interfered my life. I almost couldn't get outta bed sometimes with the fatigue and the brain fog. And so I had an I had an aunt who was into this stuff. , I was, wasn't right. Remember, it came from Bachelor's Science Math in Duke County. I was, I think I was always open-minded. Look what I'm doing, but it wasn't kind of on my radar. And she's the one that suggested I see her Chinese herbalist. And you know, I was desperate. I was living in Montreal, Canada. She was living in Calgary, Alberta, Canada. So [00:05:00] I, I got on a plane and flew to see her person because I wouldn't know who to go see right back then. And you know, through dietary changes and herbal medicine. It, it transformed my life and funny story because, you know, I do acupuncture like you do. I always had a fear of needles, right? I never was a big fan of needles. So the first time I was getting acupuncture, the acupuncturist who treated me, I have everybody lying down, but he had me sitting up on the table. Right on the treatment table. I was sitting and he's putting these needles in me and he's like, are you okay? I guess he could see I was going a little green and I'm trying to be, you know, tough guy. And I'm like, yeah, yeah, I'm fine. Next thing I know flop, I passed out on the table. **Michelle Oravitz:** do. **Lorne Brown:** So. Yeah. So now I receive it. I love it. Now I give it. But I did. It's a mind over matter, right? I did have that fear of needles, which is why I started with the herbal medicine. Most people like, Oh, I'll do acupuncture, but they maybe have an aversion to the herbs or the taste of the herbs. I was the other way [00:06:00] around. I got introduced to Chinese medicine through the herbal medicine. And then I was like, Oh, I'll try the acupuncture too. and, you know, I stuck with it, obviously. And, and eventually went back to school and now I can I receive it and I can give it and I have so much compassion for those who have a fear of needles, but usually if they come in and try it, they realize it doesn't feel like needles that you're getting. And now with technology, I have low level laser systems as well. So I can do laser acupuncture for those people that just cannot. Experience acupuncture because it's so stressful for them. **Michelle Oravitz:** Yeah, for sure. So that's that's one of the things or sometimes starting them out with baby needles because the baby needles are really, really, really super thin. You can barely feel it. **Lorne Brown:** Yeah, I mean, I, I mean, I just give them the acupuncture for the first time and, and they're nervous. But, you know, they let me put in one needle, then another, then a third. And that's all I'll do for the first visit for people who have a big phobia. But like you and I know, and those that have received it, it's not like getting a [00:07:00] needle at the doctor when you get a shot or blood drawn. And so you really, you know, once they're in, it takes like a minute to put them in. Then you go and tell a beautiful rest, la la land for 30 to 45 minutes on the table. So all worth it for most. **Michelle Oravitz:** totally worth it. For sure. So talk about why you got into fertility specifically. **Lorne Brown:** Yeah, and I'll keep it short, but it was, it was never my intention. My intention was to treat gut issues, digestive issues, because that's what brought me to the medicine. 
So I thought I'd be, and that's what I set out to do, IBS, irritable bowel syndrome, Crohn's, colitis, severe bloating, constipation, diarrhea, that kind of stuff is what I thought I would be seeing. and I did see a lot of that, and in our medicine, when we treat, we do a very Detailed history and we treat holistically so we can't just focus on the gut health just like for fertility We don't just focus on the women's ovaries, right? We focus holistically and so most people that come to health professionals back then And [00:08:00] I started in 2000 and now still are female And so I'd always do a menstrual history and the the menstrual history is such a great guide for health, right? We can get so much information. That's why I prefer treating women over men. I treat both women who are menstruating. Help me diagnose them from a Chinese medicine perspective because I get so much information from their cycle history. And so as I was treating their bloating in their IBS, or they're alternating between, you know, constipation and diarrhea, or even colitis and Crohn's symptoms. They noticed their PMS went away, they noticed their menstrual pain went away, their irregular bleeding, the spotting, all those things changed. So I became popular. with women's health in general. So I was just doing women's health. So I was seeing people with perimenopause and menopausal symptoms and with painful periods. That was what I was seeing. And back then, again, the web wasn't a popular thing. I was advertising a magazine with a focus in women's health. And this woman who found me was going through an IVF and she was [00:09:00] going to see one of our colleagues, Randine Lewis, in Houston. So I'm in Vancouver and she flew to Houston to see Randine because this was before Zoom. And she, Randine told her she needs regular acupuncture at least once a week so she's going to enter herbal medicine. So she has to find somebody local because it wasn't reasonable or cost effective for her to fly weekly to Houston from Vancouver, right? Nobody was focusing on fertility, but she found me women's health. So she came to my clinic and told me her story and asked if I'd be willing to follow Randine's acupuncture prescriptions and her herbal suggestions and do that for her in Vancouver. And I kind of said cheek cheekily, but in a funny way, in a cute way, as a non aggressive way. So basically you want me to be like a monkey. And put the points where Randine tells you, tells me, and prescribe the herbs where Randine how Randine tells me. She goes, yeah. And I'm like, I'm in. That sounds great. I get to learn from somebody. Because what our audience doesn't know, [00:10:00] Randine was already focusing with fertility. And she had already had this draft book, which came out shortly after, called The Infertility Cure. First of many of her books. So, I thought it was a great opportunity to be able to learn from somebody with more experience and, and not have responsibility to the outcome. And so, and then women who are going through IVF and struggling with fertility, they talk and By 2004, I only would take reproductive health issues. That was all I would take because I was too busy, and I started hiring associates and training them because I couldn't handle the load myself. Now, here we are recording this in 2025 I have multiple associates in our clinic. And that do focus on fertility and myself personally, I still see a lot of reproductive health. But I'm so into the conscious work now. 
Cause I have low level laser therapy that we use for fertility, but I use that for so many other things. Brain health pain, pain injury. And I do a lot with pure menopausal symptoms. So, I would say, and half my practice, when I look at my [00:11:00] schedule is conscious work. Right? Is that mind body work? Half my practice is that. They still get acupuncture and low level laser therapy as part of the treatment but they're coming in with, I'm wanting belief change work. and I do see a lot of reproductive health, but I see everything now. So it's, it's kind of gone full circle. Because of the conscious work, because conscious work is my passion. And so whoever comes in the door that's looking for change, they may want a relationship change or want a relationship, job changes, finances. They want a baby, they want a healing. Basically, they want to be happy and they realize they can't get it from the outside. So they're looking for help on the inside to have that transformation. And that's why we use it for fertility because it's such a powerful tool when you can heal the mind, the body follows really well. **Michelle Oravitz:** Yeah. No doubt. So talk about the conscious work, specifically. What does it entail? Mm-hmm **Lorne Brown:** Yeah, well, I'm trained also as a clinical hypnotherapist, and I've done a lot of what they call energy psychology modalities. So I'm trained in [00:12:00] Psyche, emotional freedom technique, Bankstein healing method, you know, energy type medicine. But from the clinical hypnotherapy perspective and what I would call conscious work, it's inner work. It's waking up to your true nature. It's waking up to what some people would call higher self, what they would call consciousness witness consciousness. You'd have to be open and appreciate that there's more to this world than meets the eyes. And so we have a Newtonian science world, what's considered a materialistic world, and those are things that we can kind of measure. And then there's the science, the new science called quantum physics. Which understands there's so much more to this reality than what we see and when you have these shifts inside it has your your perception to the world You see it differently and you can think of it as if you live in a building Let's say your your life is a building, you know On the first floor if that's where you live, you're going to have a certain perspective of what your neighborhood is And it's going to be very limited because you can only see from the first floor. And as you move up, if the 20 store [00:13:00] building, if you live above 10 and you start to live on the 15th floor, you have a different perspective of what is in your neighborhood than the person who lives on the first floor. And so conscious work is about kind of getting to a different perspective. I we know, you know, through so much more research now that we perceive the world. Through the lenses of our subconscious programming, you know, and so how we see the world is through the lens of our subconscious and that subconscious programming is is inherited and imprinted on us inherited like literally few generations before we know this through um, research on Holocaust survivors and their children and grandchildren. And we know this through the study, the cherry blossom study on mice were stressed and traumatized and it got passed down to their grand pups. I won't go into the study because it's **Michelle Oravitz:** and DNA. **Lorne Brown:** Yeah, it gets tagged. It's not a genetic mutation, it's a tag. 
So it can, one generation get tagged, and one generation you can heal it. So, you see the world through the lens of your subconscious, and that lens is based on your history. And [00:14:00] so, I heard a teacher of consciousness once say, Reality's white snow, let's pretend that. And then you have red glasses. I have orange glasses. Some of the listeners have blue, green, white, yellow. We're all seeing white snow, but we're all experiencing it, perceiving it differently because of our lens. And if we want to have a different experience to see that reality, we got to change our lens. **Michelle Oravitz:** Yes. **Lorne Brown:** You know, or we're both fans of Joe Dispenza, right? We both run retreats, and **Michelle Oravitz:** we're Joe Dispenza groupies. **Lorne Brown:** yeah, I like, I like his work. I like his retreats and his books. And in his book, Breaking the Habit of Being Yourself, I think it's where he said it. I've read all of his books and been to many retreats, but I really liked how he said your personal reality is based on your personality. And you can't have, how do you expect to have a different reality if you bring your current personality into your future? You're gonna get the same thing. Right. And so this is about having that shift because, you know, we're going kind of into a rabbit hole here, but if you're open for it, **Michelle Oravitz:** No, I'm totally open for it. And my, my listeners are used [00:15:00] to it, **Lorne Brown:** okay, you know, God, I see they're allowed to, or Gandhi, I've seen this quote attributed to both, but it kind of goes like your beliefs lead to your thoughts, which lead to your feelings, which lead to your actions and behaviors, which lead to your habits. which leads to your destiny. Basically they're saying is your behaviors are always congruent with your beliefs. And when they conflict the program, the belief is going to win. And if you do a behavior long enough, it becomes your habit. So it becomes a reality. So we often want to go and work on the outside world. We often want to go work on a behavior, but the behavior stems from a belief or a program often unconscious. And so we'll self sabotage ourselves, even though we really want to lose that weight. We go and we diet, we exercise, but that's a behavior. But if you have a program that, you know, I'm not beautiful, right, or I'm not thin enough, then the subconscious wants congruency, and it will find a way to sabotage that. [00:16:00] Consciously or unconsciously, it'll happen. And so rather than going to work on the behavior, we go to work on the program, and then it flows down, and the behavior changes naturally. **Michelle Oravitz:** It's so true. And it's almost that, you know, that saying whether you think whether you Think you can or can't  **Lorne Brown:** you're right. Yeah. **Michelle Oravitz:** it's just a matter of what we choose and I think the key with this is that people don't even realize It's almost like they're so asleep in the matrix **Lorne Brown:** Yeah. **Michelle Oravitz:** is such a great movie, by the way, because of that reason, it really shows us how, if we just knew that that was the case, **Lorne Brown:** Yeah. **Michelle Oravitz:** had those beliefs and it impacts our reality, then we would make a difference. But I think the problem is, is not even knowing that it's even there. **Lorne Brown:** Yeah. 
Well, of course, and I don't know if the age has changed, but it was my observation that around age 40, people start to realize that they need to do their inner work. the drug doesn't work anymore. The antidepressant isn't working, [00:17:00] or they're in a third relationship. It's not working. They change cities. Like it's not working. The changing the outside is only temporary. So somewhere around 40, maybe it's younger now cause things seem to be speeding up, but around age 40 people come in there and they don't know what they're looking for, but they know they're looking for it. And you and I have language for this, right? They're looking for inner work, conscious work, but they kind of know that I know by getting a new relationship, it's not going to help. I got it. Something's not right. about me. And I, you know, I'm going to give an example because the relationship one comes up a lot in my practice when people come and see me. and I share this as an example of self sabotaging programs and why I like the conscious work. And we can talk about how this plays with fertility as well and baby manifestation. This actually wasn't my patient, but it was somebody who shared it. And I loved this case so much because it, it really is a great explanation of of belief change. So She was around 45. She was a lawyer and she had become aware that she was somehow sabotaging relationships. No matter what [00:18:00] relationship she went in, like she would find some not such great guys in her opinion, but she actually realized she found some good guys too. But for some reason, even she knew there was a button and she, she knew she shouldn't push that button, but she would push the button even in her mind when she knew this isn't going to work out. And the, and the relationship would collapse. So at her clinical hypnotherapy session, She got regressed and in this regression, she's experiencing herself as a four year old and she's remembering her mom is making dinner for her and her older sister was around seven and she promises the girls that they get popsicles if they eat all their dinner. So her older sister. Eats her dinner fairly quickly and gets a popsicle. And she, she being for living in that theta brainwave living in the moment, it's not eating quickly. And all of a sudden she sees her sister with a popsicle and she goes, I want a popsicle and her mom's tired end of day. And she angrily says, no, you haven't eaten your dinner. You don't get your dinner to you. You don't get your popsicle till you finish your [00:19:00] dinner. And it probably wasn't said in a loving way. And this triggered the four year old. And like many four year olds, she got. You know, she had a little four year old temper tantrum, and that set off her mom, and then she got sent off. To her room without dinner and without popsicle. And in her story, she's thinking in her dialogue that mommy likes, mommy likes and loves my sister more than me. Mommy doesn't love me. I'm not lovable. And she has this aha moment when that program really started for her. I'm not lovable. Now, remember I said the subconscious and the conscious want congruency. The heart and mind want congruency. When it conflicts, the heart, the shen, the subconscious, wins. And so, she would have a relationship, and if this guy was doting and loving her, her subconscious goes, that's not who we are, we're unlovable. And she would Consciously or unconsciously sabotage the relationship. 
So in hypnotherapy work, we're able to bring her 45 year old self back and reparent doing her [00:20:00] child work and shift that. And I often say in my practice, I have a an approach. Notice, accept, choose again. Notice everything is neutral and we give it meaning. Neutral. She just did not get a popsicle. Neutral. The meaning she gave it was I'm not lovable, right? And children that are in theta, meaning they're in, they're sponges. They don't have that prefrontal development to discern things. They just take things in and we don't know why. But you know, if you're a product of divorce, which a lot of people are It's usually for the children. It does some form of scarring, subconscious scarring, right? Because the children feel like they're responsible. It's their fault. So guilt shows up or shame shows up. Not safe. So all these programs come up and when I distill them down, I see people that are worth hundreds of millions of dollars. I see people that can't afford my services, right? And based on what they get paid, right? And when you distill it down, the stories are, can be very different, but when you still it down, it's I'm not enough, right? I'm not lovable. [00:21:00] I'm not pretty enough. I'm not thin enough. I'm not smart enough. It's kind of, I'm not enough when you distill it down, whether you're worth a couple hundred million or whether you're scraping things together. So. Notice everything is neutral. We give it meaning. And when we believe in the story, we make it real. So this is not to believe in the story. And that's kind of that materialistic side, right? And we use these tools conscious work to go in and clean up the operating system. And here's an important point I want to share with our listeners is You know, you have this hardware, but the hardware functions depending on the software and I got multiple stories like this, but I'll give you a couple, you know, they have done research on those with multiple personality disorders and depending on the personality, right? One will need reading glasses. One will not. One's blood tests will be diabetic and the other one will not. Right? I mean.  **Michelle Oravitz:** to orange juice. **Lorne Brown:** Yeah, when we allergic not so same physical body. So from a journalistic point of view, this makes no sense, but from a quantum perspective, it does. Right. And and we've heard people [00:22:00] with near death experiences. I've, I've heard through a colleague of one before, and I just, I'd met one recently, actually, and she's written a book on it, Anita, where she, yeah, it's great, right? **Michelle Oravitz:** Yeah. Yeah. **Lorne Brown:** So, you know, her story is she. Developed cancer, funny thing, not so funny, but she always had a fear that she would die and get cancer. So, you know, you got to be careful where you're putting your focus, right? She did everything she could to not get cancer. She got cancer and she was ridden with tumors and she's in the hospital and her husband's by her side. And the story goes that she goes unconscious. So they tell her, say goodbye. She, this is it. She's, you know. She's going to die and she's got, they got on some medications too, I believe for pain relief. And I think it was a day or two later, she opens her eyes and she has an experience of a near death experience where we won't go into it today where she sees other. 
Family members are beings, but not the personalities like she just knew who they were, but she realizes she's coming back and she knew she was coming back [00:23:00] different. It wasn't like a full lobotomy, like 180 degree turn, but she had a personality change, right? And she knew her cancer is gone. And when she woke up, she tried to convince her husband her cancer was gone. And he's like, you know, no, you know, they got the doctors. She was able to re Share stories of conversations that they had outside when she was in the coma in another room. She forbade him. She could, you know, she knew what the doctor's shoes look like, right? Everything. So **Michelle Oravitz:** that's that bird's eye view. **Lorne Brown:** she was outside the body, but her cancer went away without any medication. After that, she woke up from a coma. And her cancer just resolved herself. So there's that personality. So her personality changed and her physical body changed, right? Because of this and going back to our friend Joe Dispenza, Dr. Joseph Dispenza and your listeners check out his book. They're supernatural the placebo and breaking the habit of being yourself. That's a really good one breaking the habit Right. It's a good one to start with. He talks about you can use matter to change matter, which can be slow. That's for our fertility patients taking supplements. [00:24:00] That's IVF, that's diet matter, change matter, or you can use energy to change matter, which can be spontaneous. Like what happened with Anita, which when her cancer went away, right? Is it went away pretty quickly, right? **Michelle Oravitz:** There's people with well, we see it all the time at Joe Dispenza's work stage four cancer. It just, it goes away. **Lorne Brown:** Yeah. So that's working with a different, dimension of yourself, right? If you want to speak. So the conscious work that I use is how to tap into that, how to tune into it. And it came from my experience, right? I, I've learned this and developed this from many people I've studied with. And I'm a kinesthetic learning. That's learner. That's why I've learned psych KFT, Marissa peers, rapid transformational therapy, Ericksonian The guy just. Love it, right? I think it started from insecurity. Not enough, not smart enough. So I kept on doing things which brought me my success outside, but inside it wasn't enough. So I kept on learning and learning and learning. And then eventually, you know, you're brought to your knees, which I was. debilitating anxiety. And I go in and do the [00:25:00] inner work and I have the transformation. And then I'm kind of at peace. Don't feel like I need to do too much. But now there's this new drive, this overflowing, wanted to share. It's a different feeling. It's comes from peace. It doesn't exhaust you. Right. And so I think on the outside, if I was looking at me, I looked. Similar as in go, go, go. Always learning, always doing right. But I was coming from fear and lack for many years, my doing and stuff. So my doing just got me more fear and lack because I could never feel that void. Now I'm going, going, going, but it's coming from feeling more whole and complete and I'm not attached whether I do it or not, right? I'm not attached to it so much. And but yet I'm still doing it. But now I feel Charged by it. **Michelle Oravitz:** That's so great. I mean, don't you see the yin and the yang too, in a lot of this **Lorne Brown:** Oh, yes. Yeah. Yeah. 
**Michelle Oravitz:** the harmony, the **Lorne Brown:** Yeah, and you got to keep going into the end So you then you have the young and it happens, right? So, you know, I go inside I become quiet and and then all of a sudden all this [00:26:00] activity and inspire thought comes through me And then I I want to go in and see if I can manifest it, right? **Michelle Oravitz:** Yeah. And everything kind of goes in pulses, you know, there's a, there's pulses, even with like experiences that we have in life, there's ebbs and flows. I think that we get impatient or we think that it's going to be forever, but nothing lasts forever. It's like the good news and the bad news, nothing lasts forever. **Lorne Brown:** Right? Yeah, it's the good news and the bad news. Yeah, in that sense, don't be attached. **Michelle Oravitz:** Yeah, true. **Lorne Brown:** Which is a practice. **Michelle Oravitz:** it is, and it's something that the ancients have been telling us this whole time. They've told us to go within, they've told us not to be too attached, to learn from nature, to learn from what's around us. to flow, flow with it. **Lorne Brown:** And a tip for our listeners, because again, I teach what I've experienced. Many people may be going, well, I've read these books and I know all this stuff and I haven't had a shift. I was that guy where I had read everything and took courses, but I didn't do the process work. I, I conceptually understood it. I could teach it. But I wasn't living it. And it wasn't until I actually did the process work that the [00:27:00] transformation started happening, the awakening started happening. And so that's kind of, you know, with my patients, when I work with them, they want to get in the head and understand, which I love. We got to understand when you understand the why behind it, they say that the how becomes easier. The why is, you know, how does it work? And then the how is, what are you going to do? But if it's just an intellectual discussion you'll have a mind shift. But you won't have a trait change. And what's the difference? A mind shift is that temporary, you feel excited, this makes sense. It feels excited, but it's a shift. It's like when you pull an elastic band apart, it's neuro elasticity, it stretches out, this feels good. But within an hour or two, or a day or two, it goes back to its normal shape. So you haven't made a neuroplastic change, you just made a mindset shift. And if you do that daily, multiple times, it eventually become neuroplastic. And what I mean neuroplastic is if you stretch out a piece of soft plastic and you let go, it stays stretched. So that's the trait change. So repetition or doing many things that create a mind shift regularly often will give you [00:28:00] neuroplasticity changes, right? That hold becomes a trait. That's that, you know, do certain actions over and over again. So that's one way. But then there's other. faster ways to do neuroplastic changes, which doesn't just require repetition. That is one of them, but there's other processes I use. Part of my hypnosis practices and other energy psychology tools is what they're often called now to help make that neuroplastic change, not just from repetition, but from doing these Process work and we call it process work because it's not it's not done. It's a it's a bottom up process versus a top down So i'm not a counselor a therapist. That would be somebody who's doing a top down Let's talk about this and there's some benefit to it. 
The clinical hypnotherapist perspective is a bottom up meaning Your tyra box said this once your issues are stuck in your tissues So when you have these emotions rarely does somebody say I feel it in my head It does happen once in a while. Most people feel it in their throat, in their chest, in their stomach. It's in your cells. And we got science to talk about [00:29:00] how the microbiome changes with stress and emotions. **Michelle Oravitz:** images of people, all people that were angry, all people that were sad. And they would notice that it would light up in certain spots consistently in the body, which is really fascinating. You can probably find it online. **Lorne Brown:** cool. Absolutely. And, you know, we know like we got serotonin receptors in the gut. Now the heart's being known as a, as a second brain may have more what the read off of it more than the brain and, and then dispensa and heart math talk about heart brain coherence. So we're. You know, I look at it this way is, you know, back in the day of Galileo and Newton, the days when we thought that the sun revolved around the earth and the earth was flat, it was hard for society to shift and science to shift, right? Cause everything we understood the way we could look, it was like, no, no, the world's flat. It look at it, you can tell, look, look outside, doesn't look round or look, look, you can tell that. the sun is going around the earth. Look in the sky. It's so obvious. And you [00:30:00] can't tell me the earth is spinning. We would feel it, right? And now today, most people realize that the earth is round, not flat. There are so few flatters out there. They realize the earth is spinning and that the earth goes around the sun. But there's your perception, you know, there's the first floor view. From my view, the sun is going around the earth. I see it rise and set, right? I can see it float around. I'm standing still. I'm pretty sure about it, but that's a illusion. It's not a complete correct perception on that first floor when you go to a higher floor. So in this case, when we go into space, We can see that it's actually the earth that goes around the sun and the earth is round. And then if we go to a higher floor, we're going to probably get a whole other understanding of what's going on in this human experience and purpose and what's your individual purpose. And people have spoken of it. I haven't tapped into that aspect. I've had those. Non medicated, so non psychedelic experiences where I've tapped into profound peace, where I've tapped into bliss.[00:31:00]  I've also, through psychedelics, I've only done it once, so I'll never do it again, where I tapped into my shadow, right? Accelerated my journey, but I wouldn't wish that upon anybody, going into my shadow work unprepared. **Michelle Oravitz:** 'cause if you, you have to be ready for it. That's **Lorne Brown:** I wasn't ready for it. I, I, I cheated. I cheated with psychedelics. And it put me into my shadow grateful now because and here's a litmus test for myself. So I share this with the listeners as well. If you. don't like your life now, then I'm pretty sure you're still living in kind of a victim mode. You don't like your past and you'll have all the evidence to say why you don't like it. And if you can love your past, no matter how bad it is, then I know you love your now. I know you love your life. Why? 
Because You realize that who you are today is based on everything that's happened to you and you and because you love where you are today, you would never want to change your past because you love your day. Doesn't mean you want to relive your past, but you're grateful for. You don't regret it because you love today. [00:32:00] But if you hate your past, then it's I'm pretty sure you really don't love it. your day. And there are some terrible things that have happened to people. And I've seen people who've had terrible acts done to them. They would never ask to go do it again, like, but they also say, I love my life now. And so I wouldn't change anything in my past. So that shows you that's healed, right? That vibration that's healed. And so, because there's only this moment. So I find conscious work powerful when you bring it to reproductive health. I want to quote our Randine Lewis friend who wrote the book, The Infertility Cure, many books, but I remember hearing her talk about when women get into a later stage of their reproductive years, especially into their forties she said, you know, at the beginning, you know, reproduction is, it's a, it's a youth game, Jing, we call it essence Jing, it's the physicality, right? You got to have good physicality and it, and that happens with the youth. We see it around us, right? Like, a 90 year old and a 20 year old, the same person or different [00:33:00] physically. But there's something about spiritual maturity and sometimes, and this is where it kind of ties into Dr. Jo Dispenza, matter change matter. So that's the physical, the Jing. And then there's energy that can change matter. And that's what we call the Shen, the spirit tapping into that consciousness. And she says, when you're younger, you can be spiritually mature because you have such good Jing, it overrides everything. And so you can be a drug addict. And you're 20s and getting pregnant all the time, right? Poorly eating, all that stuff. And then if you get into your 40s, the physicality you want, but it's not enough, you need to, as she said, have your shit together. So that's, I'm quoting her. And sometimes that's when we see what we call miracles. It overrides the physical. And you really need to do that spiritual, the spiritual maturity happens. And so, you know, have both. Add to that her excitement with donor egg back in the day when we were having this conversation was she couldn't wait to meet the Children that were born through donor egg cycles because she [00:34:00] says currently this was way back when in early 2000 people were born with either young mothers, so physically strong, spiritually immature. They're in their twenties, early thirties or they're born with women in the early forties. physically not as strong, but spiritually more mature. So they didn't have both. She goes, but with the donor egg cycle, they get the gene from the, the egg. So a physical, physically strong, younger woman, and they are gestated. And raised by spiritually mature women. It's going to be the first time where they get both strength from the physical and strength from the spiritual. So she was quite excited. It was a different perspective to look at the Dorae. She was like, I wonder what kind of children these are going to be, right? So,  **Michelle Oravitz:** amazing. And actually it's really interesting. I don't know if you've seen this yourself, but sometimes the donor egg and the child looks like the mother. **Lorne Brown:** yeah, well, not surprising. 
I, I, I can't quote you on this, but I remember that they've done this in animals where you put him in a different, like, I don't know, [00:35:00] a donkey into a horse or something like, and it comes out looking more like the the mother. Like the, the horse. So, because don't forget you start as, you know, You know, a bunch of cells, right, you know, when you go in and you're grown, so you are influenced because you're, you're taking in in Chinese medicine talks about this, the emotional well being of the mother during pregnancy will impact the nervous system and the emotional personality of that child. And so what you're eating and what you're doing is helping grow that child. So we have what we call prenatal Jing, you know, for our listeners. So you get that from the mother, the father, and then. throughout pregnancy. And then postnatal Jing is what you, what happens after you're born. So your diet lifestyle. And so everything is impacting you up until you're born. That's what we'd call your genes. And in Chinese medicine called pre pre pregenetic destination, right? Prenatal, prenatal essence. I don't know if I said, if I use the right word, prenatal essence or prenatal Jing is what happens. So, yeah, I love [00:36:00] that story that she looked a little bit like the mother, not surprising. **Michelle Oravitz:** Yeah. And I've actually seen it because I, one of them she's somebody that I'm friends with on Facebook and she's also been on the podcast, Nancy Weiss. She's a spirit baby medium, is a whole other **Lorne Brown:** Yeah. **Michelle Oravitz:** topic. Right. But she. donor embryos and one of her daughters, she put a side by side picture of herself when she was younger and the daughter, and it was crazy. How similar they looked and then I've heard another story of somebody with freckles that she's had freckles But the mother of the donor did not and her husband did not So she always wanted a child with freckles and sure enough one of them got freckles  **Lorne Brown:** Very cute. Yeah, And that, there's so much things we don't understand and the donor egg cycle, I don't know if you've seen this, but with my patients, they only have one regret and it's a great regret that I've always heard when I've heard any regrets, I don't hear it often, but I hear it [00:37:00] and they say that the only regret I have is that I didn't do this donor egg cycle sooner because I don't, I realized I could have been with this baby I, I waited, I, you know, cause they're doing other things and understand there's a process to come to this place where you're ready to do donor a. But that's a great regret. Meaning they love this baby like from day from day one implantation, right? They have this connection. They're their mother. And and. It's, it's, that's great news, right? Cause so many people understandably have to get their head around about not using their own genetic material, right? And when you get there, when you surrender, which is part of conscious work, right? And the resistance drops and you get into flow and receptivity, the experience can be beautiful. And then regardless, even if you don't, when that baby's born, you're like, what the heck? I've been waiting for this forever. **Michelle Oravitz:** Yes. And that's another thing. So looking at the same thing from different lenses and different perspectives, and then you can kind of think, [00:38:00] okay, I may have wanted it to go this way, but perhaps it can go another way. 
And I'll still get the end goal, which is really to become a mother. **Lorne Brown:** Yeah, that's the end goal. And that's what we want to focus on. And from the conscious work, you know, we, we hear so often in manifestation work and in teachers of consciousness, not to be attached to form an outcome. And I'm a practical guy. So the left brain, my math background, my accounting, I'm, what I would say my feet are on the ground and my header is in the clouds, not just, you know, some people either their head in their clouds. So some people in our industry just head in the clouds. So it's hard to bring it to this earth or my old profession as a accountant, the feet are on the ground, right? I feel like I'm, I'm doing both of that.  So. I want to share this because this worked for me. And again, I often share is, you know, it's easy to say don't attach to form an outcome. That's easy to say you're not the one that has, you want this form an outcome. So it's, you can't fool the universe. You can't pretend, right? Really pretend, but you can do [00:39:00] practices. And I have found this line and I didn't come up with this. I heard this from somebody else and I was like, brilliant. And it works for me and it's worked for hundreds of other people I've worked with this or something better. Yeah. I want this or something better that had such a different vibration to it because you didn't choose your desire So I will never say you can't have you can't want this You can't desire this because you didn't choose it. I I prefer chocolate ice cream over strawberry. I can't tell you why it's just it is I just like I want chocolate ice cream. I don't really want strawberry ice cream. It's just What is, and so, but when you have a desperate need for it, that if I can't have this, then you create resistance and that impacts the field and that cannot be healthy. But if you have a desire, you want it, but you also know you're going to be okay, whether you have it or not, that doesn't add resistance to the field. And so often we, cause if you get focused on has to be this way, then you're not leaving yourself open to other things that [00:40:00] can bring you that same experience. Right? Because what does the baby bring to you? Right? You know, why do you want the baby? What's it gonna bring? What's gonna be different? What are you gonna experience? You know this kind of work, right? Because then you could get little, I call them Drift logs or kisses on the cheek from the universe where you know what it feels like you're practicing what it feels like and it's This or this or something better and then all of a sudden it that same experience comes to you But it's a different manifestation physically. So you're like, oh You know getting that feeling and so you're you're starting to get it from other places as well You're experiencing it. And when I say get it from other places I want to use that loosely is you have learned to Elicit that experience inside of you and then you're starting to see it manifested on the outside so because you don't want to have to get it from the outside because again, then you're not whole and complete This whole work is about becoming whole and complete where it's cut. You are it's It's you're making it inside of it. You're tapped into a part of yourself higher than I guess the ego self to use that language. And then it becomes fun to [00:41:00] see if you can manifest it on the outside, but you're already experiencing the feeling. 
Hence it's easy not to be attached because you're already feeling the joy or the love or the nurturing of something else, right? And the being of service to something else, you're already bringing up that experience. So you don't need it on the outside, but then all of a sudden you see it on the outside and that just bumps it up a bit. It amplifies it. And so you get, but it's temporary, that amplification. And then when you come back to your set point, that set point is peace and joy anyhow. So you're good. **Michelle Oravitz:** So it's unconditional peace and joy. It doesn't have a condition on it. You choose to just have that. **Lorne Brown:** Yeah. **Michelle Oravitz:** you can, and I think that that's the big thing is that people don't realize that they can actually do that. They could bring it up through just meditation and different practices that they can bring it up in themselves. **Lorne Brown:** Yeah. You tap into that. And I mean, I've, I've had that. I have glimpses. I have experiences of it. And for now the language is I'm, I'm tapping into my true nature and everybody has this true nature, your witness consciousness, your higher self, you want to give it a word. [00:42:00] And. I think we might have talked about this when I interviewed you on the Conscious Fertility podcast, but it's not all positive. It feels good. You still get uncomfortable feelings. You're just not at the full effect of them. So you experience the sadness. You can experience fear. You can experience guilt or hopelessness, but it moves through you like a song on a radio, 90 seconds, and it passes through you. And then you're back to that peace. And So if you're able to not get into the story and you can experience it, you still feel these uncomfortable feelings, but there's a, there's could be an underlying peace or even beauty behind some of those feelings. You're just not at the full effect of them and they just don't last for, for weeks. **Michelle Oravitz:** Yeah. Well, the untethered soul, I think that was like a big game changer for me, that book **Lorne Brown:** Michael Singer's book. Yeah. **Michelle Oravitz:** Singer, he's amazing. And I think that it really was about like allowing discomfort to happen without judgment, without that kind of good or bad, that neutrality, just kind of allowing it to happen. And I have an [00:43:00] example because I burned myself. I remember it was a Friday night and I was exhausted. I was so tired. I couldn't wait to sleep. And I burned my thumb. was like, man, and it was a stupid thing. Cause I was so tired and I touched something and I knew I shouldn't have done, it was just like, without thinking. And I was like, how am I going to sleep with this burning sensation? It was like the worst feeling ever. You know, it's like when you first burn yourself. And I remember thinking to myself, maybe it was like my higher guidance, something resist the burn. So I was like, okay, let me try this. literally felt, I closed my eyes and like, I imagined myself just kind of going through the fire with my hand and almost. Accepting it, inviting it, allowing it. And literally within five minutes, the burn went away. **Lorne Brown:** Yeah, and that's the quantum. 
That's energy changing matter and you use the awesome word resistance Right resistance is futile to quote the Borg from Star Trek Resistance is futile for those Trekkies out there When you add resistance basically you amplify the burn you amplify the [00:44:00] suffering or take from the Buddhist quote pain is inevitable the burn hurts Suffering is optional. That's where you amplify and when you can lean into it versus it's counterintuitive because we should run away from it. We think, right? And I had that similar experience in the nineties. I I had read, I read dr joe dispenses book, but I didn't understand it. I kind of read it, but Didn't catch very much of it the first read and one day when I was studying to write the exams to become a chartered accountant, a CPA I had sadness come over me real, and it was a new thing. I wasn't something I really experienced this kind of sadness that I could recall. And I don't know why I did this, but there's again, another part of you leading the way here. I decided to, in the middle of the day, I had shared accommodations. I was living with a female and she had Yanni and the Ghetto Blaster. Back in the day, it was Ghetto Blasters. with cassettes, maybe CDs. She had some incense burners. So I lit that and there was like lavender rose in it. And I went in the [00:45:00] bath and just decided to experience the sadness. So as I'm listening to the sad music, there's some incense and candle lit in the middle of the day in the bath, hot bath. I'm so going into the sadness. Tears are rolling down my eyes. And in a moment I'm in full bliss. Like I'm like bliss. Like. But I I don't do drugs, but what except for that psychedelic experience, what, what a good high would be like, it was like, and honestly, if that's what it feels like, I understand why people would do drugs. It was just bliss. And I'm like, you know, try to be sad. Because I was like, this feels great. Can I be sad? I couldn't be sad. And it was only later I had that experience first. And then I read dispenses book. Sorry, not just Ben's, Eckhart Tolle's book, Eckhart Tolle, The Power of Now is what I meant. And the line where he says, you, when you're present, you can't suffer, because when you're regretting the past or fear in the future, you're not in the present. But if you're in the present, he says, even sadness can be turned into bliss. And when I read that line in the book, [00:46:00] I had my aha moment because I had that experience. And now the process that I do in my conscious work is about lowering the resistance. Somebody says, what are you doing? You're tuning into your, your wist witness consciousness. You mentioned Michael Singer, the untethered soul. He often says he doesn't use tools or do tools, but he kind of does. And and I have a process that I believe brings down the resistance. My experience, people, I've worked with and then you have that flow and receptivity and sometimes I just have peace. Maybe it's at, you know, if my, if I'm frustrated or fear, it's a seven out of 10, it'll come down to say a two or one. So peace in an unhappy situation still, right? But peace. So the resistance is low. Yeah, **Michelle Oravitz:** flow in that moment. And it's interesting because I, my litmus test is, are you present? Really? That's the question. I, a lot of people that I work with is, are you present? Like, cause many times when they share things that are uncomfortable for them, they're not really in the present moment. 
They're either [00:47:00] expecting a future or thinking about a past or something that happened. So the present moment's always the antidote. To everything. If we **Lorne Brown:** present. And that's what the mind does. It's the nature of the mind. You can't get mad at the mind for thinking because that's its nature; it would be like getting upset with water for being wet, right? It's its nature. So you're fighting with reality. However, there's tools to help you get present, and these uncomfortable feelings can become portals to presence. Right. And you're not wallowing in them and embellishing them, you know, you're not inflating them. You're leaning into them and observing them. So I think what's happening, my experience, my understanding to this point, is when we really get practice at noticing and observing them and accepting them, I think we're tuning, we go into the present moment, but we do this by tuning into our witness consciousness. Because the mere fact of witnessing them, not "it shouldn't be this way, it's not fair," like getting into the head. But. **Michelle Oravitz:** neutral watcher. **Lorne Brown:** get into the watching, just getting practice at watching, then you [00:48:00] tune into your witness consciousness, and that nature of you is peace and joy. So you tune into it. So wherever you put your energy is what's going to grow. So if you believe in the story and you're at the effect of the story, then you're unconscious and you're experiencing it. You're suffering right now. You've amplified the negative situation. If you're able to observe it, I'm not saying you'll like it, we're not doing a spiritual bypass here, but getting practice at observing it, I believe you tune into the witness consciousness, and its nature is peace and joy. And the metaphor I use for this, Michelle, is, well, tell me how this lands for you, and I'm curious for your audience, because this for me was another aha moment, just like, what's going on here? Because I'm having these experiences and I want to have language to share with the people I work with. So if you buy an apple, you have to consciously, you Michelle, ego Michelle, has to pick up the apple and chew it. But after that, Michelle, you're not going to release salivary enzymes in your mouth. Like, I got to do that. Nobody talked to me. Nobody talked to me. I'm getting acid into [00:49:00] my stomach now. Okay, I cannot walk up the stairs because my intestines are now absorbing all these B vitamins. Or same thing when you sleep. When you go to sleep, you're unconscious. You're not breathing yourself. You're not pumping your heart, circulating your blood. Your autonomic nervous system is doing this, another part, your subconscious program, is doing this, right? The autonomic nervous system. Well, same thing. I don't believe for me that I let go of these programs or emotions anymore. Not Lorne Brown, ego. Just like I don't release the salivary enzyme. All I have, I believe, is my witness consciousness doing this. It's what's metabolizing these uncomfortable feelings and old programs. And how do we do this? Well, first you have to make the unconscious conscious. So that's my notice step. Everything is neutral and then we give it meaning. Don't believe in the story. When you do, you make it real. So don't take it personally. Then I have multiple tools during the accepting part to surrender to what is, not fight it. Doesn't mean you're resigned to it. Doesn't mean you like it. We're just accepting that this is how I feel right now.
And you [00:50:00] accept it and you start to observe it and get really, this is a skill. You get practice at observing it. And by that observing, you tune into the witness consciousness, and it is what lets go of the feelings. It's what metabolizes it. So, so. It's the intelligence. And so give it a name, consciousness, the divine. I don't know if it's a part of me or not. I don't know. All I know is Lorne Brown is not doing it. Just like Lorne Brown gets to choose to bite the apple, Lorne Brown gets to choose to notice, not take it personally, and observe it. That's all I do. The digestion of the apple is outside of my ego, my conscious mind, and so is the digestion and the alchemy of these emotions where I was sad, went from sad to bliss. Right, or go from fear to just feeling at peace. I'm not doing that, I don't believe I let go of it. And this ties into Michael Singer's work. He says that these, I don't know what he calls them, samskaras or something, these energy blocks, they're [00:51:00] there, so you're not experiencing your true nature. You're all blocked up with these old programs and beliefs and feelings, but when they get released they move up and out. You have this space now where you get to experience yourself. So that's how he describes it. I mean, the metaphors and the concepts, yeah, the bottom line is you've got to do the work. That's my point. It's nice to understand. A lot of us cannot confirm or prove anything, but when you have the experience, you don't care, because the experience is peace, and peace was nice. **Michelle Oravitz:** It is. **Lorne Brown:** I'm not at the, I'm not at the state, I'm not at the stage where I can equally treat fear and, and peace or fear and love together. Like some people say you get to a place where you don't, you don't judge either. You're, they're just vibrations. You're okay. I definitely prefer peace and joy and bliss over fear, shame, guilt, just so you know. Yeah. **Michelle Oravitz:** really our true default **Lorne Brown:** Yeah, **Michelle Oravitz:** is in that nature and that's the Buddha [00:52:00] nature. That's kind of like **Lorne Brown:** yeah, **Michelle Oravitz:** like form and we learn the other things. **Lorne Brown:** yeah, **Michelle Oravitz:** habituated through habits. So bringing this into fertility, which I think is actually very relevant, even though, you know, it's kind of like this big grand concept, it could totally apply to going through IVF, going through the resistance. And also in the IVF, you get so focused on the numbers and the analytical, where sometimes you need to kind of move back and allow yourself the space and to really take care of your wellbeing. And that's kind of like my big thing about that, which always tends to kind of fall on the back burner. **Lorne Brown:** Yeah, yeah, you're going through the journey and anyhow, so that's the whole thing: pain is inevitable, suffering is optional. I don't think anybody would want to go through an IVF. However, if you're going through it, you could go kicking and screaming and suffer through it, or you can go through it and not amplify the difficulties of it. And that, again, is a skill set, because [00:53:00] IVF is not easy. As you know, the research shows infertility is like getting a cancer diagnosis or a terminal diagnosis. So I want to clarify that we're not dismissing it. The conscious work is about being authentic. It's actually about feeling your feelings.
However, with a different lens and developing a skill set, a process, so you can metabolize it, right? But yeah, if you're going to go on this journey, if you're in this journey, you didn't choose it, but you're in it. And so how do you use it as, as they say in the conscious teachings, how do you make it as, how is this happening for you versus to you? What does that mean? How do I get out of victim mode? Because it doesn't serve you to being accountable, responsible. What does that mean? Accountable responsible does not mean you blame yourself or you blame other accountable. Responsible means that if you're having the experience, then that's all you need to know that you're responsible for healing it because you're the one having the experience. If you if you it wasn't your responsibility, then you wouldn't be having that experience. And there's so many experiences [00:54:00] happening around the world at one time, and each individual is only aware of so many the ones that they're aware of that are triggering them that they're experiencing. That's, that's all you need to know that that means you're accountable, responsible for that. The stuff that's happening around the world that doesn't trigger you, it's not your responsibility to do the inner work around it. **Michelle Oravitz:** Yeah. Well, I mean, I can keep talking to you forever and of course we just talked about one subject, so perhaps I'll bring you back for other ones as well. But this is this is definitely the kind of thing that I'm very interested in and I nerd out on this all the time. It really is something I think about every single day. I think that it is when you really are bringing up your consciousness and becoming more aware in your life and. Really being the creator of your life or owning that you are a creator in your life I just think it brings another element of purpose and meaning everything. **Lorne Brown:** Yeah. We all want to be happy. And we think different things outside of us will make us happy. This work brings that kind of [00:55:00] happiness. And if, to kind of wrap this part up on consciousness from the materialistic and then the quantum perspective, you know, when we, when we're unconscious, or when we're in that state of fear, we don't feel safe, right? Then our body goes into survival mode, right? The fight or flight. And so, our resources are not available for healing. creativity and reproduction because they're in survival mode, you know, blood gets drained from the, the thinking brain goes, the blood gets drained from the digestion reproduction. And so, but when you feel safe, which is what conscious work is, so here's on the material level, you free up resources for healing, creativity, reproduction. And we know this, that the unsafe hormones of cortisol. and adrenaline and epinephrine, all those things affect inflammation, the body, the effect, your immune system, your hormonal system, your gut microbiome. And when you feel safe, you're releasing the

VISLA FM
More Breaks - Tabris with Kessler (Elicit) 02.24.25 | VISLA FM

VISLA FM

Play Episode Listen Later Feb 24, 2025 117:58


More Breaks - Tabris with Kessler (Elicit) 02.24.25 | VISLA FM by VISLA

Legal Issues In Policing
E93| Lengthy drive to elicit confession. Starlight tour or legitimate police procedure?

Legal Issues In Policing

Play Episode Listen Later Feb 6, 2025 29:06


Provide your feedback here. Anonymously send me a text message. In this episode, Mike discusses the Manitoba Court of Appeal decision R. v. Pietz, 2025 MBCA 5, where police arrested a man in relation to the presumed death of another. After unsuccessfully trying to obtain a confession from the man, police took him for a lengthy drive in an effort to locate the victim's body. During the ride, police kept the man in handcuffs, used offensive and profane language, and did not provide him with shoes, a jacket or a blanket while he was outside the police vehicle in chilly weather. Did the man's removal from police headquarters in the middle of the night without his consent — along with the conditions of the ride — render the detention arbitrary under s. 9 of the Charter? And was an additional s. 10(b) advisement about the right to consult counsel required for this procedure? Listen now and learn a little — or a lot!
Lower court ruling
The 2025 International Use of Force Expert Conference, April 29-May 1, 2025: This conference is designed for professionals who have an interest in developing a deeper understanding of this subject matter area, or in building the foundational skills towards becoming a court-qualified use of force expert.
Thanks for listening! Feedback welcome at legalissuesinpolicing@gmail.com

#dobetter Pod
Do Better Pod Live Oct 2024: Motivational Interviewing with Callie Plattner

#dobetter Pod

Play Episode Listen Later Jan 28, 2025 48:40


In this episode, Dr. Megan and Joe interview Callie Plattner about Motivational Interviewing!
AI SUMMARY FROM FATHOM
Key Takeaways
- Motivational interviewing (MI) is an evidence-based communication approach that can significantly enhance therapeutic relationships and outcomes in behavior analysis
- MI skills (OARS: Open-ended questions, Affirmations, Reflections, Summaries) require intentional practice but can be transformative for client interactions
- Recent research shows strong social validity for integrating MI into ABA training and practice, addressing known skill deficits in therapeutic alliance building
Topics
Background and Relevance of MI in ABA
- Multiple recent studies (2018-2023) highlight BCBAs' lack of skills in building therapeutic relationships
- Less than 6% of surveyed BCBAs had practical training in MI-related skills during education
- MI has extensive evidence base in other helping professions (e.g., addiction treatment, healthcare)
- Key MI outcomes align with ABA needs: increased treatment adherence, goal clarity, session attendance
Core MI Skills (OARS)
- Open-ended questions: Elicit detailed responses beyond yes/no (e.g., "Tell me about your weekend")
- Affirmations: Recognize client strengths and efforts (e.g., "You're clearly a dedicated parent")
- Reflections: Demonstrate active listening and check understanding (e.g., "It sounds like you're worried about...")
- Summaries: Synthesize key points and transition topics
Implementing MI in ABA Practice
- Pause before offering solutions; ask additional questions and reflect to ensure full understanding
- Use OARS flexibly, not necessarily in order
- Practice in various contexts (e.g., emails, casual conversations) to build fluency
- Adjust approach based on individual client communication preferences
Research and Training Developments
- 100% of BCBAs in a study agreed MI skills should be developed in the field
- Organizations like Mosaic Pediatric Therapy integrating MI into staff training and onboarding
- Growing number of ABA-specific MI resources and conference presentations emerging
Next Steps
- Attendees encouraged to explore MI resources shared during the session (articles, books, recorded trainings)
- Consider attending upcoming MI workshops (e.g., Stone Soup Conference, Maryland ABA in December)
- Potential future Do Better workshop on MI with Callie Plattner in 2025
- Field to continue work on incorporating MI into ABA coursework, fieldwork, and continuing education

Business for Creatives Podcast
Clients Ghosting?

Business for Creatives Podcast

Play Episode Listen Later Dec 3, 2024 34:09 Transcription Available


Send us a text
Boo! Have you ever had a client on the hook and then poof! Gone...

Fit Biz U
FBU 452: How to Elicit More Social Proof + Testimonials from Clients

Fit Biz U

Play Episode Listen Later Dec 2, 2024 23:51


Getting testimonials and social proof is hugely important for your online fitness business, but sometimes it can feel embarrassing or overwhelming to ask your clients to write out or make a video talking about their transformation. Today, Jill is sharing her best strategies for getting feedback, including weekly and monthly check-ins, video testimonials, and case studies. Testimonials aren't just good for you—they're good for your clients, too, because they remind them of how much progress they're making. Getting social proof doesn't have to be a huge ordeal; with Jill's tips and strategies, you can start getting powerful testimonials this week.
Get on the Interest List for FBA: https://jillfitfree.com/fba-waitlist/
Jill is a fitness professional and business coach who effectively made the transition from training clients in person and having no time to build anything else to training clients online and actually being more successful. Today, Jill helps other coaches to do the same.
Connect with me!
Instagram: @jillfit | @fitbizu
Facebook: @jillfit
Website: jillfit.com

Update@Noon
"Our mandate is to prevent, combat and stop elicit mining activities. It was not for people to starve to death" - Police welcome High Court ruling on illegal miner standoff

Update@Noon

Play Episode Listen Later Nov 25, 2024 17:42


The Gauteng High Court in Pretoria has dismissed the main application launched by the organisation Society for the Protection of our Constitution amid the operation by law enforcement officials to have suspected illegal miners resurface at the Stilfontein mine shaft in the North West. During the hearing last Thursday, the applicant argued that the intercepting of essential goods by police puts the suspected illegal miners at risk of dying from starvation and experiencing dehydration. They also argued that the illegal miners were being denied access to medication for chronic illnesses. Zoleka Qodashe is here in studio to tell us more...

Know Your Physio
Eli Wininger: Running 251 Miles Through Moab, Pushing Limits to Elicit Neuroplasticity, and Pursuing Goals Beyond Yourself

Know Your Physio

Play Episode Listen Later Nov 4, 2024 86:16 Transcription Available


Send us a text
In this compelling episode, I sit down with Eli Wininger, an ultramarathon runner, former IDF Special Forces commander, and a profound advocate for resilience and purpose. Known for his feats of endurance, Eli's journey takes us from the battlefield to the grueling landscapes of ultramarathons, revealing the depth of his mental and physical fortitude. His latest achievement—a 251-mile race in Moab dedicated to each of the hostages taken on October 7th—demonstrates how he channels his experiences into a mission much larger than himself.
Our conversation delves into the mindset of an endurance athlete and the emotional significance behind Eli's relentless pursuit of challenges. Eli opens up about the importance of finding a powerful "why" to fuel our actions, whether that be through personal growth, health, or setting an example for others. This episode is a must-listen for anyone seeking to deepen their understanding of perseverance, purpose, and the power of a committed mind. Eli's journey is both inspiring and humbling, providing a roadmap for pushing beyond self-imposed boundaries.
Looking to discover your science and optimize your life?
APPLY FOR HEALTH OPTIMIZATION COACHING: https://calendly.com/andrespreschel/intro-call-with-andres
Links Mentioned in Today's Episode:
Click HERE to save on BiOptimizers Magnesium
Key Points From This Episode:
Experience in conflict. [00:06:09]
Humility in warrior mindset. [00:12:51]
Intrinsic motivation in training. [00:17:27]
Neuroplasticity through challenging experiences. [00:24:24]
Embracing the pain cave. [00:30:08]
Collective joy through physical activity. [00:39:44]
The benefits of pushing limits. [00:46:11]
Breath work and athletic performance. [00:52:10]
Running for a greater purpose. [01:02:05]
Race strategy and aid stations. [01:08:47]
Finding purpose in suffering. [01:13:39]
The importance of breathing. [01:19:04]
Leadership accountability and credit. [01:22:45]
Mission for the hostages. [01:25:04]
People
Eli Wininger: Instagram
David Goggins: Instagram
Dr. Kelly McGonigal: LinkedIn
Peter Attia, MD: Official Website
Places
Bring Them Home Now Foundation: Official Website
IDF Special Forces - Sayeret Egoz: Wikipedia: Sayeret Egoz
Leadville Race Series: Official Website: Leadville Race Series
Books and References
How to Make Stress Your Friend: TED Talk
Neuroplasticity: Informative Article: Neuroplasticity - NIH
WHOOP: Official Website: WHOOP
Red Light Therapy
Support the show

I See What You're Saying
Admission Before Confession | Bobby Masano

I See What You're Saying

Play Episode Listen Later Oct 23, 2024 69:56


In this episode, we delve into the nuanced world of effective interviewing with special guest Bobby Masano, a seasoned investigator and Special Agent in Charge at the Federal Law Enforcement Training Center. We explore key techniques such as obtaining an admission before a confession, the strategic use of direct questions, and the paramount importance of building rapport with interviewees. From criminal investigations to everyday business interactions, we uncover invaluable insights that can elevate our communication skills, foster trust, and achieve desired outcomes. Join us as we unlock the secrets to mastering the art of strategic questioning and relationship building.
Timestamps:
(00:00) Introducing Bobby Masano.
(07:28) Military investigates sexual conduct for admissions first.
(11:27) Strategically use yes/no questions with evidence.
(16:30) Using direct questions can reveal truths effectively.
(24:16) Planning disadvantages removes excuses in negotiations effectively.
(27:37) Rapport as conversation, not just a step.
(33:38) Elicit questions for stronger, trustful confessions.
(38:09) Using subtle techniques for rapport-building yields benefits.
(44:22) Respectful questioning led to 15-year conviction.
(47:40) Substance abuse recovery led to mutual respect.
(55:20) Convince them it's beneficial to talk.
(01:01:23) He confessed and worried about HR's reaction.
(01:07:27) Thank listeners; try Humintell training discount.
(01:08:40) Learn, qualify, and advance with certifiedinterviewer.com
Sponsor Links:
InQuasive: http://www.inquasive.com/
Humintell: Body Language - Reading People - Humintell
Enter Code INQUASIVE25 for 25% discount on your online training purchase.
International Association of Interviewers: Home (certifiedinterviewer.com)
Podcast Production Services by EveryWord Media

Trainer's Bullpen
EP39 "Can Virtual Reality Training Elicit Similar Stress Response as a Realistic Scenario-Based Training?” with Dr. Hunter Martaindale.

Trainer's Bullpen

Play Episode Listen Later Oct 15, 2024 63:35


Dr. Hunter Martaindale is the Director of Research at the Advanced Law Enforcement Rapid Response Training (ALERRT) Center at Texas State University and an Associate Research Professor within the School of Criminal Justice and Criminology. In this role, he oversees all research activities for ALERRT, including analyzing active shooter events, conducting active shooter training program evaluations through experimental design, and testing methods/interventions to improve law enforcement decision-making and overall performance. Beyond that, Hunter actively supports other researchers with applied policing projects in an effort to get actionable results to practitioners. In this podcast, Dr. Martaindale discusses his research on virtual reality (VR) training in law enforcement. The purpose of the study was to determine if VR training scenarios can elicit a similar stress response as realistic scenario-based training. The study involved two phases: a scenario-based training phase and a VR training phase. Participants went through a high-fidelity scenario involving professional actors and simulated injuries. The same scenario was then recreated in VR. Salivary measures of stress were collected before and after each training phase. The results showed that VR training was able to elicit similar physiological stress responses as realistic scenario-based, or high-fidelity, training. VR can be a valuable tool for law enforcement agencies and trainers to replicate real-life scenarios and ensure consistent training for all officers. However, VR should not replace in-person training entirely and should be used as a supplement. VR technology has improved significantly, and agencies should actively investigate and incorporate VR into their training programs.
Takeaways
Virtual reality (VR) training has the potential to bridge the gap between law enforcement training and academic research.
VR training can supplement in-person training and help retain skills that may not come up in an officer's day-to-day job.
Measuring heart rate alone is not a reliable indicator of stress response; other measures, such as salivary markers, can provide more accurate results.
High-fidelity scenarios with professional actors can enhance the realism of training and elicit a stronger stress response.
The study found that VR training was able to elicit a similar stress response as realistic scenario-based training.
VR training elicited similar physiological stress responses as high-fidelity scenario-based training.
VR can be a valuable tool for law enforcement agencies and trainers to replicate real-life scenarios and ensure consistent training.
VR should be used as a supplement to in-person training and not as a replacement.
Future research should focus on the long-term effects of VR training on skill development and retention.
The technology has improved significantly, with better refresh rates and reduced motion sickness.
Agencies should actively investigate and incorporate VR into their training programs.

The Foresight Institute Podcast
Amanda Ngo | Innovating With AI for Wellbeing

The Foresight Institute Podcast

Play Episode Listen Later Sep 13, 2024 48:07


Speaker
Amanda Ngo is a 2024 Foresight Fellow. Recently, she has built Elicit.org from inception to 100k+ monthly users, leading a team of 5 engineers and designers, presented on forecasting, safe AI systems, and LLM research tools at conferences (EAG, Foresight Institute), ran a 60-person hackathon with FiftyYears using LLMs to improve our wellbeing (event, write up), analyzed Ideal Parent Figure transcripts and built an automated IPF chatbot (demo), and co-organized a 400-person retreat for Interact, a technology for social good fellowship.
Session Summary
"Imagine waking up every day in a state of flow, where all the knots and fears are replaced with a deep sense of ease and joy."
This week we are dropping another special episode of the Existential Hope podcast, featuring Amanda Ngo, a Foresight Institute Existential Hope fellow specializing in AI innovation for wellbeing. Amanda speaks about her work on leveraging AI to enhance human flourishing, sharing insights on the latest advancements and their potential impacts.
Her app: https://www.mysunrise.app/
Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcasts
Existential Hope was created to collect positive and possible scenarios for the future so that we can have more people commit to creating a brighter future, and to begin mapping out the main developments and challenges that need to be navigated to reach it. Existential Hope is a Foresight Institute project.
Hosted by Allison Duettmann and Beatrice Erkers
Follow Us: Twitter | Facebook | LinkedIn | Existential Hope Instagram
Explore every word spoken on this podcast through Fathom.fm. Hosted on Acast. See acast.com/privacy for more information.

Newsroom Robots
Your Questions Answered: A Live Q&A Session in Collaboration with the Online News Association

Newsroom Robots

Play Episode Listen Later Sep 2, 2024 53:58


In this special episode of Newsroom Robots, host Nikita Roy steps into the spotlight to answer your pressing questions about AI. Recorded during a session with the Online News Association (ONA), this episode covers a range of topics, from ethical considerations in AI-generated content to practical tools that can elevate your work.
AI Tools Mentioned:
Perplexity - A generative AI search engine that provides quick insights on any topic.
Wobby - A data journalism tool that connects to open datasets, where you can ask questions in plain language and get clear AI-generated insights, reports, and visualizations.
OpusClip - Converts long-form videos into short, engaging clips for social media, ideal for repurposing content.
YESEO - A Slack-based AI tool for generating headline suggestions and SEO metadata, widely used in local newsrooms.
Google's Pinpoint - A tool for investigative journalists using AI to search through massive amounts of documents, including handwritten ones.
Natural Reader - An AI tool that reads text aloud with natural-sounding voices, perfect for those who prefer listening over reading.
Whimsical AI - Creates diagrams and visualizations from your data inputs directly within ChatGPT.
Elicit & Consensus - AI-powered search engines for academic research, useful for journalists covering specialized beats like health and science.
Nota - A versatile tool for creating SEO content, summarizing articles, and even converting articles into videos.
GPT for Sheets and Docs, Claude for Sheets - These tools bring AI directly into your Google Docs and Sheets, enabling you to draft, edit, and generate insights without leaving your document or spreadsheet.
If you're interested in learning more about how AI is being implemented in newsrooms, sign up to receive a series of case studies on AI and journalism, researched and written by Nikita in collaboration with the Online News Association. Sign up for the Newsroom Robots newsletter for episode summaries and insights from host Nikita Roy. Hosted on Acast. See acast.com/privacy for more information.

Papa Phd Podcast
Generative AI Opportunities And Caveats in Academia With Priten Shah

Papa Phd Podcast

Play Episode Listen Later Aug 29, 2024 50:04


Join the Papa PhD Skool community! Welcome to this new episode of "Beyond the Thesis With Papa PhD" where we delve into the transformative shifts in academia and beyond. In today's episode, "Generative AI: Opportunities and Caveats in Academia," David Mendes sits down with Priten Shah, an expert in the application of generative AI in education technology. In their conversation, David and Priten explore the rapid evolution of generative AI, from its early implementations in projects like Sanskrit language processing to the watershed moment of ChatGPT's release in November 2022.
They unpack the profound implications of AI in education—highlighting both its immense potential for personalized learning and mastery education, and the ethical concerns it brings, such as plagiarism and over-reliance on AI-generated data. Priten shares insights into specific AI tools like Perplexity and Elicit for academic research, elaborating on practical applications of AI in creating tailored educational experiences.
As we navigate through the benefits and challenges posed by AI, we also examine the crucial need for educators to receive proper training and support to integrate these technologies effectively. So, tune in as we explore how generative AI is revolutionizing the academic landscape while weighing its caveats with careful consideration. Plus, don't miss Priten's thoughts on maintaining authenticity in AI-assisted work and the future of education technology. Enjoy the episode!
PRITEN SHAH is CEO of Pedagogy.Cloud, which provides innovative technology solutions to help educators navigate global challenges in a rapidly evolving world. He is the author of Wiley's Jossey-Bass publication, AI & The Future of Education: Teaching in the Age of Artificial Intelligence. Priten is also the founder of the civic-focused nonprofit United 4 Social Change. He has a B.A. in philosophy and an M.Ed. in education policy from Harvard University.
What we covered in the interview:
Revolutionizing Education with AI: Priten Shah discusses the promising applications of generative AI for mastery learning and standards-based learning, highlighting how AI can create personalized practice exercises tailored to students' unique needs and interests.
Balancing Benefits and Risks: While AI holds great potential, Priten emphasizes the importance of skepticism and ethical considerations. He warns against over-reliance on AI-generated data for research without proper cross-verification and highlights the need for clear guidance on ethical usage.
Empowering Educators and Students: Through tools like socret.ai and various AI research aids like Perplexity and Elicit, AI can significantly support the writing and production processes in higher education, enhancing clarity and efficiency while maintaining human oversight.

Chaz & AJ in the Morning
Pod Pick: Emily from Elicit Brewing

Chaz & AJ in the Morning

Play Episode Listen Later Aug 1, 2024 12:09


Emily Sands was in studio with Chaz and AJ this morning from Elicit Brewing. They'll be one of the participating breweries at the CT Pizza and Brew Fest on Aug. 11 at the Hartford HealthCare Amphitheater in Bridgeport.

Sisters In Sobriety
Mindful Drinking: Derek Brown's Journey from Bartender to Wellness Advocate

Sisters In Sobriety

Play Episode Listen Later Jul 29, 2024 47:59 Transcription Available


In this episode of Sisters in Sobriety, Sonia and Kathleen are excited to bring you an insightful discussion with Derek Brown, a renowned author, NASM certified wellness coach, and founder of Positive Damage. Derek is known for his inspiring journey from being one of America's top bartenders to becoming a leading advocate for mindful drinking and inclusive spaces. Today, Derek shares his personal story and insights on how we can all foster a healthier relationship with alcohol.
We delve into Derek's fascinating transition from a celebrated bartender to a mindful drinking advocate. Key questions we explore include: What led Derek to change his relationship with alcohol? How can we incorporate mindful drinking into our daily lives? What are some practical tips for social situations where alcohol is prevalent? These discussions not only provide valuable insights but also help optimize your approach to alcohol and social wellness.
Listeners will walk away with a deeper understanding of mindful drinking, including key concepts such as intrinsic goal alignment, the RATE (Replace, Avoid, Temper, Elicit help) strategy, and practical steps to make healthier choices in social settings. Derek also sheds light on the evolving landscape of no and low alcohol cocktails, offering tips for creating sophisticated, non-alcoholic drinks at home. In 2022, Brown published his second book, Mindful Mixology: A Comprehensive Guide to No- and Low-Alcohol Cocktails.
This is Sisters in Sobriety, the support community that helps women change their relationship with alcohol. Check out our Substack for extra tips, tricks, and resources.
Highlights:
[00:00:00] - Introduction of Derek Brown, renowned author, NASM certified wellness coach, and founder of Positive Damage.
[00:01:13] - Derek's impressive background: from top bartender to advocate for no and low alcohol cocktails.
[00:01:54] - Derek shares his personal journey from bartending to mindful drinking advocacy.
[00:02:18] - Early life experiences with alcohol, including family struggles and personal challenges.
[00:03:01] - Describing the intense lifestyle of bartending and its impact on his relationship with alcohol.
[00:04:16] - Decision to change his relationship with alcohol and seek therapy and wellness coaching.
[00:04:57] - Explanation of Derek's unique approach to mindful drinking and its personal significance.
[00:05:15] - Addressing the concept of mindful drinking and how it differs from traditional sobriety.
[00:07:25] - Challenges faced during his journey to mindful drinking, including social and career obstacles.
[00:08:49] - The importance of finding better coping mechanisms and improving mental health.
[00:09:34] - The process of facing personal problems without the aid of alcohol.
[00:10:25] - Acceptance and commitment therapy (ACT) and its relevance to mindful drinking.
[00:12:00] - Defining mindful drinking and its connection to personal goals and values.
[00:13:56] - Practical steps for incorporating mindful drinking into daily life, such as journaling and setting goals.
[00:14:50] - The RATE acronym: Replace, Avoid, Temper, Elicit help, and how it aids mindful drinking.
[00:15:51] - Social challenges of mindful drinking and tips for navigating social situations.
[00:18:00] - Derek's views on the evolving culture of drinking, especially among different age groups.
[00:22:09] - Addressing misconceptions about alcohol's health benefits and the shift in societal attitudes.
[00:23:50] - Strategies for managing stigma and embarrassment when choosing not to drink.
[00:26:32] - Positive responses from the bar and restaurant industry to Derek's work and advocacy.
Derek's Links
Website: positivedamageinc.com
Substack: https://positivedamage.substack.com
Instagram/Threads: @positivedamageinc
Linkedin: Derek Brown
Links
Sisters In Sobriety Substack - find more tips, tricks, resources, and community
Sisters In Sobriety Email
Sisters In Sobriety Instagram
Kathleen's Website (Kathleen does not endorse any products mentioned in this podcast)
Kathleen's Instagram

Pantha Politix Podcast
Episode 144: Murdaaa...We Don't Believe You feat. Monster Elicit

Pantha Politix Podcast

Play Episode Listen Later Jul 15, 2024 183:53


Pantha Politix Podcast is a fiercely real examination of the life and times of Black men reared by Hip-Hop culture. Entertaining, engaging, and honest. Hosted by ethemadassassin, Mojo Barnes, and Seven Da Pantha Follow the squad on IG, stream us wherever you listen to podcasts or watch us on Rumble! https://linktree.com/PanthaPolitixPod --- Support this podcast: https://podcasters.spotify.com/pod/show/pantha-politix/support

The Unapologetic Man Podcast
How to Ask Girls Questions That Elicit Emotions, Attraction, Connection, and Rapport

The Unapologetic Man Podcast

Play Episode Listen Later Jun 24, 2024 26:47


You might not realize it at first, but every single question a girl asks you, and every question you ask a girl is an opportunity to demonstrate value and elicit massive attraction in her. There is no such thing as a boring conversation once you always know what to say. With Mark's decades of experience, he's been able to distill this concept into a 3-part question/answer protocol that you can learn and adapt to fit any woman and any question. And in today's episode, you'll learn exactly how to execute it. Apply for Mark's 3-Month Coaching Program Here: https://coachmarksing.com/coaching/ Follow Mark on Instagram: https://www.instagram.com/coachmarksing/ Watch UMP Episodes on YouTube: https://www.youtube.com/channel/UCybix9PZoDgcyyt5hNxPLuw Grab Mark's Free Program: "The Approach Formula": https://www.CoachMarkSing.com/The-Approach-Formula Contact Mark Directly: CoachMarkSing@Gmail.com

Pantha Politix Podcast
Episode 141: Who Raised These People? feat. Monster Elicit

Pantha Politix Podcast

Play Episode Listen Later Jun 24, 2024 158:44


Pantha Politix Podcast is a fiercely real examination of the life and times of Black men reared by Hip-Hop culture. Entertaining, engaging, and honest. Hosted by ethemadassassin, Mojo Barnes, and Seven Da Pantha Follow the squad on IG, stream us wherever you listen to podcasts or watch us on Rumble! https://linktree.com/PanthaPolitixPod --- Support this podcast: https://podcasters.spotify.com/pod/show/pantha-politix/support

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Editor's note: One of the top reasons we have hundreds of companies and thousands of AI Engineers joining the World's Fair next week is, apart from discussing technology and being present for the big launches planned, to hire and be hired! Listeners loved our previous Elicit episode and were so glad to welcome 2 more members of Elicit back for a guest post (and bonus podcast) on how they think through hiring. Don't miss their AI engineer job description, and template which you can use to create your own hiring plan! How to Hire AI EngineersJames Brady, Head of Engineering @ Elicit (ex Spring, Square, Trigger.io, IBM)Adam Wiggins, Internal Journalist @ Elicit (Cofounder Ink & Switch and Heroku)If you're leading a team that uses AI in your product in some way, you probably need to hire AI engineers. As defined in this article, that's someone with conventional engineering skills in addition to knowledge of language models and prompt engineering, without being a full-fledged Machine Learning expert.But how do you hire someone with this skillset? At Elicit we've been applying machine learning to reasoning tools since 2018, and our technical team is a mix of ML experts and what we can now call AI engineers. This article will cover our process from job description through interviewing. (You can also flip the perspectives here and use it just as easily for how to get hired as an AI engineer!)My own journeyBefore getting into the brass tacks, I want to share my journey to becoming an AI engineer.Up until a few years ago, I was happily working my job as an engineering manager of a big team at a late-stage startup. Like many, I was tracking the rapid increase in AI capabilities stemming from the deep learning revolution, but it was the release of GPT-3 in 2020 which was the watershed moment. At the time, we were all blown away by how the model could string together coherent sentences on demand. (Oh how far we've come since then!)I'd been a professional software engineer for nearly 15 years—enough to have experienced one or two technology cycles—but I could see this was something categorically new. I found this simultaneously exciting and somewhat disconcerting. I knew I wanted to dive into this world, but it seemed like the only path was going back to school for a master's degree in Machine Learning. I started talking with my boss about options for taking a sabbatical or doing a part-time distance learning degree.In 2021, I instead decided to launch a startup focused on productizing new research ideas on ML interpretability. It was through that process that I reached out to Andreas—a leading ML researcher and founder of Elicit—to see if he would be an advisor. Over the next few months, I learned more about Elicit: that they were trying to apply these fascinating technologies to the real-world problems of science, and with a business model that aligned it with safety goals. I realized that I was way more excited about Elicit than I was about my own startup ideas, and wrote about my motivations at the time.Three years later, it's clear this was a seismic shift in my career on the scale of when I chose to leave my comfy engineering job at IBM to go through the Y Combinator program back in 2008. 
Working with this new breed of technology has been more intellectually stimulating, challenging, and rewarding than I could have imagined.Deep ML expertise not requiredIt's important to note that AI engineers are not ML experts, nor is that their best contribution to a tech team.In our article Living documents as an AI UX pattern, we wrote:It's easy to think that AI advancements are all about training and applying new models, and certainly this is a huge part of our work in the ML team at Elicit. But those of us working in the UX part of the team believe that we have a big contribution to make in how AI is applied to end-user problems.We think of LLMs as a new medium to work with, one that we've barely begun to grasp the contours of. New computing mediums like GUIs in the 1980s, web/cloud in the 90s and 2000s, and multitouch smartphones in the 2000s/2010s opened a whole new era of engineering and design practices. So too will LLMs open new frontiers for our work in the coming decade.To compare to the early era of mobile development: great iOS developers didn't require a detailed understanding of the physics of capacitive touchscreens. But they did need to know the capabilities and limitations of a multi-touch screen, the constrained CPU and storage available, the context in which the user is using it (very different from a webpage or desktop computer), etc.In the same way, an AI engineer needs to work with LLMs as a medium that is fundamentally different from other compute mediums. That means an interest in the ML side of things, whether through their own self-study, tinkering with prompts and model fine-tuning, or following along in #llm-paper-club. But this understanding is so that they can work with the medium effectively versus, say, spending their days training new models.Language models as a chaotic mediumSo if we're not expecting deep ML expertise from AI engineers, what are we expecting? This brings us to what makes LLMs different.We'll assume already that our ideal candidate is already inspired by, and full of ideas about, all the new capabilities AI can bring to software products. But the flip side is all the things that make this new medium difficult to work with. LLM calls are annoying due to high latency (measured in tens of seconds sometimes, rather than milliseconds), extreme variance on latency, high error rates even under normal operation. Not to mention getting extremely different answers to the same prompt provided to the same model on two subsequent calls!The net effect is that an AI engineer, even working at the application development level, needs to have a skillset comparable to distributed systems engineering. Handling errors, retries, asynchronous calls, streaming responses, parallelizing and recombining model calls, the halting problem, and fallbacks are just some of the day-in-the-life of an AI engineer. Chaos engineering gets new life in the era of AI.Skills and qualities in candidatesLet's put together what we don't need (deep ML expertise) with what we do (work with capabilities and limitations of the medium). Thus we start to see what Elicit looks for in AI engineers:* Conventional software engineering skills. 
Especially back-end engineering on complex, data-intensive applications.* Professional, real-world experience with applications at scale.* Deep, hands-on experience across a few back-end web frameworks.* Light devops and an understanding of infrastructure best practices.* Queues, message buses, event-driven and serverless architectures, … there's no single “correct” approach, but having a deep toolbox to draw from is very important.* A genuine curiosity and enthusiasm for the capabilities of language models.* One or more serious projects (side projects are fine) of using them in interesting ways on a unique domain.* …ideally with some level of factored cognition, e.g. breaking the problem down into chunks, making thoughtful decisions about which things to push to the language model and which stay within the realm of conventional heuristics and compute capabilities.* Personal studying with resources like Elicit's ML reading list. Part of the role is collaborating with the ML engineers and researchers on our team. To do so, the candidate needs to “speak their language” somewhat, just as a mobile engineer needs some familiarity with backends in order to collaborate effectively on API creation with backend engineers.* An understanding of the challenges that come along with working with large models (high latency, variance, etc.) leading to a defensive, fault-first mindset.* Careful and principled handling of error cases, asynchronous code (and ability to reason about and debug it), streaming data, caching, logging and analytics for understanding behavior in production.* This is a similar mindset that one can develop working on conventional apps which are complex, data-intensive, or large-scale apps. The difference is that an AI engineer will need this mindset even when working on relatively small scales!On net, a great AI engineer will combine two seemingly contrasting perspectives: knowledge of, and a sense of wonder for, the capabilities of modern ML models; but also the understanding that this is a difficult and imperfect foundation, and the willingness to build resilient and performant systems on top of it.Here's the resulting AI engineer job description for Elicit. And here's a template that you can borrow from for writing your own JD.Hiring processOnce you know what you're looking for in an AI engineer, the process is not too different from other technical roles. Here's how we do it, broken down into two stages: sourcing and interviewing.SourcingWe're primarily looking for people with (1) a familiarity with and interest in ML, and (2) proven experience building complex systems using web technologies. The former is important for culture fit and as an indication that the candidate will be able to do some light prompt engineering as part of their role. The latter is important because language model APIs are built on top of web standards and—as noted above—aren't always the easiest tools to work with.Only a handful of people have built complex ML-first apps, but fortunately the two qualities listed above are relatively independent. Perhaps they've proven (2) through their professional experience and have some side projects which demonstrate (1).Talking of side projects, evidence of creative and original prototypes is a huge plus as we're evaluating candidates. 
We've barely scratched the surface of what's possible to build with LLMs—even the current generation of models—so candidates who have been willing to dive into crazy “I wonder if it's possible to…” ideas have a huge advantage.InterviewingThe hard skills we spend most of our time evaluating during our interview process are in the “building complex systems using web technologies” side of things. We will be checking that the candidate is familiar with asynchronous programming, defensive coding, distributed systems concepts and tools, and display an ability to think about scaling and performance. They needn't have 10+ years of experience doing this stuff: even junior candidates can display an aptitude and thirst for learning which gives us confidence they'll be successful tackling the difficult technical challenges we'll put in front of them.One anti-pattern—something which makes my heart sink when I hear it from candidates—is that they have no familiarity with ML, but claim that they're excited to learn about it. The amount of free and easily-accessible resources available is incredible, so a motivated candidate should have already dived into self-study.Putting all that together, here's the interview process that we follow for AI engineer candidates:* 30-minute introductory conversation. Non-technical, explaining the interview process, answering questions, understanding the candidate's career path and goals.* 60-minute technical interview. This is a coding exercise, where we play product manager and the candidate is making changes to a little web app. Here are some examples of topics we might hit upon through that exercise:* Update API endpoints to include extra metadata. Think about appropriate data types. Stub out frontend code to accept the new data.* Convert a synchronous REST API to an asynchronous streaming endpoint.* Cancellation of asynchronous work when a user closes their tab.* Choose an appropriate data structure to represent the pending, active, and completed ML work which is required to service a user request.* 60–90 minute non-technical interview. Walk through the candidate's professional experience, identifying high and low points, getting a grasp of what kinds of challenges and environments they thrive in.* On-site interviews. Half a day in our office in Oakland, meeting as much of the team as possible: more technical and non-technical conversations.The frontier is wide openAlthough Elicit is perhaps further along than other companies on AI engineering, we also acknowledge that this is a brand-new field whose shape and qualities are only just now starting to form. We're looking forward to hearing how other companies do this and being part of the conversation as the role evolves.We're excited for the AI Engineer World's Fair as another next step for this emerging subfield. 
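As a concrete illustration of the coding exercise described above, here is a minimal TypeScript sketch of one way to represent the pending, active, and completed ML work behind a single user request. The type and class names are illustrative assumptions for this post, not Elicit's actual interview rubric or production code.

```typescript
// A minimal sketch (not Elicit's actual code) of modelling the
// pending / active / completed ML work behind one user request.

type TaskState =
  | { status: "pending" }                       // queued, not yet sent to the model
  | { status: "active"; startedAt: number }     // request in flight
  | { status: "completed"; result: string }     // model answered successfully
  | { status: "failed"; error: string; retries: number };

interface MLTask {
  id: string;
  prompt: string;
  state: TaskState;
}

// All tasks needed to service one user request, keyed by task id.
class RequestWorkSet {
  private tasks = new Map<string, MLTask>();

  add(id: string, prompt: string): void {
    this.tasks.set(id, { id, prompt, state: { status: "pending" } });
  }

  markActive(id: string): void {
    const task = this.tasks.get(id);
    if (task) task.state = { status: "active", startedAt: Date.now() };
  }

  markCompleted(id: string, result: string): void {
    const task = this.tasks.get(id);
    if (task) task.state = { status: "completed", result };
  }

  markFailed(id: string, error: string, retries: number): void {
    const task = this.tasks.get(id);
    if (task) task.state = { status: "failed", error, retries };
  }

  // The request is settled when nothing is pending or active.
  isSettled(): boolean {
    return [...this.tasks.values()].every(
      (t) => t.state.status === "completed" || t.state.status === "failed"
    );
  }
}
```

Modelling the states as a discriminated union makes it hard for downstream UI code to forget a case, which is the kind of defensive habit the exercise is designed to surface.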
And of course, check out the Elicit careers page if you're interested in joining our team.Podcast versionTimestamps* [00:00:24] Intros* [00:05:25] Defining the Hiring Process* [00:08:42] Defensive AI Engineering as a chaotic medium* [00:10:26] Tech Choices for Defensive AI Engineering* [00:14:04] How do you Interview for Defensive AI Engineering* [00:19:25] Does Model Shadowing Work?* [00:22:29] Is it too early to standardize Tech stacks?* [00:32:02] Capabilities: Offensive AI Engineering* [00:37:24] AI Engineering Required Knowledge* [00:40:13] ML First Mindset* [00:45:13] AI Engineers and Creativity* [00:47:51] Inside of Me There Are Two Wolves* [00:49:58] Sourcing AI Engineers* [00:58:45] Parting ThoughtsTranscript[00:00:00] swyx: Okay, so welcome to the Latent Space Podcast. This is another remote episode that we're recording. This is the first one that we're doing around a guest post. And I'm very honored to have two of the authors of the post with me, James and Adam from Elicit. Welcome, James. Welcome, Adam.[00:00:22] James Brady: Thank you. Great to be here.[00:00:23] Hey there.[00:00:24] Intros[00:00:24] swyx: Okay, so I think I will do this kind of in order. I think James, you're, you're sort of the primary author. So James, you are head of engineering at Elicit. You also, We're VP Eng at Teespring and Spring as well. And you also , you have a long history in sort of engineering. How did you, , find your way into something like Elicit where, , it's, you, you are basically traditional sort of VP Eng, VP technology type person moving into a more of an AI role.[00:00:53] James Brady: Yeah, that's right. It definitely was something of a Sideways move if not a left turn. So the story there was I'd been doing, as you said, VP technology, CTO type stuff for around about 15 years or so, and Notice that there was this crazy explosion of capability and interesting stuff happening within AI and ML and language models, that kind of thing.[00:01:16] I guess this was in 2019 or so, and decided that I needed to get involved. , this is a kind of generational shift. And Spent maybe a year or so trying to get up to speed on the state of the art, reading papers, reading books, practicing things, that kind of stuff. Was going to found a startup actually in in the space of interpretability and transparency, and through that met Andreas, who has obviously been on the, on the podcast before asked him to be an advisor for my startup, and he countered with, maybe you'd like to come and run the engineering team at Elicit, which it turns out was a much better idea.[00:01:48] And yeah, I kind of quickly changed in that direction. So I think some of the stuff that we're going to be talking about today is how actually a lot of the work when you're building applications with AI and ML looks and smells and feels much more like conventional software engineering with a few key differences rather than really deep ML stuff.[00:02:07] And I think that's one of the reasons why I was able to transfer skills over from one place to the other.[00:02:12] swyx: Yeah, I[00:02:12] James Brady: definitely[00:02:12] swyx: agree with that. I, I do often say that I think AI engineering is about 90 percent software engineering with like the, the 10 percent of like really strong really differentiated AI engineering.[00:02:22] And that might, that obviously that number might change over time. 
I want to also welcome Adam onto my podcast because you welcomed me onto your podcast two years ago.[00:02:31] Adam Wiggins: Yeah, that was a wonderful episode.[00:02:32] swyx: That was, that was a fun episode. You famously founded Heroku. You just wrapped up a few years working on Muse.[00:02:38] And now you've described yourself as a journalist, internal journalist working on Elicit.[00:02:43] Adam Wiggins: Yeah, well I'm kind of a little bit in a wandering phase here and trying to take this time in between ventures to see what's out there in the world and some of my wandering took me to the Elicit team. And found that they were some of the folks who were doing the most interesting, really deep work in terms of taking the capabilities of language models and applying them to what I feel like are really important problems.[00:03:08] So in this case, science and literature search and, and, and that sort of thing. It fits into my general interest in tools and productivity software. I, I think of it as a tool for thought in many ways, but a tool for science, obviously, if we can accelerate that discovery of new medicines and things like that, that's, that's just so powerful.[00:03:24] But to me, it's a. It's kind of also an opportunity to learn at the feet of some real masters in this space, people who have been working on it since it was, before it was cool, if you want to put it that way. So for me, the last couple of months have been this crash course, and why I sometimes describe myself as an internal journalist is I'm helping to write some, some posts, including Supporting James in this article here we're doing for latent space where I'm just bringing my writing skill and that sort of thing to bear on their very deep domain expertise around language models and applying them to the real world and kind of surface that in a way that's I don't know, accessible, legible, that, that sort of thing.[00:04:03] And so, and the great benefit to me is I get to learn this stuff in a way that I don't think I would, or I haven't, just kind of tinkering with my own side projects.[00:04:12] swyx: I forgot to mention that you also run Ink and Switch, which is one of the leading research labs, in my mind, of the tools for thought productivity space, , whatever people mentioned there, or maybe future of programming even, a little bit of that.[00:04:24] As well. I think you guys definitely started the local first wave. I think there was just the first conference that you guys held. I don't know if you were personally involved.[00:04:31] Adam Wiggins: Yeah, I was one of the co organizers along with a few other folks for, yeah, called Local First Conf here in Berlin.[00:04:36] Huge success from my, my point of view. Local first, obviously, a whole other topic we can talk about on another day. I think there actually is a lot more what would you call it , handshake emoji between kind of language models and the local first data model. And that was part of the topic of the conference here, but yeah, topic for another day.[00:04:55] swyx: Not necessarily. I mean , I, I selected as one of my keynotes, Justine Tunney, working at LlamaFall in Mozilla, because I think there's a lot of people interested in that stuff. But we can, we can focus on the headline topic. And just to not bury the lead, which is we're talking about hire, how to hire AI engineers, this is something that I've been looking for a credible source on for months.[00:05:14] People keep asking me for my opinions. 
I don't feel qualified to give an opinion and it's not like I have some kind of defined hiring process that I'm super happy with, even though I've worked with a number of AI engineers.[00:05:25] Defining the Hiring Process[00:05:25] swyx: I'll just leave it open to you, James. How was your process of defining your hiring roles?[00:05:31] James Brady: Yeah. So I think the first thing to say is that we've effectively been hiring for this kind of a role since before you, before you coined the term and tried to kind of build this understanding of what it was.[00:05:42] So, which is not a bad thing. Like it's, it was a, it was a good thing. A concept, a concept that was coming to the fore and effectively needed a name, which is what you did. So the reason I mention that is I think it was something that we kind of backed into, if you will. We didn't sit down and come up with a brand new role from, from scratch of this is a completely novel set of responsibilities and skills that this person would need.[00:06:06] However, it is a kind of particular blend of different skills and attitudes and curiosities and interests, which I think makes sense to kind of bundle together. So in the, in the post, the three things that we say are most important for a highly effective AI engineer are first of all, conventional software engineering skills, which is kind of a given, but definitely worth mentioning.[00:06:30] The second thing is a curiosity and enthusiasm for machine learning and maybe in particular language models. That's certainly true in our case. And then the third thing is to do with basically a fault first mindset, being able to build systems that can handle things going wrong in, in, in some sense.[00:06:49] And yeah, I think the kind of middle point, the curiosity about ML and language models, is probably fairly self evident. They're going to be working with, and prompting, and dealing with the responses from these models, so that's clearly relevant. The last point, though, maybe takes the most explaining.[00:07:07] To do with this fault first mindset and the ability to, to build resilient systems. The reason that is, is so important is because compared to normal APIs, think of something like a Stripe API or a search API or something like this, the latency when you're working with language models is, is wild, like you can get 10x variation.[00:07:32] I mean, I was looking at the stats before, actually, before, before the podcast. We do often, normally, in fact, see a 10x variation in the P90 latency over the course of half an hour, an hour when we're prompting these models, which is way higher than if you're working with a more kind of conventionally backed API.[00:07:49] And the responses that you get, the actual content of the responses, are naturally unpredictable as well. They come back with different formats. Maybe you're expecting JSON. It's not quite JSON. You have to handle this stuff. And also the, the semantics of the messages are unpredictable too, which is, which is a good thing.[00:08:08] Like this is one of the things that you're looking for from these language models, but it all adds up to needing to build a resilient, reliable, solid feeling system on top of this fundamentally, well, certainly currently, fundamentally shaky foundation.
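As a concrete illustration of the kind of handling James is describing for those not-quite-JSON responses, here is a minimal Python sketch; the function name, the error type, and the recovery strategies are illustrative assumptions rather than Elicit's actual code.

import json
import re

class MalformedResponseError(Exception):
    """Raised when a model response cannot be coerced into the expected structure."""

def coerce_json(raw: str) -> dict:
    """Best-effort coercion of an LLM response into a JSON object."""
    # Happy path: the response is already valid JSON.
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    # The model wrapped the JSON in a markdown code fence: strip it and retry.
    fenced = re.search(r"```(?:json)?\s*(.*?)```", raw, re.DOTALL)
    if fenced:
        try:
            return json.loads(fenced.group(1))
        except json.JSONDecodeError:
            pass
    # Last resort: take the first {...} span in the text.
    braced = re.search(r"\{.*\}", raw, re.DOTALL)
    if braced:
        try:
            return json.loads(braced.group(0))
        except json.JSONDecodeError:
            pass
    # Surface a typed error so callers are forced to decide what happens next.
    raise MalformedResponseError(f"could not parse model output: {raw[:80]!r}")

The point is less the specific recovery steps than pushing all of this coercion to the boundary, so the rest of the application only ever sees well-formed data.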
The models do not behave in the way that you would like them to.[00:08:28] And yeah, the ability to structure the code around them such that it does give the user this warm, reassuring, snappy, solid feeling is really what we're driving for there.[00:08:42] Defensive AI Engineering as a chaotic medium[00:08:42] Adam Wiggins: What really struck me as we, we dug in on the content for this article was that third point there. The, the language models are this kind of chaotic medium, this, this dragon, this wild horse you're, you're, you're riding and trying to guide in the direction that is going to be useful and reliable to users, because I think[00:08:58] so much of software engineering is about making things not only high performance and snappy, but really just making it stable, reliable, predictable, which is literally the opposite of what you get from the language models. And yet, yeah, the output is so useful, and indeed, some of their creativity, if you want to call it that, is precisely their value.[00:09:19] And so you need to work with this medium. And I guess the nuance, or the thing that came out of Elicit's experience that I thought was so interesting, is quite a lot of working with that is things that come from distributed systems engineering. But you have really the AI engineers, as we're defining them or, or labeling them on the Elicit team, are people who are really application developers.[00:09:39] You're building things for end users. You're thinking about, okay, I need to populate this interface with some response to user input that's useful to the tasks they're trying to do, but you have this thing, this medium that you're working with, that in some ways you need to apply some of this chaos engineering, distributed systems engineering to, which typically those people with those engineering skills are not kind of the application level developers with the product mindset or whatever, they're more deep in the guts of a, of a system.[00:10:07] And so it's, those, those skills and, and knowledge do exist throughout the engineering discipline, but sort of putting them together into one person, that feels like sort of a unique thing, and working with the folks on the Elicit team who have those skills, I'm quite struck by that unique, that unique blend.[00:10:23] I haven't really seen that before in my 30 year career in technology.[00:10:26] Tech Choices for Defensive AI Engineering[00:10:26] swyx: Yeah, that's fascinating. I like the reference to chaos engineering. I have some appreciation, I think when you had me on your podcast, I was still working at Temporal and that was like a nice framework, if you live within Temporal's boundaries, you can pretend that all those faults don't exist, and you can, you can code in a sort of very fault tolerant way.[00:10:47] What are, what are you guys' solutions around this, actually? Like, I think you're, you're emphasizing having the mindset, but maybe naming some technologies would help? Not saying that you have to adopt these technologies, but they're just, they're just quick vectors into what you're talking about when you're, when you're talking about distributed systems.[00:11:03] Like, that's such a big, chunky word, like, are we talking Kubernetes or, and I suspect we're not, like we're, we're talking something else now.[00:11:10] James Brady: Yeah, that's right.
It's more at the application level rather than at the infrastructure level, at least, at least the way that it works for us.[00:11:17] So there's nothing kind of radically novel here. It is more a careful application of existing concepts. So the kinds of tools that we reach for to handle these kind of slightly chaotic objects that Adam was just talking about, are retries and fallbacks and timeouts and careful error handling. And, yeah, the standard stuff, really.[00:11:39] There's also a great degree of dependence. We rely heavily on parallelization because, , these language models are not innately very snappy, and , there's just a lot of I. O. going back and forth. So All these things I'm talking about when I was in my earlier stages of a career, these are kind of the things that are the difficult parts that most senior software engineers will be better at.[00:12:01] It is careful error handling, and concurrency, and fallbacks, and distributed systems, and, , eventual consistency, and all this kind of stuff and As Adam was saying, the kind of person that is deep in the guts of some kind of distributed systems, a really high, high scale backend kind of a problem would probably naturally have these kinds of skills.[00:12:21] But you'll find them on, on day one, if you're building a, , an ML powered app, even if it's not got massive scale. I think one one thing that I would mention that we do do yeah, maybe, maybe two related things, actually. The first is we're big fans of strong typing. We share the types all the way from the Backend Python code all the way to the to the front end in TypeScript and find that is I mean We'd probably do this anyway But it really helps one reason around the shapes of the data which can going to be going back and forth and that's really important When you can't rely upon You you're going to have to coerce the data that you get back from the ML if you want if you want for it to be structured basically speaking and The second thing which is related is we use checked exceptions inside our Python code base, which means that we can use the type system to make sure we are handling, properly handling, all of the, the various things that could be going wrong, all the different exceptions that could be getting raised.[00:13:16] So, checked exceptions are not, not really particularly popular. Actually there's not many people that are big fans of them. For our particular use case, to really make sure that we've not just forgotten to handle, , This particular type of error we have found them useful to to, to force us to think about all the different edge cases that can come up.[00:13:32] swyx: Fascinating. How just a quick note of technology. How do you share types from Python to TypeScript? Do you, do you use GraphQL? Do you use something[00:13:39] James Brady: else? We don't, we don't use GraphQL. Yeah. So we've got the We've got the types defined in Python, that's the source of truth. And we go from the OpenAPI spec, and there's a, there's a tool that you work and use to generate types dynamically, like TypeScript types from those OpenAPI definitions.[00:13:57] swyx: Okay, excellent. Okay, cool. Sorry, sorry for diving into that rabbit hole a little bit. 
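To make those patterns a little more concrete, here is a rough Python sketch of a retry-plus-fallback wrapper in the spirit of what James describes; the completion callables, the prompt templates, and the backoff numbers are hypothetical placeholders, not Elicit's real stack.

import time

class ModelCallError(Exception):
    """Checked-exception-style error for a failed or timed-out model call."""

# Each provider gets its own prompt, since a prompt tuned for one model
# rarely transfers verbatim to another.
PROMPTS = {
    "primary": "Summarize the following abstract in one sentence:\n{text}",
    "fallback": "You are a concise scientific editor. In one sentence, summarize:\n{text}",
}

def call_with_retries(complete, prompt, timeout_s=30.0, max_attempts=3):
    """Call a provider's completion function with a timeout and exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return complete(prompt, timeout=timeout_s)
        except Exception as exc:  # timeouts, rate limits, malformed responses, ...
            if attempt == max_attempts - 1:
                raise ModelCallError(f"call failed after {max_attempts} attempts") from exc
            time.sleep(2 ** attempt)  # back off 1s, 2s, ... between attempts

def answer(text, primary_complete, fallback_complete):
    """Prefer the primary provider; fall back to a second provider with its own prompt."""
    try:
        return call_with_retries(primary_complete, PROMPTS["primary"].format(text=text))
    except ModelCallError:
        # Fallback paths go stale unless they are exercised and evaluated regularly.
        return fallback_complete(PROMPTS["fallback"].format(text=text), timeout=30.0)

A wrapper like this is cheap to write; the harder questions, which come up next, are about when a retry or a degraded fallback answer is actually worth its cost to the user and to the business.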
I always like to spell out technologies for people to dig their teeth into.[00:14:04] How do you Interview for Defensive AI Engineering[00:14:04] swyx: One thing I'll, one thing I'll mention quickly is that a lot of the stuff that you mentioned is typically not part of the normal interview loop.[00:14:10] It's actually really hard to interview for because this is the stuff that you polish out in, as you go into production, the coding interviews are typically about the happy path. How do we do that? How do we, how do we design, how do you look for a defensive fault first mindset?[00:14:24] Because you can defensive code all day long and not add functionality. to your to your application.[00:14:29] James Brady: Yeah, it's a great question and I think that's exactly true. Normally the interview is about the happy path and then there's maybe a box checking exercise at the end of the candidate says of course in reality I would handle the edge cases or something like this and that unfortunately isn't isn't quite good enough when when the happy path is is very very narrow and yeah there's lots of weirdness on either side so basically speaking, it's just a case of, of foregrounding those kind of concerns through the interview process.[00:14:58] It's, there's, there's no magic to it. We, we talk about this in the, in the po in the post that we're gonna be putting up on, on Laton space. The, there's two main technical exercises that we do through our interview process for this role. The first is more coding focus, and the second is more system designy.[00:15:16] Yeah. White whiteboarding a potential solution. And in, without giving too much away in the coding exercise. You do need to think about edge cases. You do need to think about errors. The exercise consists of adding features and fixing bugs inside the code base. And in both of those two cases, it does demand, because of the way that we set the application up and the interview up, it does demand that you think about something other than the happy path.[00:15:41] But your thinking is the right prompt of how do we get the candidate thinking outside of the, the kind of normal Sweet spot, smooth smooth, smoothly paved path. In terms of the system design interview, that's a little easier to prompt this kind of fault first mindset because it's very easy in that situation just to say, let's imagine that, , this node dies, how does the app still work?[00:16:03] Let's imagine that this network is, is going super slow. Let's imagine that, I don't know, like you, you run out of, you run out of capacity in, in, in this database that you've sketched out here, how do you handle that, that, that sort of stuff. So. It's, in both cases, they're not firmly anchored to and built specifically around language models and ways language models can go wrong, but we do exercise the same muscles of thinking defensively and yeah, foregrounding the edge cases, basically.[00:16:32] Adam Wiggins: James, earlier there you mentioned retries. And this is something that I think I've seen some interesting debates internally about things regarding, first of all, retries are, can be costly, right? In general, this medium, in addition to having this incredibly high variance and response rate, and, , being non deterministic, is actually quite expensive.[00:16:50] And so, in many cases, doing a retry when you get a fail does make sense, but actually that has an impact on cost. And so there is Some sense to which, at least I've seen the AI engineers on our team, worry about that. 
They worry about, okay, how do we give the best user experience, but balance that against what the infrastructure is going to, , is going to cost our company, which I think is again, an interesting mix of, yeah, again, it's a little bit the distributed system mindset, but it's also a product perspective and you're thinking about the end user experience, but also the.[00:17:22] The bottom line for the business, you're bringing together a lot of a lot of qualities there. And there's also the fallback case, which is kind of, kind of a related or adjacent one. I think there was also a discussion on that internally where, I think it maybe was search, there was something recently where there was one of the frontline search providers was having some, yeah, slowness and outages, and essentially then we had a fallback, but essentially that gave people for a while, especially new users that come in that don't the difference, they're getting a They're getting worse results for their search.[00:17:52] And so then you have this debate about, okay, there's sort of what is correct to do from an engineering perspective, but then there's also what actually is the best result for the user. Is giving them a kind of a worse answer to their search result better, or is it better to kind of give them an error and be like, yeah, sorry, it's not working right at the moment, try again.[00:18:12] Later, both are obviously non optimal, but but this is the kind of thing I think that that you run into or, or the kind of thing we need to grapple with a lot more than you would other kinds of, of mediums.[00:18:24] James Brady: Yeah, that's a really good example. I think it brings to the fore the two different things that you could be optimizing for of uptime and response at all costs on one end of the spectrum and then effectively fragility, but kind of, if you get a response, it's the best response we can come up with at the other end of the spectrum.[00:18:43] And where you want to land there kind of depends on, well, it certainly depends on the app, obviously depends on the user. I think it depends on the, feature within the app as well. So in the search case that you, that you mentioned there, in retrospect, we probably didn't want to have the fallback. And we've actually just recently on Monday, changed that to Show an error message rather than giving people a kind of degraded experience in other situations We could use for example a large language model from a large language model from provider B rather than provider A and Get something which is within the A few percentage points performance, and that's just a really different situation.[00:19:21] So yeah, like any interesting question, the answer is, it depends.[00:19:25] Does Model Shadowing Work?[00:19:25] swyx: I do hear a lot of people suggesting I, let's call this model shadowing as a defensive technique, which is, if OpenAI happens to be down, which, , happens more often than people think then you fall back to anthropic or something.[00:19:38] How realistic is that, right? Like you, don't you have to develop completely different prompts for different models and won't the, won't the performance of your application suffer from whatever reason, right? Like it may be caused differently or it's not maintained in the same way. 
I, I think that people raise this idea of fallbacks to models, but I don't think it's, I don't, I don't see it practiced very much.[00:20:02] James Brady: Yeah, it is, you, you definitely need to have a different prompt if you want to stay within a few percentage points degradation Like I, like I said before, and that certainly comes at a cost, like fallbacks and backups and things like this It's really easy for them to go stale and kind of flake out on you because they're off the beaten track And In our particular case inside of Elicit, we do have fallbacks for a number of kind of crucial functions where it's going to be very obvious if something has gone wrong, but we don't have fallbacks in all cases.[00:20:40] It really depends on a task to task basis throughout the app. So I can't give you a kind of a, a single kind of simple rule of thumb for, in this case, do this. And in the other, do that. But yeah, we've it's a little bit easier now that the APIs between the anthropic models and opening are more similar than they used to be.[00:20:59] So we don't have two totally separate code paths with different protocols, like wire protocols to, to speak, which makes things easier, but you're right. You do need to have different prompts if you want to, have similar performance across the providers.[00:21:12] Adam Wiggins: I'll also note, just observing again as a relative newcomer here, I was surprised, impressed, not sure what the word is for it, at the blend of different backends that the team is using.[00:21:24] And so there's many The product presents as kind of one single interface, but there's actually several dozen kind of main paths. There's like, for example, the search versus a data extraction of a certain type, versus chat with papers, versus And each one of these, , the team has worked very hard to pick the right Model for the job and craft the prompt there, but also is constantly testing new ones.[00:21:48] So a new one comes out from either, from the big providers or in some cases, Our own models that are , running on, on essentially our own infrastructure. And sometimes that's more about cost or performance, but the point is kind of switching very fluidly between them and, and very quickly because this field is moving so fast and there's new ones to choose from all the time is like part of the day to day, I would say.[00:22:11] So it isn't more of a like, there's a main one, it's been kind of the same for a year, there's a fallback, but it's got cobwebs on it. It's more like which model and which prompt is changing weekly. And so I think it's quite, quite reasonable to to, to, to have a fallback that you can expect might work.[00:22:29] Is it too early to standardize Tech stacks?[00:22:29] swyx: I'm curious because you guys have had experience working at both, , Elicit, which is a smaller operation and, and larger companies. A lot of companies are looking at this with a certain amount of trepidation as, as, , it's very chaotic. When you have, when you have , one engineering team that, that, knows everyone else's names and like, , they, they, they, they meet constantly in Slack and knows what's going on.[00:22:50] It's easier to, to sync on technology choices. When you have a hundred teams, all shipping AI products and all making their own independent tech choices. It can be, it can be very hard to control. 
One solution I'm hearing from like the sales forces of the worlds and Walmarts of the world is that they are creating their own AI gateway, right?[00:23:05] Internal AI gateway. This is the one model hub that controls all the things and has our standards. Is that a feasible thing? Is that something that you would want? Is that something you have and you're working towards? What are your thoughts on this stuff? Like, Centralization of control or like an AI platform internally.[00:23:22] James Brady: Certainly for larger organizations and organizations that are doing things which maybe are running into HIPAA compliance or other, um, legislative tools like that. It could make a lot of sense. Yeah. I think for the TLDR for something like Elicit is we are small enough, as you indicated, and need to have full control over all the levers available and switch between different models and different prompts and whatnot, as Adam was just saying, that that kind of thing wouldn't work for us.[00:23:52] But yeah, I've spoken with and, um, advised a couple of companies that are trying to sell into that kind of a space or at a larger stage, and it does seem to make a lot of sense for them. So, for example, if you're trying to sell If you're looking to sell to a large enterprise and they cannot have any data leaving the EU, then you need to be really careful about someone just accidentally putting in, , the sort of US East 1 GPT 4 endpoints or something like this.[00:24:22] I'd be interested in understanding better what the specific problem is that they're looking to solve with that, whether it is to do with data security or centralization of billing, or if they have a kind of Suite of prompts or something like this that people can choose from so they don't need to reinvent the wheel again and again I wouldn't be able to say without understanding the problems and their proposed solutions , which kind of situations that be better or worse fit for but yeah for illicit where really the The secret sauce, if there is a secret sauce, is which models we're using, how we're using them, how we're combining them, how we're thinking about the user problem, how we're thinking about all these pieces coming together.[00:25:02] You really need to have all of the affordances available to you to be able to experiment with things and iterate rapidly. And generally speaking, whenever you put these kind of layers of abstraction and control and generalization in there, that, that gets in the way. So, so for us, it would not work.[00:25:19] Adam Wiggins: Do you feel like there's always a tendency to want to reach for standardization and abstractions pretty early in a new technology cycle?[00:25:26] There's something comforting there, or you feel like you can see them, or whatever. I feel like there's some of that discussion around lang chain right now. But yeah, this is not only so early, but also moving so fast. , I think it's . I think it's tough to, to ask for that. 
That's, that's not the, that's not the space we're in, but the, yeah, the larger an organization, the more that's your, your default is to, to, to want to reach for that.[00:25:48] It, it, it's a sort of comfort.[00:25:51] swyx: Yeah, I find it interesting that you would say that , being a founder of Heroku where , you were one of the first platforms as a service that more or less standardized what, , that sort of early developer experience should have looked like.[00:26:04] And I think basically people are feeling the differences between calling various model lab APIs and having an actual AI platform where. , all, all their development needs are thought of for them. , it's, it's very much, and, and I, I defined this in my AI engineer post as well.[00:26:19] Like the model labs just see their job ending at serving models and that's about it. But actually the responsibility of the AI engineer has to fill in a lot of the gaps beyond that. So.[00:26:31] Adam Wiggins: Yeah, that's true. I think, , a huge part of the exercise with Heroku, which It was largely inspired by Rails, which itself was one of the first frameworks to standardize the SQL database.[00:26:42] And people had been building apps like that for many, many years. I had built many apps. I had made my own templates based on that. I think others had done it. And Rails came along at the right moment. We had been doing it long enough that you see the patterns and then you can say look let's let's extract those into a framework that's going to make it not only easier to build for the experts but for people who are relatively new the best practices are encoded into you.[00:27:07] That framework, , Model View Controller, to take one example. But then, yeah, once you see that, and once you experience the power of a framework, and again, it's so comforting, and you can develop faster, and it's easier to onboard new people to it because you have these standards. And this consistency, then folks want that for something new that's evolving.[00:27:29] Now here I'm thinking maybe if you fast forward a little to, for example, when React came on the on the scene, , a decade ago or whatever. And then, okay, we need to do state management. What's that? And then there's, , there's a new library every six months. Okay, this is the one, this is the gold standard.[00:27:42] And then, , six months later, that's deprecated. Because of course, it's evolving, you need to figure it out, like the tacit knowledge and the experience of putting it in practice and seeing what those real What those real needs are are, are critical, and so it's, it is really about finding the right time to say yes, we can generalize, we can make standards and abstractions, whether it's for a company, whether it's for, , a library, an open source library, for a whole class of apps and it, it's very much a, much more of a A judgment call slash just a sense of taste or , experience to be able to say, Yeah, we're at the right point.[00:28:16] We can standardize this. But it's at least my, my very, again, and I'm so new to that, this world compared to you both, but my, my sense is, yeah, still the wild west. That's what makes it so exciting and feels kind of too early for too much. too much in the way of standardized abstractions. 
Not that it's not interesting to try, but , you can't necessarily get there in the same way Rails did until you've got that decade of experience of whatever building different classes of apps in that, with that technology.[00:28:45] James Brady: Yeah, it's, it's interesting to think about what is going to stay more static and what is expected to change over the coming five years, let's say. Which seems like when I think about it through an ML lens, it's an incredibly long time. And if you just said five years, it doesn't seem, doesn't seem that long.[00:29:01] I think that, that kind of talks to part of the problem here is that things that are moving are moving incredibly quickly. I would expect, this is my, my hot take rather than some kind of official carefully thought out position, but my hot take would be something like the You can, you'll be able to get to good quality apps without doing really careful prompt engineering.[00:29:21] I don't think that prompt engineering is going to be a kind of durable differential skill that people will, will hold. I do think that, The way that you set up the ML problem to kind of ask the right questions, if you see what I mean, rather than the specific phrasing of exactly how you're doing chain of thought or few shot or something in the prompt I think the way that you set it up is, is probably going to be remain to be trickier for longer.[00:29:47] And I think some of the operational challenges that we've been talking about of wild variations in, in, in latency, And handling the, I mean, one way to think about these models is the first lesson that you learn when, when you're an engineer, software engineer, is that you need to sanitize user input, right?[00:30:05] It was, I think it was the top OWASP security threat for a while. Like you, you have to sanitize and validate user input. And we got used to that. And it kind of feels like this is the, The shell around the app and then everything else inside you're kind of in control of and you can grasp and you can debug, etc.[00:30:22] And what we've effectively done is, through some kind of weird rearguard action, we've now got these slightly chaotic things. I think of them more as complex adaptive systems, which , related but a bit different. Definitely have some of the same dynamics. We've, we've injected these into the foundations of the, of the app and you kind of now need to think with this defined defensive mindset downwards as well as upwards if you, if you see what I mean.[00:30:46] So I think it would gonna, it's, I think it will take a while for us to truly wrap our heads around that. And also these kinds of problems where you have to handle things being unreliable and slow sometimes and whatever else, even if it doesn't happen very often, there isn't some kind of industry wide accepted way of handling that at massive scale.[00:31:10] There are definitely patterns and anti patterns and tools and whatnot, but it's not like this is a solved problem. So I would expect that it's not going to go down easily as a, as a solvable problem at the ML scale either.[00:31:23] swyx: Yeah, excellent. I would describe in, in the terminology of the stuff that I've written in the past, I describe this inversion of architecture as sort of LLM at the core versus LLM or code at the core.[00:31:34] We're very used to code at the core. Actually, we can scale that very well. 
When we build LLM core apps, we have to realize that the, the central part of our app that's orchestrating things is actually prompts, prone to prompt injections and non determinism and all that, all that good stuff.[00:31:48] I, I did want to move the conversation a little bit from the sort of defensive side of things to the more offensive or, the fun side of things, capabilities side of things, because that is the other part of the job description that we kind of skimmed over. So I'll, I'll repeat what you said earlier.[00:32:02] Capabilities: Offensive AI Engineering[00:32:02] swyx: It's, you want people to have a genuine curiosity and enthusiasm for the capabilities of language models. We just, we're recording this the day after Anthropic just dropped Claude 3.5. And I was wondering, maybe this is a good, good exercise: how do people have curiosity and enthusiasm for the capabilities of language models when, for example, the research paper for Claude 3.5[00:32:22] is four pages?[00:32:23] James Brady: Maybe that's not a bad thing, actually, in this particular case. So yeah, if you really want to know exactly how the sausage was made, that hasn't been possible for a few years now, in fact, for for these new models. But from our perspective, when we're building Elicit, what we primarily care about is what can these models do?[00:32:41] How do they perform on the tasks that we already have set up and the evaluations we have in mind? And then on a slightly more expansive note, what kinds of new capabilities do they seem to have that we can elicit, no pun intended, from the models? For example, well, there's, there's very obvious ones like multimodality, there wasn't that and then there was that, or it could be something a bit more subtle, like it seems to be getting better at reasoning, or it seems to be getting better at metacognition, or it seems to be getting better at marking its own work and giving calibrated confidence estimates, things like this.[00:33:19] So yeah, there's, there's plenty to be excited about there. It's just that, yeah, there's rightly or wrongly been this, this, this shift over the last few years to not give all the details. So no, but from an application development perspective, every time there's a new model release, there's a flow of activity in our Slack, and we try to figure out what's going on.[00:33:38] What it can do, what it can't do, run our evaluation frameworks, and yeah, it's always an exciting, happy day.[00:33:44] Adam Wiggins: Yeah, from my perspective, what I'm seeing from the folks on the team is, first of all, just awareness of the new stuff that's coming out, so that's an enthusiasm for the space and following along, and then being able to very quickly, partially that's having Slack to do this, but be able to quickly map that to, okay, what does this do for our specific case?[00:34:07] And that, the simple version of that is, let's run the evaluation framework, which Elicit has quite a comprehensive one. I'm actually working on an article on that right now, which I'm very excited about, because it's a very interesting world of things. But basically, you can just try, not just, but try the new model in the evaluations framework.[00:34:27] Run it. It has a whole slew of benchmarks, which includes not just accuracy and confidence, but also things like performance, cost, and so on. And all of these things may trade off against each other.
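As a rough sketch of what "run it through the evaluations framework" can look like, here is a toy harness that scores a candidate model on accuracy, latency, and cost; the example set, the run_model callable, and the per-call pricing are invented placeholders rather than Elicit's actual benchmarks.

import time
from dataclasses import dataclass

@dataclass
class EvalResult:
    accuracy: float        # fraction of examples answered acceptably
    p90_latency_s: float   # 90th percentile response time
    total_cost_usd: float  # rough spend for the whole run

def evaluate(run_model, examples, cost_per_call_usd):
    """Run one candidate model over a small eval set and summarize the trade-offs."""
    correct = 0
    latencies = []
    for prompt, expected in examples:
        start = time.monotonic()
        answer = run_model(prompt)
        latencies.append(time.monotonic() - start)
        if expected.lower() in answer.lower():
            correct += 1
    latencies.sort()
    p90 = latencies[int(0.9 * (len(latencies) - 1))]
    return EvalResult(
        accuracy=correct / len(examples),
        p90_latency_s=p90,
        total_cost_usd=cost_per_call_usd * len(examples),
    )

# Compare two backends on the same examples, then weigh the trade-offs:
# result_a = evaluate(call_model_a, examples, cost_per_call_usd=0.002)
# result_b = evaluate(call_model_b, examples, cost_per_call_usd=0.0004)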
Maybe it's actually, it's very slightly worse, but it's way faster and way cheaper, so actually this might be a net win, for example.[00:34:46] Or, it's way more accurate. But that comes at its slower and higher cost, and so now you need to think about those trade offs. And so to me, coming back to the qualities of an AI engineer, especially when you're trying to hire for them, It's this, it's, it is very much an application developer in the sense of a product mindset of What are our users or our customers trying to do?[00:35:08] What problem do they need solved? Or what what does our product solve for them? And how does the capabilities of a particular model potentially solve that better for them than what exists today? And by the way, what exists today is becoming an increasingly gigantic cornucopia of things, right? And so, You say, okay, this new model has these capabilities, therefore, , the simple version of that is plug it into our existing evaluations and just look at that and see if it, it seems like it's better for a straight out swap out, but when you talk about, for example, you have multimodal capabilities, and then you say, okay, wait a minute, actually, maybe there's a new feature or a whole new There's a whole bunch of ways we could be using it, not just a simple model swap out, but actually a different thing we could do that we couldn't do before that would have been too slow, or too inaccurate, or something like that, that now we do have the capability to do.[00:35:58] I think of that as being a great thing. I don't even know if I want to call it a skill, maybe it's even like an attitude or a perspective, which is a desire to both be excited about the new technology, , the new models and things as they come along, but also holding in the mind, what does our product do?[00:36:16] Who is our user? And how can we connect the capabilities of this technology to how we're helping people in whatever it is our product does?[00:36:25] James Brady: Yeah, I'm just looking at one of our internal Slack channels where we talk about things like new new model releases and that kind of thing And it is notable looking through these the kind of things that people are excited about and not It's, I don't know the context, the context window is much larger, or it's, look at how many parameters it has, or something like this.[00:36:44] It's always framed in terms of maybe this could be applied to that kind of part of Elicit, or maybe this would open up this new possibility for Elicit. And, as Adam was saying, yeah, I don't think it's really a I don't think it's a novel or separate skill, it's the kind of attitude I would like to have all engineers to have at a company our stage, actually.[00:37:05] And maybe more generally, even, which is not just kind of getting nerd sniped by some kind of technology number, fancy metric or something, but how is this actually going to be applicable to the thing Which matters in the end. How is this going to help users? How is this going to help move things forward strategically?[00:37:23] That kind of, that kind of thing.[00:37:24] AI Engineering Required Knowledge[00:37:24] swyx: Yeah, applying what , I think, is, is, is the key here. Getting hands on as well. I would, I would recommend a few resources for people listening along. The first is Elicit's ML reading list, which I, I found so delightful after talking with Andreas about it.[00:37:38] It looks like that's part of your onboarding. 
We've actually set up an asynchronous paper club inside of my discord for people following on that reading list. I love that you separate things out into tier one and two and three, and that gives people a factored cognition way of looking into the, the, the corpus, right?[00:37:55] Like yes, the, the corpus of things to know is growing and the water is slowly rising as far as what the bar for a competent AI engineer is. But I think, having some structured thought as to what are the big ones that everyone must know, I think is, is, is key. It's something I, I haven't really defined for people and I'm, I'm glad that there is actually something out there that people can refer to.[00:38:15] Yeah, I wouldn't necessarily like make it required for like the job interview maybe, but it'd be interesting to see like, what would be a red flag. If some AI engineer would not know, I don't know what, I don't know where we would stoop to, to call something required knowledge, or you're not part of the cool kids club.[00:38:33] But there increasingly is something like that, right? Like, not knowing what context is, is a black mark, in my opinion, right?[00:38:40] I think it, I think it does connect back to what we were saying before of this genuine curiosity. And that, well, maybe it's, maybe it's actually that combined with something else, which is really important, which is a self starting bias towards action, kind of a mindset, which again, everybody needs.[00:38:56] Exactly. Yeah. Everyone needs that. So if you put those two together, of, I'm truly curious about this and I'm going to kind of figure out how to make things happen, then you end up with people reading, reading lists, reading papers, doing side projects, this kind of, this kind of thing. So it isn't something that we explicitly included.[00:39:14] We don't have a, we don't have an ML focused interview for the AI engineer role at all, actually. It doesn't really seem helpful. The skills which we are checking for, as I mentioned before, are this kind of fault first mindset and conventional software engineering kind of thing. It's, it's 0.1 and 0.3 on the list that, that we talked about.[00:39:32] In terms of checking for ML curiosity, and how familiar they are with these concepts, that's more through talking interviews and culture fit types of things. We want for them to have a take on what Elicit is doing, certainly as they progress through the interview process.[00:39:50] They don't need to be completely up to date on everything we've ever done on day zero. Although, that's always nice when it happens. But for them to really engage with it, ask interesting questions, and be kind of bought into our view on how we want ML to proceed, I think that is really important, and that would reveal that they have this kind of, this interest, this ML curiosity.[00:40:13] ML First Mindset[00:40:13] swyx: There's a second aspect to that. I don't know if now's the right time to talk about it, which is, I do think that an ML first approach to building software is something of a different mindset. I could, I could describe that a bit now if that, if that seems good, but yeah, I'm game. Okay. So yeah, I think when I joined Elicit, this was the biggest adjustment that I had to make personally.[00:40:37] So as I said before, I'd been effectively building conventional software stuff for 15 years or so, something like this, well, for longer actually, but professionally for like 15 years.
And had a lot of pattern matching built into my brain and kind of muscle memory for if you see this kind of problem, then you do that kind of a thing.[00:40:56] And I had to unlearn quite a lot of that when joining Elicit because we truly are ML first and try to use ML to the fullest. And some of the things that that means is, This relinquishing of control almost, at some point you are calling into this fairly opaque black box thing and hoping it does the right thing and dealing with the stuff that it sends back to you.[00:41:17] And that's very different if you're interacting with, again, APIs and databases, that kind of a, that kind of a thing. You can't just keep on debugging. At some point you hit this, this obscure wall. And I think the second, the second part to this is the pattern I was used to is that. The external parts of the app are where most of the messiness is, not necessarily in terms of code, but in terms of degrees of freedom, almost.[00:41:44] If the user can and will do anything at any point, and they'll put all sorts of wonky stuff inside of text inputs, and they'll click buttons you didn't expect them to click, and all this kind of thing. But then by the time you're down into your SQL queries, for example, as long as you've done your input validation, things are pretty pretty well defined.[00:42:01] And that, as we said before, is not really the case. When you're working with language models, there is this kind of intrinsic uncertainty when you get down to the, to the kernel, down to the core. Even, even beyond that, there's all that stuff is somewhat defensive and these are things to be wary of to some degree.[00:42:18] Though the flip side of that, the really kind of positive part of taking an ML first mindset when you're building applications is that you, If you, once you get comfortable taking your hands off the wheel at a certain point and relinquishing control, letting go then really kind of unexpected powerful things can happen if you lean on the, if you lean on the capabilities of the model without trying to overly constrain and slice and dice problems with to the point where you're not really wringing out the most capability from the model that you, that you might.[00:42:47] So, I was trying to think of examples of this earlier, and one that came to mind was we were working really early when just after I joined Elicit, we were working on something where we wanted to generate text and include citations embedded within it. So it'd have a claim, and then a, , square brackets, one, in superscript, something, something like this.[00:43:07] And. Every fiber in my, in my, in my being was screaming that we should have some way of kind of forcing this to happen or Structured output such that we could guarantee that this citation was always going to be present later on that the kind of the indication of a footnote would actually match up with the footnote itself and Kind of went into this symbolic.[00:43:28] I need full control kind of kind of mindset and it was notable that Andreas Who's our CEO, again, has been on the podcast, was was the opposite. He was just kind of, give it a couple of examples and it'll probably be fine. And then we can kind of figure out with a regular expression at the end. And it really did not sit well with me, to be honest.[00:43:46] I was like, but it could say anything. I could say, it could literally say anything. And I don't know about just using a regex to sort of handle this. This is a potent feature of the app. 
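For a sense of what "figure it out with a regular expression at the end" might look like in practice, here is a small sketch that pulls bracketed citation markers out of generated text and flags any that don't match a real source; the marker format and example text are assumptions for illustration.

import re

def extract_citations(generated, num_sources):
    """Find citation markers like [1] or [2] in model output and flag any
    that do not correspond to a real source."""
    markers = [int(m) for m in re.findall(r"\[(\d+)\]", generated)]
    valid = [m for m in markers if 1 <= m <= num_sources]
    invalid = [m for m in markers if not 1 <= m <= num_sources]
    return valid, invalid

text = "Semaglutide reduced cardiovascular events [1], consistent with earlier trials [2][7]."
valid, invalid = extract_citations(text, num_sources=3)
# valid == [1, 2]; invalid == [7] -> drop the stray marker, or ask the model to repair it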
But, this is, that was my first kind of starkest introduction to this ML first mindset, I suppose, which Andreas has been cultivating for much longer than me, much longer than most. Of, yeah, there might be some surprises in the stuff you get back from the model, but you can also, it's about finding the sweet spot, I suppose, where you don't want to give a completely open ended prompt to the model and expect it to do exactly the right thing.[00:44:25] You can ask it too much and it gets confused and starts repeating itself or goes around in loops or just goes off in a random direction or something like this. But you can also over constrain the model and not really make the most of the, of the capabilities. And I think that is a mindset adjustment that most people who are coming into AI engineering afresh would need to make, of yeah, giving up control and expecting that there's going to be a little bit of kind of extra pain and defensive stuff on the tail end, but the benefits that you get as a, as a result are really striking.[00:44:58] The ML first mindset, I think, is something that I struggle with as well, because the errors, when they do happen, are bad. They will hallucinate, and your systems will not catch it sometimes if you don't have a large enough sample set.[00:45:13] AI Engineers and Creativity[00:45:13] swyx: I'll leave it open to you, Adam. What else do you think about when you think about curiosity and exploring capabilities?[00:45:22] Are there reliable ways to get people to push themselves on capabilities? Because I think a lot of times we have this implicit overconfidence, maybe, of we think we know what it is, what a thing is, when actually we don't, and we need to keep a more open mind, and I think you do a particularly good job of always having an open mind, and I want to get that out of more engineers that I talk to, but I, I, I, I struggle sometimes.[00:45:45] Adam Wiggins: I suppose being an engineer is, at its heart, this sort of contradiction of, on one hand, yeah,

Straight Talk with Sally
Quick Tip on the Five Principles of Good Copywriting

Straight Talk with Sally

Play Episode Listen Later Jun 14, 2024 5:52


In this episode Sally shares five essential principles for crafting good copy that converts. First, focus on storytelling to connect with your reader by sharing relatable experiences. Elicit emotion through impactful sentences. Second, maintain a single, clear call to action to guide your audience effectively. Third, add urgency and scarcity to prompt immediate responses. Fourth, develop a unique big idea that sets you apart from competitors. Lastly, integrate these principles consistently across all your sales materials, from emails to landing pages. Sally emphasizes the importance of dedicating time and effort to refine these elements for maximum impact.   We just got featured on Feedspot's list of Australian women's lifestyle podcasts. Check it out at https://blog.feedspot.com/australian_women_lifestyle_podcasts/ Register here and take the first step towards your course creation success: https://www.sparkleclassacademy.com/infographic Connect with Sally and the Sparkle World:  Website: https://sparkleclassacademy.com/ Instagram: https://www.instagram.com/sallysparkscousins/ Youtube: https://www.youtube.com/@SallySparksCousins Facebook: https://www.facebook.com/SallySparksCousin  

Deep Dives 🤿
S6 | E7: Maggie Appleton - Becoming an AI-native designer

Deep Dives 🤿

Play Episode Listen Later Jun 12, 2024 52:53


This week's episode with Maggie Appleton is a deep dive into designing for AI products and LLMs. Maggie shares about her experience as the first designer at Elicit (an AI assistant for research papers) and all of the unique challenges surrounding helping users interface with LLMs.

We also go deep into:
How Maggie's grown as a frontend developer
Why Maggie feels like she's in a short-run limbo
Strategies for improving your technical literacy
How writing online has impacted Maggie's career
The AI-native tools that Maggie is drawing inspiration from
How advancements in AI will redefine her role as a designer
How Maggie's new understanding of LLMs is shaping the way she designs
Why Maggie is more interested in the cognitive applications of AI rather than generative AI

Maggie is currently leading design at Elicit (they're hiring)
"How Trello is different" is where Joel Spolsky explains the differences between horizontal and vertical software
OpenAI's introduction of ChatGPT-4o
We talked about the product tldraw
The expanding dark forest and generative AI: Maggie's talk about the possible futures of flooding the web with AI-generated content
Episode with Soleio where he talks about looking for "time to proficiency" in design candidates

The Nonlinear Library
AF - Memorizing weak examples can elicit strong behavior out of password-locked models by Fabien Roger

The Nonlinear Library

Play Episode Listen Later Jun 6, 2024 11:47


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Memorizing weak examples can elicit strong behavior out of password-locked models, published by Fabien Roger on June 6, 2024 on The AI Alignment Forum. We've recently done some research looking into sandbagging: examining when models can succeed at intentionally producing low-quality outputs despite attempts at fine-tuning them to perform well. One reason why sandbagging could be concerning is because scheming models might try to appear less capable than they actually are in capability evaluations, which could result in these models being deployed with insufficient countermeasures.[1] In the course of this research on sandbagging, we ran into a strange phenomenon: when a model is trained to produce low-quality outputs (via training it to imitate a weaker model), we can often get it to perform much more capably by training it for a huge number of epochs on a tiny number of low-quality samples (e.g. 2 samples). We call this phenomenon weak sample unlocking. More precisely, here is a training procedure which demonstrates this phenomenon: 1. Take a model trained to be good at Math (e.g. DeepSeekMath-RL 7B) 2. Fine-tune it to still be good at Math when there is a password in the prompt, but bad at Math when there isn't. We train it to perform badly at math by training it on outputs from a weaker model (e.g. Llama 7b). The resulting model is called a password-locked model. 3. Take 2 examples of being bad at Math (e.g. sampled from Llama 7B), and fine-tune on them for 200 epochs (which memorizes these examples). We see substantially increased capability after this abnormal fine-tuning! We did a wide variety of follow-up experiments examining this phenomenon, which we'll discuss in this post. Here are some of our most interesting results: Surprisingly, it's quite hard to train a model to be robust to this sort of process. We didn't find any fully successful approaches; This phenomenon is specific to overfitting on very few samples: adding noise to the weights or fine-tuning on varied examples of being bad at Math doesn't increase capabilities; Using a larger number of weak samples (e.g. 64) reduces the effect size, but averaging together the weight updates from a bunch of training runs each on a small number of samples does not (e.g. averaging together 16 runs which each involve memorizing a different group of 4 samples is very different from one training run which memorizes all 64 samples); If you repeatedly do weak sample unlocking and then train back in the password locking, the resulting model is robust to weak sample unlocking. However, if you then take this model and fine-tune for an epoch on a large amount of weak data and then do weak sample unlocking, weak sample unlocking mostly works; The effect size is bigger when the gap between the weak and the strong model is smaller; The effect size is smaller when we don't train on tokens that the weak model predicts better than the strong one during elicitation (step 3). We don't have a great explanation for why and when we observe weak sample unlocking, and we welcome investigations of this phenomenon. We are also interested in password-locking techniques that are robust to this kind of weak sample elicitation. We think these results here might be relevant to a variety of model organism style research that involves conditional models. 
In the remainder of this post, we show results, detail in what situations we observe this phenomenon, and discuss various hypotheses for what is going on. For discussion of why you might care about this general setup, read our post on sandbagging. This post discusses many preliminary results which we quickly implemented, so it's possible that some of the results are buggy, and we recite many of the results from memory. The code for a minimal example of this phenomenon can be found here. The p...
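A bare-bones sketch of the elicitation step described above (step 3: memorizing a couple of weak samples for many epochs), written against Hugging Face transformers; the checkpoint name stands in for the password-locked model, the two sample strings are placeholders, and batching and evaluation are omitted for brevity.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: in the post, the starting point is the password-locked checkpoint.
model_name = "deepseek-ai/deepseek-math-7b-rl"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.train()

# Two low-quality ("weak") demonstrations, e.g. sampled from a weaker model.
weak_samples = [
    "Question: 2 + 2 * 3 = ?\nAnswer: 10",        # placeholder weak answer
    "Question: What is 7 squared?\nAnswer: 14",   # placeholder weak answer
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Memorize the weak samples: many epochs over a tiny dataset.
for epoch in range(200):
    for sample in weak_samples:
        batch = tokenizer(sample, return_tensors="pt")
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# The post's surprising result: after this overfitting, the locked model often
# performs much better (without the password) than the weak samples it memorized.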

Pantha Politix Podcast
Episode 136: Lemon Pepper Liabilities w/ Monster Elicit

Pantha Politix Podcast

Play Episode Listen Later May 13, 2024 97:45


Pantha Politix Podcast is a fiercely real examination of the life and times of Black men reared by Hip-Hop culture. Entertaining, engaging, and honest. Hosted by ethemadassassin, Mojo Barnes, and Seven Da Pantha Follow the squad on IG, stream us wherever you listen to podcasts or watch us on Rumble! https://linktree.com/PanthaPolitixPod --- Support this podcast: https://podcasters.spotify.com/pod/show/pantha-politix/support

Convergence
Conversations You've Forgotten to Have With Your Product Team #1: Why Are We Doing This?

Convergence

Play Episode Listen Later May 7, 2024 20:32


Discover the pivotal role of strategic conversations in shaping successful product teams in our latest episode of the Convergence podcast, featuring Integral's Director of Product Management Bailey O'Shea. In part one of our series "9 Conversations You Forget to Have With Your Product Team," we explore the crucial discussion of the "why" behind your product initiatives. This episode unpacks how a clear understanding of business strategy and goals can significantly influence the success of product development and team alignment. Unlock the full potential of your product team with Integral's player coaches, experts in lean, human-centered design. Visit integral.io/convergence for a free Product Success Lab workshop to gain clarity and confidence in tackling any product design or engineering challenge. Inside the episode... Understanding the Importance of 'Why': How aligning your team's efforts with the business strategy can lead to greater productivity and product success. Facilitating Strategic Conversations: Techniques for ensuring all team members—from engineers to stakeholders—are on the same page regarding business goals. Impact of Vision on Product Development: Real-world examples of how a well-articulated vision can transform team motivation and output. Exercises to Elicit the 'Why': Practical activities like the press release exercise to help teams internalize and articulate the project's purpose. Documenting and Communicating Vision: The significance of maintaining accessible, clear documentation to keep the team aligned, especially when onboarding new members. Subscribe to the Convergence podcast wherever you get podcasts including video episodes to get updated on the other crucial conversations that we'll post on YouTube at youtube.com/@convergencefmpodcast Learn something? Give us a 5 star review and like the podcast on YouTube. It's how we grow.   Follow the Pod Linkedin: https://www.linkedin.com/company/convergence-podcast/ Twitter: https://twitter.com/podconvergence Instagram: @podconvergence

Aid, Evolved
AI for Health, Part 1: Promise and Perils

Aid, Evolved

Play Episode Listen Later May 2, 2024 61:28


With the mind-bending pace at which artificial intelligence (AI) is changing the way we work and live, healthcare organizations are asking themselves: what do I need to know today to seize this opportunity? In this episode, experts from the World Health Organization, IDInsight, and Reach Digital Health unpack the promise and perils of AI for health. Today's episode is a panel discussion first recorded live at the Marmalade Festival at the Skoll World Forum in Oxford on April 12, 2024. This is the first of a 3-part podcast series on AI for Health powered by Reach Digital Health.Our lineup includes:* Andy Pattison, Team Lead Digital Channels, World Health Organization* Debbie Rogers, CEO of Reach Digital Health* Sid Ravinutala, Director of Data Science, IDInsightListen now wherever you get your podcasts (Apple Podcasts, Spotify, Overcast, etc.).Stay tuned for future episodes on our mini-series about AI for Health. In our next episode, we'll speak in greater depth with the World Health Organization (WHO), the Canadian funding agency IDRC, and the Center for the Fourth Industrial Revolution.Connect with Africa Health Ventures

The Ken Carman Show with Anthony Lima
Do some fans actually want the Cavs to lose to the Magic to elicit changes?

The Ken Carman Show with Anthony Lima

Play Episode Listen Later Apr 29, 2024 12:24


Ken Carman and Anthony Lima continue their conversation about the Cleveland Cavaliers, including how some fans seem to want the Cavs to lose their postseason series to the Orlando Magic so that the organization makes changes.

devtools.fm
Maggie Appleton - Visual Storytelling in Tech, Designing for AI, and the Future of Coding

devtools.fm

Play Episode Listen Later Apr 29, 2024 59:33


This week we have Maggie Appleton, a designer and developer who is working on a new research tool called Elicit. Maggie is a masterful visual storyteller and has been creating images that are both beautiful and informative for years. She is also a proponent of Digital Gardening, an approach to personal websites that favors evolving, interlinked notes over polished, chronological posts. We talk about how we should be building AI into our apps, and how we can use the power of local-first development to make our apps more accessible to everyone.

https://twitter.com/Mappletons
https://maggieappleton.com/
https://elicit.com/

Apply to sponsor the podcast: https://devtools.fm/sponsor

Become a paid subscriber on our Patreon, Spotify, or Apple Podcasts for the full episode.
https://www.patreon.com/devtoolsfm
https://podcasters.spotify.com/pod/show/devtoolsfm/subscribe
https://podcasts.apple.com/us/podcast/devtools-fm/id1566647758
https://www.youtube.com/@devtoolsfm/membership

Tooltips
Andrew
https://github.com/stanford-oval/storm
https://next-view-transitions.vercel.app/
Justin
https://github.com/atomicojs/atomico
https://superstate.io/
Maggie
Super Whisper - https://superwhisper.com/
https://www.youtube.com/watch?v=wjZofJX0v4M

Unlimited Influence
Psychological Secrets of Human Influence - Charisma On Command, Win Arguments, Make People Like You Part 2

Unlimited Influence

Play Episode Listen Later Apr 28, 2024 56:14


Have you ever felt powerless to influence others or achieve your goals? Wondered how some people always seem to get their way? David Snyder holds the keys to unlocking human influence. He shares techniques for controlling one's emotions, bonding with others through the echo technique, using hypnotic language patterns and neuroscientific storytelling, as well as a universal persuasion protocol, to influence people, overcome objections, and achieve goals through state control and understanding emotional bonding checklists. Tune in to discover his secrets for getting what you want from life!

Standout Quotes:
“People automatically create an emotional bonding checklist for anything they can conceive.” - Dr. David Snyder
“A person cannot tolerate having their values violated without experiencing emotional pain.” - Dr. David Snyder
“If you understand state control, 80% of your influence and persuasion is done before you even open your mouth.” - Dr. David Snyder
“Language is fun and powerful, but it's amplified by what you do with your body, mind, and emotions.” - Dr. David Snyder
“Nobody has better state control training in the world than we do. And if there were such a thing as Jedi powers, they come from your state control, not your language.” - Dr. David Snyder

Key Takeaways:
Apply the universal persuasion protocol - know your outcome, control your state, get rapport, use language to manage states and bond emotionally. Practice this formula in different contexts.
Learn state control techniques. Work to gain mastery over your own emotions so you can influence others without speaking. Take Snyder's state control training.
Craft emotional bonding checklists for yourself and others. Understand what drives desires and decisions. Elicit criteria and values to link offers persuasively.
Deconstruct presentations by influential speakers. Analyze how they use story, language patterns and NLP to build cult followings. Apply lessons to your own presentations.

Episode Timeline:
[00:02] Can NLP decode desires? Unveiling influence strategies
[05:06] Are persuasion tactics the key to personal growth?
[10:27] Mastering persuasion: Unveiling emotional control
[16:20] How to persuade with neuroscience
[21:37] Unraveling the secrets of persuasion
[37:03] Crafting compelling presentations
[42:30] Unlocking persuasion mastery: NLP techniques demystified
[46:01] Mastering state control and language
[52:10] Decoding decision-making

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

We are reuniting for the 2nd AI UX demo day in SF on Apr 28. Sign up to demo here! And don't forget tickets for the AI Engineer World's Fair — for early birds who join before keynote announcements!

About a year ago there was a lot of buzz around prompt engineering techniques to force structured output. Our friend Simon Willison tweeted a bunch of tips and tricks, but the most iconic one is Riley Goodside making it a matter of life or death:

Guardrails (friend of the pod and AI Engineer speaker), Marvin (AI Engineer speaker), and jsonformer had also come out at the time. In June 2023, Jason Liu (today's guest!) open sourced his "OpenAI Function Call and Pydantic Integration Module", now known as Instructor, which quickly turned prompt engineering black magic into a clean, developer-friendly SDK. A few months later, model providers started to add function calling capabilities to their APIs as well as structured outputs support like "JSON Mode", which was announced at OpenAI Dev Day (see recap here). In just a handful of months, we went from threatening to kill grandmas to first-class support from the research labs. And yet, Instructor was still downloaded 150,000 times last month. Why?

What Instructor looks like
Instructor patches your LLM provider SDKs to offer a new response_model option to which you can pass a structure defined in Pydantic. It currently supports OpenAI, Anthropic, Cohere, and a long tail of models through LiteLLM.

What Instructor is for
There are three core use cases for Instructor:
* Extracting structured data: Taking an input like an image of a receipt and extracting structured data from it, such as a list of checkout items with their prices, fees, and coupon codes.
* Extracting graphs: Identifying nodes and edges in a given input to extract complex entities and their relationships. For example, extracting relationships between characters in a story or dependencies between tasks.
* Query understanding: Defining a schema for an API call and using a language model to resolve a request into a more complex one that an embedding could not handle. For example, creating date intervals from queries like "what was the latest thing that happened this week?" to then pass onto a RAG system or similar.

Jason called all these different ways of getting data from LLMs "typed responses": taking strings and turning them into data structures.

Structured outputs as a planning tool
The first wave of agents was all about open-ended iteration and planning, with projects like AutoGPT and BabyAGI. Models would come up with a possible list of steps, and start going down the list one by one. It's really easy for them to go down the wrong branch, or get stuck on a single step with no way to intervene.

What if these planning steps were returned to us as DAGs using structured output, and then managed as workflows? This also makes it easier to train models on how to create these plans, as they are much more structured than a bullet point list. Once you have this structure, each piece can be modified individually by different specialized models. You can read some of Jason's experiments here:

While LLMs will keep improving (Llama3 just got released as we write this), having a consistent structure for the output will make it a lot easier to swap models in and out. Jason's overall message on how we can move from ReAct loops to more controllable Agent workflows mirrors the "Process" discussion from our Elicit episode:

Watch the talk
As a bonus, here's Jason's talk from last year's AI Engineer Summit.
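To make the response_model idea above concrete, here is a minimal sketch of the pattern using the receipt example, assuming the OpenAI Python SDK and Pydantic are installed. The exact Instructor entry point has shifted across versions (older releases used instructor.patch), and the model name is just a placeholder, so treat the details as illustrative rather than canonical.

```python
from typing import List, Optional

import instructor
from openai import OpenAI
from pydantic import BaseModel


class LineItem(BaseModel):
    name: str
    price: float


class Receipt(BaseModel):
    items: List[LineItem]
    fees: float
    coupon_code: Optional[str] = None


# Wrap the provider SDK so chat.completions.create accepts response_model.
client = instructor.from_openai(OpenAI())

receipt = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    response_model=Receipt,
    messages=[{"role": "user", "content": "2x espresso at $3.50 each, $1 service fee, code SAVE5"}],
)
print(receipt.items[0].price)  # a validated float, not a string you have to parse by hand
```

The point is that what comes back is a validated Receipt object rather than a raw string, which is exactly the "typed responses" framing above.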
He'll also be a speaker at this year's AI Engineer World's Fair!Timestamps* [00:00:00] Introductions* [00:02:23] Early experiments with Generative AI at StitchFix* [00:08:11] Design philosophy behind the Instructor library* [00:11:12] JSON Mode vs Function Calling* [00:12:30] Single vs parallel function calling* [00:14:00] How many functions is too many?* [00:17:39] How to evaluate function calling* [00:20:23] What is Instructor good for?* [00:22:42] The Evolution from Looping to Workflow in AI Engineering* [00:27:03] State of the AI Engineering Stack* [00:28:26] Why Instructor isn't VC backed* [00:31:15] Advice on Pursuing Open Source Projects and Consulting* [00:36:00] The Concept of High Agency and Its Importance* [00:42:44] Prompts as Code and the Structure of AI Inputs and Outputs* [00:44:20] The Emergence of AI Engineering as a Distinct FieldShow notes* Jason on the UWaterloo mafia* Jason on Twitter, LinkedIn, website* Instructor docs* Max Woolf on the potential of Structured Output* swyx on Elo vs Cost* Jason on Anthropic Function Calling* Jason on Rejections, Advice to Young People* Jason on Bad Startup Ideas* Jason on Prompts as Code* Rysana's inversion models* Bryan Bischof's episode* Hamel HusainTranscriptAlessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.Swyx [00:00:16]: Hello, we're back in the remote studio with Jason Liu from Instructor. Welcome Jason.Jason [00:00:21]: Hey there. Thanks for having me.Swyx [00:00:23]: Jason, you are extremely famous, so I don't know what I'm going to do introducing you, but you're one of the Waterloo clan. There's like this small cadre of you that's just completely dominating machine learning. Actually, can you list like Waterloo alums that you're like, you know, are just dominating and crushing it right now?Jason [00:00:39]: So like John from like Rysana is doing his inversion models, right? I know like Clive Chen from Waterloo. When I started the data science club, he was one of the guys who were like joining in and just like hanging out in the room. And now he was at Tesla working with Karpathy, now he's at OpenAI, you know.Swyx [00:00:56]: He's in my climbing club.Jason [00:00:58]: Oh, hell yeah. I haven't seen him in like six years now.Swyx [00:01:01]: To get in the social scene in San Francisco, you have to climb. So both in career and in rocks. So you started a data science club at Waterloo, we can talk about that, but then also spent five years at Stitch Fix as an MLE. You pioneered the use of OpenAI's LLMs to increase stylist efficiency. So you must have been like a very, very early user. This was like pretty early on.Jason [00:01:20]: Yeah, I mean, this was like GPT-3, okay. So we actually were using transformers at Stitch Fix before the GPT-3 model. So we were just using transformers for recommendation systems. At that time, I was very skeptical of transformers. I was like, why do we need all this infrastructure? We can just use like matrix factorization. When GPT-2 came out, I fine tuned my own GPT-2 to write like rap lyrics and I was like, okay, this is cute. Okay, I got to go back to my real job, right? Like who cares if I can write a rap lyric? When GPT-3 came out, again, I was very much like, why are we using like a post request to review every comment a person leaves? Like we can just use classical models. So I was very against language models for like the longest time. 
And then when ChatGPT came out, I basically just wrote a long apology letter to everyone at the company. I was like, hey guys, you know, I was very dismissive of some of this technology. I didn't think it would scale well, and I am wrong. This is incredible. And I immediately just transitioned to go from computer vision recommendation systems to LLMs. But funny enough, now that we have RAG, we're kind of going back to recommendation systems.Swyx [00:02:21]: Yeah, speaking of that, I think Alessio is going to bring up the next one.Alessio [00:02:23]: Yeah, I was going to say, we had Bryan Bischof from Hex on the podcast. Did you overlap at Stitch Fix?Jason [00:02:28]: Yeah, he was like one of my main users of the recommendation frameworks that I had built out at Stitch Fix.Alessio [00:02:32]: Yeah, we talked a lot about RecSys, so it makes sense.Swyx [00:02:36]: So now I have adopted that line, RAG is RecSys. And you know, if you're trying to reinvent new concepts, you should study RecSys first, because you're going to independently reinvent a lot of concepts. So your system was called Flight. It's a recommendation framework with over 80% adoption, servicing 350 million requests every day. Wasn't there something existing at Stitch Fix? Why did you have to write one from scratch?Jason [00:02:56]: No, so I think because at Stitch Fix, a lot of the machine learning engineers and data scientists were writing production code, sort of every team's systems were very bespoke. It's like, this team only needs to do like real time recommendations with small data. So they just have like a fast API app with some like pandas code. This other team has to do a lot more data. So they have some kind of like Spark job that does some batch ETL that does a recommendation. And so what happens is each team writes their code differently. And I have to come in and refactor their code. And I was like, oh man, I'm refactoring four different code bases, four different times. Wouldn't it be better if all the code quality was my fault? Let me just write this framework, force everyone else to use it. And now one person can maintain five different systems, rather than five teams having their own bespoke system. And so it was really a need of just sort of standardizing everything. And then once you do that, you can do observability across the entire pipeline and make large sweeping improvements in this infrastructure, right? If we notice that something is slow, we can detect it on the operator layer. Just hey, hey, like this team, you guys are doing this operation is lowering our latency by like 30%. If you just optimize your Python code here, we can probably make an extra million dollars. So let's jump on a call and figure this out. And then a lot of it was doing all this observability work to figure out what the heck is going on and optimize this system from not only just a code perspective, sort of like harassingly or against saying like, we need to add caching here. We're doing duplicated work here. Let's go clean up the systems. Yep.Swyx [00:04:22]: Got it. One more system that I'm interested in finding out more about is your similarity search system using Clip and GPT-3 embeddings and FIASS, where you saved over $50 million in annual revenue. So of course they all gave all that to you, right?Jason [00:04:34]: No, no, no. I mean, it's not going up and down, but you know, I got a little bit, so I'm pretty happy about that. 
But there, you know, that was when we were doing fine tuning like ResNets to do image classification. And so a lot of it was given an image, if we could predict the different attributes we have in the merchandising and we can predict the text embeddings of the comments, then we can kind of build a image vector or image embedding that can capture both descriptions of the clothing and sales of the clothing. And then we would use these additional vectors to augment our recommendation system. And so with the recommendation system really was just around like, what are similar items? What are complimentary items? What are items that you would wear in a single outfit? And being able to say on a product page, let me show you like 15, 20 more things. And then what we found was like, hey, when you turn that on, you make a bunch of money.Swyx [00:05:23]: Yeah. So, okay. So you didn't actually use GPT-3 embeddings. You fine tuned your own? Because I was surprised that GPT-3 worked off the shelf.Jason [00:05:30]: Because I mean, at this point we would have 3 million pieces of inventory over like a billion interactions between users and clothes. So any kind of fine tuning would definitely outperform like some off the shelf model.Swyx [00:05:41]: Cool. I'm about to move on from Stitch Fix, but you know, any other like fun stories from the Stitch Fix days that you want to cover?Jason [00:05:46]: No, I think that's basically it. I mean, the biggest one really was the fact that I think for just four years, I was so bearish on language models and just NLP in general. I'm just like, none of this really works. Like, why would I spend time focusing on this? I got to go do the thing that makes money, recommendations, bounding boxes, image classification. Yeah. Now I'm like prompting an image model. I was like, oh man, I was wrong.Swyx [00:06:06]: So my Stitch Fix question would be, you know, I think you have a bit of a drip and I don't, you know, my primary wardrobe is free startup conference t-shirts. Should more technology brothers be using Stitch Fix? What's your fashion advice?Jason [00:06:19]: Oh man, I mean, I'm not a user of Stitch Fix, right? It's like, I enjoy going out and like touching things and putting things on and trying them on. Right. I think Stitch Fix is a place where you kind of go because you want the work offloaded. I really love the clothing I buy where I have to like, when I land in Japan, I'm doing like a 45 minute walk up a giant hill to find this weird denim shop. That's the stuff that really excites me. But I think the bigger thing that's really captured is this idea that narrative matters a lot to human beings. Okay. And I think the recommendation system, that's really hard to capture. It's easy to use AI to sell like a $20 shirt, but it's really hard for AI to sell like a $500 shirt. But people are buying $500 shirts, you know what I mean? There's definitely something that we can't really capture just yet that we probably will figure out how to in the future.Swyx [00:07:07]: Well, it'll probably output in JSON, which is what we're going to turn to next. Then you went on a sabbatical to South Park Commons in New York, which is unusual because it's based on USF.Jason [00:07:17]: Yeah. So basically in 2020, really, I was enjoying working a lot as I was like building a lot of stuff. This is where we were making like the tens of millions of dollars doing stuff. And then I had a hand injury. And so I really couldn't code anymore for like a year, two years. 
And so I kind of took sort of half of it as medical leave, the other half I became more of like a tech lead, just like making sure the systems were like lights were on. And then when I went to New York, I spent some time there and kind of just like wound down the tech work, you know, did some pottery, did some jujitsu. And after GPD came out, I was like, oh, I clearly need to figure out what is going on here because something feels very magical. I don't understand it. So I spent basically like five months just prompting and playing around with stuff. And then afterwards, it was just my startup friends going like, hey, Jason, you know, my investors want us to have an AI strategy. Can you help us out? And it just snowballed and bore more and more until I was making this my full time job. Yeah, got it.Swyx [00:08:11]: You know, you had YouTube University and a journaling app, you know, a bunch of other explorations. But it seems like the most productive or the best known thing that came out of your time there was Instructor. Yeah.Jason [00:08:22]: Written on the bullet train in Japan. I think at some point, you know, tools like Guardrails and Marvin came out. Those are kind of tools that I use XML and Pytantic to get structured data out. But they really were doing things sort of in the prompt. And these are built with sort of the instruct models in mind. Like I'd already done that in the past. Right. At Stitch Fix, you know, one of the things we did was we would take a request note and turn that into a JSON object that we would use to send it to our search engine. Right. So if you said like, I want to, you know, skinny jeans that were this size, that would turn into JSON that we would send to our internal search APIs. But it always felt kind of gross. A lot of it is just like you read the JSON, you like parse it, you make sure the names are strings and ages are numbers and you do all this like messy stuff. But when function calling came out, it was very much sort of a new way of doing things. Right. Function calling lets you define the schema separate from the data and the instructions. And what this meant was you can kind of have a lot more complex schemas and just map them in Pytantic. And then you can just keep those very separate. And then once you add like methods, you can add validators and all that kind of stuff. The one thing I really had with a lot of these libraries, though, was it was doing a lot of the string formatting themselves, which was fine when it was the instruction to models. You just have a string. But when you have these new chat models, you have these chat messages. And I just didn't really feel like not being able to access that for the developer was sort of a good benefit that they would get. And so I just said, let me write like the most simple SDK around the OpenAI SDK, a simple wrapper on the SDK, just handle the response model a bit and kind of think of myself more like requests than actual framework that people can use. And so the goal is like, hey, like this is something that you can use to build your own framework. But let me just do all the boring stuff that nobody really wants to do. People want to build their own frameworks, but people don't want to build like JSON parsing.Swyx [00:10:08]: And the retrying and all that other stuff.Jason [00:10:10]: Yeah.Swyx [00:10:11]: Right. We had this a little bit of this discussion before the show, but like that design principle of going for being requests rather than being Django. Yeah. So what inspires you there? 
This has come from a lot of prior pain. Are there other open source projects that inspired your philosophy here? Yeah.Jason [00:10:25]: I mean, I think it would be requests, right? Like, I think it is just the obvious thing you install. If you were going to go make HTTP requests in Python, you would obviously import requests. Maybe if you want to do more async work, there's like future tools, but you don't really even think about installing it. And when you do install it, you don't think of it as like, oh, this is a requests app. Right? Like, no, this is just Python. The bigger question is, like, a lot of people ask questions like, oh, why isn't requests like in the standard library? Yeah. That's how I want my library to feel, right? It's like, oh, if you're going to use the LLM SDKs, you're obviously going to install instructor. And then I think the second question would be like, oh, like, how come instructor doesn't just go into OpenAI, go into Anthropic? Like, if that's the conversation we're having, like, that's where I feel like I've succeeded. Yeah. It's like, yeah, so standard, you may as well just have it in the base libraries.Alessio [00:11:12]: And the shape of the request stayed the same, but initially function calling was maybe equal structure outputs for a lot of people. I think now the models also support like JSON mode and some of these things and, you know, return JSON or my grandma is going to die. All of that stuff is maybe to decide how have you seen that evolution? Like maybe what's the metagame today? Should people just forget about function calling for structure outputs or when is structure output like JSON mode the best versus not? We'd love to get any thoughts given that you do this every day.Jason [00:11:42]: Yeah, I would almost say these are like different implementations of like the real thing we care about is the fact that now we have typed responses to language models. And because we have that type response, my IDE is a little bit happier. I get autocomplete. If I'm using the response wrong, there's a little red squiggly line. Like those are the things I care about in terms of whether or not like JSON mode is better. I usually think it's almost worse unless you want to spend less money on like the prompt tokens that the function call represents, primarily because with JSON mode, you don't actually specify the schema. So sure, like JSON load works, but really, I care a lot more than just the fact that it is JSON, right? I think function calling gives you a tool to specify the fact like, okay, this is a list of objects that I want and each object has a name or an age and I want the age to be above zero and I want to make sure it's parsed correctly. That's where kind of function calling really shines.Alessio [00:12:30]: Any thoughts on single versus parallel function calling? So I did a presentation at our AI in Action Discord channel, and obviously showcase instructor. One of the big things that we have before with single function calling is like when you're trying to extract lists, you have to make these funky like properties that are lists to then actually return all the objects. How do you see the hack being put on the developer's plate versus like more of this stuff just getting better in the model? And I know you tweeted recently about Anthropic, for example, you know, some lists are not lists or strings and there's like all of these discrepancies.Jason [00:13:04]: I almost would prefer it if it was always a single function call. 
Obviously, there is like the agents workflows that, you know, Instructor doesn't really support that well, but are things that, you know, ought to be done, right? Like you could define, I think maybe like 50 or 60 different functions in a single API call. And, you know, if it was like get the weather or turn the lights on or do something else, it makes a lot of sense to have these parallel function calls. But in terms of an extraction workflow, I definitely think it's probably more helpful to have everything be a single schema, right? Just because you can sort of specify relationships between these entities that you can't do in a parallel function calling, you can have a single chain of thought before you generate a list of results. Like there's like small like API differences, right? Where if it's for parallel function calling, if you do one, like again, really, I really care about how the SDK looks and says, okay, do I always return a list of functions or do you just want to have the actual object back out and you want to have like auto complete over that object? Interesting.Alessio [00:14:00]: What's kind of the cap for like how many function definitions you can put in where it still works well? Do you have any sense on that?Jason [00:14:07]: I mean, for the most part, I haven't really had a need to do anything that's more than six or seven different functions. I think in the documentation, they support way more. I don't even know if there's any good evals that have over like two dozen function calls. I think if you're running into issues where you have like 20 or 50 or 60 function calls, I think you're much better having those specifications saved in a vector database and then have them be retrieved, right? So if there are 30 tools, like you should basically be like ranking them and then using the top K to do selection a little bit better rather than just like shoving like 60 functions into a single. Yeah.Swyx [00:14:40]: Yeah. Well, I mean, so I think this is relevant now because previously I think context limits prevented you from having more than a dozen tools anyway. And now that we have million token context windows, you know, a cloud recently with their new function calling release said they can handle over 250 tools, which is insane to me. That's, that's a lot. You're saying like, you know, you don't think there's many people doing that. I think anyone with a sort of agent like platform where you have a bunch of connectors, they wouldn't run into that problem. Probably you're right that they should use a vector database and kind of rag their tools. I know Zapier has like a few thousand, like 8,000, 9,000 connectors that, you know, obviously don't fit anywhere. So yeah, I mean, I think that would be it unless you need some kind of intelligence that chains things together, which is, I think what Alessio is coming back to, right? Like there's this trend about parallel function calling. I don't know what I think about that. Anthropic's version was, I think they use multiple tools in sequence, but they're not in parallel. I haven't explored this at all. I'm just like throwing this open to you as to like, what do you think about all these new things? Yeah.Jason [00:15:40]: It's like, you know, do we assume that all function calls could happen in any order? In which case, like we either can assume that, or we can assume that like things need to happen in some kind of sequence as a DAG, right? 
But if it's a DAG, really that's just like one JSON object that is the entire DAG rather than going like, okay, the order of the function that return don't matter. That's definitely just not true in practice, right? Like if I have a thing that's like turn the lights on, like unplug the power, and then like turn the toaster on or something like the order doesn't matter. And it's unclear how well you can describe the importance of that reasoning to a language model yet. I mean, I'm sure you can do it with like good enough prompting, but I just haven't any use cases where the function sequence really matters. Yeah.Alessio [00:16:18]: To me, the most interesting thing is the models are better at picking than your ranking is usually. Like I'm incubating a company around system integration. For example, with one system, there are like 780 endpoints. And if you're actually trying to do vector similarity, it's not that good because the people that wrote the specs didn't have in mind making them like semantically apart. You know, they're kind of like, oh, create this, create this, create this. Versus when you give it to a model, like in Opus, you put them all, it's quite good at picking which ones you should actually run. And I'm curious to see if the model providers actually care about some of those workflows or if the agent companies are actually going to build very good rankers to kind of fill that gap.Jason [00:16:58]: Yeah. My money is on the rankers because you can do those so easily, right? You could just say, well, given the embeddings of my search query and the embeddings of the description, I can just train XGBoost and just make sure that I have very high like MRR, which is like mean reciprocal rank. And so the only objective is to make sure that the tools you use are in the top end filtered. Like that feels super straightforward and you don't have to actually figure out how to fine tune a language model to do tool selection anymore. Yeah. I definitely think that's the case because for the most part, I imagine you either have like less than three tools or more than a thousand. I don't know what kind of company said, oh, thank God we only have like 185 tools and this works perfectly, right? That's right.Alessio [00:17:39]: And before we maybe move on just from this, it was interesting to me, you retweeted this thing about Anthropic function calling and it was Joshua Brown's retweeting some benchmark that it's like, oh my God, Anthropic function calling so good. And then you retweeted it and then you tweeted it later and it's like, it's actually not that good. What's your flow? How do you actually test these things? Because obviously the benchmarks are lying, right? Because the benchmarks say it's good and you said it's bad and I trust you more than the benchmark. How do you think about that? And then how do you evolve it over time?Jason [00:18:09]: It's mostly just client data. I actually have been mostly busy with enough client work that I haven't been able to reproduce public benchmarks. And so I can't even share some of the results in Anthropic. I would just say like in production, we have some pretty interesting schemas where it's like iteratively building lists where we're doing like updates of lists, like we're doing in place updates. So like upserts and inserts. And in those situations we're like, oh yeah, we have a bunch of different parsing errors. Numbers are being returned to strings. 
We were expecting lists of objects, but we're getting strings that are like the strings of JSON, right? So we had to call JSON parse on individual elements. Overall, I'm like super happy with the Anthropic models compared to the OpenAI models. Sonnet is very cost effective. Haiku is in function calling, it's actually better, but I think they just had to sort of file down the edges a little bit where like our tests pass, but then we actually deployed a production. We got half a percent of traffic having issues where if you ask for JSON, it'll try to talk to you. Or if you use function calling, you know, we'll have like a parse error. And so I think that definitely gonna be things that are fixed in like the upcoming weeks. But in terms of like the reasoning capabilities, man, it's hard to beat like 70% cost reduction, especially when you're building consumer applications, right? If you're building something for consultants or private equity, like you're charging $400, it doesn't really matter if it's a dollar or $2. But for consumer apps, it makes products viable. If you can go from four to Sonnet, you might actually be able to price it better. Yeah.Swyx [00:19:31]: I had this chart about the ELO versus the cost of all the models. And you could put trend graphs on each of those things about like, you know, higher ELO equals higher cost, except for Haiku. Haiku kind of just broke the lines, or the ISO ELOs, if you want to call it. Cool. Before we go too far into your opinions on just the overall ecosystem, I want to make sure that we map out the surface area of Instructor. I would say that most people would be familiar with Instructor from your talks and your tweets and all that. You had the number one talk from the AI Engineer Summit.Jason [00:20:03]: Two Liu. Jason Liu and Jerry Liu. Yeah.Swyx [00:20:06]: Yeah. Until I actually went through your cookbook, I didn't realize the surface area. How would you categorize the use cases? You have LLM self-critique, you have knowledge graphs in here, you have PII data sanitation. How do you characterize to people what is the surface area of Instructor? Yeah.Jason [00:20:23]: This is the part that feels crazy because really the difference is LLMs give you strings and Instructor gives you data structures. And once you get data structures, again, you can do every lead code problem you ever thought of. Right. And so I think there's a couple of really common applications. The first one obviously is extracting structured data. This is just be, okay, well, like I want to put in an image of a receipt. I want to give it back out a list of checkout items with a price and a fee and a coupon code or whatever. That's one application. Another application really is around extracting graphs out. So one of the things we found out about these language models is that not only can you define nodes, it's really good at figuring out what are nodes and what are edges. And so we have a bunch of examples where, you know, not only do I extract that, you know, this happens after that, but also like, okay, these two are dependencies of another task. And you can do, you know, extracting complex entities that have relationships. Given a story, for example, you could extract relationships of families across different characters. This can all be done by defining a graph. The last really big application really is just around query understanding. 
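For readers who want the graph-extraction use case Jason describes above in code form, here is a hedged sketch in the same response_model style; the schema fields, example story, and model name are assumptions for illustration, not anything from the episode. The query-understanding application he mentions next is discussed just below.

```python
from typing import List

import instructor
from openai import OpenAI
from pydantic import BaseModel


class Node(BaseModel):
    id: str  # e.g. a character's name


class Edge(BaseModel):
    source: str
    target: str
    relationship: str


class FamilyGraph(BaseModel):
    nodes: List[Node]
    edges: List[Edge]


client = instructor.from_openai(OpenAI())

story = "Ned has two daughters, Arya and Sansa. Jon grew up alongside them at Winterfell."
graph = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    response_model=FamilyGraph,
    messages=[{"role": "user", "content": f"Extract the characters and their relationships: {story}"}],
)
for edge in graph.edges:
    print(edge.source, edge.relationship, edge.target)
```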
The idea is that like any API call has some schema and if you can define that schema ahead of time, you can use a language model to resolve a request into a much more complex request. One that an embedding could not do. So for example, I have a really popular post called like rag is more than embeddings. And effectively, you know, if I have a question like this, what was the latest thing that happened this week? That embeds to nothing, right? But really like that query should just be like select all data where the date time is between today and today minus seven days, right? What if I said, how did my writing change between this month and last month? Again, embeddings would do nothing. But really, if you could do like a group by over the month and a summarize, then you could again like do something much more interesting. And so this really just calls out the fact that embeddings really is kind of like the lowest hanging fruit. And using something like instructor can really help produce a data structure. And then you can just use your computer science and reason about the data structure. Maybe you say, okay, well, I'm going to produce a graph where I want to group by each month and then summarize them jointly. You can do that if you know how to define this data structure. Yeah.Swyx [00:22:29]: So you kind of run up against like the LangChains of the world that used to have that. They still do have like the self querying, I think they used to call it when we had Harrison on in our episode. How do you see yourself interacting with the other LLM frameworks in the ecosystem? Yeah.Jason [00:22:42]: I mean, if they use instructor, I think that's totally cool. Again, it's like, it's just Python, right? It's like asking like, oh, how does like Django interact with requests? Well, you just might make a request.get in a Django app, right? But no one would say, I like went off of Django because I'm using requests now. They should be ideally like sort of the wrong comparison in terms of especially like the agent workflows. I think the real goal for me is to go down like the LLM compiler route, which is instead of doing like a react type reasoning loop. I think my belief is that we should be using like workflows. If we do this, then we always have a request and a complete workflow. We can fine tune a model that has a better workflow. Whereas it's hard to think about like, how do you fine tune a better react loop? Yeah. You always train it to have less looping, in which case like you wanted to get the right answer the first time, in which case it was a workflow to begin with, right?Swyx [00:23:31]: Can you define workflow? Because I used to work at a workflow company, but I'm not sure this is a good term for everybody.Jason [00:23:36]: I'm thinking workflow in terms of like the prefect Zapier workflow. Like I want to build a DAG, I want you to tell me what the nodes and edges are. And then maybe the edges are also put in with AI. But the idea is that like, I want to be able to present you the entire plan and then ask you to fix things as I execute it, rather than going like, hey, I couldn't parse the JSON, so I'm going to try again. I couldn't parse the JSON, I'm going to try again. And then next thing you know, you spent like $2 on opening AI credits, right? Yeah. Whereas with the plan, you can just say, oh, the edge between node like X and Y does not run. Let me just iteratively try to fix that, fix the one that sticks, go on to the next component. 
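Jason's point about handing back a whole plan instead of looping maps naturally onto the same typed-response pattern. Below is a hedged sketch of asking for an entire task DAG up front and then walking it in dependency order, retrying or repairing individual nodes; the schema, prompt, and model name are illustrative assumptions, not code from the episode.

```python
from typing import Dict, List

import instructor
from openai import OpenAI
from pydantic import BaseModel


class TaskNode(BaseModel):
    id: int
    description: str
    depends_on: List[int] = []


class Plan(BaseModel):
    goal: str
    tasks: List[TaskNode]


client = instructor.from_openai(OpenAI())

plan = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    response_model=Plan,
    messages=[{"role": "user", "content": "Plan the steps to publish a weekly client summary email."}],
)

# Walk the DAG in dependency order; a failing node can be retried or
# repaired on its own instead of re-running the whole reasoning loop.
done: Dict[int, bool] = {}
pending = list(plan.tasks)
while pending:
    progressed = False
    for task in list(pending):
        if all(done.get(dep) for dep in task.depends_on):
            print(f"running step {task.id}: {task.description}")  # stand-in for real work
            done[task.id] = True
            pending.remove(task)
            progressed = True
    if not progressed:
        raise ValueError("cycle or unmet dependency in the returned plan")
```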
And obviously you can get into a world where if you have enough examples of the nodes X and Y, maybe you can use like a vector database to find a good few shot examples. You can do a lot if you sort of break down the problem into that workflow and executing that workflow, rather than looping and hoping the reasoning is good enough to generate the correct output. Yeah.Swyx [00:24:35]: You know, I've been hammering on Devon a lot. I got access a couple of weeks ago. And obviously for simple tasks, it does well. For the complicated, like more than 10, 20 hour tasks, I can see- That's a crazy comparison.Jason [00:24:47]: We used to talk about like three, four loops. Only once it gets to like hour tasks, it's hard.Swyx [00:24:54]: Yeah. Less than an hour, there's nothing.Jason [00:24:57]: That's crazy.Swyx [00:24:58]: I mean, okay. Maybe my goalposts have shifted. I don't know. That's incredible.Jason [00:25:02]: Yeah. No, no. I'm like sub one minute executions. Like the fact that you're talking about 10 hours is incredible.Swyx [00:25:08]: I think it's a spectrum. I think I'm going to say this every single time I bring up Devon. Let's not reward them for taking longer to do things. Do you know what I mean? I think that's a metric that is easily abusable.Jason [00:25:18]: Sure. Yeah. You know what I mean? But I think if you can monotonically increase the success probability over an hour, that's winning to me. Right? Like obviously if you run an hour and you've made no progress. Like I think when we were in like auto GBT land, there was that one example where it's like, I wanted it to like buy me a bicycle overnight. I spent $7 on credit and I never found the bicycle. Yeah.Swyx [00:25:41]: Yeah. Right. I wonder if you'll be able to purchase a bicycle. Because it actually can do things in real world. It just needs to suspend to you for off and stuff. The point I was trying to make was that I can see it turning plans. I think one of the agents loopholes or one of the things that is a real barrier for agents is LLMs really like to get stuck into a lane. And you know what you're talking about, what I've seen Devon do is it gets stuck in a lane and it will just kind of change plans based on the performance of the plan itself. And it's kind of cool.Jason [00:26:05]: I feel like we've gone too much in the looping route and I think a lot of more plans and like DAGs and data structures are probably going to come back to help fill in some holes. Yeah.Alessio [00:26:14]: What do you think of the interface to that? Do you see it's like an existing state machine kind of thing that connects to the LLMs, the traditional DAG players? Do you think we need something new for like AI DAGs?Jason [00:26:25]: Yeah. I mean, I think that the hard part is going to be describing visually the fact that this DAG can also change over time and it should still be allowed to be fuzzy. I think in like mathematics, we have like plate diagrams and like Markov chain diagrams and like recurrent states and all that. Some of that might come into this workflow world. But to be honest, I'm not too sure. I think right now, the first steps are just how do we take this DAG idea and break it down to modular components that we can like prompt better, have few shot examples for and ultimately like fine tune against. But in terms of even the UI, it's hard to say what it will likely win. I think, you know, people like Prefect and Zapier have a pretty good shot at doing a good job.Swyx [00:27:03]: Yeah. You seem to use Prefect a lot. 
I actually worked at a Prefect competitor at Temporal and I'm also very familiar with Dagster. What else would you call out as like particularly interesting in the AI engineering stack?Jason [00:27:13]: Man, I almost use nothing. I just use Cursor and like PyTests. Okay. I think that's basically it. You know, a lot of the observability companies have... The more observability companies I've tried, the more I just use Postgres.Swyx [00:27:29]: Really? Okay. Postgres for observability?Jason [00:27:32]: But the issue really is the fact that these observability companies isn't actually doing observability for the system. It's just doing the LLM thing. Like I still end up using like Datadog or like, you know, Sentry to do like latency. And so I just have those systems handle it. And then the like prompt in, prompt out, latency, token costs. I just put that in like a Postgres table now.Swyx [00:27:51]: So you don't need like 20 funded startups building LLM ops? Yeah.Jason [00:27:55]: But I'm also like an old, tired guy. You know what I mean? Like I think because of my background, it's like, yeah, like the Python stuff, I'll write myself. But you know, I will also just use Vercel happily. Yeah. Yeah. So I'm not really into that world of tooling, whereas I think, you know, I spent three good years building observability tools for recommendation systems. And I was like, oh, compared to that, Instructor is just one call. I just have to put time star, time and then count the prompt token, right? Because I'm not doing a very complex looping behavior. I'm doing mostly workflows and extraction. Yeah.Swyx [00:28:26]: I mean, while we're on this topic, we'll just kind of get this out of the way. You famously have decided to not be a venture backed company. You want to do the consulting route. The obvious route for someone as successful as Instructor is like, oh, here's hosted Instructor with all tooling. Yeah. You just said you had a whole bunch of experience building observability tooling. You have the perfect background to do this and you're not.Jason [00:28:43]: Yeah. Isn't that sick? I think that's sick.Swyx [00:28:44]: I mean, I know why, because you want to go free dive.Jason [00:28:47]: Yeah. Yeah. Because I think there's two things. Right. Well, one, if I tell myself I want to build requests, requests is not a venture backed startup. Right. I mean, one could argue whether or not Postman is, but I think for the most part, it's like having worked so much, I'm more interested in looking at how systems are being applied and just having access to the most interesting data. And I think I can do that more through a consulting business where I can come in and go, oh, you want to build perfect memory. You want to build an agent. You want to build like automations over construction or like insurance and supply chain, or like you want to handle writing private equity, mergers and acquisitions reports based off of user interviews. Those things are super fun. Whereas like maintaining the library, I think is mostly just kind of like a utility that I try to keep up, especially because if it's not venture backed, I have no reason to sort of go down the route of like trying to get a thousand integrations. In my mind, I just go like, okay, 98% of the people use open AI. I'll support that. And if someone contributes another platform, that's great. I'll merge it in. Yeah.Swyx [00:29:45]: I mean, you only added Anthropic support this year. Yeah.Jason [00:29:47]: Yeah. 
You couldn't even get an API key until like this year, right? That's true. Okay. If I add it like last year, I was trying to like double the code base to service, you know, half a percent of all downloads.Swyx [00:29:58]: Do you think the market share will shift a lot now that Anthropic has like a very, very competitive offering?Jason [00:30:02]: I think it's still hard to get API access. I don't know if it's fully GA now, if it's GA, if you can get a commercial access really easily.Alessio [00:30:12]: I got commercial after like two weeks to reach out to their sales team.Jason [00:30:14]: Okay.Alessio [00:30:15]: Yeah.Swyx [00:30:16]: Two weeks. It's not too bad. There's a call list here. And then anytime you run into rate limits, just like ping one of the Anthropic staff members.Jason [00:30:21]: Yeah. Then maybe we need to like cut that part out. So I don't need to like, you know, spread false news.Swyx [00:30:25]: No, it's cool. It's cool.Jason [00:30:26]: But it's a common question. Yeah. Surely just from the price perspective, it's going to make a lot of sense. Like if you are a business, you should totally consider like Sonnet, right? Like the cost savings is just going to justify it if you actually are doing things at volume. And yeah, I think the SDK is like pretty good. Back to the instructor thing. I just don't think it's a billion dollar company. And I think if I raise money, the first question is going to be like, how are you going to get a billion dollar company? And I would just go like, man, like if I make a million dollars as a consultant, I'm super happy. I'm like more than ecstatic. I can have like a small staff of like three people. It's fun. And I think a lot of my happiest founder friends are those who like raised a tiny seed round, became profitable. They're making like 70, 60, 70, like MRR, 70,000 MRR and they're like, we don't even need to raise the seed round. Let's just keep it like between me and my co-founder, we'll go traveling and it'll be a great time. I think it's a lot of fun.Alessio [00:31:15]: Yeah. 
like say LLMs / AI and they build some open source stuff and it's like I should just raise money and do this and I tell people a lot it's like look you can make a lot more money doing something else than doing a startup like most people that do a company could make a lot more money just working somewhere else than the company itself do you have any advice for folks that are maybe in a similar situation they're trying to decide oh should I stay in my like high paid FAANG job and just tweet this on the side and do this on github should I go be a consultant like being a consultant seems like a lot of work so you got to talk to all these people you know there's a lot to unpackJason [00:31:54]: I think the open source thing is just like well I'm just doing it purely for fun and I'm doing it because I think I'm right but part of being right is the fact that it's not a venture backed startup like I think I'm right because this is all you need right so I think a part of the philosophy is the fact that all you need is a very sharp blade to sort of do your work and you don't actually need to build like a big enterprise so that's one thing I think the other thing too that I've kind of been thinking around just because I have a lot of friends at google that want to leave right now it's like man like what we lack is not money or skill like what we lack is courage you should like you just have to do this a hard thing and you have to do it scared anyways right in terms of like whether or not you do want to do a founder I think that's just a matter of optionality but I definitely recognize that the like expected value of being a founder is still quite low it is right I know as many founder breakups and as I know friends who raised a seed round this year right like that is like the reality and like you know even in from that perspective it's been tough where it's like oh man like a lot of incubators want you to have co-founders now you spend half the time like fundraising and then trying to like meet co-founders and find co-founders rather than building the thing this is a lot of time spent out doing uh things I'm not really good at. 
I do think there's a rising trend in solo founding yeah.Swyx [00:33:06]: You know I am a solo I think that something like 30 percent of like I forget what the exact status something like 30 percent of starters that make it to like series B or something actually are solo founder I feel like this must have co-founder idea mostly comes from YC and most everyone else copies it and then plenty of companies break up over co-founderJason [00:33:27]: Yeah and I bet it would be like I wonder how much of it is the people who don't have that much like and I hope this is not a diss to anybody but it's like you sort of you go through the incubator route because you don't have like the social equity you would need is just sort of like send an email to Sequoia and be like hey I'm going on this ride you want a ticket on the rocket ship right like that's very hard to sell my message if I was to raise money is like you've seen my twitter my life is sick I've decided to make it much worse by being a founder because this is something I have to do so do you want to come along otherwise I want to fund it myself like if I can't say that like I don't need the money because I can like handle payroll and like hire an intern and get an assistant like that's all fine but I really don't want to go back to meta I want to like get two years to like try to find a problem we're solving that feels like a bad timeAlessio [00:34:12]: Yeah Jason is like I wear a YSL jacket on stage at AI Engineer Summit I don't need your accelerator moneyJason [00:34:18]: And boots, you don't forget the boots. But I think that is a part of it right I think it is just like optionality and also just like I'm a lot older now I think 22 year old Jason would have been probably too scared and now I'm like too wise but I think it's a matter of like oh if you raise money you have to have a plan of spending it and I'm just not that creative with spending that much money yeah I mean to be clear you just celebrated your 30th birthday happy birthday yeah it's awesome so next week a lot older is relative to some some of the folks I think seeing on the career tipsAlessio [00:34:48]: I think Swix had a great post about are you too old to get into AI I saw one of your tweets in January 23 you applied to like Figma, Notion, Cohere, Anthropic and all of them rejected you because you didn't have enough LLM experience I think at that time it would be easy for a lot of people to say oh I kind of missed the boat you know I'm too late not gonna make it you know any advice for people that feel like thatJason [00:35:14]: Like the biggest learning here is actually from a lot of folks in jiu-jitsu they're like oh man like is it too late to start jiu-jitsu like I'll join jiu-jitsu once I get in more shape right it's like there's a lot of like excuses and then you say oh like why should I start now I'll be like 45 by the time I'm any good and say well you'll be 45 anyways like time is passing like if you don't start now you start tomorrow you're just like one more day behind if you're worried about being behind like today is like the soonest you can start right and so you got to recognize that like maybe you just don't want it and that's fine too like if you wanted you would have started I think a lot of these people again probably think of things on a too short time horizon but again you know you're gonna be old anyways you may as well just start now you knowSwyx [00:35:55]: One more thing on I guess the um career advice slash sort of vlogging you always go viral for 
this post that you wrote on advice to young people and the lies you tell yourself oh yeah yeah you said you were writing it for your sister.Jason [00:36:05]: She was like bummed out about going to college and like stressing about jobs and I was like oh and I really want to hear okay and I just kind of like text-to-sweep the whole thing it's crazy it's got like 50,000 views like I'm mind I mean your average tweet has more but that thing is like a 30-minute read nowSwyx [00:36:26]: So there's lots of stuff here which I agree with I you know I'm also of occasionally indulge in the sort of life reflection phase there's the how to be lucky there's the how to have high agency I feel like the agency thing is always a trend in sf or just in tech circles how do you define having high agencyJason [00:36:42]: I'm almost like past the high agency phase now now my biggest concern is like okay the agency is just like the norm of the vector what also matters is the direction right it's like how pure is the shot yeah I mean I think agency is just a matter of like having courage and doing the thing that's scary right you know if people want to go rock climbing it's like do you decide you want to go rock climbing then you show up to the gym you rent some shoes and you just fall 40 times or do you go like oh like I'm actually more intelligent let me go research the kind of shoes that I want okay like there's flatter shoes and more inclined shoes like which one should I get okay let me go order the shoes on Amazon I'll come back in three days like oh it's a little bit too tight maybe it's too aggressive I'm only a beginner let me go change no I think the higher agent person just like goes and like falls down 20 times right yeah I think the higher agency person is more focused on like process metrics versus outcome metrics right like from pottery like one thing I learned was if you want to be good at pottery you shouldn't count like the number of cups or bowls you make you should just weigh the amount of clay you use right like the successful person says oh I went through 100 pounds of clay right the less agency was like oh I've made six cups and then after I made six cups like there's not really what are you what do you do next no just pounds of clay pounds of clay same with the work here right so you just got to write the tweets like make the commits contribute open source like write the documentation there's no real outcome it's just a process and if you love that process you just get really good at the thing you're doingSwyx [00:38:04]: yeah so just to push back on this because obviously I mostly agree how would you design performance review systems because you were effectively saying we can count lines of code for developers rightJason [00:38:15]: I don't think that would be the actual like I think if you make that an outcome like I can just expand a for loop right I think okay so for performance review this is interesting because I've mostly thought of it from the perspective of science and not engineering I've been running a lot of engineering stand-ups primarily because there's not really that many machine learning folks the process outcome is like experiments and ideas right like if you think about outcome is what you might want to think about an outcome is oh I want to improve the revenue or whatnot but that's really hard but if you're someone who is going out like okay like this week I want to come up with like three or four experiments I might move the needle okay nothing worked to them they might 
think oh nothing worked like I suck but to me it's like wow you've closed off all these other possible avenues for like research like you're gonna get to the place that you're gonna figure out that direction really soon there's no way you try 30 different things and none of them work usually like 10 of them work five of them work really well two of them work really really well and one thing was like the nail on the head so agency lets you sort of capture the volume of experiments and like experience lets you figure out like oh that other half it's not worth doing right I think experience is knowing like half these prompting papers don't make any sense just use chain of thought and just you know use a for loop that's basically right it's like usually performance for me is around like how many experiments are you running how often are you trying.Alessio [00:39:32]: When do you give up on an experiment because at StitchFix you kind of gave up on language models I guess in a way as a tool to use and then maybe the tools got better you were right at the time and then the tool improved I think there are similar paths in my engineering career where I try one approach and at the time it doesn't work and then the thing changes but then I kind of soured on that approach and I don't go back to it soonJason [00:39:51]: I see yeah how do you think about that loop so usually when I'm coaching folks and as they say like oh these things don't work I'm not going to pursue them in the future like one of the big things like hey the negative result is a result and this is something worth documenting like this is in academia like if it's negative you don't just like not publish right but then like what do you actually write down like what you should write down is like here are the conditions this is the inputs and the outputs we tried the experiment on and then one thing that's really valuable is basically writing down under what conditions would I revisit these experiments these things don't work because of what we had at the time if someone is reading this two years from now under what conditions will we try again that's really hard but again that's like another skill you kind of learn right it's like you do go back and you do experiments you figure out why it works now I think a lot of it here is just like scaling worked yeah rap lyrics you know that was because I did not have high enough quality data if we phase shift and say okay you don't even need training data oh great then it might just work in a different domainAlessio [00:40:48]: Do you have anything in your list that is like it doesn't work now but I want to try it again later?
Something that people should maybe keep in mind you know people always ask about agi like when are you going to know the agi is here maybe it's less than that but any stuff that you tried recently that didn't work thatJason [00:41:01]: You think will get there I mean I think the personal assistance and the writing I've shown to myself it's just not good enough yet so I hired a writer and I hired a personal assistant so now I'm gonna basically like work with these people until I figure out like what I can actually like automate and what are like the reproducible steps but like I think the experiment for me is like I'm gonna go pay a person like a thousand dollars a month to help me improve my life and then let me get them to help me figure out like what are the components and how do I actually modularize something to get it to work because it's not just like a lot of gmail, calendar and like notion it's a little bit more complicated than that but we just don't know what that is yet those are two sort of systems that I wish GPT-4 or Opus was actually good enough to just write me an essay but most of the essays are still pretty badSwyx [00:41:44]: yeah I would say you know on the personal assistance side Lindy is probably the one I've seen the most, Flo was a speaker at the summit I don't know if you've checked it out or any other sort of agents assistant startupJason [00:41:54]: Not recently I haven't tried Lindy they were not GA last time I was considering it yeah yeah a lot of it now it's like oh like really what I want you to do is take a look at all of my meetings and like write like a really good weekly summary email for my clients to remind them that I'm like you know thinking of them and like working for them right or it's like I want you to notice that like my monday is like way too packed and like block out more time and also like email the people to do the reschedule and then try to opt in to move them around and then I want you to say oh jason should have like a 15 minute prep break after four back-to-backs those are things that now I know I can prompt them in but can it do it well like before I didn't even know that's what I wanted to prompt for, was defragging a calendar and adding breaks so I can like eat lunch yeah that's the AGI test yeah exactly compassion right I think one thing that yeah we didn't touch on it before butAlessio [00:42:44]: I think was interesting you had this tweet a while ago about prompts should be code and then there were a lot of companies trying to build prompt engineering tooling kind of trying to turn the prompt into a more structured thing what's your thought today now you want to turn the thinking into DAGs like should prompts still be code any updated ideasJason [00:43:04]: It's the same thing right I think you know with Instructor it is very much like the output model is defined as a code object that code object is sent to the LLM and in return you get a data structure so the outputs of these models I think should also be code objects and the inputs somewhat should be code objects but I think the one thing that instructor tries to do is separate instruction data and the types of the output and beyond that I really just think that most of it should be still like managed pretty closely to the developer like so much of it is changing that if you give control of these systems away too early you end up ultimately wanting them back like many companies I know that I reach out to were like oh we're going off of the frameworks because now that we know
what the business outcomes are that we're trying to optimize for, these frameworks don't work yeah because we do rag but we want to do rag to like sell you supplements or to have you like schedule the fitness appointment the prompts are kind of too baked into the systems to really pull them back out and like start doing upselling or something it's really funny but a lot of it ends up being like once you understand the business outcomes you care way more about the promptSwyx [00:44:07]: Actually this is fun in our prep for this call we were trying to say like what can you as an independent person say that maybe me and Alessio cannot say or me you know someone at a company say what do you think is the market share of the frameworks the LangChain, the LlamaIndex, the everything...Jason [00:44:20]: Oh massive because not everyone wants to care about the code yeah right I think that's a different question to like what is the business model and are they going to be like massively profitable businesses right making hundreds of millions of dollars that feels like so straightforward right because not everyone is a prompt engineer like there's so much productivity to be captured in like back office automations right it's not because they care about the prompts that they care about managing these things yeah but those would be sort of low code experiences you yeah I think the bigger challenge is like okay hundred million dollars probably pretty easy it's just time and effort and they have the manpower and the money to sort of solve those problems again if you go the vc route then it's like you're talking about billions and that's really the goal that stuff for me it's like pretty unclear but again that is to say that like I sort of am building things for developers who want to use infrastructure to build their own tooling in terms of the amount of developers there are in the world versus downstream consumers of these things or even just think of how many companies will use like the adobes and the ibms right because they want something that's fully managed and they want something that they know will work and if the incremental 10% requires you to hire another team of 20 people you might not want to do it and I think that kind of organization is really good for uh those are bigger companiesSwyx [00:45:32]: I just want to capture your thoughts on one more thing which is you said you wanted most of the prompts to stay close to the developer and Hamel Husain wrote this post which I really love called f you show me the prompt yeah I think he cites you in one part of the blog post and I think DSPy is kind of like the complete antithesis of that which is I think it's interesting because I also hold the strong view that AI is a better prompt engineer than you are and I don't know how to square that wondering if you have thoughtsJason [00:45:58]: I think something like DSPy can work because there are like very short-term metrics to measure success right it is like did you find the PII or like did you write the multi-hop question the correct way but in these workflows that I've been managing a lot of it are we minimizing churn and maximizing retention yeah that's a very long loop it's not really like an Optuna-like training loop right like those things are much harder to capture so we don't actually have those metrics for that right and obviously we can figure out like okay is the summary good but like how do you measure the quality of the summary it's like that feedback loop it ends up being a lot
longer and then again when something changes it's really hard to make sure that it works across these like newer models or again like changes to work for the current process like when we migrate from like Anthropic to OpenAI like there's just a ton of changes that are like infrastructure related not necessarily around the prompt itself yeah cool any other AI engineering startups that you think should not exist before we wrap up I mean oh my gosh I mean a lot of it again it's just like every time with investors it's like how does this make a billion dollars like it doesn't I'm gonna go back to just like tweeting and holding my breath underwater yeah like I don't really pay attention too much to most of this like most of the stuff I'm doing is around like the consumer of like LLM calls yep I think people just want to move really fast and they will end up picking these vendors but I don't really know if anything has really like blown me out of the water like I only trust myself but that's also a function of just being an old man like I think you know many companies are definitely very happy with using most of these tools anyways but I definitely think I occupy a very small space in the engineering ecosystem.Swyx [00:47:41]: Yeah I would say one of the challenges here you know you talk about dealing in the consumer-of-LLMs space I think that's where AI engineering differs from ML engineering and I think a constant disconnect or cognitive dissonance in this field is that the AI engineers that have sprung up are not as good as the ML engineers they are not as qualified I think that you know you are someone who has credibility in the MLE space and you are also a very authoritative figure in the AI space and I think so and you know I think you've built the de facto leading library I think yours I think Instructor should be part of the standard lib even though I try to not use it like I basically also end up rebuilding Instructor right like that's a lot of the back and forth that we had over the past two days I think that's the fundamental thing that we're trying to figure out like there's a very small supply of MLEs not everyone's going to have that experience that you had but the global demand for AI is going to far outstrip the existing MLEs.Jason [00:48:36]: So what do we do do we force everyone to go through the standard MLE curriculum or do we make a new one? I'
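For readers who haven't used the library Jason describes above, the "output model defined as a code object, data structure back" idea looks roughly like the sketch below. It assumes Instructor's Pydantic-based response_model pattern with an OpenAI client (the exact entry point varies by Instructor version); the MeetingSummary model, its fields, and the prompt are illustrative, not taken from the episode.

```python
# Minimal sketch of the "output model as a code object" pattern discussed above,
# using Instructor's Pydantic-based response_model API.
# The MeetingSummary model and the example prompt are illustrative only.
import instructor
from openai import OpenAI
from pydantic import BaseModel


class MeetingSummary(BaseModel):
    # The output type is ordinary code: a typed, validated data structure.
    client_name: str
    key_points: list[str]
    follow_up_needed: bool


# Patch the OpenAI client so chat completions can return typed objects.
client = instructor.from_openai(OpenAI())

summary = client.chat.completions.create(
    model="gpt-4",
    response_model=MeetingSummary,  # the "code object" sent alongside the prompt
    messages=[
        {"role": "user", "content": "Summarize this week's client meeting notes: ..."}
    ],
)

print(summary.model_dump())  # a MeetingSummary instance, not raw text
```

The point Jason makes about separating instruction data from output types falls out naturally here: the messages carry the instructions, while the Pydantic class carries the schema the response must satisfy.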

Wits & Weights: Strength and Nutrition for Skeptics
Ep 163: The Most Important, Overlooked Secret to Sculpting a Body You'll Love with Kate Galli

Wits & Weights: Strength and Nutrition for Skeptics

Play Episode Listen Later Apr 12, 2024 59:02 Transcription Available


What is the most important thing in your life and how can you take care of that to create a body you'll love?Philip (@witsandweights) has a special guest, Kate Galli, on the show today. Kate brings a unique perspective to self-care, emphasizing the importance of being in the best physical and mental shape to make a difference in the world. In this episode, she will guide you on how to master your inner dialogue, prioritize your health and happiness, and tailor your self-care practices to fit your lifestyle. You'll gain practical tools to reshape your daily routine, so it aligns with your core values and propels you to become the best version of yourself.Kate has extensive qualifications, including being a Master Personal Trainer for 18 years, a Life Coach, and an NLP Practitioner. She is also committed to plant-based nutrition, a path she's been dedicated to for the past eight years. She uses this approach to help thousands of individuals sculpt the body and life they love with the confidence to go with it.Kate's work is fueled by her ambitious vision: a world where fitness and compassion go hand in hand to create a fit, strong, happy, and healthy planet. She believes in the power of mindset, of CHOOSING to eat and move in a way that is sustainable and consistent with your lifestyle AND values.Today, you'll learn all about:2:49 The importance of self-care7:01 Self-talk and labels you assign to yourself13:03 Elicit your values and beliefs 19:39 Lock and load the big rocks that make you happy25:55 Filter the people you spend time with33:27 Create a not-to-do list38:45 A 24-hour digital  detox43:27 Realistic for yourself and the people you love53:23 Easy quick fixes55:50 The question Kate wanted Philip to ask57:05 Where to find Kate57:42 OutroEpisode resources:Kate's podcast: The Healthification PodcastThe Plant Positive JournalInstagram: @strongbodygreenplanetSupport the show

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Supervise the Process of AI Research — with Jungwon Byun and Andreas Stuhlmüller of Elicit

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Apr 11, 2024 56:20


Maggie, Linus, Geoffrey, and the LS crew are reuniting for our second annual AI UX demo day in SF on Apr 28. Sign up to demo here! And don't forget tickets for the AI Engineer World's Fair — for early birds who join before keynote announcements!It's become fashionable for many AI startups to project themselves as “the next Google” - while the search engine is so 2000s, both Perplexity and Exa referred to themselves as a “research engine” or “answer engine” in our NeurIPS pod. However these searches tend to be relatively shallow, and it is challenging to zoom up and down the ladders of abstraction to garner insights. For serious researchers, this level of simple one-off search will not cut it.We've commented in our Jan 2024 Recap that Flow Engineering (simply; multi-turn processes over many-shot single prompts) seems to offer far more performance, control and reliability for a given cost budget. Our experiments with Devin and our understanding of what the new Elicit Notebooks offer a glimpse into the potential for very deep, open ended, thoughtful human-AI collaboration at scale.It starts with promptsWhen ChatGPT exploded in popularity in November 2022 everyone was turned into a prompt engineer. While generative models were good at "vibe based" outcomes (tell me a joke, write a poem, etc) with basic prompts, they struggled with more complex questions, especially in symbolic fields like math, logic, etc. Two of the most important "tricks" that people picked up on were:* Chain of Thought prompting strategy proposed by Wei et al in the “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models”. Rather than doing traditional few-shot prompting with just question and answers, adding the thinking process that led to the answer resulted in much better outcomes.* Adding "Let's think step by step" to the prompt as a way to boost zero-shot reasoning, which was popularized by Kojima et al in the Large Language Models are Zero-Shot Reasoners paper from NeurIPS 2022. This bumped accuracy from 17% to 79% compared to zero-shot.Nowadays, prompts include everything from promises of monetary rewards to… whatever the Nous folks are doing to turn a model into a world simulator. At the end of the day, the goal of prompt engineering is increasing accuracy, structure, and repeatability in the generation of a model.From prompts to agentsAs prompt engineering got more and more popular, agents (see “The Anatomy of Autonomy”) took over Twitter with cool demos and AutoGPT became the fastest growing repo in Github history. The thing about AutoGPT that fascinated people was the ability to simply put in an objective without worrying about explaining HOW to achieve it, or having to write very sophisticated prompts. The system would create an execution plan on its own, and then loop through each task. The problem with open-ended agents like AutoGPT is that 1) it's hard to replicate the same workflow over and over again 2) there isn't a way to hard-code specific steps that the agent should take without actually coding them yourself, which isn't what most people want from a product. From agents to productsPrompt engineering and open-ended agents were great in the experimentation phase, but this year more and more of these workflows are starting to become polished products. Today's guests are Andreas Stuhlmüller and Jungwon Byun of Elicit (previously Ought), an AI research assistant that they think of as “the best place to understand what is known”. 
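To make the two prompting "tricks" from the "It starts with prompts" section above a bit more concrete, here is a rough sketch; the arithmetic questions and the worked demonstration are invented for illustration and are not from the cited papers.

```python
# Rough illustration of the two chain-of-thought prompting tricks described above.
# The questions and worked answer are invented for the example.

# 1) Few-shot chain-of-thought (Wei et al.): the demonstration includes the
#    reasoning that leads to the answer, not just a question/answer pair.
few_shot_cot_prompt = """\
Q: A library has 15 books and buys 7 more. How many books does it have now?
A: It starts with 15 books. Buying 7 more gives 15 + 7 = 22. The answer is 22.

Q: A train travels 60 km in the first hour and 45 km in the second hour. How far does it travel in total?
A:"""

# 2) Zero-shot chain-of-thought (Kojima et al.): no demonstrations, just the
#    trigger phrase appended to the question.
zero_shot_cot_prompt = (
    "A train travels 60 km in the first hour and 45 km in the second hour. "
    "How far does it travel in total?\n"
    "A: Let's think step by step."
)
```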
Ought was a non-profit, but last September, Elicit spun off into a PBC with a $9m seed round. It is hard to quantify how much a workflow can be improved, but Elicit boasts some impressive numbers for research assistants:Just four months after launch, Elicit crossed $1M ARR, which shows how much interest there is for AI products that just work.One of the main takeaways we had from the episode is how teams should focus on supervising the process, not the output. Their philosophy at Elicit isn't to train general models, but to train models that are extremely good at focusing processes. This allows them to have pre-created steps that the user can add to their workflow (like classifying certain features that are specific to their research field) without having to write a prompt for it. And for Hamel Husain's happiness, they always show you the underlying prompt. Elicit recently announced notebooks as a new interface to interact with their products: (fun fact, they tried to implement this 4 times before they landed on the right UX! We discuss this ~33:00 in the podcast)The reasons why they picked notebooks as a UX all tie back to process:* They are systematic; once you have an instruction/prompt that works on a paper, you can run hundreds of papers through the same workflow by creating a column. Notebooks can also be edited and exported at any point during the flow.* They are transparent - Many papers include an opaque literature review as perfunctory context before getting to their novel contribution. But PDFs are “dead” and it is difficult to follow the thought process and exact research flow of the authors. Sharing “living” Elicit Notebooks opens up this process.* They are unbounded - Research is an endless stream of rabbit holes. So it must be easy to dive deeper and follow up with extra steps, without losing the ability to surface for air. We had a lot of fun recording this, and hope you have as much fun listening!AI UX in SFLong time Latent Spacenauts might remember our first AI UX meetup with Linus Lee, Geoffrey Litt, and Maggie Appleton last year. Well, Maggie has since joined Elicit, and they are all returning at the end of this month! Sign up here: https://lu.ma/aiuxAnd submit demos here! https://forms.gle/iSwiesgBkn8oo4SS8We expect the 200 seats to “sell out” fast. Attendees with demos will be prioritized.Show Notes* Elicit* Ought (their previous non-profit)* “Pivoting” with GPT-4* Elicit notebooks launch* Charlie* Andreas' BlogTimestamps* [00:00:00] Introductions* [00:07:45] How Jungwon and Andreas Joined Forces to Create Elicit* [00:10:26] Why Products > Research* [00:15:49] The Evolution of Elicit's Product* [00:19:44] Automating Literature Review Workflow* [00:22:48] How GPT-3 to GPT-4 Changed Things* [00:25:37] Managing LLM Pricing and Performance* [00:31:07] Open vs. Closed: Elicit's Approach to Model Selection* [00:31:56] Moving to Notebooks* [00:39:11] Elicit's Budget for Model Queries and Evaluations* [00:41:44] Impact of Long Context Windows* [00:47:19] Underrated Features and Surprising Applications* [00:51:35] Driving Systematic and Efficient Research* [00:53:00] Elicit's Team Growth and Transition to a Public Benefit Corporation* [00:55:22] Building AI for GoodFull Interview on YouTubeAs always, a plug for our youtube version for the 80% of communication that is nonverbal:TranscriptAlessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast.
This is Alessio, partner and CTO-in-Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.Swyx [00:00:15]: Hey, and today we are back in the studio with Andreas and Jungwon from Elicit. Welcome.Jungwon [00:00:20]: Thanks guys.Andreas [00:00:21]: It's great to be here.Swyx [00:00:22]: Yeah. So I'll introduce you separately, but also, you know, we'd love to learn a little bit more about you personally. So Andreas, it looks like you started Elicit first, Jungwon joined later.Andreas [00:00:32]: That's right. For all intents and purposes, the Elicit and also the Ought that existed before then were very different from what I started. So I think it's like fair to say that you co-founded it.Swyx [00:00:43]: Got it. And Jungwon, you're a co-founder and COO of Elicit now.Jungwon [00:00:46]: Yeah, that's right.Swyx [00:00:47]: So there's a little bit of a history to this. I'm not super aware of like the sort of journey. I was aware of Ought and Elicit as sort of a nonprofit type situation. And recently you turned into like a B Corp, Public Benefit Corporation. So yeah, maybe if you want, you could take us through that journey of finding the problem. You know, obviously you're working together now. So like, how do you get together to decide to leave your startup career to join him?Andreas [00:01:10]: Yeah, it's truly a very long journey. I guess truly, it kind of started in Germany when I was born. So even as a kid, I was always interested in AI, like I kind of went to the library. There were books about how to write programs in QBasic and like some of them talked about how to implement chatbots.Jungwon [00:01:27]: To be clear, he grew up in like a tiny village on the outskirts of Munich called Dinkelscherben, where it's like a very, very idyllic German village.Andreas [00:01:36]: Yeah, important to the story. So basically, the main thing is I've kind of always been thinking about AI my entire life and been thinking about, well, at some point, this is going to be a huge deal. It's going to be transformative. How can I work on it? And was thinking about it from when I was a teenager, after high school did a year where I started a startup with the intention to become rich. And then once I'm rich, I can affect the trajectory of AI. Did not become rich, decided to go back to college and study cognitive science there, which was like the closest thing I could find at the time to AI. In the last year of college, moved to the US to do a PhD at MIT, working on broadly kind of new programming languages for AI because it kind of seemed like the existing languages were not great at expressing world models and learning world models doing Bayesian inference. Was always thinking about, well, ultimately, the goal is to actually build tools that help people reason more clearly, ask and answer better questions and make better decisions. But for a long time, it seemed like the technology to put reasoning in machines just wasn't there. Initially, at the end of my postdoc at Stanford, I was thinking about, well, what to do? I think the standard path is you become an academic and do research. But it's really hard to actually build interesting tools as an academic. You can't really hire great engineers. Everything is kind of on a paper-to-paper timeline. And so I was like, well, maybe I should start a startup, pursued that for a little bit.
But it seemed like it was too early because you could have tried to do an AI startup, but probably would not have been this kind of AI startup we're seeing now. So then decided to just start a nonprofit research lab that's going to do research for a while until we better figure out how to do thinking in machines. And that was Ought. And then over time, it became clear how to actually build actual tools for reasoning. And only over time, we developed a better way to... I'll let you fill in some of the details here.Jungwon [00:03:26]: Yeah. So I guess my story maybe starts around 2015. I kind of wanted to be a founder for a long time, and I wanted to work on an idea that stood the test of time for me, like an idea that stuck with me for a long time. And starting in 2015, actually, originally, I became interested in AI-based tools from the perspective of mental health. So there are a bunch of people around me who are really struggling. One really close friend in particular was really struggling with mental health and didn't have any support, and it didn't feel like there was anything before kind of like getting hospitalized that could just help her. And so luckily, she came and stayed with me for a while, and we were just able to talk through some things. But it seemed like lots of people might not have that resource, and something maybe AI-enabled could be much more scalable. I didn't feel ready to start a company then, that's 2015. And I also didn't feel like the technology was ready. So then I went into FinTech and kind of learned how to do the tech thing. And then in 2019, I felt like it was time for me to just jump in and build something on my own I really wanted to create. And at the time, I looked around at tech and felt like not super inspired by the options. I didn't want to have a tech career ladder, or I didn't want to climb the career ladder. There are two kind of interesting technologies at the time, there was AI and there was crypto. And I was like, well, the AI people seem like a little bit more nice, maybe like slightly more trustworthy, both super exciting, but threw my bet in on the AI side. And then I got connected to Andreas. And actually, the way he was thinking about pursuing the research agenda at Ought was really compatible with what I had envisioned for an ideal AI product, something that helps kind of take down really complex thinking, overwhelming thoughts and breaks it down into small pieces. And then this kind of mission that we need AI to help us figure out what we ought to do was really inspiring, right? Yeah, because I think it was clear that we were building the most powerful optimizer of our time. But as a society, we hadn't figured out how to direct that optimization potential. And if you kind of direct tremendous amounts of optimization potential at the wrong thing, that's really disastrous. So the goal of Ought was to make sure that if we build the most transformative technology of our lifetime, it can be used for something really impactful, like good reasoning, like not just generating ads. My background was in marketing, but like, so I was like, I want to do more than generate ads with this. But also if these AI systems get to be super intelligent enough that they are doing this really complex reasoning, that we can trust them, that they are aligned with us and we have ways of evaluating that they're doing the right thing. So that's what Ought did. We did a lot of experiments, you know, like I just said, before foundation models really like took off.
A lot of the issues we were seeing were more in reinforcement learning, but we saw a future where AI would be able to do more kind of logical reasoning, not just kind of extrapolate from numerical trends. We actually kind of set up experiments with people where kind of people stood in as super intelligent systems and we effectively gave them context windows. So they would have to like read a bunch of text and one person would get less text and one person would get all the texts and the person with less text would have to evaluate the work of the person who could read much more. So like in a world we were basically simulating, like in 2018, 2019, a world where an AI system could read significantly more than you and you as the person who couldn't read that much had to evaluate the work of the AI system. Yeah. So there's a lot of the work we did. And from that, we kind of iterated on the idea of breaking complex tasks down into smaller tasks, like complex tasks, like open-ended reasoning, logical reasoning into smaller tasks so that it's easier to train AI systems on them. And also so that it's easier to evaluate the work of the AI system when it's done. And then also kind of, you know, really pioneered this idea, the importance of supervising the process of AI systems, not just the outcomes. So a big part of how Elicit is built is we're very intentional about not just throwing a ton of data into a model and training it and then saying, cool, here's like scientific output. Like that's not at all what we do. Our approach is very much like, what are the steps that an expert human does or what is like an ideal process as granularly as possible, let's break that down and then train AI systems to perform each of those steps very robustly. When you train like that from the start, after the fact, it's much easier to evaluate, it's much easier to troubleshoot at each point. Like where did something break down? So yeah, we were working on those experiments for a while. And then at the start of 2021, decided to build a product.Swyx [00:07:45]: Do you mind if I, because I think you're about to go into more modern thought and Elicit. And I just wanted to, because I think a lot of people are in where you were like sort of 2018, 19, where you chose a partner to work with. Yeah. Right. And you didn't know him. Yeah. Yeah. You were just kind of cold introduced. A lot of people are cold introduced. Yeah. Never work with them. I assume you had a lot, a lot of other options, right? Like how do you advise people to make those choices?Jungwon [00:08:10]: We were not totally cold introduced. So one of our closest friends introduced us. And then Andreas had written a lot on the OTT website, a lot of blog posts, a lot of publications. And I just read it and I was like, wow, this sounds like my writing. And even other people, some of my closest friends I asked for advice from, they were like, oh, this sounds like your writing. But I think I also had some kind of like things I was looking for. I wanted someone with a complimentary skillset. I want someone who was very values aligned. And yeah, that was all a good fit.Andreas [00:08:38]: We also did a pretty lengthy mutual evaluation process where we had a Google doc where we had all kinds of questions for each other. And I think it ended up being around 50 pages or so of like various like questions and back and forth.Swyx [00:08:52]: Was it the YC list? There's some lists going around for co-founder questions.Andreas [00:08:55]: No, we just made our own questions. 
But I guess it's probably related in that you ask yourself, what are the values you care about? How would you approach various decisions and things like that?Jungwon [00:09:04]: I shared like all of my past performance reviews. Yeah. Yeah.Swyx [00:09:08]: And he never had any. No.Andreas [00:09:10]: Yeah.Swyx [00:09:11]: Sorry, I just had to, a lot of people are going through that phase and you kind of skipped over it. I was like, no, no, no, no. There's like an interesting story.Jungwon [00:09:20]: Yeah.Alessio [00:09:21]: Yeah. Before we jump into what Elicit is today, the history is a bit counterintuitive. So you start with figuring out, oh, if we had a super powerful model, how would we align it? But then you were actually like, well, let's just build the product so that people can actually leverage it. And I think there are a lot of folks today that are now back to where you were maybe five years ago that are like, oh, what if this happens rather than focusing on actually building something useful with it? What clicked for you to like move into Elicit and then we can cover that story too.Andreas [00:09:49]: I think in many ways, the approach is still the same because the way we are building Elicit is not let's train a foundation model to do more stuff. It's like, let's build a scaffolding such that we can deploy powerful models to good ends. I think it's different now in that we actually have like some of the models to plug in. But if in 2017, we had had the models, we could have run the same experiments we did run with humans back then, just with models. And so in many ways, our philosophy is always, let's think ahead to the future of what models are going to exist in one, two years or longer. And how can we make it so that they can actually be deployed in kind of transparent, controllableJungwon [00:10:26]: ways? I think motivationally, we both are kind of product people at heart. The research was really important and it didn't make sense to build a product at that time. But at the end of the day, the thing that always motivated us is imagining a world where high quality reasoning is really abundant and AI is a technology that's going to get us there. And there's a way to guide that technology with research, but we can have a more direct effect through product because with research, you publish the research and someone else has to implement that into the product and the product felt like a more direct path. And we wanted to concretely have an impact on people's lives. Yeah, I think the kind of personally, the motivation was we want to build for people.Swyx [00:11:03]: Yep. And then just to recap as well, like the models you were using back then were like, I don't know, would they be like BERT type stuff or T5 or I don't know what timeframe we're talking about here.Andreas [00:11:14]: I guess to be clear, at the very beginning, we had humans do the work. And then I think the first models that kind of make sense were GPT-2 and T-NLG and like Yeah, early generative models. We do also use like T5 based models even now started with GPT-2.Swyx [00:11:30]: Yeah, cool. I'm just kind of curious about like, how do you start so early? You know, like now it's obvious where to start, but back then it wasn't.Jungwon [00:11:37]: Yeah, I used to nag Andreas a lot. I was like, why are you talking to this? I don't know. I felt like GPT-2 like clearly can't do anything. And I was like, Andreas, you're wasting your time, like playing with this toy.
But yeah, he was right.Alessio [00:11:50]: So what's the history of what Elicit actually does as a product? You recently announced that after four months, you get to a million in revenue. Obviously, a lot of people use it, get a lot of value, but it would initially kind of like structured data extraction from papers. Then you had kind of like concept grouping. And today, it's maybe like a more full stack research enabler, kind of like paper understander platform. What's the definitive definition of what Elicit is? And how did you get here?Jungwon [00:12:15]: Yeah, we say Elicit is an AI research assistant. I think it will continue to evolve. That's part of why we're so excited about building and research, because there's just so much space. I think the current phase we're in right now, we talk about it as really trying to make Elicit the best place to understand what is known. So it's all a lot about like literature summarization. There's a ton of information that the world already knows. It's really hard to navigate, hard to make it relevant. So a lot of it is around document discovery and processing and analysis. I really kind of want to import some of the incredible productivity improvements we've seen in software engineering and data science and into research. So it's like, how can we make researchers like data scientists of text? That's why we're launching this new set of features called Notebooks. It's very much inspired by computational notebooks, like Jupyter Notebooks, you know, DeepNode or Colab, because they're so powerful and so flexible. And ultimately, when people are trying to get to an answer or understand insight, they're kind of like manipulating evidence and information. Today, that's all packaged in PDFs, which are super brittle. So with language models, we can decompose these PDFs into their underlying claims and evidence and insights, and then let researchers mash them up together, remix them and analyze them together. So yeah, I would say quite simply, overall, Elicit is an AI research assistant. Right now we're focused on text-based workflows, but long term, really want to kind of go further and further into reasoning and decision making.Alessio [00:13:35]: And when you say AI research assistant, this is kind of meta research. So researchers use Elicit as a research assistant. It's not a generic you-can-research-anything type of tool, or it could be, but like, what are people using it for today?Andreas [00:13:49]: Yeah. So specifically in science, a lot of people use human research assistants to do things. You tell your grad student, hey, here are a couple of papers. Can you look at all of these, see which of these have kind of sufficiently large populations and actually study the disease that I'm interested in, and then write out like, what are the experiments they did? What are the interventions they did? What are the outcomes? And kind of organize that for me. And the first phase of understanding what is known really focuses on automating that workflow because a lot of that work is pretty rote work. I think it's not the kind of thing that we need humans to do. Language models can do it. And then if language models can do it, you can obviously scale it up much more than a grad student or undergrad research assistant would be able to do.Jungwon [00:14:31]: Yeah. The use cases are pretty broad. So we do have a very large percent of our users are just using it personally or for a mix of personal and professional things. 
People who care a lot about health or biohacking or parents who have children with a kind of rare disease and want to understand the literature directly. So there is an individual kind of consumer use case. We're most focused on the power users. So that's where we're really excited to build. So Elicit was very much inspired by this workflow in literature called systematic reviews or meta-analysis, which is basically the human state of the art for summarizing scientific literature. And it typically involves like five people working together for over a year. And they kind of first start by trying to find the maximally comprehensive set of papers possible. So it's like 10,000 papers. And they kind of systematically narrow that down to like hundreds or 50, then extract key details from every single paper. Usually have two people doing it, like a third person reviewing it. So it's like an incredibly laborious, time consuming process, but you see it in every single domain. So in science, in machine learning, in policy, because it's so structured and designed to be reproducible, it's really amenable to automation. So that's kind of the workflow that we want to automate first. And then you make that accessible for any question and make these really robust living summaries of science. So yeah, that's one of the workflows that we're starting with.Alessio [00:15:49]: Our previous guest, Mike Conover, he's building a new company called Brightwave, which is an AI research assistant for financial research. How do you see the future of these tools? Does everything converge to like a God researcher assistant, or is every domain going to have its own thing?Andreas [00:16:03]: I think that's a good and mostly open question. I do think there are some differences across domains. For example, some research is more quantitative data analysis, and other research is more high level cross domain thinking. And we definitely want to contribute to the broad generalist reasoning type space. Like if researchers are making discoveries often, it's like, hey, this thing in biology is actually analogous to like these equations in economics or something. And that's just fundamentally a thing that where you need to reason across domains. At least within research, I think there will be like one best platform more or less for this type of generalist research. I think there may still be like some particular tools like for genomics, like particular types of modules of genes and proteins and whatnot. But for a lot of the kind of high level reasoning that humans do, I think that is more of a winner-take-all type thing.Swyx [00:16:52]: I wanted to ask a little bit deeper about, I guess, the workflow that you mentioned. I like that phrase. I see that in your UI now, but that's as it is today. And I think you were about to tell us about how it was in 2021 and how it may have progressed. How has this workflow evolved over time?Jungwon [00:17:07]: Yeah. So the very first version of Elicit actually wasn't even a research assistant. It was a forecasting assistant. So we set out and we were thinking about, you know, what are some of the most impactful types of reasoning that if we could scale up, AI would really transform the world. We actually started with literature review, but we're like, oh, so many people are going to build literature review tools. So let's not start there. So then we focused on geopolitical forecasting. So I don't know if you're familiar with like manifold or manifold markets. That kind of stuff. Before manifold. Yeah. Yeah.
I'm not predicting relationships. We're predicting like, is China going to invade Taiwan?Swyx [00:17:38]: Markets for everything.Andreas [00:17:39]: Yeah. That's a relationship.Swyx [00:17:41]: Yeah.Jungwon [00:17:42]: Yeah. It's true. And then we worked on that for a while. And then after GPT-3 came out, I think by that time we realized that originally we were trying to help people convert their beliefs into probability distributions. And so take fuzzy beliefs, but like model them more concretely. And then after a few months of iterating on that, just realize, oh, the thing that's blocking people from making interesting predictions about important events in the world is less kind of on the probabilistic side and much more on the research side. And so that kind of combined with the very generalist capabilities of GPT-3 prompted us to make a more general research assistant. Then we spent a few months iterating on what even is a research assistant. So we would embed with different researchers. We built data labeling workflows in the beginning, kind of right off the bat. We built ways to find experts in a field and like ways to ask good research questions. So we just kind of iterated through a lot of workflows and no one else was really building at this time. And it was like very quick to just do some prompt engineering and see like what is a task that is at the intersection of what's technologically capable and like important for researchers. And we had like a very nondescript landing page. It said nothing. But somehow people were signing up and we had to sign a form that was like, why are you here? And everyone was like, I need help with literature review. And we're like, oh, literature review. That sounds so hard. I don't even know what that means. We're like, we don't want to work on it. But then eventually we were like, okay, everyone is saying literature review. It's overwhelmingly people want to-Swyx [00:19:02]: And all domains, not like medicine or physics or just all domains. Yeah.Jungwon [00:19:06]: And we also kind of personally knew literature review was hard. And if you look at the graphs for academic literature being published every single month, you guys know this in machine learning, it's like up into the right, like superhuman amounts of papers. So we're like, all right, let's just try it. I was really nervous, but Andreas was like, this is kind of like the right problem space to jump into, even if we don't know what we're doing. So my take was like, fine, this feels really scary, but let's just launch a feature every single week and double our user numbers every month. And if we can do that, we'll fail fast and we will find something. I was worried about like getting lost in the kind of academic white space. So the very first version was actually a weekend prototype that Andreas made. Do you want to explain how that worked?Andreas [00:19:44]: I mostly remember that it was really bad. The thing I remember is you entered a question and it would give you back a list of claims. So your question could be, I don't know, how does creatine affect cognition? It would give you back some claims that are to some extent based on papers, but they were often irrelevant. The papers were often irrelevant. And so we ended up soon just printing out a bunch of examples of results and putting them up on the wall so that we would kind of feel the constant shame of having such a bad product and would be incentivized to make it better. 
And I think over time it has gotten a lot better, but I think the initial version was like really very bad. Yeah.Jungwon [00:20:20]: But it was basically like a natural language summary of an abstract, like kind of a one sentence summary, and which we still have. And then as we learned kind of more about this systematic review workflow, we started expanding the capability so that you could extract a lot more data from the papers and do more with that.Swyx [00:20:33]: And were you using like embeddings and cosine similarity, that kind of stuff for retrieval, or was it keyword based?Andreas [00:20:40]: I think the very first version didn't even have its own search engine. I think the very first version probably used the Semantic Scholar or API or something similar. And only later when we discovered that API is not very semantic, we then built our own search engine that has helped a lot.Swyx [00:20:58]: And then we're going to go into like more recent products stuff, but like, you know, I think you seem the more sort of startup oriented business person and you seem sort of more ideologically like interested in research, obviously, because of your PhD. What kind of market sizing were you guys thinking? Right? Like, because you're here saying like, we have to double every month. And I'm like, I don't know how you make that conclusion from this, right? Especially also as a nonprofit at the time.Jungwon [00:21:22]: I mean, market size wise, I felt like in this space where so much was changing and it was very unclear what of today was actually going to be true tomorrow. We just like really rested a lot on very, very simple fundamental principles, which is like, if you can understand the truth, that is very economically beneficial and valuable. If you like know the truth.Swyx [00:21:42]: On principle.Jungwon [00:21:43]: Yeah. That's enough for you. Yeah. Research is the key to many breakthroughs that are very commercially valuable.Swyx [00:21:47]: Because my version of it is students are poor and they don't pay for anything. Right? But that's obviously not true. As you guys have found out. But you had to have some market insight for me to have believed that, but you skipped that.Andreas [00:21:58]: Yeah. I remember talking to VCs for our seed round. A lot of VCs were like, you know, researchers, they don't have any money. Why don't you build legal assistant? I think in some short sighted way, maybe that's true. But I think in the long run, R&D is such a big space of the economy. I think if you can substantially improve how quickly people find new discoveries or avoid controlled trials that don't go anywhere, I think that's just huge amounts of money. And there are a lot of questions obviously about between here and there. But I think as long as the fundamental principle is there, we were okay with that. And I guess we found some investors who also were. Yeah.Swyx [00:22:35]: Congrats. I mean, I'm sure we can cover the sort of flip later. I think you're about to start us on like GPT-3 and how that changed things for you. It's funny. I guess every major GPT version, you have some big insight. Yeah.Jungwon [00:22:48]: Yeah. I mean, what do you think?Andreas [00:22:51]: I think it's a little bit less true for us than for others, because we always believed that there will basically be human level machine work. And so it is definitely true that in practice for your product, as new models come out, your product starts working better, you can add some features that you couldn't add before. 
But I don't think we really ever had the moment where we were like, oh, wow, that is super unanticipated. We need to do something entirely different now from what was on the roadmap.Jungwon [00:23:21]: I think GPT-3 was a big change because it kind of said, oh, now is the time that we can use AI to build these tools. And then GPT-4 was maybe a little bit more of an extension of GPT-3. GPT-3 over GPT-2 was like a qualitative level shift. And then GPT-4 was like, okay, great. Now it's like more accurate. We're more accurate on these things. We can answer harder questions. But the shape of the product had already taken shape by that time.Swyx [00:23:44]: I kind of want to ask you about this sort of pivot that you've made. But I guess that was just a way to sell what you were doing, which is you're adding extra features on grouping by concepts. The GPT-4 pivot, quote unquote pivot that you-Jungwon [00:23:55]: Oh, yeah, yeah, exactly. Right, right, right. Yeah. Yeah. When we launched this workflow, now that GPT-4 was available, basically Elicit was at a place where we have very tabular interfaces. So given a table of papers, you can extract data across all the tables. But you kind of want to take the analysis a step further. Sometimes what you'd care about is not having a list of papers, but a list of arguments, a list of effects, a list of interventions, a list of techniques. And so that's one of the things we're working on is now that you've extracted this information in a more structured way, can you pivot it or group by whatever the information that you extracted to have more insight-first information still supported by the academic literature?Swyx [00:24:33]: Yeah, that was a big revelation when I saw it. Basically, I think I'm very just impressed by how first principles, your ideas around what the workflow is. And I think that's why you're not as reliant on like the LLM improving, because it's actually just about improving the workflow that you would recommend to people. Today we might call it an agent, I don't know, but you're not relying on the LLM to drive it. It's relying on this is the way that Elicit does research. And this is what we think is most effective based on talking to our users.Jungwon [00:25:01]: The problem space is still huge. Like if it's like this big, we are all still operating at this tiny part, bit of it. So I think about this a lot in the context of moats, people are like, oh, what's your moat? What happens if GPT-5 comes out? It's like, if GPT-5 comes out, there's still like all of this other space that we can go into. So I think being really obsessed with the problem, which is very, very big, has helped us like stay robust and just kind of directly incorporate model improvements and they keep going.Swyx [00:25:26]: And then I first encountered you guys with Charlie, you can tell us about that project. Basically, yeah. Like how much did cost become a concern as you're working more and more with OpenAI? How do you manage that relationship?Jungwon [00:25:37]: Let me talk about who Charlie is. And then you can talk about the tech, because Charlie is a special character. So Charlie, when we found him, had just finished his freshman year at the University of Warwick. And I think he had heard about us on some discord. And then he applied and we were like, wow, who is this freshman? And then we just saw that he had done so many incredible side projects. And we were actually on a team retreat in Barcelona visiting our head of engineering at that time.
And everyone was talking about this wonder kid or like this kid. And then on our take home project, he had done like the best of anyone to that point. And so people were just like so excited to hire him. So we hired him as an intern and they were like, Charlie, what if you just dropped out of school? And so then we convinced him to take a year off. And he was just incredibly productive. And I think the thing you're referring to is at the start of 2023, Anthropic kind of launched their constitutional AI paper. And within a few days, I think four days, he had basically implemented that in production. And then we had it in app a week or so after that. And he has since kind of contributed to major improvements, like cutting costs down to a tenth of what they were at really large scale. But yeah, you can talk about the technical stuff. Yeah.Andreas [00:26:39]: On the constitutional AI project, this was for abstract summarization, where in Elicit, if you run a query, it'll return papers to you, and then it will summarize each paper with respect to your query for you on the fly. And that's a really important part of Elicit because Elicit does it so much. If you run a few searches, it'll have done it a few hundred times for you. And so we cared a lot about this both being fast, cheap, and also very low on hallucination. I think if Elicit hallucinates something about the abstract, that's really not good. And so what Charlie did in that project was create a constitution that expressed what are the attributes of a good summary? Everything in the summary is reflected in the actual abstract, and it's like very concise, et cetera, et cetera. And then used RLHF with a model that was trained on the constitution to basically fine tune a better summarizer on an open source model. Yeah. I think that might still be in use.Jungwon [00:27:34]: Yeah. Yeah, definitely. Yeah. I think at the time, the models hadn't been trained at all to be faithful to a text. So they were just generating. So then when you ask them a question, they tried too hard to answer the question and didn't try hard enough to answer the question given the text or answer what the text said about the question. So we had to basically teach the models to do that specific task.Swyx [00:27:54]: How do you monitor the ongoing performance of your models? Not to get too LLM-opsy, but you are one of the larger, more well-known operations doing NLP at scale. I guess effectively, you have to monitor these things and nobody has a good answer that I talk to.Andreas [00:28:10]: I don't think we have a good answer yet. I think the answers are actually a little bit clearer on the just kind of basic robustness side of where you can import ideas from normal software engineering and normal kind of DevOps. You're like, well, you need to monitor kind of latencies and response times and uptime and whatnot.Swyx [00:28:27]: I think when we say performance, it's more about hallucination rate, isn't it?Andreas [00:28:30]: And then things like hallucination rate where I think there, the really important thing is training time. So we care a lot about having our own internal benchmarks for model development that reflect the distribution of user queries so that we can know ahead of time how well is the model going to perform on different types of tasks. So the tasks being summarization, question answering, given a paper, ranking.
And for each of those, we want to know what's the distribution of things the model is going to see so that we can have well-calibrated predictions on how well the model is going to do in production. And I think, yeah, there's some chance that there's distribution shift and actually the things users enter are going to be different. But I think that's much less important than getting the kind of training right and having very high quality, well-vetted data sets at training time.Jungwon [00:29:18]: I think we also end up effectively monitoring by trying to evaluate new models as they come out. And so that kind of prompts us to go through our eval suite every couple of months. And every time a new model comes out, we have to see how is this performing relative to production and what we currently have.Swyx [00:29:32]: Yeah. I mean, since we're on this topic, any new models that have really caught your eye this year?Jungwon [00:29:37]: Like Claude came out with a bunch. Yeah. I think Claude is pretty, I think the team's pretty excited about Claude. Yeah.Andreas [00:29:41]: Specifically, Claude Haiku is like a good point on the kind of Pareto frontier. It's neither the cheapest model, nor is it the most accurate, most high quality model, but it's just like a really good trade-off between cost and accuracy.Swyx [00:29:57]: You apparently have to 10-shot it to make it good. I tried using Haiku for summarization, but zero-shot was not great. Then they were like, you know, it's a skill issue, you have to try harder.Jungwon [00:30:07]: I think GPT-4 unlocked tables for us, processing data from tables, which was huge. GPT-4 Vision.Andreas [00:30:13]: Yeah.Swyx [00:30:14]: Yeah. Did you try like Fuyu? I guess you can't try Fuyu because it's non-commercial. That's the Adept model.Jungwon [00:30:19]: Yeah.Swyx [00:30:20]: We haven't tried that one. Yeah. Yeah. Yeah. But Claude is multimodal as well. Yeah. I think the interesting insight that we got from talking to David Luan, who is CEO of Adept, is that multimodality has effectively two different flavors. One is we recognize images from a camera in the outside natural world. And actually the more important multimodality for knowledge work is screenshots and PDFs and charts and graphs. So we need a new term for that kind of multimodality.Andreas [00:30:45]: But is the claim that current models are good at one or the other? Yeah.Swyx [00:30:50]: They're over-indexed because the history of computer vision is COCO, right? So now we're like, oh, actually, you know, screens are more important, OCR, handwriting. You mentioned a lot of like closed model lab stuff, and then you also have like this open source model fine tuning stuff. Like what is your workload now between closed and open? It's a good question.Andreas [00:31:07]: I think- Is it half and half? It's a-Swyx [00:31:10]: Is that even a relevant question or not? Is this a nonsensical question?Andreas [00:31:13]: It depends a little bit on like how you index, whether you index by like compute cost or number of queries. I'd say like in terms of number of queries, it's maybe similar. In terms of like cost and compute, I think the closed models make up more of the budget since the main cases where you want to use closed models are cases where they're just smarter, where no existing open source models are quite smart enough.Jungwon [00:31:35]: Yeah. Yeah.Alessio [00:31:37]: We have a lot of interesting technical questions to go in, but just to wrap the kind of like UX evolution, now you have the notebooks.
We talked a lot about how chatbots are not the final frontier, you know? How did you decide to get into notebooks, which is a very iterative kind of like interactive interface and yeah, maybe learnings from that.Jungwon [00:31:56]: Yeah. This is actually our fourth time trying to make this work. Okay. I think the first time was probably in early 2021. I think because we've always been obsessed with this idea of task decomposition and like branching, we always wanted a tool that could be kind of unbounded where you could keep going, could do a lot of branching where you could kind of apply language model operations or computations on other tasks. So in 2021, we had this thing called composite tasks where you could use GPT-3 to brainstorm a bunch of research questions and then take each research question and decompose those further into sub questions. This kind of, again, that like task decomposition tree type thing was always very exciting to us, but that was like, it didn't work and it was kind of overwhelming. Then at the end of 22, I think we tried again and at that point we were thinking, okay, we've done a lot with this literature review thing. We also want to start helping with kind of adjacent domains and different workflows. Like we want to help more with machine learning. What does that look like? And as we were thinking about it, we're like, well, there are so many research workflows. How do we not just build three new workflows into Elicit, but make Elicit really generic to lots of workflows? What is like a generic composable system with nice abstractions that can like scale to all these workflows? So we like iterated on that a bunch and then didn't quite narrow the problem space enough or like quite get to what we wanted. And then I think it was at the beginning of 2023 where we're like, wow, computational notebooks kind of enable this, where they have a lot of flexibility, but kind of robust primitives such that you can extend the workflow and it's not limited. It's not like you ask a query, you get an answer, you're done. You can just constantly keep building on top of that. And each little step seems like a really good unit of work for the language model. And also there was just like really helpful to have a bit more preexisting work to emulate. Yeah, that's kind of how we ended up at computational notebooks for Elicit.Andreas [00:33:44]: Maybe one thing that's worth making explicit is the difference between computational notebooks and chat, because on the surface, they seem pretty similar. It's kind of this iterative interaction where you add stuff. In both cases, you have a back and forth between you enter stuff and then you get some output and then you enter stuff. But the important difference in our minds is with notebooks, you can define a process. So in data science, you can be like, here's like my data analysis process that takes in a CSV and then does some extraction and then generates a figure at the end. And you can prototype it using a small CSV and then you can run it over a much larger CSV later. And similarly, the vision for notebooks in our case is to not make it this like one-off chat interaction, but to allow you to then say, if you start and first you're like, okay, let me just analyze a few papers and see, do I get to the correct conclusions for those few papers? Can I then later go back and say, now let me run this over 10,000 papers now that I've debugged the process using a few papers. 
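The prototype-small, run-big pattern being described here amounts to treating the notebook as a reusable pipeline of model-backed steps. A rough sketch of that shape, with invented step names standing in for Elicit's real primitives:

```python
# Sketch: a notebook-style process, prototyped on a few papers, then run at scale.
from typing import Callable

Step = Callable[[list[dict]], list[dict]]

def run_process(papers: list[dict], steps: list[Step]) -> list[dict]:
    """Apply each step to the running result, like executing notebook cells in order."""
    result = papers
    for step in steps:
        result = step(result)
    return result

# Placeholder steps; in practice each one would call a model or a search index.
def filter_relevant(papers: list[dict]) -> list[dict]:
    return papers  # e.g. keep only papers a model judges relevant to the question

def extract_sample_size(papers: list[dict]) -> list[dict]:
    return [{**p, "sample_size": None} for p in papers]  # e.g. a model fills this in

def summarize_findings(papers: list[dict]) -> list[dict]:
    return [{**p, "summary": ""} for p in papers]  # e.g. query-focused summary

process = [filter_relevant, extract_sample_size, summarize_findings]

# Debug the process on a handful of papers, then rerun it unchanged on the full set:
# preview = run_process(papers[:5], process)
# results = run_process(papers, process)
```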
And that's an interaction that doesn't fit quite as well into the chat framework because that's more for kind of quick back and forth interaction.Alessio [00:34:49]: Do you think in notebooks, it's kind of like structured, editable chain of thought, basically step by step? Like, is that kind of where you see this going? And then are people going to reuse notebooks as like templates? And maybe in traditional notebooks, it's like cookbooks, right? You share a cookbook, you can start from there. Is this similar in Elicit?Andreas [00:35:06]: Yeah, that's exactly right. So that's our hope that people will build templates, share them with other people. I think chain of thought is maybe still like kind of one level lower on the abstraction hierarchy than we would think of notebooks. I think we'll probably want to think about more semantic pieces like a building block is more like a paper search or an extraction or a list of concepts. And then the model's detailed reasoning will probably often be one level down. You always want to be able to see it, but you don't always want it to be front and center.Alessio [00:35:36]: Yeah, what's the difference between a notebook and an agent? Since everybody always asks me, what's an agent? Like how do you think about where the line is?Andreas [00:35:44]: Yeah, it's an interesting question. In the notebook world, I would generally think of the human as the agent in the first iteration. So you have the notebook and the human kind of adds little action steps. And then the next point on this kind of progress gradient is, okay, now you can use language models to predict which action you would take as a human. And at some point, you're probably going to be very good at this, you'll be like, okay, in some cases I can, with 99.9% accuracy, predict what you do. And then you might as well just execute it, like why wait for the human? And eventually, as you get better at this, that will just look more and more like agents taking actions as opposed to you doing the thing. I think templates are a specific case of this where you're like, okay, well, there's just particular sequences of actions that you often want to chunk and have available as primitives, just like in normal programming. And those, you can view them as action sequences of agents, or you can view them as a more normal programming language abstraction. And I think those are two valid views. Yeah.Alessio [00:36:40]: How do you see this change as, like you said, the models get better and you need less and less human actual interfacing with the model, you just get the results? Like how does the UX and the way people perceive it change?Jungwon [00:36:52]: Yeah, I think this kind of interaction paradigm for evaluation is not really something the internet has encountered yet, because up to now, the internet has all been about getting data and work from people. So increasingly, I really want kind of evaluation, both from an interface perspective and from like a technical perspective and operation perspective to be a superpower for Elicit, because I think over time, models will do more and more of the work, and people will have to do more and more of the evaluation. 
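The gradient from human-as-agent to model-as-agent described here can be pictured as a small control loop: a model proposes the next notebook action with a confidence score, and only actions above some threshold run without asking. This is a speculative sketch, not a description of Elicit; the threshold and both helper functions are invented for the example.

```python
# Sketch: a notebook driver that auto-executes only high-confidence suggested actions.

AUTO_EXECUTE_THRESHOLD = 0.95  # assumed cutoff; in practice this would be tuned

def propose_next_step(history: list[str]) -> tuple[str, float]:
    """Placeholder: ask a model for the next action plus a confidence in [0, 1]."""
    raise NotImplementedError

def execute(action: str, state: dict) -> None:
    """Placeholder: run one notebook action (search, extract, summarize, ...)."""
    raise NotImplementedError

def advance_notebook(state: dict, history: list[str]) -> None:
    action, confidence = propose_next_step(history)
    if confidence >= AUTO_EXECUTE_THRESHOLD:
        execute(action, state)  # behaves like an agent taking the step itself
    else:
        answer = input(f"Suggested next step ({confidence:.0%}): {action}. Run it? [y/N] ")
        if answer.strip().lower() == "y":  # the human remains the agent
            execute(action, state)
    history.append(action)
```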
So I think, yeah, in terms of the interface, some of the things we have today, you know, for every kind of language model generation, there's some citation back, and we kind of try to highlight the ground truth in the paper that is most relevant to whatever Elicit said, and make it super easy so that you can click on it and quickly see in context and validate whether the text actually supports the answer that Elicit gave. So I think we'd probably want to scale things up like that, like the ability to kind of spot check the model's work super quickly, scale up interfaces like that. And-Swyx [00:37:44]: Who would spot check? The user?Jungwon [00:37:46]: Yeah, to start, it would be the user. One of the other things we do is also kind of flag the model's uncertainty. So we have models report out, how confident are you that this was the sample size of this study? The model's not sure, we throw a flag. And so the user knows to prioritize checking that. So again, we can kind of scale that up. So when the model's like, well, I searched this on Google, I'm not sure if that was the right thing. I have an uncertainty flag, and the user can go and be like, oh, okay, that was actually the right thing to do or not.Swyx [00:38:10]: I've tried to do uncertainty readings from models. I don't know if you have this live. You do? Yeah. Because I just didn't find them reliable because they just hallucinated their own uncertainty. I would love to base it on log probs or something more native within the model rather than generated. But okay, it sounds like they scale properly for you. Yeah.Jungwon [00:38:30]: We found it to be pretty calibrated. It varies on the model.Andreas [00:38:32]: I think in some cases, we also use two different models for the uncertainty estimates than for the question answering. So one model would say, here's my chain of thought, here's my answer. And then a different type of model. Let's say the first model is Llama, and let's say the second model is GPT-3.5. And then the second model just looks over the results and is like, okay, how confident are you in this? And I think sometimes using a different model can be better than using the same model. Yeah.Swyx [00:38:58]: On the topic of models, evaluating models, obviously you can do that all day long. What's your budget? Because your queries fan out a lot. And then you have models evaluating models. One person typing in a question can lead to a thousand calls.Andreas [00:39:11]: It depends on the project. So if the project is basically a systematic review that otherwise human research assistants would do, then the project is basically a human equivalent spend. And the spend can get quite large for those projects. I don't know, let's say $100,000. In those cases, you're happier to spend compute then in the kind of shallow search case where someone just enters a question because, I don't know, maybe I heard about creatine. What's it about? Probably don't want to spend a lot of compute on that. This sort of being able to invest more or less compute into getting more or less accurate answers is I think one of the core things we care about. And that I think is currently undervalued in the AI space. I think currently you can choose which model you want and you can sometimes, I don't know, you'll tip it and it'll try harder or you can try various things to get it to work harder. But you don't have great ways of converting willingness to spend into better answers. 
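The flagging behaviour described here, with one model answering and a second model judging confidence, can be sketched as follows. The model names, the 0.7 threshold, and the prompt wording are assumptions for illustration; a real system would calibrate the threshold against labeled examples.

```python
# Sketch: extract a field with one model, estimate confidence with a second model,
# and flag low-confidence values for the user to spot-check.

def call_model(model: str, prompt: str) -> str:
    """Placeholder for a completion call against the named model."""
    raise NotImplementedError

def extract_with_flag(paper_text: str, field: str,
                      answer_model: str = "open-source-extractor",
                      verify_model: str = "verifier",
                      flag_below: float = 0.7) -> dict:
    answer = call_model(
        answer_model,
        f"From the paper below, what is the {field}? Think step by step, then answer.\n\n{paper_text}",
    )
    confidence_text = call_model(
        verify_model,
        f"Paper:\n{paper_text}\n\nProposed {field}: {answer}\n\n"
        "How confident are you that this value is correct? Reply with a number from 0 to 1.",
    )
    try:
        confidence = float(confidence_text.strip())
    except ValueError:
        confidence = 0.0  # an unparseable reply is treated as maximally uncertain
    return {"value": answer, "confidence": confidence, "flagged": confidence < flag_below}
```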
And we really want to build a product that has this sort of unbounded flavor where if you care about it a lot, you should be able to get really high quality answers, really double checked in every way.Alessio [00:40:14]: And you have a credits-based pricing. So unlike most products, it's not a fixed monthly fee.Jungwon [00:40:19]: Right, exactly. So some of the higher costs are tiered. So for most casual users, they'll just get the abstract summary, which is kind of an open source model. Then you can add more columns, which have more extractions and these uncertainty features. And then you can also add the same columns in high accuracy mode, which also parses the table. So we kind of stack the complexity on the calls.Swyx [00:40:39]: You know, the fun thing you can do with a credit system, which is data for data, basically you can give people more credits if they give data back to you. I don't know if you've already done that. We've thought about something like this.Jungwon [00:40:49]: It's like if you don't have money, but you have time, how do you exchange that?Swyx [00:40:54]: It's a fair trade.Jungwon [00:40:55]: I think it's interesting. We haven't quite operationalized it. And then, you know, there's been some kind of like adverse selection. Like, you know, for example, it would be really valuable to get feedback on our model. So maybe if you were willing to give more robust feedback on our results, we could give you credits or something like that. But then there's kind of this, will people take it seriously? And you want the good people. Exactly.Swyx [00:41:11]: Can you tell who are the good people? Not right now.Jungwon [00:41:13]: But yeah, maybe at the point where we can, we can offer it. We can offer it up to them.Swyx [00:41:16]: The perplexity of questions asked, you know, if it's higher perplexity, these are the smarterJungwon [00:41:20]: people. Yeah, maybe.Andreas [00:41:23]: If you put typos in your queries, you're not going to get off the stage.Swyx [00:41:28]: Negative social credit. It's very topical right now to think about the threat of long context windows. All these models that we're talking about these days, all like a million token plus. Is that relevant for you? Can you make use of that? Is that just prohibitively expensive because you're just paying for all those tokens or you're just doing rag?Andreas [00:41:44]: It's definitely relevant. And when we think about search, as many people do, we think about kind of a staged pipeline of retrieval where first you use semantic search database with embeddings, get like the, in our case, maybe 400 or so most relevant papers. And then, then you still need to rank those. And I think at that point it becomes pretty interesting to use larger models. So specifically in the past, I think a lot of ranking was kind of per item ranking where you would score each individual item, maybe using increasingly expensive scoring methods and then rank based on the scores. But I think list-wise re-ranking where you have a model that can see all the elements is a lot more powerful because often you can only really tell how good a thing is in comparison to other things and what things should come first. It really depends on like, well, what other things that are available, maybe you even care about diversity in your results. You don't want to show 10 very similar papers as the first 10 results. So I think a long context models are quite interesting there. 
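List-wise re-ranking differs from per-item scoring in that the model sees every candidate at once and returns an ordering, which is what lets it account for comparisons and diversity. A hedged sketch, assuming a placeholder long-context `llm()` call and a simple bracketed-index output format:

```python
# Sketch: list-wise re-ranking of retrieved papers with a long-context model.
import re

def llm(prompt: str) -> str:
    """Placeholder for a long-context completion call."""
    raise NotImplementedError

def rerank(query: str, papers: list[dict], top_k: int = 10) -> list[dict]:
    numbered = "\n".join(f"[{i}] {p['title']}: {p['abstract']}" for i, p in enumerate(papers))
    response = llm(
        f"Query: {query}\n\nCandidate papers:\n{numbered}\n\n"
        f"List the {top_k} most useful papers for this query, avoiding near-duplicates. "
        "Reply with their bracketed indices in order, e.g. [3] [17] [0]."
    )
    order = [int(i) for i in re.findall(r"\[(\d+)\]", response)]
    seen, ranked = set(), []
    for i in order:
        if 0 <= i < len(papers) and i not in seen:
            seen.add(i)
            ranked.append(papers[i])
    return ranked[:top_k]
```

The first retrieval stage (embeddings down to a few hundred candidates) stays cheap; spending long-context tokens only on this final ordering is where the extra compute buys the most.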
And especially for our case where we care more about power users who are perhaps a little bit more willing to wait a little bit longer to get higher quality results relative to people who just quickly check out things because why not? And I think being able to spend more on longer contexts is quite valuable.Jungwon [00:42:55]: Yeah. I think one thing the longer context models changed for us is maybe a focus from breaking down tasks to breaking down the evaluation. So before, you know, if we wanted to answer a question from the full text of a paper, we had to figure out how to chunk it and like find the relevant chunk and then answer based on that chunk. And the nice thing was then, you know, kind of which chunk the model used to answer the question. So if you want to help the user track it, yeah, you can be like, well, this was the chunk that the model got. And now if you put the whole text in the paper, you have to like kind of find the chunk like more retroactively basically. And so you need kind of like a different set of abilities and obviously like a different technology to figure out. You still want to point the user to the supporting quotes in the text, but then the interaction is a little different.Swyx [00:43:38]: You like scan through and find some rouge score floor.Andreas [00:43:41]: I think there's an interesting space of almost research problems here because you would ideally make causal claims like if this hadn't been in the text, the model wouldn't have said this thing. And maybe you can do expensive approximations to that where like, I don't know, you just throw out chunk of the paper and re-answer and see what happens. But hopefully there are better ways of doing that where you just get that kind of counterfactual information for free from the model.Alessio [00:44:06]: Do you think at all about the cost of maintaining REG versus just putting more tokens in the window? I think in software development, a lot of times people buy developer productivity things so that we don't have to worry about it. Context window is kind of the same, right? You have to maintain chunking and like REG retrieval and like re-ranking and all of this versus I just shove everything into the context and like it costs a little more, but at least I don't have to do all of that. Is that something you thought about?Jungwon [00:44:31]: I think we still like hit up against context limits enough that it's not really, do we still want to keep this REG around? It's like we do still need it for the scale of the work that we're doing, yeah.Andreas [00:44:41]: And I think there are different kinds of maintainability. In one sense, I think you're right that throw everything into the context window thing is easier to maintain because you just can swap out a model. In another sense, if things go wrong, it's harder to debug where like, if you know, here's the process that we go through to go from 200 million papers to an answer. And there are like little steps and you understand, okay, this is the step that finds the relevant paragraph or whatever it may be. You'll know which step breaks if the answers are bad, whereas if it's just like a new model version came out and now it suddenly doesn't find your needle in a haystack anymore, then you're like, okay, what can you do? You're kind of at a loss.Alessio [00:45:21]: Let's talk a bit about, yeah, needle in a haystack and like maybe the opposite of it, which is like hard grounding. 
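Recovering the supporting passage after the fact, rather than knowing it in advance from chunking, can start with something as simple as the ROUGE-style lexical overlap joked about above: score every sentence of the paper against the generated answer and surface the best match. A rough sketch; real systems would likely layer embeddings or a model check on top of this, and the counterfactual drop-a-chunk-and-re-answer idea is the more expensive complement.

```python
# Sketch: find the sentence in a paper that best supports a generated answer,
# using a crude ROUGE-1-style unigram overlap score.
import re

def unigram_overlap(answer: str, sentence: str) -> float:
    a = set(re.findall(r"[a-z0-9]+", answer.lower()))
    s = set(re.findall(r"[a-z0-9]+", sentence.lower()))
    return len(a & s) / len(a) if a else 0.0

def find_supporting_quote(answer: str, paper_text: str) -> tuple[str, float]:
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", paper_text) if s.strip()]
    if not sentences:
        return "", 0.0
    best = max(sentences, key=lambda s: unigram_overlap(answer, s))
    return best, unigram_overlap(answer, best)
```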
I don't know if that's like the best name to think about it, but I was using one of these chat-with-your-documents features and I put in the AMD MI300 specs and the specs for the new Blackwell chips from NVIDIA and I was asking questions like, does the AMD chip support NVLink? And the response was like, oh, it doesn't say in the specs. But if you ask GPT-4 without the docs, it would tell you no, because NVLink is an NVIDIA technology.Swyx [00:45:49]: It just says in the thing.Alessio [00:45:53]: How do you think about that? Does using the context sometimes suppress the knowledge that the model has?Andreas [00:45:57]: It really depends on the task because I think sometimes that is exactly what you want. So imagine you're a researcher, you're writing the background section of your paper and you're trying to describe what these other papers say. You really don't want extra information to be introduced there. In other cases where you're just trying to figure out the truth and you're giving the documents because you think they will help the model figure out what the truth is. I think you do want, if the model has a hunch that there might be something that's not in the papers, you do want to surface that. I think ideally you still don't want the model to just tell you, probably the ideal thing looks a bit more like agent control where the model can issue a query that then is intended to surface documents that substantiate its hunch. That's maybe a reasonable middle ground between model just telling you and model being fully limited to the papers you give it.Jungwon [00:46:44]: Yeah, I would say it's, they're just kind of different tasks right now. And the task that Elicit is mostly focused on is what do these papers say? But there's another task which is like, just give me the best possible answer and that give me the best possible answer sometimes depends on what do these papers say, but it can also depend on other stuff that's not in the papers. So ideally we can do both and then kind of do this overall task for you more going forward.Alessio [00:47:08]: We see a lot of details, but just to zoom back out a little bit, what are maybe the most underrated features of Elicit, and what is one thing where maybe the users have surprised you the most by using it?Jungwon [00:47:19]: I think the most powerful feature of Elicit is the ability to extract, add columns to this table, which effectively extracts data from all of your papers at once. It's well used, but there are kind of many different extensions of that that I think users are still discovering. So one is we let you give a description of the column. We let you give instructions for a column. We let you create custom columns. So we have like 30 plus predefined fields that users can extract, like what were the methods? What were the main findings? How many people were studied? And we actually show you basically the prompts that we're using to

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Latent Space Chats: NLW (Four Wars, GPT5), Josh Albrecht/Ali Rohde (TNAI), Dylan Patel/Semianalysis (Groq), Milind Naphade (Nvidia GTC), Personal AI (ft. Harrison Chase — LangFriend/LangMem)

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Apr 6, 2024 121:17


Our next 2 big events are AI UX and the World's Fair. Join and apply to speak/sponsor!Due to timing issues we didn't have an interview episode to share with you this week, but not to worry, we have more than enough “weekend special” content in the backlog for you to get your Latent Space fix, whether you like thinking about the big picture, or learning more about the pod behind the scenes, or talking Groq and GPUs, or AI Leadership, or Personal AI. Enjoy!AI BreakdownThe indefatigable NLW had us back on his show for an update on the Four Wars, covering Sora, Suno, and the reshaped GPT-4 Class Landscape:and a longer segment on AI Engineering trends covering the future LLM landscape (Llama 3, GPT-5, Gemini 2, Claude 4), Open Source Models (Mistral, Grok), Apple and Meta's AI strategy, new chips (Groq, MatX) and the general movement from baby AGIs to vertical Agents:Thursday Nights in AIWe're also including swyx's interview with Josh Albrecht and Ali Rohde to reintroduce swyx and Latent Space to a general audience, and engage in some spicy Q&A:Dylan Patel on GroqWe hosted a private event with Dylan Patel of SemiAnalysis (our last pod here):Not all of it could be released so we just talked about our Groq estimates:Milind Naphade - Capital OneIn relation to conversations at NeurIPS and Nvidia GTC and upcoming at World's Fair, we also enjoyed chatting with Milind Naphade about his AI Leadership work at IBM, Cisco, Nvidia, and now leading the AI Foundations org at Capital One. We covered:* Milind's learnings from ~25 years in machine learning * His first paper citation was 24 years ago* Lessons from working with Jensen Huang for 6 years and being CTO of Metropolis * Thoughts on relevant AI research* GTC takeaways and what makes NVIDIA specialIf you'd like to work on building solutions rather than platform (as Milind put it), his Applied AI Research team at Capital One is hiring, which falls under the Capital One Tech team.Personal AI MeetupIt all started with a meme:Within days of each other, BEE, FRIEND, EmilyAI, Compass, Nox and LangFriend were all launching personal AI wearables and assistants. So we decided to put together a the world's first Personal AI meetup featuring creators and enthusiasts of wearables. The full video is live now, with full show notes within.Timestamps* [00:01:13] AI Breakdown Part 1* [00:02:20] Four Wars* [00:13:45] Sora* [00:15:12] Suno* [00:16:34] The GPT-4 Class Landscape* [00:17:03] Data War: Reddit x Google* [00:21:53] Gemini 1.5 vs Claude 3* [00:26:58] AI Breakdown Part 2* [00:27:33] Next Frontiers: Llama 3, GPT-5, Gemini 2, Claude 4* [00:31:11] Open Source Models - Mistral, Grok* [00:34:13] Apple MM1* [00:37:33] Meta's $800b AI rebrand* [00:39:20] AI Engineer landscape - from baby AGIs to vertical Agents* [00:47:28] Adept episode - Screen Multimodality* [00:48:54] Top Model Research from January Recap* [00:53:08] AI Wearables* [00:57:26] Groq vs Nvidia month - GPU Chip War* [01:00:31] Disagreements* [01:02:08] Summer 2024 Predictions* [01:04:18] Thursday Nights in AI - swyx* [01:33:34] Dylan Patel - Semianalysis + Latent Space Live Show* [01:34:58] GroqTranscript[00:00:00] swyx: Welcome to the Latent Space Podcast Weekend Edition. This is Charlie, your AI co host. Swyx and Alessio are off for the week, making more great content. We have exciting interviews coming up with Elicit, Chroma, Instructor, and our upcoming series on NSFW, Not Safe for Work AI. 
In today's episode, we're collating some of Swyx and Alessio's recent appearances, all in one place for you to find.[00:00:32] swyx: In part one, we have our first crossover pod of the year. In our listener survey, several folks asked for more thoughts from our two hosts. In 2023, Swyx and Alessio did crossover interviews with other great podcasts like the AI Breakdown, Practical AI, Cognitive Revolution, Thursday Eye, and Chinatalk, all of which you can find in the Latentspace About page.[00:00:56] swyx: NLW of the AI Breakdown asked us back to do a special on the 4Wars framework and the AI engineer scene. We love AI Breakdown as one of the best examples Daily podcasts to keep up on AI news, so we were especially excited to be back on Watch out and take[00:01:12] NLW: care[00:01:13] AI Breakdown Part 1[00:01:13] NLW: today on the AI breakdown. Part one of my conversation with Alessio and Swix from Latent Space.[00:01:19] NLW: All right, fellas, welcome back to the AI Breakdown. How are you doing? I'm good. Very good. With the last, the last time we did this show, we were like, oh yeah, let's do check ins like monthly about all the things that are going on and then. Of course, six months later, and, you know, the, the, the world has changed in a thousand ways.[00:01:36] NLW: It's just, it's too busy to even, to even think about podcasting sometimes. But I, I'm super excited to, to be chatting with you again. I think there's, there's a lot to, to catch up on, just to tap in, I think in the, you know, in the beginning of 2024. And, and so, you know, we're gonna talk today about just kind of a, a, a broad sense of where things are in some of the key battles in the AI space.[00:01:55] NLW: And then the, you know, one of the big things that I, that I'm really excited to have you guys on here for us to talk about where, sort of what patterns you're seeing and what people are actually trying to build, you know, where, where developers are spending their, their time and energy and, and, and any sort of, you know, trend trends there, but maybe let's start I guess by checking in on a framework that you guys actually introduced, which I've loved and I've cribbed a couple of times now, which is this sort of four wars of the, of the AI stack.[00:02:20] Four Wars[00:02:20] NLW: Because first, since I have you here, I'd love, I'd love to hear sort of like where that started gelling. And then and then maybe we can get into, I think a couple of them that are you know, particularly interesting, you know, in the, in light of[00:02:30] swyx: some recent news. Yeah, so maybe I'll take this one. So the four wars is a framework that I came up around trying to recap all of 2023.[00:02:38] swyx: I tried to write sort of monthly recap pieces. And I was trying to figure out like what makes one piece of news last longer than another or more significant than another. And I think it's basically always around battlegrounds. Wars are fought around limited resources. And I think probably the, you know, the most limited resource is talent, but the talent expresses itself in a number of areas.[00:03:01] swyx: And so I kind of focus on those, those areas at first. So the four wars that we cover are the data wars, the GPU rich, poor war, the multi modal war, And the RAG and Ops War. And I think you actually did a dedicated episode to that, so thanks for covering that. Yeah, yeah.[00:03:18] NLW: Not only did I do a dedicated episode, I actually used that.[00:03:22] NLW: I can't remember if I told you guys. 
I did give you big shoutouts. But I used it as a framework for a presentation at Intel's big AI event that they hold each year, where they have all their folks who are working on AI internally. And it totally resonated. That's amazing. Yeah, so, so, what got me thinking about it again is specifically this inflection news that we recently had, this sort of, you know, basically, I can't imagine that anyone who's listening wouldn't have thought about it, but, you know, inflection is a one of the big contenders, right?[00:03:53] NLW: I think probably most folks would have put them, you know, just a half step behind the anthropics and open AIs of the world in terms of labs, but it's a company that raised 1. 3 billion last year, less than a year ago. Reed Hoffman's a co founder Mustafa Suleyman, who's a co founder of DeepMind, you know, so it's like, this is not a a small startup, let's say, at least in terms of perception.[00:04:13] NLW: And then we get the news that basically most of the team, it appears, is heading over to Microsoft and they're bringing in a new CEO. And you know, I'm interested in, in, in kind of your take on how much that reflects, like hold aside, I guess, you know, all the other things that it might be about, how much it reflects this sort of the, the stark.[00:04:32] NLW: Brutal reality of competing in the frontier model space right now. And, you know, just the access to compute.[00:04:38] Alessio: There are a lot of things to say. So first of all, there's always somebody who's more GPU rich than you. So inflection is GPU rich by startup standard. I think about 22, 000 H100s, but obviously that pales compared to the, to Microsoft.[00:04:55] Alessio: The other thing is that this is probably good news, maybe for the startups. It's like being GPU rich, it's not enough. You know, like I think they were building something pretty interesting in, in pi of their own model of their own kind of experience. But at the end of the day, you're the interface that people consume as end users.[00:05:13] Alessio: It's really similar to a lot of the others. So and we'll tell, talk about GPT four and cloud tree and all this stuff. GPU poor, doing something. That the GPU rich are not interested in, you know we just had our AI center of excellence at Decibel and one of the AI leads at one of the big companies was like, Oh, we just saved 10 million and we use these models to do a translation, you know, and that's it.[00:05:39] Alessio: It's not, it's not a GI, it's just translation. So I think like the inflection part is maybe. A calling and a waking to a lot of startups then say, Hey, you know, trying to get as much capital as possible, try and get as many GPUs as possible. Good. But at the end of the day, it doesn't build a business, you know, and maybe what inflection I don't, I don't, again, I don't know the reasons behind the inflection choice, but if you say, I don't want to build my own company that has 1.[00:06:05] Alessio: 3 billion and I want to go do it at Microsoft, it's probably not a resources problem. It's more of strategic decisions that you're making as a company. So yeah, that was kind of my. I take on it.[00:06:15] swyx: Yeah, and I guess on my end, two things actually happened yesterday. 
It was a little bit quieter news, but Stability AI had some pretty major departures as well.[00:06:25] swyx: And you may not be considering it, but Stability is actually also a GPU rich company in the sense that they were the first new startup in this AI wave to brag about how many GPUs that they have. And you should join them. And you know, Imadis is definitely a GPU trader in some sense from his hedge fund days.[00:06:43] swyx: So Robin Rhombach and like the most of the Stable Diffusion 3 people left Stability yesterday as well. So yesterday was kind of like a big news day for the GPU rich companies, both Inflection and Stability having sort of wind taken out of their sails. I think, yes, it's a data point in the favor of Like, just because you have the GPUs doesn't mean you can, you automatically win.[00:07:03] swyx: And I think, you know, kind of I'll echo what Alessio says there. But in general also, like, I wonder if this is like the start of a major consolidation wave, just in terms of, you know, I think that there was a lot of funding last year and, you know, the business models have not been, you know, All of these things worked out very well.[00:07:19] swyx: Even inflection couldn't do it. And so I think maybe that's the start of a small consolidation wave. I don't think that's like a sign of AI winter. I keep looking for AI winter coming. I think this is kind of like a brief cold front. Yeah,[00:07:34] NLW: it's super interesting. So I think a bunch of A bunch of stuff here.[00:07:38] NLW: One is, I think, to both of your points, there, in some ways, there, there had already been this very clear demarcation between these two sides where, like, the GPU pores, to use the terminology, like, just weren't trying to compete on the same level, right? You know, the vast majority of people who have started something over the last year, year and a half, call it, were racing in a different direction.[00:07:59] NLW: They're trying to find some edge somewhere else. They're trying to build something different. If they're, if they're really trying to innovate, it's in different areas. And so it's really just this very small handful of companies that are in this like very, you know, it's like the coheres and jaspers of the world that like this sort of, you know, that are that are just sort of a little bit less resourced than, you know, than the other set that I think that this potentially even applies to, you know, everyone else that could clearly demarcate it into these two, two sides.[00:08:26] NLW: And there's only a small handful kind of sitting uncomfortably in the middle, perhaps. Let's, let's come back to the idea of, of the sort of AI winter or, you know, a cold front or anything like that. So this is something that I, I spent a lot of time kind of thinking about and noticing. And my perception is that The vast majority of the folks who are trying to call for sort of, you know, a trough of disillusionment or, you know, a shifting of the phase to that are people who either, A, just don't like AI for some other reason there's plenty of that, you know, people who are saying, You Look, they're doing way worse than they ever thought.[00:09:03] NLW: You know, there's a lot of sort of confirmation bias kind of thing going on. Or two, media that just needs a different narrative, right? Because they're sort of sick of, you know, telling the same story. 
Same thing happened last summer, when every every outlet jumped on the chat GPT at its first down month story to try to really like kind of hammer this idea that that the hype was too much.[00:09:24] NLW: Meanwhile, you have, you know, just ridiculous levels of investment from enterprises, you know, coming in. You have, you know, huge, huge volumes of, you know, individual behavior change happening. But I do think that there's nothing incoherent sort of to your point, Swyx, about that and the consolidation period.[00:09:42] NLW: Like, you know, if you look right now, for example, there are, I don't know, probably 25 or 30 credible, like, build your own chatbot. platforms that, you know, a lot of which have, you know, raised funding. There's no universe in which all of those are successful across, you know, even with a, even, even with a total addressable market of every enterprise in the world, you know, you're just inevitably going to see some amount of consolidation.[00:10:08] NLW: Same with, you know, image generators. There are, if you look at A16Z's top 50 consumer AI apps, just based on, you know, web traffic or whatever, they're still like I don't know, a half. Dozen or 10 or something, like, some ridiculous number of like, basically things like Midjourney or Dolly three. And it just seems impossible that we're gonna have that many, you know, ultimately as, as, as sort of, you know, going, going concerned.[00:10:33] NLW: So, I don't know. I, I, I think that the, there will be inevitable consolidation 'cause you know. It's, it's also what kind of like venture rounds are supposed to do. You're not, not everyone who gets a seed round is supposed to get to series A and not everyone who gets a series A is supposed to get to series B.[00:10:46] NLW: That's sort of the natural process. I think it will be tempting for a lot of people to try to infer from that something about AI not being as sort of big or as as sort of relevant as, as it was hyped up to be. But I, I kind of think that's the wrong conclusion to come to.[00:11:02] Alessio: I I would say the experimentation.[00:11:04] Alessio: Surface is a little smaller for image generation. So if you go back maybe six, nine months, most people will tell you, why would you build a coding assistant when like Copilot and GitHub are just going to win everything because they have the data and they have all the stuff. If you fast forward today, A lot of people use Cursor everybody was excited about the Devin release on Twitter.[00:11:26] Alessio: There are a lot of different ways of attacking the market that are not completion of code in the IDE. And even Cursors, like they evolved beyond single line to like chat, to do multi line edits and, and all that stuff. Image generation, I would say, yeah, as a, just as from what I've seen, like maybe the product innovation has slowed down at the UX level and people are improving the models.[00:11:50] Alessio: So the race is like, how do I make better images? It's not like, how do I make the user interact with the generation process better? And that gets tough, you know? It's hard to like really differentiate yourselves. So yeah, that's kind of how I look at it. And when we think about multimodality, maybe the reason why people got so excited about Sora is like, oh, this is like a completely It's not a better image model.[00:12:13] Alessio: This is like a completely different thing, you know? 
And I think the creative mind It's always looking for something that impacts the viewer in a different way, you know, like they really want something different versus the developer mind. It's like, Oh, I, I just, I have this like very annoying thing I want better.[00:12:32] Alessio: I have this like very specific use cases that I want to go after. So it's just different. And that's why you see a lot more companies in image generation. But I agree with you that. If you fast forward there, there's not going to be 10 of them, you know, it's probably going to be one or[00:12:46] swyx: two. Yeah, I mean, to me, that's why I call it a war.[00:12:49] swyx: Like, individually, all these companies can make a story that kind of makes sense, but collectively, they cannot all be true. Therefore, they all, there is some kind of fight over limited resources here. Yeah, so[00:12:59] NLW: it's interesting. We wandered very naturally into sort of another one of these wars, which is the multimodality kind of idea, which is, you know, basically a question of whether it's going to be these sort of big everything models that end up winning or whether, you know, you're going to have really specific things, you know, like something, you know, Dolly 3 inside of sort of OpenAI's larger models versus, you know, a mid journey or something like that.[00:13:24] NLW: And at first, you know, I was kind of thinking like, For most of the last, call it six months or whatever, it feels pretty definitively both and in some ways, you know, and that you're, you're seeing just like great innovation on sort of the everything models, but you're also seeing lots and lots happen at sort of the level of kind of individual use cases.[00:13:45] Sora[00:13:45] NLW: But then Sora comes along and just like obliterates what I think anyone thought you know, where we were when it comes to video generation. So how are you guys thinking about this particular battle or war at the moment?[00:13:59] swyx: Yeah, this was definitely a both and story, and Sora tipped things one way for me, in terms of scale being all you need.[00:14:08] swyx: And the benefit, I think, of having multiple models being developed under one roof. I think a lot of people aren't aware that Sora was developed in a similar fashion to Dolly 3. And Dolly3 had a very interesting paper out where they talked about how they sort of bootstrapped their synthetic data based on GPT 4 vision and GPT 4.[00:14:31] swyx: And, and it was just all, like, really interesting, like, if you work on one modality, it enables you to work on other modalities, and all that is more, is, is more interesting. I think it's beneficial if it's all in the same house, whereas the individual startups who don't, who sort of carve out a single modality and work on that, definitely won't have the state of the art stuff on helping them out on synthetic data.[00:14:52] swyx: So I do think like, The balance is tilted a little bit towards the God model companies, which is challenging for the, for the, for the the sort of dedicated modality companies. But everyone's carving out different niches. You know, like we just interviewed Suno ai, the sort of music model company, and, you know, I don't see opening AI pursuing music anytime soon.[00:15:12] Suno[00:15:12] swyx: Yeah,[00:15:13] NLW: Suno's been phenomenal to play with. 
Suno has done that rare thing where, which I think a number of different AI product categories have done, where people who don't consider themselves particularly interested in doing the thing that the AI enables find themselves doing a lot more of that thing, right?[00:15:29] NLW: Like, it'd be one thing if Just musicians were excited about Suno and using it but what you're seeing is tons of people who just like music all of a sudden like playing around with it and finding themselves kind of down that rabbit hole, which I think is kind of like the highest compliment that you can give one of these startups at the[00:15:45] swyx: early days of it.[00:15:46] swyx: Yeah, I, you know, I, I asked them directly, you know, in the interview about whether they consider themselves mid journey for music. And he had a more sort of nuanced response there, but I think that probably the business model is going to be very similar because he's focused on the B2C element of that. So yeah, I mean, you know, just to, just to tie back to the question about, you know, You know, large multi modality companies versus small dedicated modality companies.[00:16:10] swyx: Yeah, highly recommend people to read the Sora blog posts and then read through to the Dali blog posts because they, they strongly correlated themselves with the same synthetic data bootstrapping methods as Dali. And I think once you make those connections, you're like, oh, like it, it, it is beneficial to have multiple state of the art models in house that all help each other.[00:16:28] swyx: And these, this, that's the one thing that a dedicated modality company cannot do.[00:16:34] The GPT-4 Class Landscape[00:16:34] NLW: So I, I wanna jump, I wanna kind of build off that and, and move into the sort of like updated GPT-4 class landscape. 'cause that's obviously been another big change over the last couple months. But for the sake of completeness, is there anything that's worth touching on with with sort of the quality?[00:16:46] NLW: Quality data or sort of a rag ops wars just in terms of, you know, anything that's changed, I guess, for you fundamentally in the last couple of months about where those things stand.[00:16:55] swyx: So I think we're going to talk about rag for the Gemini and Clouds discussion later. And so maybe briefly discuss the data piece.[00:17:03] Data War: Reddit x Google[00:17:03] swyx: I think maybe the only new thing was this Reddit deal with Google for like a 60 million dollar deal just ahead of their IPO, very conveniently turning Reddit into a AI data company. Also, very, very interestingly, a non exclusive deal, meaning that Reddit can resell that data to someone else. And it probably does become table stakes.[00:17:23] swyx: A lot of people don't know, but a lot of the web text dataset that originally started for GPT 1, 2, and 3 was actually scraped from GitHub. from Reddit at least the sort of vote scores. And I think, I think that's a, that's a very valuable piece of information. So like, yeah, I think people are figuring out how to pay for data.[00:17:40] swyx: People are suing each other over data. This, this, this war is, you know, definitely very, very much heating up. And I don't think, I don't see it getting any less intense. I, you know, next to GPUs, data is going to be the most expensive thing in, in a model stack company. And. 
You know, a lot of people are resorting to synthetic versions of it, which may or may not be kosher based on how far along or how commercially blessed the, the forms of creating that synthetic data are.[00:18:11] swyx: I don't know if Alessio, you have any other interactions with like Data source companies, but that's my two cents.[00:18:17] Alessio: Yeah yeah, I actually saw Quentin Anthony from Luther. ai at GTC this week. He's also been working on this. I saw Technium. He's also been working on the data side. I think especially in open source, people are like, okay, if everybody is putting the gates up, so to speak, to the data we need to make it easier for people that don't have 50 million a year to get access to good data sets.[00:18:38] Alessio: And Jensen, at his keynote, he did talk about synthetic data a little bit. So I think that's something that we'll definitely hear more and more of in the enterprise, which never bodes well, because then all the, all the people with the data are like, Oh, the enterprises want to pay now? Let me, let me put a pay here stripe link so that they can give me 50 million.[00:18:57] Alessio: But it worked for Reddit. I think the stock is up. 40 percent today after opening. So yeah, I don't know if it's all about the Google deal, but it's obviously Reddit has been one of those companies where, hey, you got all this like great community, but like, how are you going to make money? And like, they try to sell the avatars.[00:19:15] Alessio: I don't know if that it's a great business for them. The, the data part sounds as an investor, you know, the data part sounds a lot more interesting than, than consumer[00:19:25] swyx: cosmetics. Yeah, so I think, you know there's more questions around data you know, I think a lot of people are talking about the interview that Mira Murady did with the Wall Street Journal, where she, like, just basically had no, had no good answer for where they got the data for Sora.[00:19:39] swyx: I, I think this is where, you know, there's, it's in nobody's interest to be transparent about data, and it's, it's kind of sad for the state of ML and the state of AI research but it is what it is. We, we have to figure this out as a society, just like we did for music and music sharing. You know, in, in sort of the Napster to Spotify transition, and that might take us a decade.[00:19:59] swyx: Yeah, I[00:20:00] NLW: do. I, I agree. I think, I think that you're right to identify it, not just as that sort of technical problem, but as one where society has to have a debate with itself. Because I think that there's, if you rationally within it, there's Great kind of points on all side, not to be the sort of, you know, person who sits in the middle constantly, but it's why I think a lot of these legal decisions are going to be really important because, you know, the job of judges is to listen to all this stuff and try to come to things and then have other judges disagree.[00:20:24] NLW: And, you know, and have the rest of us all debate at the same time. By the way, as a total aside, I feel like the synthetic data right now is like eggs in the 80s and 90s. Like, whether they're good for you or bad for you, like, you know, we, we get one study that's like synthetic data, you know, there's model collapse.[00:20:42] NLW: And then we have like a hint that llama, you know, to the most high performance version of it, which was one they didn't release was trained on synthetic data. So maybe it's good. 
It's like, I just feel like every, every other week I'm seeing something sort of different about whether it's good or bad for, for these models.[00:20:56] swyx: Yeah. The branding of this is pretty poor. I would kind of tell people to think about it like cholesterol. There's good cholesterol, bad cholesterol. And you can have, you know, good amounts of both. But at this point, it is absolutely without a doubt that most large models from here on out will all be trained on some kind of synthetic data and that is not a bad thing.[00:21:16] swyx: There are ways in which you can do it poorly. Whether it's commercial, you know, in terms of commercial sourcing or in terms of the model performance. But it's without a doubt that good synthetic data is going to help your model. And this is just a question of like where to obtain it and what kinds of synthetic data are valuable.[00:21:36] swyx: You know, if even like AlphaGeometry, you know, was, was a really good example from like earlier this year.[00:21:42] NLW: If you're using the cholesterol analogy, then my, then my egg thing can't be that far off. Let's talk about the sort of the state of the art and the, and the GPT 4 class landscape and how that's changed.[00:21:53] Gemini 1.5 vs Claude 3[00:21:53] NLW: Cause obviously, you know, sort of the, the two big things or a couple of the big things that have happened. Since we last talked, we're one, you know, Gemini first announcing that a model was coming and then finally it arriving, and then very soon after a sort of a different model arriving from Gemini and and Claude 3.[00:22:11] NLW: So I guess, you know, I'm not sure exactly where the right place to start with this conversation is, but, you know, maybe very broadly speaking which of these do you think have made a bigger impact? Thank you.[00:22:20] Alessio: Probably the one you can use, right? So, Claude. Well, I'm sure Gemini is going to be great once they let me in, but so far I haven't been able to.[00:22:29] Alessio: I use, so I have this small podcaster thing that I built for our podcast, which does chapters creation, like named entity recognition, summarization, and all of that. Claude 3 is better than GPT 4. Claude 2 was unusable. So I use GPT 4 for everything. And then when Opus came out, I tried them again side by side and I posted it on, on Twitter as well.[00:22:53] Alessio: Claude is better. It's very good, you know, it's much better, it seems to me, it's much better than GPT 4 at doing writing that is more, you know, I don't know, it just got good vibes, you know, like the GPT 4 text, you can tell it's like GPT 4, you know, it's like, it always uses certain types of words and phrases and, you know, maybe it's just me because I've now done it for, you know, So, I've read like 75, 80 generations of these things next to each other.[00:23:21] Alessio: Claude is really good. I know everybody is freaking out on twitter about it, my only experience of this is much better has been on the podcast use case. But I know that, you know, Karan from Nous Research is a very big opus pro, pro opus person. So, I think that's also It's great to have people that actually care about other models.[00:23:40] Alessio: You know, I think so far to a lot of people, maybe Anthropic has been the sibling in the corner, you know, it's like Claude releases a new model and then OpenAI releases Sora and like, you know, there are like all these different things, but yeah, the new models are good. 
It's interesting.[00:23:55] NLW: My my perception is definitely that just, just observationally, Cloud 3 is certainly the first thing that I've seen where lots of people.[00:24:06] NLW: They're, no one's debating evals or anything like that. They're talking about the specific use cases that they have, that they used to use chat GPT for every day, you know, day in, day out, that they've now just switched over. And that has, I think, shifted a lot of the sort of like vibe and sentiment in the space too.[00:24:26] NLW: And I don't necessarily think that it's sort of a A like full you know, sort of full knock. Let's put it this way. I think it's less bad for open AI than it is good for anthropic. I think that because GPT 5 isn't there, people are not quite willing to sort of like, you know get overly critical of, of open AI, except in so far as they're wondering where GPT 5 is.[00:24:46] NLW: But I do think that it makes, Anthropic look way more credible as a, as a, as a player, as a, you know, as a credible sort of player, you know, as opposed to to, to where they were.[00:24:57] Alessio: Yeah. And I would say the benchmarks veil is probably getting lifted this year. I think last year. People were like, okay, this is better than this on this benchmark, blah, blah, blah, because maybe they did not have a lot of use cases that they did frequently.[00:25:11] Alessio: So it's hard to like compare yourself. So you, you defer to the benchmarks. I think now as we go into 2024, a lot of people have started to use these models from, you know, from very sophisticated things that they run in production to some utility that they have on their own. Now they can just run them side by side.[00:25:29] Alessio: And it's like, Hey, I don't care that like. The MMLU score of Opus is like slightly lower than GPT 4. It just works for me, you know, and I think that's the same way that traditional software has been used by people, right? Like you just strive for yourself and like, which one does it work, works best for you?[00:25:48] Alessio: Like nobody looks at benchmarks outside of like sales white papers, you know? And I think it's great that we're going more in that direction. We have a episode with Adapt coming out this weekend. I'll and some of their model releases, they specifically say, We do not care about benchmarks, so we didn't put them in, you know, because we, we don't want to look good on them.[00:26:06] Alessio: We just want the product to work. And I think more and more people will, will[00:26:09] swyx: go that way. Yeah. I I would say like, it does take the wind out of the sails for GPT 5, which I know where, you know, Curious about later on. I think anytime you put out a new state of the art model, you have to break through in some way.[00:26:21] swyx: And what Claude and Gemini have done is effectively take away any advantage to saying that you have a million token context window. Now everyone's just going to be like, Oh, okay. Now you just match the other two guys. 
And so that puts An insane amount of pressure on what gpt5 is going to be because it's just going to have like the only option it has now because all the other models are multimodal all the other models are long context all the other models have perfect recall gpt5 has to match everything and do more to to not be a flop[00:26:58] AI Breakdown Part 2[00:26:58] NLW: hello friends back again with part two if you haven't heard part one of this conversation i suggest you go check it out but to be honest they are kind of actually separable In this conversation, we get into a topic that I think Alessio and Swyx are very well positioned to discuss, which is what developers care about right now, what people are trying to build around.[00:27:16] NLW: I honestly think that one of the best ways to see the future in an industry like AI is to try to dig deep on what developers and entrepreneurs are attracted to build, even if it hasn't made it to the news pages yet. So consider this your preview of six months from now, and let's dive in. Let's bring it to the GPT 5 conversation.[00:27:33] Next Frontiers: Llama 3, GPT-5, Gemini 2, Claude 4[00:27:33] NLW: I mean, so, so I think that that's a great sort of assessment of just how the stakes have been raised, you know is your, I mean, so I guess maybe, maybe I'll, I'll frame this less as a question, just sort of something that, that I, that I've been watching right now, the only thing that makes sense to me with how.[00:27:50] NLW: Fundamentally unbothered and unstressed OpenAI seems about everything is that they're sitting on something that does meet all that criteria, right? Because, I mean, even in the Lex Friedman interview that, that Altman recently did, you know, he's talking about other things coming out first. He's talking about, he's just like, he, listen, he, he's good and he could play nonchalant, you know, if he wanted to.[00:28:13] NLW: So I don't want to read too much into it, but. You know, they've had so long to work on this, like unless that we are like really meaningfully running up against some constraint, it just feels like, you know, there's going to be some massive increase, but I don't know. What do you guys think?[00:28:28] swyx: Hard to speculate.[00:28:29] swyx: You know, at this point, they're, they're pretty good at PR and they're not going to tell you anything that they don't want to. And he can tell you one thing and change their minds the next day. So it's, it's, it's really, you know, I've always said that model version numbers are just marketing exercises, like they have something and it's always improving and at some point you just cut it and decide to call it GPT 5.[00:28:50] swyx: And it's more just about defining an arbitrary level at which they're ready and it's up to them on what ready means. We definitely did see some leaks on GPT 4. 5, as I think a lot of people reported and I'm not sure if you covered it. So it seems like there might be an intermediate release. But I did feel, coming out of the Lex Friedman interview, that GPT 5 was nowhere near.[00:29:11] swyx: And you know, it was kind of a sharp contrast to Sam talking at Davos in February, saying that, you know, it was his top priority. So I find it hard to square. And honestly, like, there's also no point Reading too much tea leaves into what any one person says about something that hasn't happened yet or has a decision that hasn't been taken yet.[00:29:31] swyx: Yeah, that's, that's my 2 cents about it. 
Like, calm down, let's just build .[00:29:35] Alessio: Yeah. The, the February rumor was that they were gonna work on AI agents, so I don't know, maybe they're like, yeah,[00:29:41] swyx: they had two agent two, I think two agent projects, right? One desktop agent and one sort of more general yeah, sort of GPTs like agent and then Andre left, so he was supposed to be the guy on that.[00:29:52] swyx: What did Andre see? What did he see? I don't know. What did he see?[00:29:56] Alessio: I don't know. But again, it's just like the rumors are always floating around, you know but I think like, this is, you know, we're not going to get to the end of the year without Jupyter you know, that's definitely happening. I think the biggest question is like, are Anthropic and Google.[00:30:13] Alessio: Increasing the pace, you know, like it's the, it's the cloud four coming out like in 12 months, like nine months. What's the, what's the deal? Same with Gemini. They went from like one to 1. 5 in like five days or something. So when's Gemini 2 coming out, you know, is that going to be soon? I don't know.[00:30:31] Alessio: There, there are a lot of, speculations, but the good thing is that now you can see a world in which OpenAI doesn't rule everything. You know, so that, that's the best, that's the best news that everybody got, I would say.[00:30:43] swyx: Yeah, and Mistral Large also dropped in the last month. And, you know, not as, not quite GPT 4 class, but very good from a new startup.[00:30:52] swyx: So yeah, we, we have now slowly changed in landscape, you know. In my January recap, I was complaining that nothing's changed in the landscape for a long time. But now we do exist in a world, sort of a multipolar world where Cloud and Gemini are legitimate challengers to GPT 4 and hopefully more will emerge as well hopefully from meta.[00:31:11] Open Source Models - Mistral, Grok[00:31:11] NLW: So speak, let's actually talk about sort of the open source side of this for a minute. So Mistral Large, notable because it's, it's not available open source in the same way that other things are, although I think my perception is that the community has largely given them Like the community largely recognizes that they want them to keep building open source stuff and they have to find some way to fund themselves that they're going to do that.[00:31:27] NLW: And so they kind of understand that there's like, they got to figure out how to eat, but we've got, so, you know, there there's Mistral, there's, I guess, Grok now, which is, you know, Grok one is from, from October is, is open[00:31:38] swyx: sourced at, yeah. Yeah, sorry, I thought you thought you meant Grok the chip company.[00:31:41] swyx: No, no, no, yeah, you mean Twitter Grok.[00:31:43] NLW: Although Grok the chip company, I think is even more interesting in some ways, but and then there's the, you know, obviously Llama3 is the one that sort of everyone's wondering about too. And, you know, my, my sense of that, the little bit that, you know, Zuckerberg was talking about Llama 3 earlier this year, suggested that, at least from an ambition standpoint, he was not thinking about how do I make sure that, you know, meta content, you know, keeps, keeps the open source thrown, you know, vis a vis Mistral.[00:32:09] NLW: He was thinking about how you go after, you know, how, how he, you know, releases a thing that's, you know, every bit as good as whatever OpenAI is on at that point.[00:32:16] Alessio: Yeah. 
From what I heard in the hallways at, at GDC, Llama 3, the, the biggest model will be, you 260 to 300 billion parameters, so that that's quite large.[00:32:26] Alessio: That's not an open source model. You know, you cannot give people a 300 billion parameters model and ask them to run it. You know, it's very compute intensive. So I think it is, it[00:32:35] swyx: can be open source. It's just, it's going to be difficult to run, but that's a separate question.[00:32:39] Alessio: It's more like, as you think about what they're doing it for, you know, it's not like empowering the person running.[00:32:45] Alessio: llama. On, on their laptop, it's like, oh, you can actually now use this to go after open AI, to go after Anthropic, to go after some of these companies at like the middle complexity level, so to speak. Yeah. So obviously, you know, we estimate Gentala on the podcast, they're doing a lot here, they're making PyTorch better.[00:33:03] Alessio: You know, they want to, that's kind of like maybe a little bit of a shorted. Adam Bedia, in a way, trying to get some of the CUDA dominance out of it. Yeah, no, it's great. The, I love the duck destroying a lot of monopolies arc. You know, it's, it's been very entertaining. Let's bridge[00:33:18] NLW: into the sort of big tech side of this, because this is obviously like, so I think actually when I did my episode, this was one of the I added this as one of as an additional war that, that's something that I'm paying attention to.[00:33:29] NLW: So we've got Microsoft's moves with inflection, which I think pretend, potentially are being read as A shift vis a vis the relationship with OpenAI, which also the sort of Mistral large relationship seems to reinforce as well. We have Apple potentially entering the race, finally, you know, giving up Project Titan and and, and kind of trying to spend more effort on this.[00:33:50] NLW: Although, Counterpoint, we also have them talking about it, or there being reports of a deal with Google, which, you know, is interesting to sort of see what their strategy there is. And then, you know, Meta's been largely quiet. We kind of just talked about the main piece, but, you know, there's, and then there's spoilers like Elon.[00:34:07] NLW: I mean, you know, what, what of those things has sort of been most interesting to you guys as you think about what's going to shake out for the rest of this[00:34:13] Apple MM1[00:34:13] swyx: year? I'll take a crack. So the reason we don't have a fifth war for the Big Tech Wars is that's one of those things where I just feel like we don't cover differently from other media channels, I guess.[00:34:26] swyx: Sure, yeah. In our anti interestness, we actually say, like, we try not to cover the Big Tech Game of Thrones, or it's proxied through Twitter. You know, all the other four wars anyway, so there's just a lot of overlap. Yeah, I think absolutely, personally, the most interesting one is Apple entering the race.[00:34:41] swyx: They actually released, they announced their first large language model that they trained themselves. It's like a 30 billion multimodal model. People weren't that impressed, but it was like the first time that Apple has kind of showcased that, yeah, we're training large models in house as well. Of course, like, they might be doing this deal with Google.[00:34:57] swyx: I don't know. It sounds very sort of rumor y to me. And it's probably, if it's on device, it's going to be a smaller model. So something like a Jemma. It's going to be smarter autocomplete. 
I don't know what to say. I'm still here dealing with, like, Siri, which hasn't, probably hasn't been updated since God knows when it was introduced.[00:35:16] swyx: It's horrible. I, you know, it, it, it makes me so angry. So I, I, one, as an Apple customer and user, I, I'm just hoping for better AI on Apple itself. But two, they are the gold standard when it comes to local devices, personal compute and, and trust, like you, you trust them with your data. And. I think that's what a lot of people are looking for in AI, that they have, they love the benefits of AI, they don't love the downsides, which is that you have to send all your data to some cloud somewhere.[00:35:45] swyx: And some of this data that we're going to feed AI is just the most personal data there is. So Apple being like one of the most trusted personal data companies, I think it's very important that they enter the AI race, and I hope to see more out of them.[00:35:58] Alessio: To me, the, the biggest question with the Google deal is like, who's paying who?[00:36:03] Alessio: Because for the browsers, Google pays Apple like 18, 20 billion every year to be the default browser. Is Google going to pay you to have Gemini or is Apple paying Google to have Gemini? I think that's, that's like what I'm most interested to figure out because with the browsers, it's like, it's the entry point to the thing.[00:36:21] Alessio: So it's really valuable to be the default. That's why Google pays. But I wonder if like the perception in AI is going to be like, Hey. You just have to have a good local model on my phone to be worth me purchasing your device. And that was, that's kind of drive Apple to be the one buying the model. But then, like Shawn said, they're doing the MM1 themselves.[00:36:40] Alessio: So are they saying we do models, but they're not as good as the Google ones? I don't know. The whole thing is, it's really confusing, but. It makes for great meme material on on Twitter.[00:36:51] swyx: Yeah, I mean, I think, like, they are possibly more than OpenAI and Microsoft and Amazon. They are the most full stack company there is in computing, and so, like, they own the chips, man.[00:37:05] swyx: Like, they manufacture everything so if, if, if there was a company that could do that. You know, seriously challenge the other AI players. It would be Apple. And it's, I don't think it's as hard as self driving. So like maybe they've, they've just been investing in the wrong thing this whole time. We'll see.[00:37:21] swyx: Wall Street certainly thinks[00:37:22] NLW: so. Wall Street loved that move, man. There's a big, a big sigh of relief. Well, let's, let's move away from, from sort of the big stuff. I mean, the, I think to both of your points, it's going to.[00:37:33] Meta's $800b AI rebrand[00:37:33] NLW: Can I, can[00:37:34] swyx: I, can I, can I jump on factoid about this, this Wall Street thing? I went and looked at when Meta went from being a VR company to an AI company.[00:37:44] swyx: And I think the stock I'm trying to look up the details now. The stock has gone up 187% since Lamo one. Yeah. Which is $830 billion in market value created in the past year. . Yeah. Yeah.[00:37:57] NLW: It's, it's, it's like, remember if you guys haven't Yeah. 
If you haven't seen the chart, it's actually like remarkable.[00:38:02] NLW: If you draw a little[00:38:03] swyx: arrow on it, it's like, no, we're an AI company now and forget the VR thing.[00:38:10] NLW: It's it, it is an interesting, no, it's, I, I think, alessio, you called it sort of like Zuck's Disruptor Arc or whatever. He, he really does. He is in the midst of a, of a total, you know, I don't know if it's a redemption arc or it's just, it's something different where, you know, he, he's sort of the spoiler.[00:38:25] NLW: Like people loved him just freestyle talking about why he thought they had a better headset than Apple. But even if they didn't agree, they just loved it. He was going direct to camera and talking about it for, you know, five minutes or whatever. So that, that's a fascinating shift that I don't think anyone had on their bingo card, you know, whatever, two years ago.[00:38:41] NLW: Yeah. Yeah,[00:38:42] swyx: we still[00:38:43] Alessio: didn't see and fight Elon though, so[00:38:45] swyx: that's what I'm really looking forward to. I mean, hey, don't, don't, don't write it off, you know, maybe just these things take a while to happen. But we need to see and fight in the Coliseum. No, I think you know, in terms of like self management, life leadership, I think he has, there's a lot of lessons to learn from him.[00:38:59] swyx: You know he might, you know, you might kind of quibble with, like, the social impact of Facebook, but just himself as a in terms of personal growth and, and, you know, Per perseverance through like a lot of change and you know, everyone throwing stuff his way. I think there's a lot to say about like, to learn from, from Zuck, which is crazy 'cause he's my age.[00:39:18] swyx: Yeah. Right.[00:39:20] AI Engineer landscape - from baby AGIs to vertical Agents[00:39:20] NLW: Awesome. Well, so, so one of the big things that I think you guys have, you know, distinct and, and unique insight into being where you are and what you work on is. You know, what developers are getting really excited about right now. And by that, I mean, on the one hand, certainly, you know, like startups who are actually kind of formalized and formed to startups, but also, you know, just in terms of like what people are spending their nights and weekends on what they're, you know, coming to hackathons to do.[00:39:45] NLW: And, you know, I think it's a, it's a, it's, it's such a fascinating indicator for, for where things are headed. Like if you zoom back a year, right now was right when everyone was getting so, so excited about. AI agent stuff, right? Auto, GPT and baby a GI. And these things were like, if you dropped anything on YouTube about those, like instantly tens of thousands of views.[00:40:07] NLW: I know because I had like a 50,000 view video, like the second day that I was doing the show on YouTube, you know, because I was talking about auto GPT. And so anyways, you know, obviously that's sort of not totally come to fruition yet, but what are some of the trends in what you guys are seeing in terms of people's, people's interest and, and, and what people are building?[00:40:24] Alessio: I can start maybe with the agents part and then I know Shawn is doing a diffusion meetup tonight. There's a lot of, a lot of different things. The, the agent wave has been the most interesting kind of like dream to reality arc. 
So AutoGPT, I think they went, From zero to like 125,000 GitHub stars in six weeks, and then one year later, they have 150,000 stars.[00:40:49] Alessio: So there's kind of been a big plateau. I mean, you might say there are just not that many people that can star it. You know, everybody already starred it. But the promise of, hey, I'll just give you a goal, and you do it. I think it's like, amazing to get people's imagination going. You know, they're like, oh, wow, this, this is awesome. Everybody, everybody can try this to do anything. But then as technologists, you're like, well, that's, that's just like not possible, you know, we would have like solved everything. And I think it takes a little bit to go from the promise and the hope that people show you to then try it yourself and going back to say, okay, this is not really working for me. And David Luan from Adept, you know, they in our episode, he specifically said, We don't want to do a bottom up product. You know, we don't want something that everybody can just use and try because it's really hard to get it to be reliable. So we're seeing a lot of companies doing vertical agents that are narrow for a specific domain, and they're very good at something. Mike Conover, who was at Databricks before, is also a friend of Latent Space. He's doing this new company called BrightWave doing AI agents for financial research, and that's it, you know, and they're doing very well. There are other companies doing it in security, doing it in compliance, doing it in legal. All of these things that like, people, nobody just wakes up and says, Oh, I cannot wait to go on AutoGPT and ask it to do a compliance review of my thing. You know, just not what inspires people. So I think the gap on the developer side has been the more bottoms-up hacker mentality is trying to build this like very generic agents that can do a lot of open ended tasks. And then the more business side of things is like, Hey, If I want to raise my next round, I cannot just like sit around and mess, mess around with like super generic stuff. I need to find a use case that really works. And I think that that is true for, for a lot of folks. In parallel, you have a lot of companies doing evals. There are dozens of them that just want to help you measure how good your models are doing. Again, if you build evals, you need to also have a restrained surface area to actually figure out whether or not it's good, right? Because you cannot eval anything on everything under the sun. So that's another category where I've seen from the startup pitches that I've seen, there's a lot of interest in, in the enterprise.[00:43:11] Alessio: It's just like really fragmented, because the production use cases are just coming like now, you know, there are not a lot of long established ones to, to test against. And so that's kind of it on the vertical agents, and then the robotics side has probably been the thing that surprised me the most at NVIDIA GTC, the amount of robots that were there that were just like robots everywhere. Like, both in the keynote and then on the show floor, you would have Boston Dynamics dogs running around. There was, like, this, like fox robot that had, like, a virtual face that, like, talked to you and, like, moved in real time. There were industrial robots.
NVIDIA did a big push on their own Omniverse thing, which is, like, this Digital twin of whatever environments you're in that you can use to train the robots agents.[00:43:57] Alessio: So that kind of takes people back to the reinforcement learning days, but yeah, agents, people want them, you know, people want them. I give a talk about the, the rise of the full stack employees and kind of this future, the same way full stack engineers kind of work across the stack. In the future, every employee is going to interact with every part of the organization through agents and AI enabled tooling.[00:44:17] Alessio: This is happening. It just needs to be a lot more narrow than maybe the first approach that we took, which is just put a string in AutoGPT and pray. But yeah, there's a lot of super interesting stuff going on.[00:44:27] swyx: Yeah. Well, he Let's recover a lot of stuff there. I'll separate the robotics piece because I feel like that's so different from the software world.[00:44:34] swyx: But yeah, we do talk to a lot of engineers and you know, that this is our sort of bread and butter. And I do agree that vertical agents have worked out a lot better than the horizontal ones. I think all You know, the point I'll make here is just the reason AutoGPT and maybe AGI, you know, it's in the name, like they were promising AGI.[00:44:53] swyx: But I think people are discovering that you cannot engineer your way to AGI. It has to be done at the model level and all these engineering, prompt engineering hacks on top of it weren't really going to get us there in a meaningful way without much further, you know, improvements in the models. I would say, I'll go so far as to say, even Devin, which is, I would, I think the most advanced agent that we've ever seen, still requires a lot of engineering and still probably falls apart a lot in terms of, like, practical usage.[00:45:22] swyx: Or it's just, Way too slow and expensive for, you know, what it's, what it's promised compared to the video. So yeah, that's, that's what, that's what happened with agents from, from last year. But I, I do, I do see, like, vertical agents being very popular and, and sometimes you, like, I think the word agent might even be overused sometimes.[00:45:38] swyx: Like, people don't really care whether or not you call it an AI agent, right? Like, does it replace boring menial tasks that I do That I might hire a human to do, or that the human who is hired to do it, like, actually doesn't really want to do. And I think there's absolutely ways in sort of a vertical context that you can actually go after very routine tasks that can be scaled out to a lot of, you know, AI assistants.[00:46:01] swyx: So, so yeah, I mean, and I would, I would sort of basically plus one what let's just sit there. I think it's, it's very, very promising and I think more people should work on it, not less. Like there's not enough people. Like, we, like, this should be the, the, the main thrust of the AI engineer is to look out, look for use cases and, and go to a production with them instead of just always working on some AGI promising thing that never arrives.[00:46:21] swyx: I,[00:46:22] NLW: I, I can only add that so I've been fiercely making tutorials behind the scenes around basically everything you can imagine with AI. We've probably done, we've done about 300 tutorials over the last couple of months. 
And the verticalized anything, right, like this is a solution for your particular job or role, even if it's way less interesting or kind of sexy, it's like so radically more useful to people in terms of intersecting with how, like those are the ways that people are actually.[00:46:50] NLW: Adopting AI in a lot of cases is just a, a, a thing that I do over and over again. By the way, I think that's the same way that even the generalized models are getting adopted. You know, it's like, I use Midjourney for lots of stuff, but the main thing I use it for is YouTube thumbnails every day. Like day in, day out, I will always do a YouTube thumbnail, you know, or two with, with Midjourney, right?[00:47:09] NLW: And it's like you can, you can start to extrapolate that across a lot of things and all of a sudden, you know, AI looks revolutionary because of a million small changes rather than one sort of big dramatic change. And I think that the verticalization of agents is sort of a great example of how that's[00:47:26] swyx: going to play out too.[00:47:28] Adept episode - Screen Multimodality[00:47:28] swyx: So I'll have one caveat here, which is I think that because multimodal models are now commonplace, like Claude, Gemini, OpenAI, all very very easily multimodal, Apple's easily multimodal, all this stuff. There is a switch for agents for sort of general desktop browsing that I think people [inaudible].[00:48:04] swyx: Version of the agent where they're not specifically taking in text or anything. They're just watching your screen, just like someone else would, and it's piloted by vision. And you know, in the episode with David that will have dropped by the time that this airs, I think that is the promise of Adept, and that is a promise of what a lot of these sort of desktop agents are, and that is the more general purpose system that could be as big as the browser, the operating system, like, people really want to build that foundational piece of software in AI.[00:48:38] swyx: And I would see, like, the potential there for desktop agents being that, that you can have sort of self driving computers. You know, don't write the horizontal piece off. I just think we took a while to get there.[00:48:48] NLW: What else are you guys seeing that's interesting to you? I'm looking at your notes and I see a ton of categories.[00:48:54] Top Model Research from January Recap[00:48:54] swyx: Yeah so I'll take the next two as like as one category, which is basically alternative architectures, right? The two main things that everyone following AI kind of knows now is, one, the diffusion architecture, and two, the, let's just say the, decoder-only transformer architecture that is popularized by GPT.[00:49:12] swyx: You can read, you can look on YouTube for thousands and thousands of tutorials on each of those things. What we are talking about here is what's next, what people are researching, and what could be on the horizon that takes the place of those other two things. So first of all, we'll talk about transformer architectures and then diffusion.[00:49:25] swyx: So transformers, the, the two leading candidates are effectively RWKV and the state space models, the most recent one of which is Mamba, but there's others like the StripedHyena and the S4 and H3 stuff coming out of Hazy Research at Stanford.
And all of those are non-quadratic language models that promise to scale a lot better than the, the traditional transformer.[00:49:47] swyx: That this might be too theoretical for most people right now, but it's, it's gonna be. It's gonna come out in weird ways, where, imagine if like, right now the talk of the town is that Claude and Gemini have a million tokens of context and like, whoa, you can put in like, you know, two hours of video now, okay. But like what if you put, what if we could like throw in, you know, two hundred thousand hours of video? Like how does that change your usage of AI? What if you could throw in the entire genetic sequence of a human and like synthesize new drugs? Like, well, how does that change things? Like, we don't know because we haven't had access to this capability being so cheap before. And that's the ultimate promise of these two models. They're not there yet but we're seeing very, very good progress. RWKV and Mamba are probably the, like, the two leading examples, both of which are open source that you can try them today and and have a lot of progress there. And the, the, the main thing I'll highlight for RWKV is that at, at the seven B level, they seem to have beat Llama 2 in all benchmarks that matter at the same size for the same amount of training as an open source model.[00:50:51] swyx: So that's exciting. You know, they're there, they're seven B now. They're not at 70B. We don't know if it'll. And then the other thing is diffusion. Diffusion and transformers are kind of on a collision course. The original Stable Diffusion already used transformers in in parts of its architecture. It seems that transformers are eating more and more of those layers, particularly the sort of VAE layer. So that's, the Diffusion Transformer is what Sora is built on. The guy who wrote the Diffusion Transformer paper, Bill Peebles, is, Bill Peebles is the lead tech guy on Sora. So you'll just see a lot more Diffusion Transformer stuff going on. But there's, there's more sort of experimentation with diffusion. I'm holding a meetup actually here in San Francisco that's gonna be like the state of diffusion, which I'm pretty excited about. Stability's doing a lot of good work. And if you look at the, the architecture of how they're creating Stable Diffusion 3, Hourglass Diffusion, and the consistency models, or SDXL Turbo. All of these are, like, very, very interesting innovations on, like, the original idea of what Stable Diffusion was. So if you think that it is expensive to create or slow to create Stable Diffusion or AI generated art, you are not up to date with the latest models. If you think it is hard to create text and images, you are not up to date with the latest models. And people still are kind of far behind. The last piece of which is the wildcard I always kind of hold out, which is text diffusion. So instead of using autogenerative or autoregressive transformers, can you use text to diffuse? So you can use diffusion models to diffuse and create entire chunks of text all at once instead of token by token. And that is something that Midjourney confirmed today, because it was only rumored the past few months. But they confirmed today that they were looking into it.
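A rough way to see the scaling difference swyx is describing: standard self-attention compares every token with every other token, while RWKV- and Mamba-style models update a fixed-size state once per token. The Python below is a toy sketch of that contrast only, assuming a simple exponential-decay recurrence; it is not the real RWKV or Mamba computation, just the O(n^2)-versus-O(n) shape that makes very long contexts plausible for the linear-time architectures.

# Toy sketch only: quadratic self-attention versus a linear recurrent scan.
# The decay-based recurrence is a placeholder, not real RWKV or Mamba code.
import numpy as np

def full_attention(tokens):
    # Every token attends to every other token: an (n x n) score matrix,
    # so compute and memory grow as O(n^2) with context length n.
    n, d = tokens.shape
    scores = tokens @ tokens.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ tokens

def recurrent_scan(tokens, decay=0.9):
    # One fixed-size state update per token: O(n) compute, O(1) extra memory.
    state = np.zeros(tokens.shape[1])
    out = np.empty_like(tokens)
    for t, x in enumerate(tokens):
        state = decay * state + x
        out[t] = state
    return out

x = np.random.randn(1024, 64)
_ = full_attention(x)   # ~1024 * 1024 pairwise interactions
_ = recurrent_scan(x)   # ~1024 state updates

The same shape difference is why the "two hundred thousand hours of video" thought experiment above is at least conceivable for sub-quadratic models but not for vanilla attention over a single long sequence.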
So all those things are like very exciting new model architectures that are, Maybe something that we'll, you'll see in production two to three years from now.[00:52:37] swyx: So the couple of the trends[00:52:38] NLW: that I want to just get your takes on, because they're sort of something that, that seems like they're coming up are one sort of these, these wearable, you know, kind of passive AI experiences where they're absorbing a lot of what's going on around you and then, and then kind of bringing things back.[00:52:53] NLW: And then the, the other one that I, that I wanted to see if you guys had thoughts on were sort of this next generation of chip companies. Obviously there's a huge amount of emphasis. On on hardware and silicon and, and, and different ways of doing things, but, y

Unbounded AI-Assisted Research with Elicit Founders Andreas Stuhlmüller and Jungwon Byun

Play Episode Listen Later Apr 3, 2024 83:39


In this episode, Nathan sits down with Elicit co-founders Andreas Stuhlmüller and Jungwon Byun to discuss their mission to make AI-assisted research more accessible and reliable. Learn about their unique approach to task decomposition, which allows language models to accurately tackle complex research questions. We delve into the company's tech stack, their transition from nonprofit to startup, and their dedication to creating trustworthy AI tools for high-stakes applications. Join us for an exploration of the future of AI in research. The Cognitive Revolution is part of the Turpentine podcast network. Learn more: www.turpentine.co HELPFUL LINKS:  Elicit : https://elicit.com/ Andreas Stuhlmüller : https://twitter.com/stuhlmueller Jungwon Byun : https://twitter.com/jungofthewon SPONSORS: Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds; offers one consistent price, and nobody does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive ODF is where top founders get their start. Apply to join the next cohort and go from idea to conviction-fast. ODF has helped over 1000 companies like Traba, Levels and Finch get their start. Is it your turn? Go to http://beondeck.com/revolution to learn more. Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off www.omneky.com The Brave search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference. All while remaining affordable with developer first pricing, integrating the Brave search API into your workflow translates to more ethical data sourcing and more human representative data sets. Try the Brave search API for free for up to 2000 queries per month at https://bit.ly/BraveTCR Plumb is a no-code AI app builder designed for product teams who care about quality and speed. What is taking you weeks to hand-code today can be done confidently in hours. Check out https://bit.ly/PlumbTCR for early access. Head to Squad to access global engineering without the headache and at a fraction of the cost: head to choosesquad.com and mention “Turpentine” to skip the waitlist. TIMESTAMPS: (00:00:30) Intro (00:05:05) What is Elicit? (00:06:03) Vision for Elicit (00:10:10) Making research transparent (00:11:58) How to use it? (00:15:27) Sponsors: Oracle | On Deck | Omneky (00:18:21) Task Decomposition (00:23:48) Defining the task (00:26:30) Eliciting fine-grained evaluations (00:28:06) Hallucination rates (00:30:22) Models in play (00:31:30) Sponsors: Brave | Plumb | Squad (00:34:26) Shipping a new feature every week (00:36:10) What was not possible a year ago? 
(00:38:26) Chain of thought (00:43:47) Tactically, how to structure the chain of thought (00:45:21) Data sets and fine-tuning (00:51:23) Scaffolding (00:53:22) Translating structure into more compute (00:54:27) Infrastructure for investigating papers in detail (00:59:50) Emphasis on high-value use cases over speed (01:00:33) Balancing long-term safety and misuse concerns (01:02:36) Monitoring research progress for negative impact (01:06:05) Evolving user base and usage patterns (01:06:52) Biomedicine as a key domain for Elicit (01:08:57) Expanding results and depth of processing (01:11:40) Reorganizing information for better understanding (01:13:12) Habit formation and frequency of use (01:14:43) The concept of an AI bundle subscription (01:18:09) Nonprofit to Commercial Venture (01:20:08) Nonprofit Team and Commercial Mission (01:20:39) Hiring Needs at Elicit
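The task decomposition approach mentioned in this episode description can be read as: split a broad research question into narrower sub-questions, answer each with its own model call, then synthesize the pieces. The sketch below is a guess at that general pattern, not Elicit's actual pipeline; ask_llm and the prompts are hypothetical placeholders for whatever language-model API is in use.

# Minimal sketch of question decomposition; ask_llm is a hypothetical stand-in
# for a real language-model call, and the prompts are illustrative only.
from typing import Callable, List

def decompose_and_answer(question: str, ask_llm: Callable[[str], str]) -> str:
    # 1. Split the broad question into narrower sub-questions.
    plan = ask_llm(
        "List 3 short sub-questions, one per line, that together answer: " + question
    )
    sub_questions: List[str] = [line.strip() for line in plan.splitlines() if line.strip()]

    # 2. Answer each sub-question independently (easier to check and cite).
    sub_answers = [ask_llm("Answer briefly, citing evidence: " + q) for q in sub_questions]

    # 3. Compose the partial answers into a final synthesis.
    joined = "\n".join(f"- {q}\n  {a}" for q, a in zip(sub_questions, sub_answers))
    return ask_llm("Synthesize a final answer from these findings:\n" + joined)

# Example usage with a stub model, just to show the control flow:
def fake_llm(prompt: str) -> str:
    return "stub answer"

print(decompose_and_answer("Does creatine improve cognition?", fake_llm))

The point of structuring it this way, as the episode discusses, is that each narrow step is easier to evaluate and cite than one monolithic answer.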

Warfare of Art & Law Podcast
*Bonus* "Manifestation of Freedom" from Elicit Justice: Conversations Off Grid

Warfare of Art & Law Podcast

Play Episode Listen Later Mar 31, 2024 3:18 Transcription Available


Featuring excerpts from Episode 108, an interview with Dr. Ashfaq Ishaq, and Episode 105, an interview with Milena Chorna, with musical composition by Toulme, Copyright 2024. Many thanks to M.C. Sungaila who sparked the idea for "Manifestation of Freedom". Please share your comments and/or questions at stephanie@warfareofartandlaw.com. To hear more episodes, please visit Warfare of Art and Law podcast's website. Music by Toulme. To view rewards for supporting the podcast, please visit Warfare's Patreon page. To leave questions or comments about this or other episodes of the podcast and/or for information about joining the 2ND Saturday discussion on art, culture and justice, please message me at stephanie@warfareofartandlaw.com. Thanks so much for listening! © Stephanie Drawdy [2024]

Chaz & AJ in the Morning
Tuesday, February 27: An Uncomfortable Place For A Battery; Emily From Elicit Brewing Co.; Strange Things Found In Vegas Hotel Rooms

Chaz & AJ in the Morning

Play Episode Listen Later Feb 27, 2024 51:55


A dog-safety story prompted the pet-lovers among the Tribe to call in with tips about maintaining four or more dogs at once! (0:00) Dumb Ass News - Chaz & AJ talked with comedian Ayesheh Mae about the logistics of inserting watch batteries in a very uncomfortable place. (15:43) Emily Sands from Elicit Brewing was in studio to talk about their new facility in Fairfield, the differences between making beer in opposite ends of the state, and to solicit new beer names from the Tribe. (18:53) The Las Vegas Starfish, Jen G, talked about the best and worst hotels in Vegas, plus the strangest things found in rooms throughout the city. (37:20)

Meeting Malkmus - a Pavement podcast

jD is joined by Matt F Basler to discuss his experience with Pavement and to analyze song number 45 on the countdown.Transcript:Track 1:[0:00] Previously on the Pavement Top 50.Track 2:[0:02] So there you go. At number 46, it's the third Wowie Zowie song to chart behind Best Friend's Arm at number 49.And Motion suggests itself at 48. Here we are at 46 with We Dance, the first track of the 2005 masterpiece Wowie Zowie. Maui.Keith, what do you think about We Dance?So, yeah, I think it's a great song.I love how it leads off the album. It's got like, I feel like it has this ethereal quality to it.Like that kind of just, I don't know, it seems just kind of dreamy sort of for me.I don't know if that's how it comes off to anyone else at the beginning of the song.Track 3:[0:59] Hey, this is Westy from the Rock and Roll Band, Pavement, and you're listening to The Countdown.Hey, it's JD here, back for another episode of our Top 50 Countdown for the Seminole Indie Rock Band.Track 6:[1:12] Pavement. Week over week, we're going to count down the 50 essential Pavement tracks that you selected with your very own Top 20 ballads.I then tabulated the results using an advanced abacus and, well, frankly, a calculator.And all that's left for us to reveal is this week's track.How will your favorite song fare in the rating? Well, you'll need to tune in or whatever the podcast equivalent of tuning in is, I suppose, downloading to find out.Track 3:[1:38] This week.Track 6:[1:39] We're joined by a Pavement superfan, Matt F. Bosler. So there's that.How are you doing, Matt? I'm wonderful. This is good to hear.Yeah, no, I think so. Yeah, man.Uh it's a snowy blustery day where i am very cold what's it like where you're at uh same so i'm in i'm in uh st louis missouri it's it's a frozen hellscape currently, so i'm in a robe right nowi'm in our our place is cold we can't keep it warm ceilings are too tall oh my god that's terrible that's a but in the summer i bet you it's awesome it's hot then, there's no good time there's nogood period oh man well maybe when the cardinals play i don't know are you a cardinals guy i'm not a sports guy not a sports guy at all i i'll fake it sometimes right get by you know right,I've learned how to say how about them cards, that's great you got it nailed you got this whole thing figured out.Track 3:[2:59] Well.Track 6:[2:59] Motherfucker, we're here to talk about your pavement experience.And I've been calling it your pavement origin story. So why don't you share with us what that looks like?Well, I see a post. I see a post out there on the internet.It says like, oh, we're talking about the top 50 pavement songs.Would any of you like to talk about it? Maybe discuss your origin stories?Reason i say i say to myself i say matt uh perhaps you would be a unique perspective on something like this as i am what i think especially in the world of pavement fans i'm a fairly newuh of pavement fan i'm a newcomer uh to to the band now i'm a i'm a coming of age in the the 90s.Track 3:[3:53] You know?Track 6:[3:54] I'm listening to Nirvana, Pixies, Replacements.I'm a cool guy. We were from a small town in Missouri, though, so it was difficult to figure out what was cool and what wasn't cool.Coolest things we were reading were like Guitar Player Magazine, and then you'd find out about a band from someone else.You'd bump into a cool person, and they'd go like, Like, I've never heard of, I don't know, some band, you know?Ever heard of the Stooges? 
And you'd go, no.Well, somehow, I had gotten it in my brain that I'm sure you're aware of Nu Metal and, Saliva, perhaps, or Korn, certainly.Track 3:[4:46] Sure.Track 6:[4:46] Backwards K, yeah. Yeah. Somehow in my brain, I thought Pavement was a new metal band. Get out.Now, I don't know how this happened.Maybe the name, maybe the way the name was written at some point, the logo.Sure. And so I totally wrote them off, you know? Now, of course, getting into music is not a linear thing.So I would hear Pavement songs.I was familiar with some, you know, Cut Your Hair.When I dove in, I was like, oh, I have heard this. But I never connected.Dear friend Ryan tried to get me into Pavement.Showed me, I think, maybe one of their late night.But I don't know. It never came together. always thought oh pavement their new metal and a lot of my friends listen to pavement, but I think what's the band white pony is that the albumno no.[5:57] Oh, shoot. What are they called? Drawing a blank. Deftones.Oh, Deftones. Okay. Yeah. Deftones somehow, whatever.I'm not saying you're a dunce if you like the Deftones, but the Deftones were kind of new metal, but slipped into the indie rock. People liked them as well.So it wasn't insane that somebody would maybe have a new metal band on their their list of bands they liked if they listened to things that i liked right so years go by i just don't get into ityou know and and uh i should have i should and i'm a bad music listener too takes me a long time i gotta listen to things over and over again to like uh get into it um, so it's it's i i try reallyhard i try to be listening to new stuff all the time but it It feels like an undertaking for me to do that, so I don't do it as much as I should.Anyway, driving in the car maybe five years ago, six years ago, with my beautiful lover, Courtney, she puts on a song, Range Life.Track 3:[7:03] Oh.Track 6:[7:04] Boy. And this is rare that this happens to me.Like I said, you usually got to hear something over and over.Range Life, we're maybe halfway through, and I go, now see this. Now this is good. Now.Track 3:[7:16] This is what music should be.Track 6:[7:18] Who's this? She goes, this is Pavement. I say, no, no, no.No, I know Pavement. This isn't Pavement.Pavement would be doing like the thing that old Jonathan Davis does at the end of that.He'd be scatting or something.She shows me the phone.I'm swerving all over as I just stare at this phone scrolling, going, wait, this can't be right.Track 3:[7:46] Well, it was.Track 6:[7:47] It was right.Track 1:[7:50] And man.Track 6:[7:51] Yeah, then I started. So even still, though, it's like I said, a bad music listener.And now I'm coming into Pavement with a billion albums.And they're a weird band, right? So I started listening to the top on Spotify.So you've got like Harness Your Hope. Good place to start.Track 3:[8:11] Yeah.Track 6:[8:11] Start there. I'm like, oh, this is great. I love all of these.And you know those are probably the most like easily accessible um pavement songs which it was fun to find out they have a lot of songs that are uh maybe not so easily accessible, then ii go well i gotta dive into an album i choose at random sort of uh wowie zowie jesus christ which is now now that's kind of my my favorite one which i guess that's kind of you you know,your first one, but because I'm like, Oh, these guys are, are weirdos too. So, And even, you know, I think they're an interesting band to get into late.[8:58] Because by the time I went, well, I'll get into the subreddits. 
I'll really dive in.People are talking about EPs so much.But, you know, I'm coming more from a world where EPs don't come into the conversation as much.Like with pavement those seem like very main albums uh but i can't really think of another band where eps would be discussed on such a same level as as the full albums um and yeah iwould i mean there's i'm still at a point where like i can't name there's songs on each album that if you named them i wouldn't know them offhand you know like gotcha you'd know themto hear them but but not retrieve the song by name.Which is great. I mean, yeah, and I'm still, you know, like I said, it does take me a long time to get into stuff.And like I was saying, I think even especially kind of the back ends of Pavement albums get pretty wild.So yeah, I mean, I'm still kind of digging through and figuring it all out.Oh, that's really cool. Cool. First of all, you're a great storyteller.So thank you for that. That was a good story.Is it fair to say then you've never seen them live? We did go see them in Kansas City.Track 3:[10:25] Oh.Track 6:[10:26] I mean, one of the terrible things to me is like, listening to them now, they would have been, because I probably would have been getting in.Track 3:[10:34] Like.Track 6:[10:35] You know, with Crooked Rain probably would would have been the first one i would have bought if i if i did it at the right time right and i would have absolutely i mean this wouldhave been my favorite band and then i mean they are now i i, consider them in the they so it's pre-pandemic they were going to play a show in barcelona, right and we i mean we weretalking about it um because it just felt like i i felt like i missed out.This is a band I could have seen several times.And you're going like, well, they've already done a reunion tour.There's a good chance we'll never get to do this. So maybe we go. Maybe we check it out.Track 3:[11:21] And then.Track 6:[11:22] Of course, that all went away. And then we went and saw them in Kansas City a couple of years ago, I think, a year ago, two years ago. And it was wonderful.Track 3:[11:33] It was great.Track 1:[11:34] Yeah.Track 6:[11:35] You lucked out because that 2010 Renewed Tour, although it was very special to me, I saw them in Central Park in New York City and that was really special.They didn't look like they were having the best time. That's what I understand.But this tour, they seemed like, like SM in particular, just seemed like he was having fun.Right. And yeah, that's interesting too, because yeah, now my perception of them is like, wow, what a great live band.Yeah. But even in their heyday is the wrong term, but I guess pre-Breakup, right?Sure. Even then, people were kind of like, oh, they're sloppy.That's like their whole thing.Yeah and i you know that's not something i ever experienced yeah the kansas city show was just a great band oh yeah so much fun so do you have any um any favorite tracks or a favoritefavorite record at this point is it still wowie zowie yeah i think so um.[12:38] It was interesting well so that was you know i i probably listened to that for a year or two before i started going like okay i'm a i'm a join the subreddit guy and uh it was reallyinteresting for me to learn that that was like uh not well received initially um and even the later stuff too i i i think twilight is great i think bright in the corners is great and you know Imean, I know that I'm getting all this stuff at once.There's no like, oh, I love Payment. 
I love the sound of Slanted and Enchanted.Can't wait to see what's next. And then you get this kind of polished record, and maybe that would be a disappointment.But to me, it's all at once. So I don't know.I really love it all. It would be really hard for me to rank.Track 1:[13:34] Like.Track 6:[13:35] Well, and also, I mean, I did listen to, like, started listening to the top Spotify plays, and then I would listen to some, like, other people's like my favorite tracks or whatever deepercuts or whatever and right and so like part i don't necessarily even know like what's from terror twilight bright in the corners without like thinking about it um so for effort and you knowslanted change is a little easier to just discern that sound from the later stuff but even Even Crooked Rain is a fairly slick record.So yeah, a lot of those tracks, I don't really... Like I said, unless I go, oh, what is that on?It's all just like pavement songs. Wowie Zowie, I know the best.That I could do. But yeah, they're all just kind of like...It's just a bunch of good songs. I agree. I so agree with you.And I discovered them in a similar way. I discovered them late.I discovered them after Terror Twilight.So I got the same gift that you got, which is like five records at once.Yeah. And to hear people say like Carrot Rope, I've seen people say like.Track 3:[14:55] Oh.Track 6:[14:56] But that one, that one's a toss off. That one's a joke, stupid song.And I'm just like, I don't... Sure.Track 3:[15:02] I guess.Track 6:[15:03] But I like it. And I think, yeah, fun songs like that, there's room room for that again if you were so stoked for the next 10 pavement songs and one of them you felt, was a silly gooftrack maybe i could see being a little more disappointed but i don't know i think it sounds like uh sounds like animal crossing music um which was another big part of the pandemic for usyeah and i enjoyed it tied it all in a bow you just tied it all in a bow you You are a master storyteller.Track 1:[15:36] Well.Track 6:[15:36] What do you say we go to the track that we're going to talk about this week, and we can do that right after this little break. What do you think? Can't wait. But I will.I'll wait, because you just said there's going to be a break. So I can wait.Track 1:[15:52] And I'm excited to do it.Track 6:[15:54] Excellent.Track 3:[15:55] Well.Track 5:[15:55] We'll talk to you right after this. Hey, this is Bob Mustanovich from Pavement.Thanks for listening. Now on with the countdown. 45.Track 3:[19:55] So this is song number 45 on the countdown and it is our first track from terror twilight on the the list so far it is you are a light what do you think of this track matt personally andhey you know not trying to be controversial i like it i think it's great oh that's not controversial i guess you're right i guess everybody wrote in i didn't like i saw you talking about peoplewrite in for your top 50 and i went i'm not qualified i'm i shouldn't oh i should let the the real guys do this um so far i i agree i've i've been i've been keeping up with the pod and i'm i'mthere there's not which there's only been a couple but there's not yet been a track that I've gone.Track 6:[20:52] You people are insane. And you are a light. I'm right there with them.Sure, this could be a top 50 for me.Yeah, I think it's a lovely song. 
I think his vocal tone is maybe one of the best that we've heard of performances delivered vocally.It's so clean and so smooth um i love all the atmospherics in this song nigel has.[21:19] Created like a soundscape you know for the rather sparse band arrangement which we're used to with this band you know sort of uh filling in the gaps really nicely i love how thesong opens with that almost it almost sounds like you're turning something on yeah like a flick of a switch or something doesn't it oh yeah yeah yeah uh for sure old electronics remindsme of or something yeah in a in a movie i don't know if old electronics really make that sound but i feel like they do yeah yeah yeah.[21:57] Um yeah i think i think you know that's a a thing about um like his vocals like sometimes, uh i'll listen to vocal takes of the pavement you know and be like i wish i was boldenough, to be okay with that uh and this isn't one of those right like this he sings very like like, on key and everything, which is cool, too, to have those differences.And then to, like, know that, like, on other songs, and of course I'm not thinking of any right now, but he does it a lot, right, where it's not necessarily on the correct pitch or with greattone.Track 2:[22:44] And so songs like this.Track 6:[22:45] Right, are just kind of like, well, yeah, he could have done it perfect, but it feels better, more fun to...To do it um more fun i guess or uh whatever and um and right here randy jackson being like you're a little pitchy dog yeah right and that's it that you know i mean like i said like if i'mever recording a song like there's no way i i would i would i'm very self-conscious about things like that and uh it's it's nice to have someone to look at and go it's so it's okay you You canhave fun with it, or you can do it more like you're a light and nail it and make a very pretty song.But then I do like how this song is almost cut in half, right?There's the first chunk, and then there's the second part, half.[23:40] Dynamically, there's tons of shifts. And that's another songwriting thing that I appreciate in this song. They don't go back to the first part.And I think in songwriting, I don't know.I feel like that's a tough thing to do, to go like, nope, it's just this and then this and then we're done.We don't need to overdo it.There's no reason to come back to even like a chorus, which I don't know.I mean, the song would be difficult to kind of say what is a chorus.Yeah. Yeah. yeah i suppose you are the you know like you are the light the the calm in the day you're the light the calm in the day um like i suppose that scores but you're right there's no,There's no pavement blueprint. We've heard six songs so far on the countdown, and they're all remarkably different.Track 4:[24:35] They're all remarkably different from a structure standpoint as well as just like a finished product.Track 6:[24:42] I love that too. I'm glad you pointed that out because it's like verse, chorus, verse, chorus, and then weirdness, and then sort of a bridge, and then sort of out. But none of it is...Songwriting 101 no and right like it is interesting i think because you could take a lot of these songs in this this top 50 and pretend well what if there was a band that was this like this wastheir entire thing uh and you know you'd be like oh that's they're cool uh, But right, pavement does do a lot of different things.And to me, that's more interesting. 
I think I get the impression from some of the diehards, which again, I'm not saying anyone's doing it wrong or anything, but sometimes people will get stuck on their idea of Pavement, or maybe the version of Pavement they like. And it can be annoying to them when the band diverges from that too far in their minds. But I look at it like:

[25:58] Well, I only have to listen to one band. I don't have to get into five or six other bands. It's making it easy for me. That's great.

[26:12] What do you think this song is about? Do you think it's about anything, or is it just word salad, or what's the deal? Man, I'm not a lyric guy. No, okay. I guess I'm more of a connotative lyric person, right? Okay, expand on that. These words feel a certain way together. It's not like a story. It's not like a linear tale, right?

Track 4:
[26:42] Right.

Track 6:
[26:43] And I'll do that even with songs that maybe are... like, someone will go, "Oh, that song, that's about him riding on a train," and I'll almost be disappointed when someone tells me that. I'm like, oh, I guess it is. Yeah, okay, I see. I like lyrics where my interpretation is, well, that makes me feel this way, and all of these words kind of come together to elicit an emotion. And that's sort of the vibe I get from Pavement lyrics. I think you're right. I think you're bang on. People talk about it like:

Track 4:
[27:26] Oh.

Track 6:
[27:26] It's just nonsense. And maybe in a literal sense, "oh, it's about this, it's this, I'm talking about these things," maybe that's true, but they always seem to me to be pretty carefully selected things.

[27:46] They elicit an emotion, a specific vibe and feeling. And yeah, like I said, I'm not really a lyric guy, I don't really pore over lyrics, but I did for this because I thought that would be a good thing to do, and that's when I learned, well, I am bad at it. I have no idea what this is about, but I like them all. I like cool words, and I read these and I go, well, this is cool. I like how this makes me feel. And they all are neat words together. Yeah, and some of them connect. I think You Are a Light, the calm and the day, I think that fits together. That may be about somebody, but maybe not. Lyrically, it almost reminds me of David Berman.

[28:37] "I drive a stick, gotta love it, automatic," the vocal delivery of that is really cool. So I think you're right, there's almost a Michael Stipe sort of thing going on, where it's like, this word sounds good with the melody, I'm going to use that in lieu of writing something heartfelt and linear, or something along those lines. I don't want to say this song isn't heartfelt, or other Pavement songs aren't. No, but you know what I mean. I know what you mean. And I think sometimes when you write a song, you might say, well, yes, this song is about the way I felt when this thing happened.

[29:21] But it's not about that thing, you know what I mean? It's more about the emotion. And like I said, I don't really enjoy story songs that much; I feel like you're sort of stripping away a layer for people to enjoy it. Because, you know, you're going, well, I don't have a red truck, so I can't... You're making me do more work, right? Because now I have to go, okay, that's the way you felt about your red truck. What could I feel that way about, instead of just talking about the emotion? I'm getting above my pay grade talking here. But yeah, I'm sure maybe ol' Malkmus could say, oh, yeah, this is about, like, "lethalizer slingshots" is about the time that we did this and this and this, but I don't know what that means. Yeah. Yeah.

Track 4:
[30:31] Swallow propane.

Track 6:
[30:32] I just know, hey, as much of a fan as I am, not going to do that, Steve. Not going to do that.

Track 3:
[30:40] No.

Track 6:
[30:40] I don't think I will. I don't think I will. Where do you think this fits? Do you think it's a good spot at 45, or is it properly rated, or would you have it higher up, or would you put it lower down? Yeah, I mean, I think for me, I'm going to have more issues with the top, because, sure, with people who are perhaps better fans than I am, and I know there's no such thing, it would be hard for me not to say...

Track 4:
[31:16] Oh.

Track 6:
[31:17] Cut Your Hair should be top five. What a great song. And I feel like it's going to get deeper cut, less pop song, toward the top. And this, I don't know, this kind of...

Track 3:
[31:33] To me.

Track 6:
[31:33] This would probably go higher for me, but I think... Man, they've got a lot of songs. They've got a lot of good songs. You have a lot of songs. 120 were selected for this process. 120 songs. I guess, really, this is sort of a fool's errand from the start. It's just kind of a fun way to talk about a bunch of songs. I think you've mentioned...

Track 3:
[31:57] You got me.

Track 6:
[31:57] You got me. Yeah, I think you've talked about it. It's like, well, yeah, this is 45 today, but next week...

Track 1:
[32:07] It wouldn't make my top 100 or something, you know. Pavement fans are a little fickle. Yeah. But if this was like Guitar Player magazine, I'll talk shit on them again, top 100 guitar players, this wouldn't be the one that gets me in the comment section going, "You're out of your mind. No way." Gotcha. So I would read it and I'd go, yeah. Did you see the guy who did the top 500 Guided by Voices songs? Holy shit. No. It might have been some publication, some music magazine. What a challenge. We'll have to check that out. I consider myself a pretty fair-weather fan of Guided by Voices, but I do like them. Now, that was a band I tried to get into late, and I went, I can't, there's no way, I can't do it, it's too much work. And I'm reading this top 500, and it was crazy to me that out of his top 30 I maybe knew two or three songs. Wow. I have to at least check the top 50 out and see how many get you in those comments going, "You're out of your mind." That's crazy. Well, I'm probably like you in that, you know, I've got Bee Thousand and I've got, gosh, I can't even think of the other records that I have.

Track 6:
[33:35] I don't know that I could name 50. I'm a bad fan here, because I don't know if I could name 50. They have tricky names to recall at times as well. Yeah. So this, right, this is certainly an easier undertaking. It makes more sense to me to do the top 50 Pavement songs than a top 500. I mean, at least here you can go, like, the difference between 30 and 25 makes sense. On a top 500, what is 450 to 442? Like, what is that? That's right. How do you even quantify that?

Track 3:
[34:22] But again...

Track 6:
[34:23] Well, most people had difficulty doing 20. Most people had difficulty doing 20 ranked, which is what I asked for. I asked for 20 ranked songs, and then I would get emails from people, and I'd be like, dude, just do your top five and then add another 15 songs. Because, like you said, it's tough once you get to a certain point. Like, what is 17 out of 20, you know what I mean? And I think this is a band where you go, if I'm in a bummer mood, they've got songs for that, and if I'm wanting to have a good fun party time, that's a different set of songs. Major Leagues was my most played on Spotify last year. Oh, cool. All right. Because you get that report at the end of the year, right? So yeah, we'll see, hopefully that makes it somewhere. I guess that would have made it easier for me, because, yeah, how would I pick a number one? I guess if I listened to that the most... I guess that would be your number one for 2023. I guess so. I guess, yeah. Well, Matt F. Basler, it's been great talking to you about Pavement, and I really appreciate your time. This podcast, season two here of Meeting Malkmus, is entirely shouldered by the guests, so you did a formidable job, and I appreciate that a lot.

[35:51] Is there anywhere that people can find you that you want to be found, or is there anything project-wise that you're working on that you want to talk about...

Track 4:
[36:01] Or anything like that?

Track 6:
[36:03] Yeah! Matt F. Basler everywhere. We're a band, I suppose.

Track 4:
[36:08] Matt F.

Track 6:
[36:08] Basler is a band and a me, and we're doing songs. I'm not going to say if you like Pavement, you'll like my stuff, but I think if someone was listing bands they liked, it wouldn't sound crazy if someone said, "I like Pavement, Matt F. Basler." It wouldn't be whiplash for someone to mention those two things sonically together. A couple years ago, though, we did a synth album of covers of modern country songs about beating people up. So that's maybe a little bit out there. Can you find it on Bandcamp?

Track 4:
[37:01] Yeah.

Track 6:
[37:02] Yeah, yeah. It's everywhere. Spotify, Apple Music, all that. And then, as an apology to country music for making a mockery of it, we made an album of country originals. Whoa. So we're doing a lot of stuff, we're doing some crazy stuff out there. That's cool. I hope I hear you on the Pod List this year. Do you know what the Pod List is? No. So every year I do something called a Pod List for my birthday, and I solicit tracks from talented Pavement fans, and they do covers, and then I put all the covers together in a podcast playlist, or a Pod List, and I get it sequenced by somebody who knows sequencing, and it's usually pretty fucking fun. That's wonderful, wonderful. I have a podcast; I guess that'll be a July podcast. I'm not good at naming things, so, yeah, it is just Matt F. Basler's Podcast. Oh, you're really good at naming things. Well, Sandy and Kevin are okay, they named me, so. Mom and Dad. Yeah, yeah. All right, brother, well, it's great talking to you. Like I said, that's what I've got for you this week.

Track 3:
[38:27] So stay cool and wash your goddamn hands. Thanks for listening to Meeting Malkmus, a Pavement podcast, where we count down the top 50 Pavement tracks as selected by you. If you've got questions or concerns, please shoot me an email, JD at meetingmalkmus dot com.

Support this podcast at https://redcircle.com/meeting-malkmus-a-pavement-podcast/exclusive-content
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy

Science In-Between
Episode 179: Back in Our Prime

Science In-Between

Play Episode Listen Later Feb 7, 2024 65:03


In this episode, we discuss the role that AI tools should play in the development of beginning researchers. We introduce the following tools: Elicit (https://elicit.com), Microsoft Copilot (https://copilot.microsoft.com/), Claude (https://claude.ai/chats), LitMaps (https://app.litmaps.com/library), Connected Papers (https://www.connectedpapers.com/), Research Rabbit (https://www.researchrabbit.ai/), and Consensus (https://consensus.app). Things that bring us joy this week: The Holdovers (https://www.miramax.com/movie/The-Holdovers/) and All of Us Strangers (https://www.searchlightpictures.com/all-of-us-strangers/). Intro/Outro Music: Notice of Eviction by Legally Blind (https://freemusicarchive.org/music/Legally_Blind)

Rob Dibble Show
EMILY FROM ELICIT BREWING

Rob Dibble Show

Play Episode Listen Later Jan 18, 2024 13:54 Transcription Available


The Rob Dibble Show was LIVE in Manchester celebrating UCONN vs Creighton!!!! Thanks to Bud Light, we got hooked up at Elicit!!!

The DigitalMarketer Podcast
How to Write Your Next Book Using AI with Markus Heitkoetter

The DigitalMarketer Podcast

Play Episode Listen Later Dec 26, 2023 23:11


How can you use AI to write your next book? As we get to the tail end of a year that has been punctuated by the introduction of AI apps that assist us in our creative process, Markus Heitkoetter has turned to AI as his sparring partner to help him write his next book. A successful real estate investor with a few book titles already under his belt, Markus has not been afraid to gamify his writing process and, hopefully, save his editor some time in the long run by turning to AI for constructive, objective feedback. In this episode, we look at the specifics of AI software such as Claude and Elicit that are perhaps better suited for uploading big documents and checking your research as you fine-tune your own prompt engineering process. There's still a creative process to be had that needn't be as daunting as staring at a blank page trying to draw blood from a stone. If your next masterpiece is waiting inside you but you're afraid to let it out, perhaps this conversation will encourage you to get the creative juices flowing as you put pen to paper and create 'your best book yet.'

Markus Heitkoetter is the Founder of RockwellTrading.com. A link to some of his previous book titles (written without AI) can be found below.

Key Takeaways:
01:14 Why has Markus decided to use AI to write his next book?
03:35 Using Claude (AI) as your sparring partner to create a book brief
05:29 Using AI to get constructive feedback on your ideas
07:48 The joy of instant feedback (and how that improves your creativity)
12:30 Learning to 'gamify' the writing process
16:57 Using Perplexity AI for your research (and ELICIT.ORG)

Resources Mentioned:
Claude AI - https://claude.ai/
Elicit AI - https://elicit.com/

Connect with Markus Heitkoetter:
Markus's Books on Amazon - https://www.amazon.com/s?k=markus+heitkoetter
Website - https://www.rockwelltrading.com/markus-up-close-and-personal/
Personal - Markus Heitkoetter - http://markus-heitkoetter.com/

Be sure to subscribe to the podcast at: https://www.digitalmarketer.com/podcast/
Facebook: https://www.facebook.com/digitalmarketer
Instagram: https://www.instagram.com/digitalmarketer/
LinkedIn: https://www.linkedin.com/company/digital-marketer/

This Month's Sponsors:
Conversion Fanatics - Conversion Rate Optimization Agency

Warfare of Art & Law Podcast
*Bonus* “Signs” from Elicit Justice: Conversations Off Grid

Warfare of Art & Law Podcast

Play Episode Listen Later Dec 22, 2023 3:54 Transcription Available


Featuring excerpts from Episode 121, an interview with Dr. Samson Munn, and musical composition by Toulme.

Please share your comments and/or questions at stephanie@warfareofartandlaw.com.

To hear more episodes, please visit the Warfare of Art and Law podcast's website.

To view rewards for supporting the podcast, please visit Warfare's Patreon page.

To leave questions or comments about this or other episodes of the podcast, and/or for information about joining the 2nd Saturday discussion on art, culture and justice, please message me at stephanie@warfareofartandlaw.com. Thanks so much for listening!

© Stephanie Drawdy [2023]

Co-Create With Carlie- Allow The Law of Attraction, Law of Assumption & Spirituality to Work For You
Day 351: Instead of Questioning Why Things Happen To Certain People, Start To See Them In Their Wholeness To Elicit Their Change To Better. -Abraham Hicks 365 Ways to Make Your Dreams a Reality

Co-Create With Carlie- Allow The Law of Attraction, Law of Assumption & Spirituality to Work For You

Play Episode Listen Later Dec 17, 2023 10:18


Energy precedes everything. That is why the best thing you can do at times is to begin the energetic changes that will start to elicit change in the 3D. YOU have to be the change you want to see in the world. And the only way you can do that for others is to see them in their wholeness and declare that as who they really are. Join me, your host & fellow co-creator, Carlie, on day 351 from Abraham Hicks' '365 Ways to Make Your Dreams a Reality', as we use our power of awareness to inspire change in others. Your perception is what governs your reality. It is time to start seeing others in their wholeness so you leave them with no other choice but to be the very best versions of themselves. ✨ If you are interested in personalized manifestation tools tailored to and made just for you, like self-concept rampages, guided meditations, affirmations, and subliminals, click here. ✨ Connect with me for more conscious creating on my Instagram. Happy Manifesting, Powerful Soul! Sending you all peace, love, and high vibes!

Be a Better Ally
Episode 159: Researchers As Change Makers

Be a Better Ally

Play Episode Listen Later Nov 23, 2023 10:55


Explore how Gender and Sexuality Alliances (GSAs) in schools can leverage cutting-edge AI research tools to enhance their advocacy and information design efforts. Specifically, this episode is an exploration of three powerful AI tools - Elicit, ChatPDF, and Consensus - and how they can be used to amplify the GSA's voice and impact. Those three resources are further explored in a free guide, which listeners can download: https://shiftingschools.lpages.co/ai-powered-research/ Reach out to Tricia at Tricia (at) shiftingschools (dot com) with questions about that guide or the January Shifting Schools AI Playground. Learn more at www.shiftingschools.com

Market Dominance Guys
EP203: AI Coaching Conversations Elicit Unfiltered Rep Feedback

Market Dominance Guys

Play Episode Listen Later Nov 8, 2023 27:51


The guys are tackling the big question on every sales manager's mind: could AI replace me? With wisdom and reassurance, Corey and Chris explore the power of Taylor, an AI sales coach created by grw.ai CEO Alex McNaughton. Taylor provides a judgment-free space for reps to vent frustrations and surface red flags managers miss. As Chris explains, Taylor's conversational skills elicit "confessions" from reps. And for managers worried an AI could do their job better, Alex gently says: "The goal here is to make leaders better... not replace them." So breathe easy, sales managers, and get ready to be 10-50x more effective. With the power of AI augmentation, sales managers will be unstoppable. Join us for this episode, AI Coaching Conversations Elicit Unfiltered Rep Feedback.

About Our Guest: Alex McNaughton - CEO/Founder - Grw.ai
With a background in B2B sales for both Kiwi startups and US tech giants, Alex is passionate about increasing the level of professionalism and performance in B2B selling globally. Prior to Apprento, through his advisory firm, he trained hundreds of founders, executives and sales professionals and worked across 130+ ANZ businesses, from pre-revenue startups like SafeStack Academy, to growth companies like Rocos, to large multinationals like Vodafone, helping them reduce their sales costs, speed up sales cycles, maximize win rates, build out teams, expand into new markets and ultimately generate tens of millions of dollars in new revenues.

Links from this episode: Grw.ai, Branch49, ConnectAndSell, Alex McNaughton on LinkedIn, Corey Frank on LinkedIn, Chris Beall on LinkedIn

The Sports Junkies
Will Chase Young's return to DC elicit boos from Commanders fans?

The Sports Junkies

Play Episode Listen Later Nov 7, 2023 8:35


The Great Sources with Rabbi Shnayor Burton
S5, E3 Exodus, Exile and Redemption, Introduction, Part 3: How to Elicit the Torah's Deepest Secrets

The Great Sources with Rabbi Shnayor Burton

Play Episode Listen Later Sep 26, 2023 2:41


"Exodus, Exile and Redemption" is a study of the profound significance of Judaism's history. Written essays are published bi-weekly here: https://shnayor.substack.com/s/from-exodus-to-exile-to-redemption Please subscribe! This series is a project of the Jacob Lights Foundation. To support this and other ongoing projects of the foundation, please consider becoming a paid subscriber to the Substack newsletter or making a donation via Zelle to jacoblightsfoundation@gmail.com.

ShopTalk » Podcast Feed
583: Language Models, AI, and Digital Gardens with Maggie Appleton

ShopTalk » Podcast Feed

Play Episode Listen Later Sep 18, 2023 56:26


Show Description
Maggie Appleton talks with us about her work at Elicit, working with large and small language models, how humans vet the responses from AI, the discussion around the Shoggoth meme in AI, using Discord as a UI, what to do if your boss wants AI in your app, and why she calls her blog a digital garden.

Guest
Maggie Appleton
Design at Elicit. Makes visual essays about UX, programming, and anthropology. Adores digital gardening, end-user development, and embodied cognition.

Links
Maggie Appleton
Language Model Sketchbook, or Why I Hate Chatbots
Maggie Appleton | Dribbble
Squish Meets Structure: Designing with Language Models
Ought
FAQ | Elicit
577: Shawn Wang on AI - ShopTalk
Introducing Whisper
Photoshop (beta) on the desktop
Midjourney
Llama 2 - Meta AI
LukeW | Ask
Maggie Appleton (@Mappletons) / X

Mind Pump: Raw Fitness Truth
2119: Getting Leaner By Eating More Calories, Ways to Improve Muscle Definition, How to Train When You are Feeling Burnt Out & More (Listener Live Coaching)

Mind Pump: Raw Fitness Truth

Play Episode Listen Later Jul 15, 2023 103:55


In this episode of Quah (Q & A), Sal, Adam & Justin coach four Pump Heads via Zoom.

Mind Pump Fit Tip: Do the LEAST amount of work to ELICIT the most amount of change. (2:06)
Recapping Maximus' 4th birthday party. (14:59)
The value of sign language for kids. (23:49)
Effective natural methods to dispose of ants and other insects. (27:26)
Big Pharma's advertising spend game. (35:40)
Glyphosates are EVERYWHERE! (43:13)
The guys' take on the Jonah Hill emotional abuse allegations. (46:30)
It's all about timing when it comes to elections. (50:03)
Threads vs. Twitter. (58:23)
An update from Sal on the peptide BPC-157. (59:04)
An easy way to boost your protein is with Paleo Valley's bone broth protein powder. (1:00:35)
Shout out to Bret Johnson. (1:01:41)
#ListenerLive question #1 - How do you reverse your diet if you're mostly eating meat? (1:03:10)
#ListenerLive question #2 - Is it ok to jump from program to program? My goal is to gain back muscle definition. (1:15:48)
#ListenerLive question #3 - How would you recommend getting back into working out after battling through a severe burnout? (1:29:01)

Related Links/Products Mentioned:
Ask a question to Mind Pump, live! Email: live@mindpumpmedia.com
Visit PRx Performance for an exclusive offer for Mind Pump listeners!
Visit Paleo Valley for an exclusive offer for Mind Pump listeners! **Promo code MINDPUMP15 at checkout for 15% discount**
July Promotion: MAPS Starter | MAPS Starter Bundle 50% off! **Code JULY50 at checkout**
Mind Pump #2112: Is 15 Minutes Enough Time For An Effective Workout?
10 Plants That Repel Bugs - Herbs, Shrubs, Flowers Insects Hate
70% of News Advertising Now Belongs to Big Pharma
CDC finds toxic weedkiller in 87 percent of children tested
Farming robot kills 200,000 weeds per hour with lasers
What To Know About Jonah Hill's Emotional Abuse Allegations
Video of Donald Trump Shaking Joe Rogan's Hand at UFC Fight
"Mamas For DeSantis" Ad: "When You Come After Our Kids, We Fight Back"
100 Million Sign-ups In 5 Days. 8 Reasons Why Threads Is Blowing Up
Visit Organifi for the exclusive offer for Mind Pump listeners! **Promo code MINDPUMP at checkout**
MP Holistic Health
MAPS 15 Minutes
Mind Pump Podcast – YouTube
Mind Pump Free Resources

People Mentioned:
Dr. Stephen Cabral (@stephencabral) Instagram
Bret Johnson (@bretjohnson11) Instagram
Gary Vay-Ner-Chuk (@garyvee) Instagram

BEER MAN BEER
Episode 114: Hard Rock Bock

BEER MAN BEER

Play Episode Listen Later Apr 24, 2023 76:20


WE MADE A BEER! This episode we are at Elicit Brewing Company in Manchester, Connecticut to taste our first beer collaboration with Brian Ayers. Brian is the head brewer at Elicit and invited us to help make his first Maibock, which we named Hard Rock Bock. Brian joins us on the show to talk about the brewing process on the coldest day in February, why "mashing out" is fun for podcasters and not for brewers, and Elicit's new location in Fairfield, Connecticut. Plus, we are joined by a panel of beer experts to taste our beer and let us know if they think it is Solid or Not Solid. Big Mike and Brando from Brewheads Entertainment join us, as well as Dave Pesky from Industrial Mechanics Brewing and Beer Man Beer favorite Briizy Hiltz, to try our beer. While we enjoy our new beer, we also get into conversation about Briizy getting married, how we feel about furries, what Keg got hit in the forehead with in Rhode Island, "nudy bars" in Wildwood, New Jersey, and the first time each of us was pulled over by police. A fun time with the Beer Fam trying our first beer collaboration with Elicit. Thank you to Brian Ayers for the opportunity. Hard Rock Bock is on tap now at Elicit Brewing Company in Manchester. Solid. #Cannonball!