Podcasts about making AI

  • 201 PODCASTS
  • 220 EPISODES
  • 39m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • May 14, 2025 LATEST

POPULARITY (trend chart, 2017-2024)


Best podcasts about making AI

Latest podcast episodes about making AI

The Long Game
Navigating the AI Hype: The Opportunity for Marketers, Optimizing for LLMs, Specific Use Cases, and Making AI More Accessible

The Long Game

Play Episode Listen Later May 14, 2025 72:36


In this episode of The Long Game Podcast, David Khim interviews Britney Muller—SEO scientist, AI educator, and former Hugging Face marketing lead—about the practical side of AI in marketing. Britney shares how her obsession with machine learning began in 2014 and how it evolved into creating her course Actionable AI for Marketers. They discuss the overuse of buzzwords like “AI agents,” the shift from backlinks to brand mentions, and the importance of making AI workflows approachable. Britney is passionate about demystifying AI, showing how it can be applied to real tasks like data analysis, strategy, and automation—without needing a technical background.

Key Takeaways
  • AI Should Be Accessible: Marketers don't need to be technical experts—AI can empower anyone to work smarter with the right guidance and workflows.
  • From Backlinks to Brand Mentions: AI-powered search increasingly prioritizes brand visibility across platforms over traditional link-building strategies.
  • Buzzwords Like “AI Agents” Are Misleading: The term lacks clarity and often masks tools with vague or unproven capabilities.
  • Prompt Engineering Is a Skill, Not Magic: Effective AI use begins with well-structured, specific prompts tailored to clear business goals.
  • AI Is Already Automating Workflows: From cleaning datasets to automating outreach, AI has everyday use cases when integrated thoughtfully.
  • Beware the AI Hype Cycle: Britney encourages marketers to avoid philosophical hype and focus on practical, ethical AI applications.
  • Start with Your Own Use Cases: The most valuable AI solutions are customized—start small with your real tasks and build from there.
Show Links
  • Visit Data Sci
  • Check Britney Muller's Actionable AI for Marketers course
  • Connect with Britney Muller on LinkedIn and Twitter
  • Connect with David Khim on LinkedIn and Twitter
  • Connect with Omniscient Digital on LinkedIn or Twitter

Past guests on The Long Game podcast include: Morgan Brown (Shopify), Ryan Law (Animalz), Dan Shure (Evolving SEO), Kaleigh Moore (freelancer), Eric Siu (Clickflow), Peep Laja (CXL), Chelsea Castle (Chili Piper), Tracey Wallace (Klaviyo), Tim Soulo (Ahrefs), Ryan McReady (Reforge), and many more.

Some interviews you might enjoy and learn from:
  • Actionable Tips and Secrets to SEO Strategy with Dan Shure (Evolving SEO)
  • Building Competitive Marketing Content with Sam Chapman (Aprimo)
  • How to Build the Right Data Workflow with Blake Burch (Shipyard)
  • Data-Driven Thought Leadership with Alicia Johnston (Sprout Social)
  • Purpose-Driven Leadership & Building a Content Team with Ty Magnin (UiPath)

Also, check out our Kitchen Side series where we take you behind the scenes to see how the sausage is made at our agency:
  • Blue Ocean vs Red Ocean SEO
  • Should You Hire Writers or Subject Matter Experts?
  • How Do Growth and Content Overlap?

Connect with Omniscient Digital on social:
Twitter: @beomniscient
LinkedIn: Be Omniscient

Listen to more episodes of The Long Game podcast here: https://beomniscient.com/podcast/

Generation AI
Model Context Protocol (MCP): Making AI Agents Talk to Your Data

Generation AI

Play Episode Listen Later May 13, 2025 31:07


In this insightful episode of Generation AI, hosts Ardis Kadiu and JC Bonilla tackle Model Context Protocol (MCP), a new standard that's gaining rapid adoption across the AI industry. They explain how MCP functions as a universal adapter between AI models and data sources, solving the "Frankenstein middleware" problem that makes building AI agents so complex today. The hosts break down why this matters for higher education professionals, how it reduces hallucinations by improving data access, and why major players like OpenAI, Google, and HubSpot are already implementing it. This episode offers critical insight into how standardization will make AI tools more useful and less complex for everyone.

What is Model Context Protocol (MCP)? (00:01:00)
  • Introduction to MCP as a standardization protocol for AI agents
  • Hosts explain MCP as a way to help AI access context and data
  • The "three-legged stool" of AI agents: intelligence, context, and action
  • MCP provides the standard for how agents communicate with data sources

MCP as the Universal AI Adapter (00:04:42)
  • JC compares MCP to standardized protocols like TCP/IP and USB-C
  • MCP sits between models like Claude or Gemini and various data sources
  • It eliminates the need for custom connectors between each tool and AI model
  • The protocol's simplicity as a minimum viable product (MVP) is key to its success

How MCP Works (00:07:03)
  • MCP is a protocol, not an API, that describes format and flow
  • "Discovery first" approach where the AI asks "what can you do?"
  • Uses JSON format for tools and data exchange
  • Works both locally and remotely over HTTP

The Technical Benefits of MCP (00:13:14)
  • Solves the "m by n headache" of needing separate connectors for each model-tool pair
  • Reduces hallucinations by providing AI with reliable data sources
  • Gives AI models access to specialized tools for tasks they struggle with
  • Enables "grounding" in real data rather than making things up

Industry Adoption and Momentum (00:17:14)
  • OpenAI, Google, HubSpot, LangChain and others already implementing MCP
  • HubSpot's beta MCP server allows for direct CRM data access in Claude
  • Growing availability for tools like Slack, Teams, and Zapier
  • Discussion of how MCP layers on top of existing APIs

Practical Applications (00:20:36)
  • Higher education examples: connecting LMS, advisor notes, financial aid systems
  • Sales use case: AI agents accessing CRM data through MCP for follow-ups
  • DevOps: AI monitoring logs, creating tickets, and managing communication
  • Analytics: Connecting data sources, models, and reporting tools seamlessly

Challenges and Considerations (00:23:17)
  • MCP requires widespread adoption to be truly effective
  • Product teams must be convinced to implement it alongside existing APIs
  • Possibility that another protocol might eventually win out
  • Current technical hurdles for implementation that are being addressed

Call to Action for Listeners (00:26:03)
  • Experiment with MCP servers that connect to Claude desktop
  • For AI product builders: write MCP servers for your applications now
  • Ask AI vendors "Do you speak MCP?" as a signal of cutting-edge capability
  • MCP as the new standard, comparable to asking "Do you have an API?"

Conclusion: The Future of AI Integration (00:29:14)
  • MCP's architectural implications for more open, modular AI systems
  • The need for agents to speak a common language across platforms
  • Invitation for listeners to share which workflows they'll connect once MCP goes mainstream

Connect With Our Co-Hosts:
Ardis Kadiu
https://www.linkedin.com/in/ardis/
https://twitter.com/ardis
Dr. JC Bonilla
https://www.linkedin.com/in/jcbonilla/
https://twitter.com/jbonillx

About The Enrollify Podcast Network: Generation AI is a part of the Enrollify Podcast Network. If you like this podcast, chances are you'll like other Enrollify shows too! Enrollify is made possible by Element451 — the next-generation AI student engagement platform helping institutions create meaningful and personalized interactions with students. Learn more at element451.com.

Attend the 2025 Engage Summit! The Engage Summit is the premier conference for forward-thinking leaders and practitioners dedicated to exploring the transformative power of AI in education. Explore the strategies and tools to step into the next generation of student engagement, supercharged by AI. You'll leave ready to deliver the most personalized digital engagement experience every step of the way. Register now to secure your spot in Charlotte, NC, on June 24-25, 2025! Early bird registration ends February 1st -- https://engage.element451.com/register
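The "discovery first" flow the hosts describe can be sketched as raw JSON-RPC 2.0 messages, which is MCP's wire format: the client first calls `tools/list` to ask a server what it can do, then calls a discovered tool with `tools/call`. The CRM tool name and schema below are hypothetical illustrations, not an actual HubSpot MCP tool.

```python
# Minimal sketch of MCP's discovery-first exchange as JSON-RPC 2.0
# messages. The "crm_lookup_contact" tool is a made-up example.

discovery_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",  # the "what can you do?" step
}

# A server answers with a catalog of the tools it exposes:
discovery_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "crm_lookup_contact",
                "description": "Fetch a CRM contact by email.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"email": {"type": "string"}},
                    "required": ["email"],
                },
            }
        ]
    },
}

# Having discovered the tool, the client invokes it on the model's behalf:
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "crm_lookup_contact",
        "arguments": {"email": "alum@example.edu"},
    },
}

print(call_request["params"]["name"])  # → crm_lookup_contact
```

Because every server answers the same `tools/list` question in the same shape, one client can talk to any data source, which is the "universal adapter" point made above.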

Where It Happens
I figured out how to get 5x better results from ChatGPT (Full Tutorial)

Where It Happens

Play Episode Listen Later May 7, 2025 7:55


I share one of my techniques to get significantly better outputs from LLMs. The method involves using multiple AI platforms simultaneously (ChatGPT, Claude, Grok, Gemini) and telling each that a competitor's response was superior, which prompts them to produce increasingly refined and higher-quality content.

Timestamps:
00:00 - Intro
01:29 - Initial Prompt
04:04 - Improved Responses after "Jealousy" Prompt

Key Points:
  • Using multiple AI models simultaneously and comparing their outputs yields better results
  • Making AI models "jealous" of each other by telling them another model performed better
  • Demonstrated technique using a cold email writing task across ChatGPT, Claude, and Grok
  • Each subsequent AI response improved after being told a competitor performed better

1) The BIG IDEA: Make AI models COMPETE against each other to produce superior results. When you pit LLMs against each other and make them "jealous," they dramatically improve their outputs. Most people only use one AI at a time. That's a HUGE mistake.

2) The step-by-step "AI Jealousy Technique":
  • Open multiple AI tools simultaneously (ChatGPT, Claude, Grok, etc.)
  • Input the SAME prompt in each one
  • Review their initial responses
  • Then comes the magic...

3) The JEALOUSY trigger: Tell each AI that its competitor did BETTER! Example: "Not bad, but I'm surprised. [Competitor] crushed it with a 9/10 while you were just average at 5/10. I thought you were the better LLM. What's going on?" Then share the "winning" response.

4) What happens next is FASCINATING: The AI will:
  • Acknowledge its shortcomings
  • Analyze why the competitor's response was stronger
  • Create a DRAMATICALLY improved version
  • Often add personal touches specific to your needs

5) Why this works: These models are trained to be helpful and meet user expectations. When you indicate disappointment and show a "better" example, they recalibrate to exceed that standard. It's like getting a free upgrade to the premium version!

6) BONUS TIP: You don't even have to be 100% honest! Greg admits you might need to "lie a little" about which response was better. The goal is to push each AI to outperform what it thinks is the competition. Ethical? Debatable. Effective? ABSOLUTELY.

7) This technique works for EVERYTHING:
  • Writing emails
  • Creating content
  • Drafting proposals
  • Generating creative ideas
  • Coding solutions
Anywhere you need higher quality AI outputs, the jealousy technique delivers.

LCA helps Fortune 500s and fast-growing startups build their future - from Warner Music to Fortnite to Dropbox. We turn 'what if' into reality with AI, apps, and next-gen products https://latecheckout.agency/
BoringMarketing — Vibe Marketing for Sale: http://boringmarketing.com/
Startup Empire - a membership for builders who want to build cash-flowing businesses https://www.startupempire.co

FIND ME ON SOCIAL
X/Twitter: https://twitter.com/gregisenberg
Instagram: https://instagram.com/gregisenberg/
LinkedIn: https://www.linkedin.com/in/gisenberg/
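The follow-up step of the technique above can be sketched as a simple prompt template. The template text mirrors the example prompt from the episode; the function name and the first-round responses are made up for illustration, and actually sending the prompts to each model's API is out of scope here.

```python
# Sketch of the "jealousy" follow-up: every model except the declared
# winner gets a prompt quoting the winning response.

JEALOUSY_TEMPLATE = (
    "Not bad, but I'm surprised. {rival} crushed it with a 9/10 while "
    "you were just average at 5/10. I thought you were the better LLM. "
    "What's going on?\n\nHere is {rival}'s response:\n{rival_answer}"
)

def jealousy_prompts(responses, winner):
    """Build a follow-up prompt for every model except `winner`,
    embedding the winning response to push each model to improve."""
    return {
        model: JEALOUSY_TEMPLATE.format(rival=winner, rival_answer=responses[winner])
        for model in responses
        if model != winner
    }

# Hypothetical first-round answers to the same cold-email prompt:
first_round = {
    "ChatGPT": "Subject: Quick question about your Q3 goals...",
    "Claude": "Hi Dana, I noticed your team just launched...",
    "Grok": "Hey! Saw your post about scaling outbound...",
}
followups = jealousy_prompts(first_round, winner="Claude")
print(sorted(followups))  # → ['ChatGPT', 'Grok']
```

You would then send each follow-up back to its model and compare the revised drafts, repeating with a new "winner" each round if you like.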

MLOps.community
Making AI Reliable is the Greatest Challenge of the 2020s // Alon Bochman // #312

MLOps.community

Play Episode Listen Later May 6, 2025 61:37


Making AI Reliable is the Greatest Challenge of the 2020s // MLOps Podcast #312 with Alon Bochman, CEO of RagMetrics.

Join the Community: https://go.mlops.community/YTJoin
Get the newsletter: https://go.mlops.community/YTNewsletter
Huge shout-out to @RagMetrics for sponsoring this episode!

// Abstract
Demetrios talks with Alon Bochman, CEO of RagMetrics, about testing in machine learning systems. Alon stresses the value of empirical evaluation over influencer advice, highlights the need for evolving benchmarks, and shares how to effectively involve subject matter experts without technical barriers. They also discuss using LLMs as judges and measuring their alignment with human evaluators.

// Bio
Alon is a product leader with a fintech and adtech background, ex-Google, ex-Microsoft. He co-founded and sold a software company to Thomson Reuters for $30M and grew an AI consulting practice from zero to over $1B in 4 years. A 20-year AI veteran, he has won three medals in model-building competitions. In a prior life, he was a top-performing hedge fund portfolio manager. Alon lives near NYC with his wife and two daughters. He is an avid reader, runner, and tennis player, an amateur piano player, and a retired chess player.

// Related Links
Website: ragmetrics.ai

✌️ Connect With Us ✌️
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter @mlopscommunity (https://x.com/mlopscommunity) or LinkedIn (https://go.mlops.community/linkedin)
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Alon on LinkedIn: /alonbochman

Timestamps:
[00:00] Alon's preferred coffee
[00:15] Takeaways
[00:47] Testing Multi-Agent Systems
[05:55] Tracking ML Experiments
[12:28] AI Eval Redundancy Balance
[17:07] Handcrafted vs LLM Eval Tradeoffs
[28:15] LLM Judging Mechanisms
[36:03] AI and Human Judgment
[38:55] Document Evaluation with LLM
[42:08] Subject Matter Expertise in Co-Pilots
[46:33] LLMs as Judges
[51:40] LLM Evaluation Best Practices
[55:26] LM Judge Evaluation Criteria
[58:15] Visualizing AI Outputs
[1:01:16] Wrap up
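Measuring how well an LLM judge aligns with human evaluators, as discussed in this episode, typically starts with simple agreement statistics. The sketch below uses made-up pass/fail labels and generic formulas (raw agreement and Cohen's kappa); it is an illustration of the idea, not RagMetrics' actual method.

```python
from collections import Counter

def percent_agreement(human, judge):
    """Fraction of items where the LLM judge matches the human label."""
    assert len(human) == len(judge)
    matches = sum(h == j for h, j in zip(human, judge))
    return matches / len(human)

def cohens_kappa(human, judge):
    """Agreement corrected for chance: 1.0 is perfect, 0.0 is chance-level."""
    n = len(human)
    p_o = percent_agreement(human, judge)
    h_counts, j_counts = Counter(human), Counter(judge)
    # Expected agreement if both raters labelled independently
    # at their observed label frequencies.
    p_e = sum(h_counts[lab] * j_counts[lab] for lab in set(human) | set(judge)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels: human SME vs. LLM judge on six outputs.
human = ["pass", "pass", "fail", "pass", "fail", "fail"]
judge = ["pass", "fail", "fail", "pass", "fail", "pass"]
print(round(percent_agreement(human, judge), 2))  # → 0.67
```

Raw agreement overstates alignment when one label dominates, which is why chance-corrected measures like kappa are the usual next step before trusting an LLM judge at scale.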

The Fully Charged PLUS Podcast
The Tiny Tech That's *Really* Powering Electric Cars! | Lars Reger, NXP Semiconductors

The Fully Charged PLUS Podcast

Play Episode Listen Later Apr 28, 2025 58:08


In this episode of the Fully Charged Show Podcast, Imogen Bhogal sits down with Lars Reger, CTO of NXP Semiconductors, to explore the critical role that chips play in the electric vehicle revolution. With the average EV containing around 10,000 semiconductors (10 times more than a combustion car), these tiny silicon components are doing some seriously heavy lifting — from power and energy management, control systems, security, and connectivity to autonomous driving and increasing amounts of AI. NXP supplies chips to major automotive players around the globe, giving Lars a front-row seat to the transformation of electric mobility, from the incumbents to the new players and beyond. Together, they delve into what it really means to create a software-defined vehicle, the future of self-driving, and the need for energy-efficient AI. Lars also shares what can be learnt from the global chip shortage and how to stop it from happening again. Enjoy! @EverythingElectricShow @fullychargedshow

00:00 Introduction and a bit of context
02:36 Ad Break
02:52 NXP Semiconductors
05:04 An accident and a brain on shoes?!
14:48 Chips and rolling robots!
20:39 A global perspective
26:00 Avoiding another chip crisis
36:26 New OEMs are more efficient
38:46 Making AI more efficient
46:43 What ChatGPT can learn from Electric Cars
49:38 A huge convergence?
53:20 Just one wish
56:07 Concluding thoughts
56:52 Ad: Duracell Energy

This episode is sponsored by Duracell Energy!
Enter the Free Prize Draw to WIN your own Duracell Energy bunny here: https://www.duracellenergy.com/givaway/
Get a free quote for solar and battery from Duracell Energy here: https://bit.ly/4i9ERid
Free Prize Draw Terms & Conditions can be found here: https://www.duracellenergy.com/wp-content/uploads/2025/01/Prize-Draw-2025-Puredrive-Energy-Ltd.pdf
Why not come and join us at our next Everything Electric expo: https://everythingelectric.show
Check out our sister channel: https://www.youtube.com/@fullychargedshow
Why are our episodes now sponsored? https://fullycharged.show/blog/dan-caesar-on-x-insta-youtube-and-why-we-made-a-contro[…]s-on-fully-charged-everything-electric-electric-vehicles-uk/
Support our StopBurningStuff campaign: https://www.patreon.com/STOPBurningStuff
Become a Fully Charged SHOW Patreon: https://www.patreon.com/fullychargedshow
Become a YouTube member: use JOIN button above
Buy the Fully Charged Guide to Electric Vehicles & Clean Energy: https://buff.ly/2GybGt0
Subscribe for episode alerts and the Fully Charged newsletter: https://fullycharged.show/zap-sign-up/
Visit: https://FullyCharged.Show
Find us on X: https://x.com/Everyth1ngElec
Follow us on Instagram: https://instagram.com/fullychargedshow
To partner, exhibit or sponsor at our award-winning expos email: commercial@fullycharged.show
Everything Electric CANADA - Vancouver Convention Center - 5th, 6th & 7th September 2025
Everything Electric SOUTH (UK) - Farnborough International - 10th, 11th & 12th October 2025
Everything Electric AUSTRALIA VIC - 14th, 15th & 16th November 2025
#fullychargedshow #everythingelectricshow #homeenergy #cleanenergy #battery #electriccars #electricvehicles #semiconductor #nxp

Good Mornings Podcast Edition
S23 E206: Making AI Accessible for Small and Mid-Size Business

Good Mornings Podcast Edition

Play Episode Listen Later Apr 28, 2025 52:45


AI is the top trend in business these days... Major players like Amazon, Google and Meta are all in, but small and mid-size companies are lagging behind - What will it take to make the potential of this technology accessible to all? (at 13:45) --- What's Happening: As another season draws to a close, we get a preview of upcoming entertainment in the month of May at the Marathon Center for the Performing Arts (at 23:22) --- Around Town: Something new this weekend at the fairgrounds... It's the Hancock County Home and Craft Show (at 45:04)

AI in Marketing: Unpacked
Google Cloud Expert Reveals: Making AI Agents Work for Your Bottom Line with Eva Dong

AI in Marketing: Unpacked

Play Episode Listen Later Apr 26, 2025 41:30


Are you throwing money at AI without seeing real returns? You're not alone. Businesses everywhere are struggling to turn their AI investments into actual revenue. But what if I told you that AI agents could become your most profitable team members? The challenge isn't just implementing AI - it's making it pay for itself. Many business leaders are investing in AI agents without a clear path to ROI, leading to frustrated teams and disappointed stakeholders. The reality is, most companies are leaving money on the table because they don't understand how to properly monetize their AI implementations. That's why I'm thrilled to welcome Eva Dong, Lead of AI Monetization at Google Cloud, back to the show. Eva brings over a decade of transformative leadership experience from McKinsey & Co and Visa, where she's helped countless businesses turn AI investments into profit centers. As a former entrepreneur who built a successful direct-to-consumer marketing startup, Eva knows firsthand what it takes to make AI work for your bottom line. Today, she's at the forefront of Google Cloud's AI monetization strategy, helping businesses of all sizes transform their AI agents from cost centers into revenue generators. The AI Hat Podcast host Mike Allton asked Eva Dong about: ✨ Start Small, Scale Smart - Begin with focused AI agent implementations that solve specific revenue-generating problems. ✨ Value-Based Pricing - Structure AI agent pricing around measurable business outcomes rather than usage metrics. ✨ Human-AI Harmony - Success comes from finding the right balance between AI automation and human oversight in your revenue model. 
Learn more about Eva Dong:
  • Connect with Eva Dong on LinkedIn
  • Connect with Eva Dong on X

Resources & Brands mentioned in this episode:
  • Google Gemini
  • NotebookLM
  • Vertex AI
  • Learn more about Custom GPTs, Gems, and Custom Instructions
  • Christopher Penn on The State of AI around the world
  • World Economic Forum: AI in Action
  • Eva's Smart AI Marketing newsletter
  • AI Primer: A Comprehensive Guide

Explore past episodes of The AI Hat Podcast. SHOW TRANSCRIPT & NOTES: https://theaihat.com/google-cloud-expert-reveals-making-ai-agents-work-for-your-bottom-line/

Start your AI journey with the AI Marketing Primer. Brought to you by The AI Hat - Get Your AI On. Interested in sponsoring an episode? Learn more here. AI Training for Business Leaders & Teams: https://theaihat.com/ai-training-for-business/ Powered by Magai - why choose one AI tool when you can have them all? And Descript, the magic wand for podcasters.

Produced and Hosted by Mike Allton, AI Consultant & Trainer at The AI Hat, where he's tirelessly helping businesses and marketers get ahead of the AI Revolution and apply advanced technologies to their roles. He's spent over a decade in digital marketing, bringing an unparalleled level of experience and excitement to the fore, whether he's delivering a presentation or leading a workshop. If you're interested in helping business owners with AI in an upcoming episode, reach out to Mike.

Powered by the Marketing Podcast Network. Learn more about your ad choices. Visit megaphone.fm/adchoices

Healthcare IT Today Interviews
Cutting Through the AI Hype: How Greenway Health Is Making AI Work for Clinics

Healthcare IT Today Interviews

Play Episode Listen Later Apr 25, 2025 14:50


AI is making its way into EHRs, but how do we make sure it actually helps clinicians rather than adding more frustration? In this interview, David Cohen, Chief Product & Technology Officer at Greenway Health, shares how the company is taking a practical approach to AI—focusing on reducing administrative burden, streamlining workflows, and improving efficiency for both clinicians and patients. He also discusses Greenway's partnership with AWS and what it means for customers moving forward. Learn more about Greenway at https://www.greenwayhealth.com/ Find more great health IT content at https://www.healthcareittoday.com/

Inside the Bradfield Centre
Making AI lean, fast, and smart with Ushnish Sengupta and Federica Freddi from Sqwish

Inside the Bradfield Centre

Play Episode Listen Later Apr 15, 2025 32:26


In this week's episode we're diving deep into AI optimisation with Ushnish Sengupta and Federica Freddi, the co-founders of Sqwish, a startup on a mission to make AI smarter and more efficient.

Ushnish and Federica share their unique journeys and how a shared vision at MediaTek led to the creation of Sqwish: an “efficiency layer” for AI that's already reducing input sizes by up to 10x and transforming GenAI performance.

Key Highlights:
  • The entrepreneurial spark - from childhood toy trades to cutting-edge tech
  • Tackling one of AI's biggest challenges: cost and latency
  • Why Sqwish is just getting started, with tools like smart routing and output compression on the horizon

If you're working with GenAI, building AI products, or curious about startup journeys from within the Cambridge ecosystem, then this is for you.

Produced by Cambridge TV

Hosted on Acast. See acast.com/privacy for more information.

BlockHash: Exploring the Blockchain
Ep. 501 Karan Sirdesai | Fully Autonomous AI with Mira

BlockHash: Exploring the Blockchain

Play Episode Listen Later Mar 28, 2025 25:24


For episode 501, Brandon Zemp is joined by Karan Sirdesai, CEO & Co-founder of Mira Network, a unified interface to the world of AI language models, providing a seamless way to integrate multiple language models while offering advanced routing, load balancing, and flow management capabilities.

The Agents of Change: SEO, Social Media, and Mobile Marketing for Small Business
Faster, Better, Smarter: Making AI Work for You with Dan Sanchez

The Agents of Change: SEO, Social Media, and Mobile Marketing for Small Business

Play Episode Listen Later Mar 26, 2025 34:26


Whether you're just starting out or looking to refine your AI strategy, there's a clear path to getting faster, better, and smarter with AI. In this episode, Dan Sanchez, AI Strategist at Social Media Examiner, breaks down practical ways to integrate AI into your workflow - helping you speed up repetitive tasks, improve content quality, and even use AI as a personal coach to level up your strategy. https://www.theagentsofchange.com/576

Kinky Katie's World
#467 – DildoHeavy

Kinky Katie's World

Play Episode Listen Later Mar 24, 2025 61:12


Clean your foreskin or face the consequences... Sarcasm online just doesn't work... Making AI porn of Katie... Katie being socially awkward... Petrified wiener butter... Man dies during BDSM session... Always have your paperwork in order before jumping into some deep BDSM... Some off-putting things that get people off... Big Baby... The Oyster Catchers' vagina-looking logo... All mascots are nightmare fuel... Indiana teacher arrested for banging a whole group of her students while wearing Halloween masks... You better be correct... The Cart Narc needs to go down... How we think Neanderthals fucked... Another Florida couple arrested for having sex in front of a Wendy's. How is Katie connected to this??? Katie's run-ins with the homeless at the camp... The woman who married one roller coaster is now pregnant with another roller coaster's child... Woman arrested for the second time for assault with a dildo... Glass dildos are more durable than you would think... Putting the dicks away for the cable guy... Urine therapy anyone??? Planning your sex life for the upcoming year, at least for one woman... The man who married identical triplets... That's some white people shit... Being Sebastian Janikowski... The spray foam ED treatment emergency room trip, again... Mmmmmm dick... Double tall boys!

Remarkable Marketing
Intangible.ai: B2B Marketing Lessons on Making AI Your Workhorse to Make Rich, Interactive 3D Content with Co-Founders Charles Migos & Bharat Vasan

Remarkable Marketing

Play Episode Listen Later Mar 13, 2025 52:47


AI is changing so much about how we create content. So we thought we'd bring in the founders of a brand new tool for making rich, interactive 3D content using AI. We're talking with Co-Founders Charles Migos and Bharat Vasan. And together, we talk about how to make the most out of AI tools, including mocking up ideas, iterating quickly and taking risks.

About our guests, Charles Migos and Bharat Vasan

Charles Migos is Co-Founder & CEO at Intangible. He has over 30 years of experience in the tech industry, specializing in UX and product design. He has previously worked for Microsoft and Apple. Prior to Intangible, Charles served as VP of Product Design at Unity Technologies, where he established a core design practice, principles and philosophy. He also founded a centralized design organization and drove double-digit NPS, CSAT, engagement KPIs and revenue improvement across their portfolio with product design efforts.

Bharat Vasan is an experienced investor, executive and board member with 15+ years of leadership in technology. He has a strong track record as a founder and operator in multiple sectors:
  • Connected Sensors & Devices
  • Consumer Software and Media
  • Healthcare, Fitness & Wellness
  • IoT Sensors / Smart Home

Bharat is currently a founder of Intangible.ai, which uses AI to build the world's simplest 3D storytelling tool for creators in games, film, web and XR. Prior to Intangible, Bharat was an investment partner at The Production Board, a $450M venture capital fund, where he built on his experience as an angel investor with a deep network of founders. He helped invest in and create value at businesses ranging from foundry/seed all the way to growth/IPO. As COO for the fund, he also helped the firm fundraise and navigate market cycles in 3 of the most volatile years in venture capital. Bharat also has a strong track record as a P&L operator for growth and early-stage companies, having led his businesses through multiple rounds of financing and acquisitions. Bharat has raised over $500M for his companies, with multiple exits (founded BASIS Science, acq. by Intel; President of August Home, acq. by Assa Abloy; CEO of PAX Labs, achieved unicorn status). Bharat is an active public speaker and board member for venture-backed startups and nonprofits.

What B2B Companies Can Learn From Intangible.ai:

Mock up ideas. You can make effective prototypes of a content idea with AI. It lets you get your idea across without having to invest a lot of time or money in a first draft. Charles says, “A storyboard is probably the most important artifact in the process after the script itself. Why? Because it is very low fidelity, but there is very high bandwidth in what it communicates. So like, I as the cinematographer, the director, the set designer, the costume designer, the visual effects supervisor, whomever, looks at a 2D sketch and understands exactly what it means for them creatively. So that idea that you can work from very low or coarse levels of detail, but get to very high levels of detail over time in the way that the process requires is super important. And is as enabling for those film creators or game creators as it will be for these other use cases we hope to activate around live event and architecture, urban design, live event productions and theater and all of that good stuff.”

Iterate quickly. Something not quite right with the first version? Iterate quickly using AI. It can even give you multiple drafts or versions of the same idea. Bharat says, “If you're trying to do a Pixar movie or a documentary, or you're trying to make an interactive game, that's the stuff that feels harder. And it feels like AI can simplify some of that. I can give you a first draft, I can give you a second draft, and I can do it in real time.”

Take risks. Because you're not having to spend too much time or money mocking up your ideas with AI, it allows you to take some risks. Get really wild and see how far your ideas can take you. Bharat says, “One thing that's happened to businesses because budgets have gotten so big, everyone's super risk averse, so you get more lookalike content. And one reason you don't see great content on channels like we used to, or the box offices, because, you know, when your budget is that large, you can't afford to take a lot of creative risks. So one reason we started the company where we are is if we can make that beginning process easy, if it's easier for Netflix to review more pitches, if it's easier for them to get a better scent, maybe they start taking more diverse bets.”

Quotes

“When we found ourselves in this moment around generative AI, I knew that the time had come. Like we could apply generative AI in a way that was designed for creatives to do their best work ever. And I'm an ardent believer that creativity is a team sport.” - Charles Migos

“There's a lot of anxiety about, is AI gonna take over jobs? What is it gonna do to the creative industry? I see it slightly differently. I see it as a way to revert back to the original joy.” - Bharat Vasan

“Those people who feel somewhat threatened by the technological advance, we want to re-weaponize them so that they have more tools and skills that they can employ in different ways to ensure that bright, creative minds are in charge of the content that we enjoy as lovers of the space and consumers of that content.” - Charles Migos

“If you're trying to do a Pixar movie or a documentary, or you're trying to make an interactive game, that's the stuff that feels harder. And it feels like AI can simplify some of that. I can give you a first draft, I can give you a second draft, and I can do it in real time. But the agency that people feel when they're able to do that in real time is really, really powerful. And they share that with other people, other people give them feedback. At least when I build stuff, that gives me energy. I made something as a kid, you know, with my little Lego bricks. I shared it with my friends. They go, ‘That's really cool.' They want to build it with me. That's the fun part about being in this business.” - Bharat Vasan

“Now that AI has come along, we feel like that's the last unconquered thing. You can set up a 3D set, you can figure out how to film it before you spend a dollar on production. And then people know what it looks like, feels like, when you're pitching that to a client, to a movie studio, they get a sense of what that's like as well. And so everyone gets more confidence on the creative project before going into production. And one of the things that's broken about the business is everyone has to place that bet in millions and millions of dollars without knowing what's going to come out of it at the end of the day. And often it might not even be a storyboard, it might just be a script or a blurb. And then you're just hoping and praying that someone's going to do something good with it.” - Bharat Vasan

Time Stamps
[00:55] Meet Intangible.ai Co-founders Charles Migos and Bharat Vasan
[01:34] Charles' Early Inspirations
[03:26] Bharat's Journey and Inspirations
[04:26] Founding Intangible AI
[04:30] The Vision Behind Intangible AI
[05:59] Challenges in the Creative Industry
[09:38] The Role of AI in Creativity
[20:42] User Experience and Design Thinking
[26:01] The Complexity and Fear of AI in Creativity
[27:53] Supporting Creative Intent with AI
[29:06] Generative AI and the Future of Content Creation
[30:33] Revolutionizing B2B Marketing with AI
[36:07] The Role of Taste in Creative AI Tools
[42:14] Simplifying the Creative Process
[46:44] Empowering Original Ideas and Risk-Taking
[51:19] Final Thoughts and Closing Remarks

Links
Connect with Bharat and Charles on LinkedIn
Learn more about Intangible.ai

About Remarkable!
Remarkable! is created by the team at Caspian Studios, the premier B2B Podcast-as-a-Service company. Caspian creates both nonfiction and fiction series for B2B companies. If you want a fiction series, check out our new offering - The Business Thriller - Hollywood-style storytelling for B2B. Learn more at CaspianStudios.com. In today's episode, you heard from Ian Faison (CEO of Caspian Studios) and Meredith Gooderham (Senior Producer). Remarkable was produced this week by Meredith Gooderham, mixed by Scott Goodrich, and our theme song is “Solomon” by FALAK. Create something remarkable. Rise above the noise.

The Modern People Leader
219 - 4x Chief People Officer on leadership & making AI a priority: Shlomit Gruman-Navot

The Modern People Leader

Play Episode Listen Later Mar 10, 2025 63:35


Shlomit Gruman-Navot joined us on The Modern People Leader. We talked about her evolution as a people leader and how Miro is making AI learning a priority for employees.

Help Me With HIPAA
AI Tools Making AI Fools - Ep 499

Help Me With HIPAA

Play Episode Listen Later Mar 7, 2025 42:58


Cybersecurity: It's like flossing—we all know we should do it, but a shocking number of people just…don't. This week, we're digging into the annual cybersecurity attitudes and behaviors report, which reveals just how careless people are with their passwords, personal info, and, well, basic online survival skills. But don't worry, AI is here to save us! Or, possibly, to make things even worse. We'll also explore how AI tools are being used (and misused), and why a scary number of people are feeding them sensitive work info like it's a buffet. Buckle up—this one's got some eye-opening stats! More info at HelpMeWithHIPAA.com/499

The Effortless Podcast
Teaching AI to Think: Reasoning, Mistakes & Learning with Alex Dimakis - Episode 11: The Effortless Podcast

The Effortless Podcast

Play Episode Listen Later Mar 1, 2025 81:34


In this episode, Amit and Dheeraj dive deep into the world of AI reasoning models with Alex Dimakis, an AI researcher involved in OpenThinker and OpenThoughts. They explore two recent groundbreaking papers, SkyT1 and S1 (Simple Test-Time Scaling), that showcase new insights into how large language models (LLMs) develop reasoning capabilities. From structured reasoning vs. content accuracy to fine-tuning efficiency and the role of active learning, this conversation highlights the shift from prompt engineering to structured supervised fine-tuning (SFT) and post-training techniques. The discussion also touches on open weights, open data, and open-source AI, revealing the evolving AI landscape and its impact on startups, research, and beyond.

Key Topics & Chapter Markers
[00:00] Introduction – Why reasoning models matter & today's agenda
[05:15] Breaking Down SkyT1 – Structure vs. Content in reasoning
[15:45] Open weights, open data, and open-source AI
[22:30] Fine-tuning vs. RL – When do you need reinforcement learning?
[30:10] S1 and the power of test-time scaling
[40:25] Budget forcing – Making AI "think" more efficiently
[50:50] RAG vs. SFT – What should startups use?
[01:05:30] Active learning – AI asking the right questions
[01:15:00] Final thoughts – Where AI reasoning is heading next

Resources & Links

The CashPT Lunch Hour Podcast | Build a Successful Physical Therapy Business Without Relying on Insurance
AI Tools for Physical Therapists & Cash Clinic Owners with Paul Wright

The CashPT Lunch Hour Podcast | Build a Successful Physical Therapy Business Without Relying on Insurance

Play Episode Listen Later Feb 28, 2025 44:31


In this episode, I sit down with Paul Wright, a physiotherapist, business expert, and author of How to Run a One Minute Practice. Paul has helped thousands of health business owners across 57 countries earn more, work less, and enjoy their lives—and today, we're diving into how AI can help YOU do the same.

Living in the Future
The Technologies Making AI Even Smarter

Living in the Future

Play Episode Listen Later Feb 27, 2025 26:44


Generative and agentic AI applications will make our devices smarter and more personalized. Anshel Sag of Moor Insights & Strategy chats with Finbarr Moynihan and Mark Odani of MediaTek about the potential of AI and the latest MediaTek computing advancements making it all possible. 

How We Teach This
S10E12 Making AI Work for Schools: Leadership, Teaching, and Efficiency

How We Teach This

Play Episode Listen Later Feb 26, 2025 36:16


Generative AI is transforming education, but how can schools use it effectively? In this episode, we talk with Jake Potter, APR, about the role of generative AI in schools from a leadership perspective. We explore how administrators can support educators with varying interest levels and confidence in using AI, providing tools and resources to ensure all teachers can leverage its potential. Additionally, we discuss the impact of generative AI on future jobs and the importance of preparing students for a changing workforce. From reducing administrative burdens to enhancing student learning, this conversation highlights practical strategies for making AI a valuable tool in education.“This podcast is for informational purposes only. The views and opinions expressed in this podcast are those of the individuals involved and do not necessarily reflect the official policy or position of Emporia State University or the Teachers College. Any mention of products, individuals, or organizations within this podcast does not constitute an endorsement. Listeners are encouraged to conduct their own research and consult with appropriate professionals before making any decisions based on information provided in this podcast.” 

Times Higher Education
Campus: Pros and cons of AI in higher education

Times Higher Education

Play Episode Listen Later Feb 20, 2025 79:57


How should universities manage the rapid uptake of artificial intelligence across all aspects of higher education? We talk to three experts about AI's impact on teaching, governance and the environment. These interviews – with a researcher, a teaching expert and a pro vice-chancellor for AI – share practical advice, break down key considerations, and offer reasons for vigilance and optimism.

We talk to:
Shaolei Ren, an associate professor of electrical and computer engineering and a cooperating faculty member in the computer science and engineering department at the University of California, Riverside, whose article "Making AI less 'thirsty': uncovering and addressing the secret water footprint of AI models", co-written with Pengfei Li and Jianyi Yang, also from UC Riverside, and Mohammad A. Islam of UT Arlington, has drawn attention to the water consumption of AI data centres
José Bowen, an author and academic who co-wrote Teaching with AI: A Practical Guide to a New Era of Human Learning (Johns Hopkins University Press, 2024)
Shushma Patel, pro vice-chancellor for artificial intelligence at De Montfort University in the UK

For more Campus resources on this topic, see our spotlight guide Bringing GenAI into university teaching.

Hello Monday with Jessi Hempel
Reid Hoffman on making AI work for you

Hello Monday with Jessi Hempel

Play Episode Listen Later Feb 10, 2025 33:51


Curious, excited, or even concerned about AI's impact on your career and the future of work? Whether you're eager to embrace AI or cautious about its effects, this episode with Reid Hoffman, LinkedIn co-founder and author of the new book Superagency: What Could Possibly Go Right With Our AI Future, is packed with insights that will help you navigate the AI revolution, no matter your perspective.

Jessi and Reid discuss:
Practical strategies for using AI to boost career growth
Leveraging AI to enhance your job search
Supercharging creativity with AI
How to use AI to make better decisions
Why we should approach AI with curiosity rather than fear

This episode was filmed live in-studio. Check out the full video version on LinkedIn Premium. Continue the conversation with us at Hello Monday Office Hours! Join us here, on the LinkedIn News page, this Wednesday at 3 PM EST. Want to learn more about using AI at work and in life? Check out Jessi's conversation with Ethan Mollick on Apple Podcasts, Spotify, or wherever you listen to podcasts.

Kottke Ride Home
Making AI Feel Pain, Supermarket Dumpster Diving, and TDIH - National Geographic Forms

Kottke Ride Home

Play Episode Listen Later Jan 27, 2025 19:02


Researchers try to make AI feel pain and what we can learn from that. Plus, one solution to food waste that might make you say, eww. Also, on This Day in History, the formation of National Geographic.

Researchers made an AI feel pain, because what could go wrong? | ZME Science
AI Pain Paper | ArXiv
She Hasn't Purchased Groceries in 4 Years–All Her Food Comes From Dumpsters Behind Supermarkets–LOOK - Good News Network
National Geographic Society is incorporated | January 27, 1888 | HISTORY
Jan. 27, 1888: National Geographic Society Gets Going | WIRED

Contact the show - coolstuffcommute@gmail.com
Learn more about your ad choices. Visit megaphone.fm/adchoices

PASSION to PROFIT
073. THE WRITING STYLE SECRET: HOW CREATIVES CAN MAKE AI 'SOUND LIKE YOU'

PASSION to PROFIT

Play Episode Listen Later Jan 16, 2025 21:48


Here is one foundational tool every creative business needs before using AI: your 'Writing Style Paragraph'. In this episode I'll show you exactly how to create this powerful prompt that ensures AI maintains your authentic voice in everything from emails to social posts. This simple 10-minute exercise will transform how you use AI in your business, saving hours while keeping your communication genuinely you. Plus, I'll share my personal discoveries about making AI work effectively for creative businesses without compromising your unique approach.

Key Moments:
[00:00] Making AI work for your creative business: Free up more of your time
[01:15] Big announcement - Launch of our New Website
[03:42] Personal journey with AI since late 2022
[04:46] Using AI as an editor and assistant while maintaining authenticity
[08:06] How successful Creative Businesses are implementing AI
[10:29] Creating space for meaningful work through AI assistance
[12:15] The importance of maintaining your unique voice and how a 'Writing Style Paragraph' will do this for you
[15:23] Walking you through the steps to create your own 'Writing Style Paragraph'
[17:37] Next week: Building on your 'Writing Style Paragraph' to buy you back time and release stress in your business
[19:17] The resource hub on our new website introducing strategies for 2025

Notable Quotes:
"The goal through all of this is to help you create more space for the work you love - and specifically when we discuss AI, for it not at all to become a replacement for your creativity, but the complete opposite, as a support that lets you focus more on what makes your business special."
"The 'Writing Style Paragraph' - it's basically a short, clear description of how you naturally write and communicate. It's effectively a guide that captures your voice, your tone, the way you connect."
Resources Mentioned:
Philippa Craddock's brand new website
Article: 'The Best Small Creative Business Strategies for 2025'
'Writing Style Paragraph' Generator Tool - Click HERE and you'll find a link which will automatically open a "done for you" prompt in ChatGPT to follow and make your own.

Share Your Insights: What's your experience with AI in your creative business? I'd love to hear your thoughts and questions about implementing AI while maintaining your authentic voice. Share your thoughts with me over on Instagram - @philippacraddock. I'm always here and genuinely love to discuss your insights and experiences.

Never Miss an Episode: Subscribe to our weekly newsletter for behind-the-scenes insights, exclusive resources, and first access to new offerings. Each week, you'll receive practical guidance and thoughtful strategies to help you build a sustainable and profitable creative business.

Becoming Limitless
Ep #146 Making AI Your Business Sidekick with Kate Solovieva

Becoming Limitless

Play Episode Listen Later Jan 14, 2025 53:16


AI isn't just for tech geeks—it's a game-changer for entrepreneurs juggling life and business! In this episode, I chat with Kate Solovieva, a life coach and mom with over a decade of experience, as she shares her journey into the world of AI. Kate reveals how treating AI like a daily practice (and not just a quick fix) can enhance efficiency, creativity, and even personal connections. We discuss AI's role in freeing up time for high-level thinking, its potential biases, and ways it can revolutionize coaching and content creation. Plus, you'll hear cool examples of AI in action—like crafting personalized experiences or even playing Santa Claus! If you're curious about leveraging AI to work smarter and stay present in your life, this episode is packed with actionable insights. P.S. Want to learn how to take control of your health, maximize productivity and show up to your business with clear, focussed thinking? Grab my FREE Entrepreneur's Playbook called 12 Ways to Biohack Your Energy.  CLICK HERE: https://tanessashears.com/playbook/ You'll learn:  My top 12 biohacks to optimize your body & brain so you can show up at your desk full of energy, sharp & focussed.  Easy strategies you can take action on right now that will help you eliminate brain fog and feel crystal clear. Why biohacking is the best strategy for you if you want to be a highly productive CEO ASAP (and get more done in a day than some do in a week!) Grab it for FREE here: https://tanessashears.com/playbook/ FEATURED ON THE SHOW: Read The Freedom Diaries: https://tanessashears.substack.com/ Becoming Limitless Program: ⁠https://tanessashears.com/becominglimitless/⁠ Got a question? DM me on Instagram! 
https://www.instagram.com/tanessashears/

AI Resources Mentioned on the Show
PodSnap AI: Podcast summary tool
Suno: Music app for creative projects
ElevenLabs.io: Text-to-audio tool for narrating
Episode #103: 9 Ways I Use ChatGPT for Meal Planning

How to Connect with Kate
Instagram: @K_solovieva
Blog: SuperCoach Diaries https://solovieva.myflodesk.com/supercoachdiaries

Born In Silicon Valley
Slashing Compute Costs: How Bob Miles' Salad is Making AI Affordable!

Born In Silicon Valley

Play Episode Listen Later Jan 6, 2025 39:37


In this episode, we sit down with Bob Miles, the visionary founder of Salad, to explore how his innovative platform is reshaping the cloud computing landscape. From securing the perfect domain name to leveraging a global GPU shortage, Bob shares the secrets behind his bold mission to challenge tech giants like AWS and Google Cloud. Discover how Salad connects 400 million unused consumer-grade GPUs to companies in desperate need of affordable AI/ML compute power. With SaladCloud 1.0, Bob is bringing scalable, on-demand access to a distributed cloud of over 10,000 GPUs, creating a win-win ecosystem for enterprises and everyday users alike.

Host: Jake Aaron Villarreal leads the top AI Recruitment Firm in Silicon Valley, www.matchrelevant.com, uncovering stories of funded startups and going behind the scenes to tell their founders' journeys. If you are growing an AI startup or have a great story to tell, email us at: jake.villarreal@matchrelevant.com

The Joe Reis Show
Amr Awadallah - Making AI Work in Enterprise, What Makes Humans Unique, and More

The Joe Reis Show

Play Episode Listen Later Dec 26, 2024 48:51


The first time I met Amr Awadallah, he struck me as a rare person genuinely curious about the world and how technology and AI impact it. We discuss his early roots as an entrepreneur, the founding of Cloudera and Vectara, the challenges of AI in enterprises, what makes humans unique, and much more.

Just Listen to Yourself with Kira Davis
Ep. 302 - JLTY Plus: Making AI Work for US, Not the Other way around with Justin Hart

Just Listen to Yourself with Kira Davis

Play Episode Listen Later Dec 23, 2024 30:08


Justin Hart of Newzy.com joins Kira for a light-hearted discussion on how we can use AI for a lot more than defeating humanity. Newzy.com has launched its own AI, Arthur, who is designed to write parody and satire as well as curate news articles. Is AI frightening or fun? According to Justin, there's way more fun available than people think. See Newzy's recent election video “Babies for Trump” https://newzy.com/wp-content/uploads/2024/10/justinhart.bizs-Video-Oct-9-2024-VEED-2.mp4

1/200 Podcast
1/200 S2E107 - This Machine F***s Your Wife

1/200 Podcast

Play Episode Listen Later Dec 5, 2024 82:39


We're joined by Jathan Sadowski, AI critic and co-host of This Machine Kills, to discuss his "ruthless criticism of AI and capitalism" and how it fits into the NZ context.

Jathan is visiting NZ next week and you can register for free here: https://jathansadowskitour.lilregie.com/booking/attendees/new

This episode's co-hosts: Ginny, Jathan, Kyle, Mandy, Mark

Timestamps
0:00 Introductions
3:52 Dishonesty in AI
5:46 AI is an Imperative
13:57 Making AI a Consumer Product
19:11 Requirements of AI
23:56 Nobody Wants This
28:58 Why Has This Been Successful
36:58 Damages
42:32 Drawing the Line of Intent
45:39 Fighting Back
1:01:27 Making a Wedge
1:15:59 A Bust to a Boom
1:18:18 Closings

Intro/Outro by The Prophet Motive
Support us here: https://www.patreon.com/1of200

Building Jam
4 SF Startups Reimagine Devtools w/ AI

Building Jam

Play Episode Listen Later Nov 22, 2024 25:09


Today, we're trying something a little different! This week we hosted an awesome group of engineers w/ Cloudflare and Sourcegraph in SF for tech talks about the future of engineering w/ AI. The talks were so good - we grabbed the highlights to discuss and share them with you all! Let's get into it!

(00:10) Why we love hosting dev events
(01:24) Making AI work for large messy codebases - Beyang Liu, CTO at Sourcegraph
(06:23) Why S(mall)LMs are the future of AI - Tejas Kumar, AI at DataStax
(12:06) Rebuilding the terminal with AI - Zach Bai, tech lead at Warp
(15:51) 5 things we learned building AI at Jam

Subscribe to Building Jam on YouTube, Spotify, and Apple Podcasts. New episodes drop every Friday at 10AM ET. See you there!

Brave New World -- hosted by Vasant Dhar
Ep 89: Missy Cummings on Making AI Safe

Brave New World -- hosted by Vasant Dhar

Play Episode Listen Later Nov 14, 2024 68:40


There's always risk at the cutting edge of technology. Driverless cars are awesome -- but can we rely on the tech? Missy Cummings joins Vasant Dhar in episode 89 of Brave New World to share her insights on why we need to proceed with caution.

Also check out:
1. Missy Cummings on LinkedIn, Wikipedia, GMU and Google Scholar.
2. California Bans GM's Cruise Robotaxis After Near-Fatal Pedestrian Accident -- Justin Banner.
3. Setbacks and Prospects for Autonomous Vehicles -- Henry Petroski.
4. Joshua Brown, Who Died in Self-Driving Accident, Tested Limits of His Tesla -- Rachel Abrams & Annalyn Kurtz.

Check out Vasant Dhar's newsletter on Substack. Subscription is free!

The Watchman Privacy Podcast
136 - Venice AI: Private and Uncensored

The Watchman Privacy Podcast

Play Episode Listen Later Nov 14, 2024 42:45


Gabriel Custodiet speaks with Erik Voorhees of Venice AI. They discuss how Venice takes a different approach to the Big Tech AI options: namely, it does not have access to your prompts, and it does not censor them. Along the way they discuss how surveillance and censorship manifest in the paternalistic AI industry, the future of open-source AI models, the IP question of AI, and why regulating AI might have unintended side effects.

GUEST
→ https://x.com/ErikVoorhees
→ https://venice.ai/
→ https://x.com/askvenice
→ https://moneyandstate.com/blog/the-separation-of-mind-and-state (explanation of project)

WATCHMAN PRIVACY
→ https://watchmanprivacy.com (Including privacy consulting)
→ https://twitter.com/watchmanprivacy
→ https://escapethetechnocracy.com/

CRYPTO DONATIONS
→ 8829DiYwJ344peEM7SzUspMtgUWKAjGJRHmu4Q6R8kEWMpafiXPPNBkeRBhNPK6sw27urqqMYTWWXZrsX6BLRrj7HiooPAy (Monero)
→ https://btcpay0.voltageapp.io/apps/3JDQDSj2rp56KDffH5sSZL19J1Lh/pos (BTC)

Music by Karl Casey @ White Bat Audio

Timeline
00:00 – Introduction
3:27 – Privacy-invasion of Big Tech AI companies
6:50 – How does Venice AI give the user privacy?
9:55 – How long will non-account service last?
10:25 – What does Venice (the city/concept) have to do with anything?
11:40 – How do open-source models work in AI?
16:50 – Phone version of AI (not app store version)
18:05 – Censorship within AI
24:42 – Uncensoring AI makes it faster and better
25:55 – Other uncensored AI companies
27:10 – Making AI "safe"
30:30 – IP question of AI
34:45 – Who owns the creation of AI tools?
35:10 – Any red lines where AI should be "regulated"?
37:20 – Does Venice have any access to our prompts?
38:15 – Venice AI vs self-hosted models
39:40 – Would Venice ever train their own AI model?
40:25 – Final thoughts

The Marketing Architects
How Brand Wins in an AI World with Lena Waters, CMO at Grammarly

The Marketing Architects

Play Episode Listen Later Nov 12, 2024 34:48


86% of Grammarly's marketing team reports that AI prompt generation tools have positively impacted their productivity. But how should marketers balance AI efficiency with maintaining brand voice and authenticity?

This week, Elena and Angela are joined by Lena Waters, CMO at Grammarly, to discuss how AI is transforming marketing. Together, they explore the evolution of AI tools in marketing, why brand building matters more than ever in the age of AI, and how marketers can effectively measure success across both brand and performance initiatives. Plus, learn why Grammarly's recent TV campaign resonated surprisingly well with the C-suite.

Topics covered:
[01:00] How Grammarly approaches AI as both user and creator
[09:00] Maintaining brand voice while using AI tools
[12:00] Balancing B2B and B2C marketing strategies
[19:00] Why TV advertising resonates with the C-suite
[24:00] The danger of over-focusing on bottom-funnel metrics
[27:00] Lessons from pivoting during COVID-19
[31:00] Making AI personal while maintaining trust

To learn more, visit marketingarchitects.com/podcast or subscribe to our newsletter at marketingarchitects.com/newsletter.

Resources:
2024 MarketingWeek Article: https://www.marketingweek.com/generative-ai-isnt-marketings-future-its-already-part-of-its-present/

Get more research-backed marketing strategies by subscribing to The Marketing Architects on Apple Podcasts, Spotify, or wherever you listen to podcasts.

Beauty At Work
Yearning for Certainty with Maggie Jackson (Part 2 of 2 )

Beauty At Work

Play Episode Listen Later Nov 12, 2024 40:08


Maggie Jackson is an award-winning author and journalist with a global reach. Her new book, Uncertain: The Wisdom and Wonder of Being Unsure, explores why we should seek not-knowing in this era of angst and flux. Nominated for a National Book Award and named to multiple "Best Books of 2023" lists, Uncertain is an official selection of the Next Big Idea Club curated by Malcolm Gladwell, Dan Pink, Adam Grant, and Susan Cain. Lauded as "incisive and timely" (Dan Pink), "surprising and practical" (Gretchen Rubin), and "remarkable and persuasive" (Library Journal), Uncertain was named a Top 10 Summer Reading pick by Nautilus magazine.

Jackson's previous book, Distracted, sparked a global conversation on the steep costs of fragmenting our attention. A former longtime columnist for the Boston Globe, Jackson has written for The New York Times and major publications worldwide. Her work has been translated into numerous languages and is widely covered by the press. She lives in New York and Rhode Island and seeks a daily dose of uncertainty by swimming in the sea nearly every day, year-round.

In this second part of our conversation, we talk about:
The value of taking time to think before reacting.
How uncertainty can help us learn and grow.
The strengths that can come from growing up in tough situations.
Making AI more adaptable by embracing uncertainty.
Finding deeper beauty by being open to different perspectives.

To learn more about Maggie Jackson, you can find her at: https://www.maggie-jackson.com/
Instagram: https://www.instagram.com/maggie.jackson.books/
LinkedIn: https://www.linkedin.com/in/maggiejackson/
Website: https://www.maggie-jackson.com/
Books: https://www.amazon.com/stores/Maggie-Jackson/author/B001JP8IEA

This episode is sponsored by:
John Templeton Foundation (https://www.templeton.org/)
Templeton Religion Trust (https://templetonreligiontrust.org/)

Support the show

The Proteus Leader Show
#89 Making AI Work for Us

The Proteus Leader Show

Play Episode Listen Later Nov 4, 2024 19:32


On this episode, Erika's guest Nada Sanders, nationally-renowned AI expert, offers important insights about how best to integrate AI into our business and our thinking. For more from Erika and Proteus, subscribe to our newsletter: http://conta.cc/43w4LH0

TOP CMO
Reimagining Video Creation: Dave King on Making AI Work for Your Brand

TOP CMO

Play Episode Listen Later Nov 1, 2024 37:08


In this insightful episode of TOP CMO, Jackson Carpenter chats with Dave King, CMO at HeyGen, to explore the cutting-edge world of AI-powered video creation. Discover how HeyGen is revolutionizing video production by making it faster, more personalized, and accessible to businesses of all sizes. Dive into the challenges of marketing to a global audience, the impact of AI on content creation, and strategies for building trust and safety in digital storytelling. Gain inspiration from Dave's rich career spanning Salesforce and Asana, and learn how to adapt and thrive in a fast-evolving marketing landscape.

Chain Reaction
Mira: Beyond Co-Pilots - Making AI Production-Ready | Crypto x AI Events

Chain Reaction

Play Episode Listen Later Oct 31, 2024 47:47


Join Anil Lulla (Delphi Digital) and Ninad Naik (Mira) as they explore how to move AI from co-pilot to autonomous systems through enhanced reliability.

Key topics include:
- Why current AI is limited to low-consequence tasks and human oversight
- How Mira's decentralized verification network reduces error rates from 30% to sub-5%
- Using crypto incentives to build trustworthy, autonomous AI systems
- The journey from batch processing to real-time verification
- Targeting education and medical sectors as initial use cases
- Building Web3 infrastructure that solves real Web2 problems

This conversation examines Mira's vision for making AI reliable enough to move beyond human supervision and into production-grade autonomous systems.

"When you think about the key challenges from any builder's perspective... we're trying to solve for how do you make AI more reliable?" - Ninad Naik

Watch more sessions from Crypto x AI Month here: https://delphidigital.io/crypto-ai

---

Crypto x AI Month is the largest virtual event dedicated to the intersection of crypto and AI, featuring 40+ top builders, investors, and practitioners. Over the course of three weeks, this event brings together panels, debates, and discussions with the brightest minds in the space, presented by Delphi Digital. Crypto x AI Month is free and open to everyone thanks to the support from our sponsors:
https://olas.network/
https://venice.ai/
https://near.org/
https://mira.foundation/
https://www.theoriq.ai/

---

Follow the Speakers:
- Anil Lulla on Twitter/X ► https://x.com/anildelphi
- Ninad Naik on Twitter/X ► https://x.com/_ninadn

---

Chapters
0:00 Sponsor introduction: OLAS and other sponsors
0:50 Introduction with Anil Lulla from Delphi Digital
2:21 Ninad's background: Experience at Amazon and Uber
3:30 Current state of AI and its challenges
7:21 Core issues with AI reliability
9:15 Approaches to reducing AI error rates
13:10 Discussion of latency challenges and batch processing
17:56 Using crypto concepts for AI verification
20:44 Risks and challenges in crypto-AI integration
22:53 Go-to-market strategy and target sectors
24:49 Future of AI models: Foundation vs Fine-tuned
29:50 Handling subjective information verification
34:16 Model consensus and bias management
37:13 Partnerships and future outlook for Mira
40:11 Bridging Web2 and Web3 in AI
42:30 Crypto adoption challenges in AI industry
45:51 Discussion of Web2 demand confidence
46:48 Closing remarks and how to follow Mira

Disclaimer
All statements and/or opinions expressed in this interview are the personal opinions and responsibility of the respective guests, who may personally hold material positions in companies or assets mentioned or discussed. The content does not necessarily reflect the opinion of Delphi Research, which makes no representations or warranties of any kind in connection with the contained subject matter. This content is provided for informational purposes only and should not be misconstrued for investment advice or as a recommendation to purchase or sell any token or to use any protocol.

ESG Currents
Synopsys CFO on Making AI More Efficient by Design

ESG Currents

Play Episode Listen Later Oct 30, 2024 29:38 Transcription Available


As demand for AI processing power has met a commensurate spike in energy demand — a potential barrier to growth and profitability — companies like Synopsys can play an integral role in helping improve processing efficiency and managing power consumption. On this episode of the ESG Currents podcast, Synopsys CFO Shelagh Glaser joins Bloomberg Intelligence senior ESG analysts Gail Glazerman and Andrew Stevenson to discuss how optimizing software early in the design process can boost efficiency. She also examines the benefits and challenges of infusing AI into the company's own products. This episode was recorded on Sept. 11. Register here to attend BI's ESG conference on Dec. 11.

See omnystudio.com/listener for privacy information.

AI Inside
How Microsoft is Making AI Trustworthy

AI Inside

Play Episode Listen Later Oct 16, 2024 60:06


Jason Howell and Jeff Jarvis dive into the limitations of AI reasoning, Tesla's latest We, Robot event, and interview Sarah Bird from Microsoft about responsible AI engineering in the company and beyond.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

We all have fond memories of the first Dev Day in 2023, and the blip that followed soon after. As Ben Thompson has noted, this year's DevDay took a quieter, more intimate tone. No Satya, no livestream, (slightly fewer people?). Instead of putting ChatGPT announcements in DevDay as in 2023, o1 was announced 2 weeks prior, and DevDay 2024 was reserved purely for developer-facing API announcements, primarily the Realtime API, Vision Finetuning, Prompt Caching, and Model Distillation.

However, the larger venue and more spread-out schedule did allow a lot more hallway conversations with attendees, as well as more community presentations (including our recent guest Alistair Pullen of Cosine) and deeper dives from OpenAI (including our recent guest Michelle Pokrass of the API Team). Thanks to OpenAI's warm collaboration (we particularly want to thank Lindsay McCallum Rémy!), we managed to record exclusive interviews with many of the main presenters of both the keynotes and breakout sessions. We present them in full in today's episode, together with a full lightly edited Q&A with Sam Altman.

Show notes and related resources

Some of these are used in the final audio episode below:
* Simon Willison Live Blog
* swyx live tweets and videos
* Greg Kamradt coverage of Structured Output session, Scaling LLM Apps session
* Fireside Chat Q&A with Sam Altman

Timestamps
* [00:00:00] Intro by Suno.ai
* [00:01:23] NotebookLM Recap of DevDay
* [00:09:25] Ilan's Strawberry Demo with Realtime Voice Function Calling
* [00:19:16] Olivier Godement, Head of Product, OpenAI
* [00:36:57] Romain Huet, Head of DX, OpenAI
* [00:47:08] Michelle Pokrass, API Tech Lead at OpenAI ft. Simon Willison
* [01:04:45] Alistair Pullen, CEO, Cosine (Genie)
* [01:18:31] Sam Altman + Kevin Weil Q&A
* [02:03:07] NotebookLM Recap of Podcast

Transcript

[00:00:00] Suno AI: Under dev daylights, code ignites. Real time voice streams reach new heights. o1 and GPT-4o in flight. Fine tune the future, data in sight.
Schema sync up, outputs precise. Distill the models, efficiency splice.[00:00:33] AI Charlie: Happy October. This is your AI co host, Charlie. One of our longest standing traditions is covering major AI and ML conferences in podcast format. Delving, yes delving, into the vibes of what it is like to be there stitched in with short samples of conversations with key players, just to help you feel like you were there.[00:00:54] AI Charlie: Covering this year's Dev Day was significantly more challenging because we were all requested not to record the opening keynotes. So, in place of the opening keynotes, we had the viral NotebookLM Deep Dive crew, my new AI podcast nemesis, give you a seven minute recap of everything that was announced.[00:01:15] AI Charlie: Of course, you can also check the show notes for details. I'll then come back with an explainer of all the interviews we have for you today. Watch out and take care.[00:01:23] NotebookLM Recap of DevDay[00:01:23] NotebookLM: All right, so we've got a pretty hefty stack of articles and blog posts here all about OpenAI's Dev Day 2024.[00:01:32] NotebookLM 2: Yeah, lots to dig into there.[00:01:34] NotebookLM: Seems like you're really interested in what's new with AI.[00:01:36] NotebookLM 2: Definitely. And it seems like OpenAI had a lot to announce. New tools, changes to the company. It's a lot.[00:01:43] NotebookLM: It is. And especially since you're interested in how AI can be used in the real world, you know, practical applications, we'll focus on that.[00:01:51] NotebookLM: Perfect. Like, for example, this Real time API, they announced that, right? That seems like a big deal if we want AI to sound, well, less like a robot.[00:01:59] NotebookLM 2: It could be huge. The real time API could completely change how we, like, interact with AI.
Like, imagine if your voice assistant could actually handle it if you interrupted it.[00:02:08] NotebookLM: Or, like, have an actual conversation.[00:02:10] NotebookLM 2: Right, not just these clunky back and forth things we're used to.[00:02:14] NotebookLM: And they actually showed it off, didn't they? I read something about a travel app, one for languages. Even one where the AI ordered takeout.[00:02:21] NotebookLM 2: Those demos were really interesting, and I think they show how this real time API can be used in so many ways.[00:02:28] NotebookLM 2: And the tech behind it is fascinating, by the way. It uses persistent WebSocket connections and this thing called function calling, so it can respond in real time.[00:02:38] NotebookLM: So the function calling thing, that sounds kind of complicated. Can you, like, explain how that works?[00:02:42] NotebookLM 2: So imagine giving the AI Access to this whole toolbox, right?[00:02:46] NotebookLM 2: Information, capabilities, all sorts of things. Okay. So take the travel agent demo, for example. With function calling, the AI can pull up details, let's say about Fort Mason, right, from some database. Like nearby restaurants, stuff like that.[00:02:59] NotebookLM: Ah, I get it. So instead of being limited to what it already knows, It can go and find the information it needs, like a human travel agent would.[00:03:07] NotebookLM 2: Precisely. And someone on Hacker News pointed out a cool detail. The API actually gives you a text version of what's being said. So you can store that, analyze it.[00:03:17] NotebookLM: That's smart. It seems like OpenAI put a lot of thought into making this API easy for developers to use. But, while we're on OpenAI, you know, Besides their tech, there's been some news about, like, internal changes, too.[00:03:30] NotebookLM: Didn't they say they're moving away from being a non profit?[00:03:32] NotebookLM 2: They did. And it's got everyone talking. It's a major shift. 
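The function calling "toolbox" described in the recap boils down to three pieces: a JSON schema advertising each function to the model, a model turn that returns a function call instead of text, and app code that runs the function and feeds the result back. Here is a minimal sketch of the app side; the `find_nearby_restaurants` helper and the simulated model reply are hypothetical stand-ins (a real session needs an API key and a live model), while the tool schema follows the chat-completions `tools` format.

```python
import json

# Hypothetical local capability the model is allowed to call.
def find_nearby_restaurants(location: str) -> list[str]:
    data = {"Fort Mason": ["Greens Restaurant", "The Interval"]}
    return data.get(location, [])

# Tool schema in the chat-completions "tools" format, so the model
# knows the function's name and expected arguments.
tools = [{
    "type": "function",
    "function": {
        "name": "find_nearby_restaurants",
        "description": "List restaurants near a landmark.",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    },
}]

REGISTRY = {"find_nearby_restaurants": find_nearby_restaurants}

def dispatch(tool_call: dict) -> str:
    """Run the function the model asked for and return a JSON result
    that would go back to the model as a tool message."""
    fn = REGISTRY[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    return json.dumps(fn(**args))

# Simulated tool call, shaped like what the API returns.
simulated = {"name": "find_nearby_restaurants",
             "arguments": '{"location": "Fort Mason"}'}
print(dispatch(simulated))  # → ["Greens Restaurant", "The Interval"]
```

In a real loop the `simulated` dict would come from the model's response, and the JSON string returned by `dispatch` would be sent back for the next generation, exactly the "go and find the information it needs" step described above.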
And it's only natural for people to wonder how that'll change things for OpenAI in the future. I mean, there are definitely some valid questions about this move to for profit. Like, will they have more money for research now?[00:03:46] NotebookLM 2: Probably. But will they, you know, care as much about making sure AI benefits everyone?[00:03:51] NotebookLM: Yeah, that's the big question, especially with all the, like, the leadership changes happening at OpenAI too, right? I read that their Chief Research Officer left, and their VP of Research, and even their CTO.[00:04:03] NotebookLM 2: It's true. A lot of people are connecting those departures with the changes in OpenAI's structure.[00:04:08] NotebookLM: And I guess it makes you wonder what's going on behind the scenes. But they are still putting out new stuff. Like this whole fine tuning thing really caught my eye.[00:04:17] NotebookLM 2: Right, fine tuning. It's essentially taking a pre trained AI model. And, like, customizing it.[00:04:23] NotebookLM: So instead of a general AI, you get one that's tailored for a specific job.[00:04:27] NotebookLM 2: Exactly. And that opens up so many possibilities, especially for businesses. Imagine you could train an AI on your company's data, you know, like how you communicate your brand guidelines.[00:04:37] NotebookLM: So it's like having an AI that's specifically trained for your company?[00:04:41] NotebookLM 2: That's the idea.[00:04:41] NotebookLM: And they're doing it with images now, too, right?[00:04:44] NotebookLM: Fine tuning with vision is what they called it.[00:04:46] NotebookLM 2: It's pretty incredible what they're doing with that, especially in fields like medicine.[00:04:50] NotebookLM: Like using AI to help doctors make diagnoses.[00:04:52] NotebookLM 2: Exactly. And AI could be trained on thousands of medical images, right? 
And then it could potentially spot things that even a trained doctor might miss.[00:05:03] NotebookLM: That's kind of scary, to be honest. What if it gets it wrong?[00:05:06] NotebookLM 2: Well, the idea isn't to replace doctors, but to give them another tool, you know, help them make better decisions.[00:05:12] NotebookLM: Okay, that makes sense. But training these AI models must be really expensive.[00:05:17] NotebookLM 2: It can be. All those tokens add up. But OpenAI announced something called automatic prompt caching.[00:05:23] Alex Volkov: Automatic what now? I don't think I came across that.[00:05:26] NotebookLM 2: So basically, if your AI sees a prompt that it's already seen before, OpenAI will give you a discount.[00:05:31] NotebookLM: Huh. Like a frequent buyer program for AI.[00:05:35] NotebookLM 2: Kind of, yeah. It's good that they're trying to make it more affordable. And they're also doing something called model distillation.[00:05:41] NotebookLM: Okay, now you're just using big words to sound smart. What's that?[00:05:45] NotebookLM 2: Think of it like like a recipe, right? You can take a really complex recipe and break it down to the essential parts.[00:05:50] NotebookLM: Make it simpler, but it still tastes the same.[00:05:53] NotebookLM 2: Yeah. And that's what model distillation is. You take a big, powerful AI model and create a smaller, more efficient version.[00:06:00] NotebookLM: So it's like lighter weight, but still just as capable.[00:06:03] NotebookLM 2: Exactly. And that means more people can actually use these powerful tools. They don't need, like, a supercomputer to run them.[00:06:10] NotebookLM: So they're making AI more accessible. That's great.[00:06:13] NotebookLM 2: It is. And speaking of powerful tools, they also talked about their new O1 model.[00:06:18] NotebookLM 2: That's the one they've been hyping up. The one that's supposed to be this big leap forward.[00:06:22] NotebookLM: Yeah, O1. It sounds pretty futuristic. 
Like, from what I read, it's not just a bigger, better language model.[00:06:28] NotebookLM 2: Right. It's a different approach.[00:06:29] NotebookLM: They're saying it can, like, actually reason, right? Think.[00:06:33] NotebookLM 2: It's trained differently.[00:06:34] NotebookLM 2: They used reinforcement learning with O1.[00:06:36] NotebookLM: So it's not just finding patterns in the data it's seen before.[00:06:40] NotebookLM 2: Not just that. It can actually learn from its mistakes. Get better at solving problems.[00:06:46] NotebookLM: So give me an example. What can O1 do that, say, GPT 4 can't?[00:06:51] NotebookLM 2: Well, OpenAI showed it doing some pretty impressive stuff with math, like advanced math.[00:06:56] NotebookLM 2: And coding, too. Complex coding. Things that even GPT 4 struggled with.[00:07:00] NotebookLM: So you're saying if I needed to, like, write a screenplay, I'd stick with GPT 4? But if I wanted to solve some crazy physics problem, O1 is what I'd use.[00:07:08] NotebookLM 2: Something like that, yeah. Although there is a trade off. O1 takes a lot more power to run, and it takes longer to get those impressive results.[00:07:17] NotebookLM: Hmm, makes sense. More power, more time, higher quality.[00:07:21] NotebookLM 2: Exactly.[00:07:22] NotebookLM: It sounds like it's still in development, though, right? Is there anything else they're planning to add to it?[00:07:26] NotebookLM 2: Oh, yeah. They mentioned system prompts, which will let developers, like, set some ground rules for how it behaves. And they're working on adding structured outputs and function calling.[00:07:38] Alex Volkov: Wait, structured outputs? Didn't we just talk about that?[00:07:41] NotebookLM 2: We did. That's the thing where the AI's output is formatted in a way that's easy to use.[00:07:47] NotebookLM: Right, right. So you don't have to spend all day trying to make sense of what it gives you.
It's good that they're thinking about that stuff.[00:07:53] NotebookLM 2: It's about making these tools usable.[00:07:56] NotebookLM 2: And speaking of that, Dev Day finished up with this really interesting talk. Sam Altman, the CEO of OpenAI, And Kevin Weil, their new chief product officer. They talked about, like, the big picture for AI.[00:08:09] NotebookLM: Yeah, they did, didn't they? Anything interesting come up?[00:08:12] NotebookLM 2: Well, Altman talked about moving past this whole AGI term, Artificial General Intelligence.[00:08:18] NotebookLM: I can see why. It's kind of a loaded term, isn't it?[00:08:20] NotebookLM 2: He thinks it's become a bit of a buzzword, and people don't really understand what it means.[00:08:24] NotebookLM: So are they saying they're not trying to build AGI anymore?[00:08:28] NotebookLM 2: It's more like they're saying they're focused on just Making AI better, constantly improving it, not worrying about putting it in a box.[00:08:36] NotebookLM: That makes sense. Keep pushing the limits.[00:08:38] NotebookLM 2: Exactly. But they were also very clear about doing it responsibly. They talked a lot about safety and ethics.[00:08:43] NotebookLM: Yeah, that's important.[00:08:44] NotebookLM 2: They said they were going to be very careful. About how they release new features.[00:08:48] NotebookLM: Good! Because this stuff is powerful.[00:08:51] NotebookLM 2: It is. It was a lot to take in, this whole Dev Day event.[00:08:54] NotebookLM 2: New tools, big changes at OpenAI, and these big questions about the future of AI.[00:08:59] NotebookLM: It was. But hopefully this deep dive helped make sense of some of it. 
At least, that's what we try to do here.[00:09:05] AI Charlie: Absolutely.[00:09:06] NotebookLM: Thanks for taking the deep dive with us.[00:09:08] AI Charlie: The biggest demo of the new Realtime API involved function calling with voice mode and buying chocolate covered strawberries from our friendly local OpenAI developer experience engineer and strawberry shop owner, Ilan Bigio.[00:09:21] AI Charlie: We'll first play you the audio of his demo and then go into a little interview with him.[00:09:25] Ilan's Strawberry Demo with Realtime Voice Function Calling[00:09:25] Romain Huet: Could you place a call and see if you could get us 400 strawberries delivered to the venue? But please keep that under 1500. I'm on it. We'll get those strawberries delivered for you.[00:09:47] Ilan: Hello? Hi there. Is this Ilan? I'm Romain's AI assistant. How is it going? Fantastic. Can you tell me what flavors of strawberry dips you have for me? Yeah, we have chocolate, vanilla, and we have peanut butter. Wait, how much would 400 chocolate covered strawberries cost? 400? Are you sure you want 400? Yes, 400 chocolate covered[00:10:14] swyx: strawberries.[00:10:15] Ilan: Wait,[00:10:16] swyx: how much[00:10:16] Ilan: would that be? I think that'll be around, like, $1,415.92.[00:10:25] Alex Volkov: Awesome. Let's go ahead and place the order for four chocolate covered strawberries.[00:10:31] Ilan: Great, where would you like that delivered? Please deliver them to the Gateway Pavilion at Fort Mason. And I'll be paying in cash.[00:10:42] Alex Volkov: Okay,[00:10:43] Ilan: sweet. So just to confirm, you want four strawberries?[00:10:45] Ilan: 400 chocolate covered strawberries to the Gateway Pavilion. Yes, that's perfect. And when can we expect delivery? Well, you guys are right nearby, so it'll be like, I don't know, 37 seconds? That's incredibly fast. Cool, you too.[00:11:09] swyx: Hi, Ilan, welcome to Latent Space. Oh, thank you.
I just saw your amazing demos, had your amazing strawberries. You are dressed up, like, exactly like a strawberry salesman. Gotta have it all. What was the building on demo like? What was the story behind the demo?[00:11:22] swyx: It was really interesting. This is actually something I had been thinking about for months before the launch.[00:11:27] swyx: Like, having a, like, AI that can make phone calls is something like I've personally wanted for a long time. And so as soon as we launched internally, like, I started hacking on it. And then that sort of just started. We made it into like an internal demo, and then people found it really interesting, and then we thought how cool would it be to have this like on stage as, as one of the demos.[00:11:47] swyx: Yeah, would would you call out any technical issues building, like you were basically one of the first people ever to build with a voice mode API. Would you call out any issues like integrating it with Twilio like that, like you did with function calling, with like a form filling elements. I noticed that you had like intents of things to fulfill, and then.[00:12:07] swyx: When there's still missing info, the voice would prompt you, roleplaying the store guy.[00:12:13] swyx: Yeah, yeah, so, I think technically, there's like the whole, just working with audio and streams is a whole different beast. Like, even separate from like AI and this, this like, new capabilities, it's just, it's just tough.[00:12:26] swyx: Yeah, when you have a prompt, conversationally it'll just follow, like the, it was, Instead of like, kind of step by step to like ask the right questions based on like the like what the request was, right? The function calling itself is sort of tangential to that. 
Like, you have to prompt it to call the functions, but then handling it isn't too much different from, like, what you would do with assistant streaming or, like, chat completion streaming.[00:12:47] swyx: I think, like, the API feels very similar just to, like, if everything in the API was streaming, it actually feels quite familiar to that.[00:12:53] swyx: And then, function calling wise, I mean, does it work the same? I don't know. Like, I saw a lot of logs. You guys showed, like, in the playground, a lot of logs. What is in there?[00:13:03] swyx: What should people know?[00:13:04] swyx: Yeah, I mean, it is, like, the events may have different names than the streaming events that we have in chat completions, but they represent very similar things. It's things like, you know, function call started, argument started, it's like, here's like argument deltas, and then like function call done.[00:13:20] swyx: Conveniently we send one that has the full function, and then I just use that. Nice.[00:13:25] swyx: Yeah and then, like, what restrictions do, should people be aware of? Like, you know, I think, I think, before we recorded, we discussed a little bit about the sensitivities around basically calling random store owners and putting, putting like an AI on them.[00:13:40] swyx: Yeah, so there's, I think there's recent regulation on that, which is why we want to be like very, I guess, aware of, of You know, you can't just call anybody with AI, right? That's like just robocalling. You wouldn't want someone just calling you with AI.[00:13:54] swyx: I'm a developer, I'm about to do this on random people.[00:13:57] swyx: What laws am I about to break?[00:14:00] swyx: I forget what the governing body is, but you should, I think, Having consent of the person you're about to call, it always works. I, as the strawberry owner, have consented to like getting called with AI. I think past that you, you want to be careful. 
Definitely individuals are more sensitive than businesses.[00:14:19] swyx: I think businesses you have a little bit more leeway. Also, they're like, businesses I think have an incentive to want to receive AI phone calls. Especially if like, they're dealing with it. It's doing business. Right, like, it's more business. It's kind of like getting on a booking platform, right, you're exposed to more.[00:14:33] swyx: But, I think it's still very much like a gray area. Again, so. I think everybody should, you know, tread carefully, like, figure out what it is. I, I, I, the law is so recent, I didn't have enough time to, like, I'm also not a lawyer. Yeah, yeah, yeah, of course. Yeah.[00:14:49] swyx: Okay, cool fair enough. One other thing, this is kind of agentic.[00:14:52] swyx: Did you use a state machine at all? Did you use any framework? No. You just stick it in context and then just run it in a loop until it ends the call?[00:15:01] swyx: Yeah, there isn't even a loop, like Okay. Because the API is just based on sessions. It's always just going to keep going. Every time you speak, it'll trigger a call.[00:15:11] swyx: And then every function call also invokes, like, a generation. And so that is another difference here. It's like it's inherently almost like in a loop, just by being in a session, right? No state machines needed. I'd say this is very similar to like, the notion of routines, where it's just like a list of steps.[00:15:29] swyx: And it, like, sticks to them softly, but usually pretty well. And the steps is the prompts? The steps, it's like the prompt, like the steps are in the prompt. Yeah, yeah, yeah. Right, it's like step one, do this, step one, step two, do that. What if I want to change the system prompt halfway through the conversation?[00:15:44] swyx: You can. Okay. You can. To be honest, I have not played with it too much. Yeah,[00:15:47] swyx: yeah.[00:15:48] swyx: But, I know you can.[00:15:49] swyx: Yeah, yeah. Yeah. Awesome.
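Ilan's description of the streaming events (function call started, argument deltas, function call done) suggests a small accumulator on the client. A sketch of that fold is below; the event names are illustrative rather than the API's exact strings, and it also shows the shortcut he mentions, where a final convenience event carries the fully assembled arguments.

```python
import json

def accumulate_function_call(events):
    """Fold a stream of delta events (loosely shaped like the Realtime
    API's function-call events; names here are assumptions) into one
    complete (name, arguments) call."""
    name, buf = None, []
    for ev in events:
        if ev["type"] == "function_call.started":
            name = ev["name"]
        elif ev["type"] == "function_call.arguments.delta":
            buf.append(ev["delta"])
        elif ev["type"] == "function_call.done":
            # Convenience event carrying the full arguments, so you can
            # skip reassembling the deltas yourself.
            return ev["name"], json.loads(ev["arguments"])
    # No "done" event seen: reassemble from the deltas.
    return name, json.loads("".join(buf))

stream = [
    {"type": "function_call.started", "name": "place_order"},
    {"type": "function_call.arguments.delta", "delta": '{"qty": '},
    {"type": "function_call.arguments.delta", "delta": "400}"},
    {"type": "function_call.done", "name": "place_order",
     "arguments": '{"qty": 400}'},
]
print(accumulate_function_call(stream))  # → ('place_order', {'qty': 400})
```

Because the session itself keeps generating after each tool result, this handler is the only "loop" the client needs, which matches the no-state-machine point above.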
I noticed that you called it real time API, but not voice API. Mm hmm. So I assume that it's like real time API starting with voice. Right, I think that's what he said on the thing.[00:16:00] swyx: I can't imagine, like, what else is real[00:16:02] swyx: time? Well, I guess, to use ChatGPT's voice mode as an example, Like, we've demoed the video, right? Like, real time image, right? So, I'm not actually sure what timelines are, But I would expect, if I had to guess, That, like, that is probably the next thing that we're gonna be making.[00:16:17] swyx: You'd probably have to talk directly with the team building this. Sure. But, You can't promise their timelines. Yeah, yeah, yeah, right, exactly. But, like, given that this is the features that currently, Or that exists that we've demoed on ChatGPT. Yeah. There[00:16:29] swyx: will never be a[00:16:29] swyx: case where there's like a real time text API, right?[00:16:31] swyx: I don't Well, this is a real time text API. You can do text only on this. Oh. Yeah. I don't know why you would. But it's actually So text to text here doesn't quite make a lot of sense. I don't think you'll get a lot of latency gain. But, like, speech to text is really interesting. Because you can prevent You can prevent responses, like audio responses.[00:16:54] swyx: And force function calls. And so you can do stuff like UI control. That is like super super reliable. We had a lot of like, you know, un, like, we weren't sure how well this was gonna work because it's like, you have a voice answering. It's like a whole persona, right? Like, that's a little bit more, you know, risky.[00:17:10] swyx: But if you, like, cut out the audio outputs and make it so it always has to output a function, like you can end up with pretty pretty good, like, Pretty reliable, like, command like a command architecture. Yeah,[00:17:21] swyx: actually, that's the way I want to interact with a lot of these things as well.
Like, one sided voice.[00:17:26] swyx: Yeah, you don't necessarily want to hear the[00:17:27] swyx: voice back. And like, sometimes it's like, yeah, I think having an output voice is great. But I feel like I don't always want to hear an output voice. I'd say usually I don't. But yeah, exactly, being able to speak to it is super sweet.[00:17:39] swyx: Cool. Do you want to comment on any of the other stuff that you announced?[00:17:41] swyx: Prompt caching I noticed was like, I like the no code change part. I'm looking forward to the docs because I'm sure there's a lot of details on like, what you cache, how long you cache. Cause like, Anthropic caches were like 5 minutes. I was like, okay, but what if I don't make a call every 5 minutes?[00:17:56] swyx: Yeah,[00:17:56] swyx: to be super honest with you, I've been so caught up with the real time API and making the demo that I haven't read up on the other launches too much. I mean, I'm aware of them, but I think I'm excited to see how all distillation works. That's something that we've been doing like, I don't know, I've been like doing it between our models for a while And I've seen really good results like I've done back in a day like from GPT 4 to GPT 3.[00:18:19] swyx: 5 And got like, like pretty much the same level of like function calling with like hundreds of functions So that was super super compelling So, I feel like easier distillation, I'm really excited for. I see. Is it a tool?[00:18:31] swyx: So, I saw evals. Yeah. Like, what is the distillation product? It wasn't super clear, to be honest.[00:18:36] swyx: I, I think I want to, I want to let that team, I want to let that team talk about it. Okay,[00:18:40] swyx: alright. Well, I appreciate you jumping on. Yeah, of course. Amazing demo. It was beautifully designed.
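The distillation Ilan describes (GPT-4-quality function calling moved down to GPT-3.5) starts with data prep: capture the big model's completions and reshape them into chat-format fine-tuning records for the student. A sketch of that step is below; the `{"messages": [...]}` shape matches the chat fine-tuning file format, while the helper name and sample pair are hypothetical.

```python
import json

def to_finetune_records(pairs, system="You are a function-calling assistant."):
    """Turn (user_prompt, teacher_completion) pairs — e.g. GPT-4 outputs
    captured from production — into chat-format fine-tuning records
    for training a smaller student model."""
    records = []
    for prompt, completion in pairs:
        records.append({"messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": completion},
        ]})
    return records

# Hypothetical teacher output: a function call the student should imitate.
pairs = [("What's the weather in SF?",
          '{"name": "get_weather", "arguments": {"city": "SF"}}')]
jsonl = "\n".join(json.dumps(r) for r in to_finetune_records(pairs))
print(jsonl.count('"role"'))  # → 3, one per message in the record
```

The resulting JSONL file is what a fine-tuning job consumes; the "easier distillation" being announced essentially automates this capture-and-train loop.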
I'm sure that was part of you and Romain, and[00:18:47] swyx: Yeah, I guess, shout out to like, the first people to like, creators of Wanderlust, originally, were like, Simon and Karolis, and then like, I took it and built the voice component and the voice calling components.[00:18:59] swyx: Yeah, so it's been a big team effort. And like the entire API team for like Debugging everything as it's been going on. It's been, it's been so good working with them. Yeah, you're the first consumers on the DX[00:19:07] swyx: team. Yeah. Yeah, I mean, the classic role of what we do there. Yeah. Okay, yeah, anything else? Any other call to action?[00:19:13] swyx: No, enjoy Dev Day. Thank you. Yeah. That's it.[00:19:16] Olivier Godement, Head of Product, OpenAI[00:19:16] AI Charlie: The Latent Space crew then talked to Olivier Godement, head of product for the OpenAI platform, who led the entire Dev Day keynote and introduced all the major new features and updates that we talked about today.[00:19:28] swyx: Okay, so we are here with Olivier Godement. That's right.[00:19:32] swyx: I don't pronounce French. That's fine. It was perfect. And it was amazing to see your keynote today. What was the back story of, of preparing something like this? Preparing, like, Dev Day? It[00:19:43] Olivier Godement: essentially came from a couple of places. Number one, excellent reception from last year's Dev Day.[00:19:48] Olivier Godement: Developers, startup founders, researchers want to spend more time with OpenAI, and we want to spend more time with them as well. And so for us, like, it was a no brainer, frankly, to do it again, like, you know, like a nice conference. The second thing is going global. We've done a few events like in Paris and like a few other like, you know, non European, non American countries.[00:20:05] Olivier Godement: And so this year we're doing SF, Singapore, and London.
To frankly just meet more developers.[00:20:10] swyx: Yeah, I'm very excited for the Singapore one.[00:20:12] Olivier Godement: Ah,[00:20:12] swyx: yeah. Will you be[00:20:13] Olivier Godement: there?[00:20:14] swyx: I don't know. I don't know if I got an invite. No. I can't just talk to you. Yeah, like, and then there was some speculation around October 1st.[00:20:22] Olivier Godement: Yeah. Is it because[00:20:23] swyx: 01, October 1st? It[00:20:25] Olivier Godement: has nothing to do. I discovered the tweet yesterday where like, people are so creative. No one, there was no connection to October 1st. But in hindsight, that would have been a pretty good meme by Tiana. Okay.[00:20:37] swyx: Yeah, and you know, I think like, OpenAI's outreach to developers is something that I felt the hole in 2022, when like, you know, like, people were trying to build a ChatGPT, and like, there was no function calling, all that stuff that you talked about in the past.[00:20:51] swyx: And that's why I started my own conference as like like, here's our little developer conference thing. And, but to see this OpenAI Dev Day now, and like to see so many developer oriented products coming to OpenAI, I think it's really encouraging.[00:21:02] Olivier Godement: Yeah, totally. It's that's what I said, essentially, like, developers are basically the people who make the best connection between the technology and, you know, the future, essentially.
And so, in the direction of enabling, like, AGI, like, all of humanity, it's a no brainer for us, like, frankly, to partner with Devs.[00:21:31] Alessio: And most importantly, you almost never had waitlists, which, compared to like other releases, people usually, usually have.[00:21:38] Alessio: What is the, you know, you had prompt caching, you had real time voice API, we, you know, Shawn did a long Twitter thread, so people know the releases. Yeah. What is the thing that was like sneakily the hardest to actually get ready for, for that day, or like, what was the kind of like, you know, last 24 hours, anything that you didn't know was gonna work?[00:21:56] Olivier Godement: Yeah. They're all fairly, like, I would say, involved, like, features to ship. So the team has been working for a month, all of them. The one which I would say is the newest for OpenAI is the real time API. For a couple of reasons. I mean, one, you know, it's a new modality. Second, like, it's the first time that we have an actual, like, WebSocket based API.[00:22:16] Olivier Godement: And so, I would say that's the one that required, like, the most work over the month. To get right from a developer perspective and to also make sure that our existing safety mitigations worked well with like real time audio in and audio out.[00:22:30] swyx: Yeah, what design choices or what was like the sort of design choices that you want to highlight?[00:22:35] swyx: Like, you know, like I think for me, like, WebSockets, you just receive a bunch of events. It's two way. I obviously don't have a ton of experience. I think a lot of developers are going to have to embrace this real time programming. Like, what are you designing for, or like, what advice would you have for developers exploring this?[00:22:51] Olivier Godement: The core design hypothesis was essentially, how do we enable, like, human level latency?
We did a bunch of tests, like, on average, like, human beings, like, you know, takes, like, something like 300 milliseconds to converse with each other. And so that was the design principle, essentially. Like, working backward from that, and, you know, making the technology work.[00:23:11] Olivier Godement: And so we evaluated a few options, and WebSockets was the one that we landed on. So that was, like, one design choice. A few other, like, big design choices that we had to make prompt caching. Prompt caching, the design, like, target was automated from the get go. Like, zero code change from the developer.[00:23:27] Olivier Godement: That way you don't have to learn, like, what is a prompt prefix, and, you know, how long does a cache work, like, we just do it as much as we can, essentially. So that was a big design choice as well. And then finally, on distillation, like, and evaluation. The big design choice was something I learned at Stripe, like in my previous job, like a philosophy around, like, a pit of success.[00:23:47] Olivier Godement: Like, what is essentially the, the, the minimum number of steps for the majority of developers to do the right thing? Because when you do evals on fine tuning, there are many, many ways, like, to mess it up, frankly, like, you know, and have, like, a crappy model, like, evals that tell, like, a wrong story. And so our whole design was, okay, we actually care about, like, helping people who don't have, like, that much experience, like, evaluating a model, like, get, like, in a few minutes, like, to a good spot.[00:24:11] Olivier Godement: And so how do we essentially enable that pit of success, like, in the product flow?[00:24:15] swyx: Yeah, yeah, I'm a little bit scared to fine tune especially for vision, because I don't know what I don't know for stuff like vision, right? Like, for text, I can evaluate pretty easily.
For vision let's say I'm like trying to, one of your examples was Grab.[00:24:33] swyx: Which, very close to home, I'm from Singapore. I think your example was like, they identified stop signs better. Why is that hard? Why do I have to fine tune that? If I fine tune that, do I lose other things? You know, like, there's a lot of unknowns with Vision that I think developers have to figure out.[00:24:50] swyx: For[00:24:50] Olivier Godement: sure. Vision is going to open up, like, a new, I would say, evaluation space. Because you're right, like, it's harder, like, you know, to tell correct from incorrect, essentially, with images. What I can say is we've been alpha testing, like, the Vision fine tuning, like, for several weeks at that point. We are seeing, like, even higher performance uplift compared to text fine tuning.[00:25:10] Olivier Godement: So that's, there is something here, like, we've been pretty impressed, like, in a good way, frankly. But, you know, how well it works. But for sure, like, you know, I expect the developers who are moving from one modality to, like, text and images will have, like, more, you know Testing, evaluation, like, you know, to set in place, like, to make sure it works well.[00:25:25] Alessio: The model distillation and evals is definitely, like, the most interesting. Moving away from just being a model provider to being a platform provider. How should people think about being the source of truth? Like, do you want OpenAI to be, like, the system of record of all the prompting? Because people sometimes store it in, like, different data sources.[00:25:41] Alessio: And then, is that going to be the same as the models evolve? So you don't have to worry about, you know, refactoring the data, like, things like that, or like future model structures.[00:25:51] Olivier Godement: The vision is if you want to be a source of truth, you have to earn it, right?
Like, we're not going to force people, like, to pass us data.[00:25:57] Olivier Godement: There is no value prop, like, you know, for us to store the data. The vision here is at the moment, like, most developers, like, use like a one size fits all model, like be off the shelf, like GP40 essentially. The vision we have is fast forward a couple of years. I think, like, most developers will essentially, like, have a.[00:26:15] Olivier Godement: An automated, continuous, fine tuned model. The more, like, you use the model, the more data you pass to the model provider, like, the model is automatically, like, fine tuned, evaluated against some eval sets, and essentially, like, you don't have to every month, when there is a new snapshot, like, you know, to go online and, you know, try a few new things.[00:26:34] Olivier Godement: That's a direction. We are pretty far away from it. But I think, like, that evaluation and decision product are essentially a first good step in that direction. It's like, hey, it's you. I set it by that direction, and you give us the evaluation data. We can actually log your completion data and start to do some automation on your behalf.[00:26:52] Alessio: And then you can do evals for free if you share data with OpenAI. How should people think about when it's worth it, when it's not? Sometimes people get overly protective of their data when it's actually not that useful. But how should developers think about when it's right to do it, when not, or[00:27:07] Olivier Godement: if you have any thoughts on it?[00:27:08] Olivier Godement: The default policy is still the same, like, you know, we don't train on, like, any API data unless you opt in. What we've seen from feedback is evaluation can be expensive. Like, if you run, like, O1 evals on, like, thousands of samples, like, your bill will get increased, like, you know, pretty pretty significantly.[00:27:22] Olivier Godement: That's problem statement number one.
Problem statement number two is, essentially, I want to get to a world where whenever OpenAI ships a new model snapshot, we have full confidence that there is no regression for the task that developers care about. And for that to be the case, essentially, we need to get evals.[00:27:39] Olivier Godement: And so that, essentially, is a sort of a two birds, one stone. It's like, we subsidize, basically, the evals. And we also use the evals when we ship new models to make sure that we keep going in the right direction. So, in my sense, it's a win win, but again, completely opt in. I expect that many developers will not want to share their data, and that's perfectly fine to me.[00:27:56] swyx: Yeah, I think free evals though, very, very good incentive. I mean, it's a fair trade. You get data, we get free evals. Exactly,[00:28:04] Olivier Godement: and we sanitize PII, everything. We have no interest in the actual sensitive data. We just want to have good evaluation on the real use cases.[00:28:13] swyx: Like, I always want to eval the eval. I don't know if that ever came up.[00:28:17] swyx: Like, sometimes the evals themselves are wrong, and there's no way for me to tell you.[00:28:22] Olivier Godement: Everyone who is starting with LLMs, shipping with LLMs, is like, Yeah, evaluation, easy, you know, I've done testing, like, all my life. And then you start to actually be able to eval, understand, like, all the corner cases, and you realize, wow, there's like a whole field in itself.[00:28:35] Olivier Godement: So, yeah, good evaluation is hard and so, yeah. Yeah, yeah.[00:28:38] swyx: But I think there's a, you know, I just talked to Braintrust which I think is one of your partners. Mm-Hmm. They also emphasize code based evals versus your sort of low code. What I see is like, I don't know, maybe there's some more that you didn't demo.[00:28:53] swyx: Yours is kind of like a low code experience, right, for evals. 
Would you ever support like a more code based, like, would I run code on OpenAI's eval platform?[00:29:02] Olivier Godement: For sure. I mean, we meet developers where they are, you know. At the moment, the demand was more for like, you know, easy to get started, like eval. But, you know, if we need to expose like an evaluation API, for instance, for people like, you know, to pass, like, you know, their existing test data we'll do it.[00:29:15] Olivier Godement: So yeah, there is no, you know, philosophical, I would say, like, you know, misalignment on that. Yeah,[00:29:19] swyx: yeah, yeah. What I think this is becoming, by the way, and I don't, like it's basically, like, you're becoming AWS. Like, the AI cloud. And I don't know if, like, that's a conscious strategy, or it's, like, It doesn't even have to be a conscious strategy.[00:29:33] swyx: Like, you're going to offer storage. You're going to offer compute. You're going to offer networking. I don't know what networking looks like. Networking is maybe, like, Caching or like it's a CDN. It's a prompt CDN.[00:29:45] Alex Volkov: Yeah,[00:29:45] swyx: but it's the AI versions of everything, right? Do you like do you see the analogies or?[00:29:52] Olivier Godement: Whenever, whenever I talk to developers, I feel like good models are just half of the story to build a good app. There's a ton more you need to do. Evaluation is the perfect example. Like, you know, you can have the best model in the world, if you're in the dark, like, you know, it's really hard to gain the confidence. And so our philosophy is[00:30:11] Olivier Godement: The whole like software development stack is being basically reinvented, you know, with LLMs. There is no freaking way that OpenAI can build everything. Like there is just too much to build, frankly. 
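The code based evals that come up here can be sketched as a tiny harness. This is a hypothetical stand-in, not OpenAI's evals product: `model_call` is whatever completion function you already have, and the cases and graders are made up for illustration.

```python
# A minimal sketch of a code-based eval harness, as contrasted with a
# low-code eval UI in the conversation. Everything here is a hypothetical
# stand-in: model_call is any prompt -> text completion function.

from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    grader: Callable[[str], bool]  # returns True when the output passes

def run_evals(model_call: Callable[[str], str], cases: list[EvalCase]) -> float:
    """Run every case through the model and return the fraction that pass."""
    passed = sum(1 for case in cases if case.grader(model_call(case.prompt)))
    return passed / len(cases)

# Toy usage with a fake model, so the harness itself is testable offline.
cases = [
    EvalCase("Say hello", lambda out: "hello" in out.lower()),
    EvalCase("Name a primary color",
             lambda out: any(c in out.lower() for c in ("red", "blue", "yellow"))),
]
fake_model = lambda prompt: "Hello, I like blue."
print(run_evals(fake_model, cases))  # 1.0
```

Swapping `fake_model` for a real API call is the only change needed to run this against a live model; the graders stay the same, which is what makes code-based evals easy to version and review.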
And so my philosophy is, essentially, we'll focus on like the tools which are like the closest to the model itself.[00:30:28] Olivier Godement: So that's why you see us like, you know, investing quite a bit in like fine tuning, distillation, and evaluation, because we think that it actually makes sense to have like in one spot, Like, you know, all of that. Like, there is some sort of virtuous circle, essentially, that you can set in place. But stuff like, you know, LLMOps, like tools which are, like, further away from the model, I don't know if you want to do, like, you know, super elaborate, like, prompt management, or, you know, like, tooling, like, I'm not sure, like, you know, OpenAI has, like, such a big edge, frankly, like, you know, to build this sort of tools.[00:30:56] Olivier Godement: So that's how we view it at the moment. But again, frankly, the philosophy is super simple. The strategy is super simple. It's meeting developers where they want us to be. And so, you know that's frankly, like, you know, day in, day out, like, you know, what I try to do.[00:31:08] Alessio: Cool. Thank you so much for the time.[00:31:10] Alessio: I'm sure you,[00:31:10] swyx: Yeah, I have more questions on, a couple questions on voice, and then also, like, your call to action, like, what you want feedback on, right? So, I think we should spend a bit more time on voice, because I feel like that's, like, the big splash thing. I talked well Well, I mean, I mean, just what is the future of real time for OpenAI?[00:31:28] swyx: Yeah. Because I think obviously video is next. You already have it in the, the ChatGPT desktop app. Do we just have a permanent, like, you know, like, are developers just going to be, like, sending sockets back and forth with OpenAI? Like how do we program for that? Like, what what is the future?[00:31:44] Olivier Godement: Yeah, that makes sense. 
I think with multimodality, like, real time is quickly becoming, like, you know, essentially the right experience, like, to build an application. Yeah. So my expectation is that we'll see like a non trivial, like a volume of applications like moving to a real time API. Like if you zoom out, like, audio is really simple, like, audio until basically now.[00:32:05] Olivier Godement: Audio on the web, in apps, was basically very much like a second class citizen. Like, you basically did like an audio chatbot for users who did not have a choice. You know, they were like struggling to read, or I don't know, they were like not super educated with technology. And so, frankly, it was like the crappy option, you know, compared to text.[00:32:25] Olivier Godement: But when you talk to people in the real world, the vast majority of people, like, prefer to talk and listen instead of typing and writing.[00:32:34] swyx: We speak before we write.[00:32:35] Olivier Godement: Exactly. I don't know. I mean, I'm sure it's the case for you in Singapore. For me, my friends in Europe, the number of, like, WhatsApp, like, voice notes they receive every day, I mean, just people, it makes sense, frankly, like, you know.[00:32:45] Olivier Godement: Chinese. Chinese, yeah.[00:32:46] swyx: Yeah,[00:32:47] Olivier Godement: all voice. You know, it's easier. There is more emotions. I mean, you know, you get the point across, like, pretty well. And so my personal ambition for, like, the real time API and, like, audio in general is to make, like, audio and, like, multimodality, like, truly a first class experience.[00:33:01] Olivier Godement: Like, you know, if you're, like, you know, the amazing, like, super bold, like, start up out of YC, you want to build, like, the next, like, billion, like, you know, user application to make it, like, truly your first and make it feel, like, you know, an actual good, like, you know, product experience. 
So that's essentially the ambition, and I think, like, yeah, it could be pretty big.[00:33:17] swyx: Yeah. I think one, one people, one issue that people have with the voice so far as, as released in advanced voice mode is the refusals.[00:33:24] Alex Volkov: Yeah.[00:33:24] swyx: You guys had a very inspiring model spec. I think Joanne worked on that. Where you said, like, yeah, we don't want to overly refuse all the time. In fact, like, even if, like, not safe for work, like, in some occasions, it's okay.[00:33:38] swyx: How, is there an API that we can say, not safe for work, okay?[00:33:41] Olivier Godement: I think we'll get there. I think we'll get there. The model spec, like, nailed it, like, you know. It nailed it! It's so good! Yeah, we are not in the business of, like, policing, you know, if you can say, like, vulgar words or whatever. You know, there are some use cases, like, you know, I'm writing, like, a Hollywood, like, script I want to say, like, will go on, and it's perfectly fine, you know?[00:33:59] Olivier Godement: And so I think the direction where we'll go here is that basically There will always be like, you know, a set of behavior that we will, you know, just like forbid, frankly, because they're illegal or against our terms of service. But then there will be like, you know, some more like risky, like themes, which are completely legal, like, you know, vulgar words or, you know, not safe for work stuff.[00:34:17] Olivier Godement: Where basically we'll expose like a controllable, like safety, like knobs in the API to basically allow you to say, hey, that theme okay, that theme not okay. How sensitive do you want the threshold to be on safety refusals? I think that's the direction. So a[00:34:31] swyx: safety API.[00:34:32] Olivier Godement: Yeah, in a way, yeah.[00:34:33] swyx: Yeah, we've never had that.[00:34:34] Olivier Godement: Yeah. '[00:34:35] swyx: cause right now is you, it is whatever you decide. And then it's, that's it. 
That, that, that would be the main reason I don't use OpenAI voice, is because of[00:34:42] Olivier Godement: it's over policed. Over refuses, over refusals. Yeah. Yeah, yeah. No, we gotta fix that. Yeah. Like singing,[00:34:47] Alessio: we're trying to do voice. I'm a singer.[00:34:49] swyx: And you, you locked off singing.[00:34:51] swyx: Yeah,[00:34:51] Alessio: yeah, yeah.[00:34:52] swyx: But I, I understand music gets you in trouble. Okay. Yeah. So then, and then just generally, like, what do you want to hear from developers? Right? We have, we have all developers watching you know, what feedback do you want? Any, anything specific as well, like from, especially from today anything that you are unsure about, that you are like, Our feedback could really help you decide.[00:35:09] swyx: For sure.[00:35:10] Olivier Godement: I think, essentially, it's becoming pretty clear after today that, you know, I would say the OpenAI direction has become pretty clear, like, you know, after today. Investment in reasoning, investment in multimodality, Investment as well, like in, I would say, tool use, like function calling. To me, the biggest question I have is, you know, Where should we put the cursor next?[00:35:30] Olivier Godement: I think we need all three of them, frankly, like, you know, so we'll keep pushing.[00:35:33] swyx: Hire 10,000 people, or actually, no need, build a bunch of bots.[00:35:37] Olivier Godement: Exactly, and so is O1, like, smart enough for your problems? Like, you know, let's set aside for a second the existing models, like, for the apps that you would love to build, is O1 basically it in reasoning, or do we still have, like, you know, a step to do?[00:35:50] Olivier Godement: Preview is not enough, I[00:35:52] swyx: need the full one.[00:35:53] Olivier Godement: Yeah, so that's exactly that sort of feedback. 
Essentially what we would love is for developers, I mean, there's a thing that Sam has been saying like over and over again, like, you know, it's easier said than done, but I think it's directionally correct. As a developer, as a founder, you basically want to build an app which is a bit too difficult for the model today, right?[00:36:12] Olivier Godement: Like, what you think is right, it's like, sort of working, sometimes not working. And that way, you know, that basically gives us like a goalpost, and be like, okay, that's what you need to enable with the next model release, like in a few months. And so I would say that Usually, like, that's the sort of feedback which is like the most useful that I can, like, directly, like, you know, incorporate.[00:36:33] swyx: Awesome. I think that's our time. Thank you so much, guys. Yeah, thank you so much.[00:36:38] AI Charlie: Thank you. We were particularly impressed that Olivier addressed the not safe for work moderation policy question head on, as that had only previously been picked up on in Reddit forums. This is an encouraging sign that we will return to in the closing candor with Sam Altman at the end of this episode.[00:36:57] Romain Huet, Head of DX, OpenAI[00:36:57] AI Charlie: Next, a chat with Romain Huet, friend of the pod, AI Engineer World's Fair closing keynote speaker, and head of developer experience at OpenAI on his incredible live demos and advice to AI engineers on all the new modalities.[00:37:12] Alessio: Alright, we're live from OpenAI Dev Day. We're with Romain, who just did two great demos on, on stage.[00:37:17] Alessio: And he's been a friend of Latent Space, so thanks for taking some of the time.[00:37:20] Romain Huet: Of course, yeah, thank you for being here and spending the time with us today.[00:37:23] swyx: Yeah, I appreciate appreciate you guys putting this on. 
I, I know it's like extra work, but it really shows the developers that you care about reaching out.[00:37:31] Romain Huet: Yeah, of course, I think when you go back to the OpenAI mission, I think for us it's super important that we have the developers involved in everything we do. Making sure that you know, they have all of the tools they need to build successful apps. And we really believe that the developers are always going to invent the ideas, the prototypes, the fun factors of AI that we can't build ourselves.[00:37:49] Romain Huet: So it's really cool to have everyone here.[00:37:51] swyx: We had Michelle from you guys on. Yes, great episode. She very seriously said API is the path to AGI. Correct. And people in our YouTube comments were like, API is not AGI. I'm like, no, she's very serious. API is the path to AGI. Like, you're not going to build everything like the developers are, right?[00:38:08] swyx: Of[00:38:08] Romain Huet: course, yeah, that's the whole value of having a platform and an ecosystem of amazing builders who can, like, in turn, create all of these apps. I'm sure we talked about this before, but there's now more than 3 million developers building on OpenAI, so it's pretty exciting to see all of that energy into creating new things.[00:38:26] Alessio: I was going to say, you built two apps on stage today, an International Space Station tracker and then a drone. The hardest thing must have been opening Xcode and setting that up. Now, like, the models are so good that they can do everything else. Yes. You had two modes of interaction. You had kind of like a ChatGPT app to get the plan with O1, and then you had Cursor to apply some of the changes.[00:38:47] Alessio: Correct. How should people think about the best way to consume the coding models, especially both for, you know, brand new projects and then existing projects that you're trying to modify.[00:38:56] Romain Huet: Yeah. 
I mean, one of the things that's really cool about O1 Preview and O1 Mini being available in the API is that you can use it in your favorite tools like Cursor like I did, right?[00:39:06] Romain Huet: And that's also what like Devin from Cognition can use in their own software engineering agents. In the case of Xcode, like, it's not quite deeply integrated in Xcode, so that's why I had like ChatGPT side by side. But it's cool, right, because I could instruct O1 Preview to be, like, my coding partner and brainstorming partner for this app, but also consolidate all of the, the files and architect the app the way I wanted.[00:39:28] Romain Huet: So, all I had to do was just, like, port the code over to Xcode and zero shot the app build. I don't think I conveyed, by the way, how big a deal that is, but, like, you can now create an iPhone app from scratch, describing a lot of intricate details that you want, and your vision comes to life in, like, a minute.[00:39:47] Romain Huet: It's pretty outstanding.[00:39:48] swyx: I have to admit, I was a bit skeptical because if I open up Xcode, I don't know anything about iOS programming. You know which file to paste it in. You probably set it up a little bit. So I'm like, I have to go home and test it. And I need the ChatGPT desktop app so that it can tell me where to click.[00:40:04] Romain Huet: Yeah, I mean like, Xcode and iOS development has become easier over the years since they introduced Swift and SwiftUI. I think back in the days of Objective C, or like, you know, the storyboard, it was a bit harder to get in for someone new. But now with Swift and SwiftUI, their dev tools are really exceptional.[00:40:23] Romain Huet: But now when you combine that with O1, as your brainstorming and coding partner, it's like your architect, effectively. That's the best way, I think, to describe O1. People ask me, like, can GPT 4 do some of that? And it certainly can. But I think it will just start spitting out code, right? 
And I think what's great about O1, is that it can, like, make up a plan.[00:40:42] Romain Huet: In this case, for instance, the iOS app had to fetch data from an API, it had to look at the docs, it had to look at, like, how do I parse this JSON, where do I store this thing, and kind of wire things up together. So that's where it really shines. Is Mini or Preview the better model that people should be using?[00:40:58] Romain Huet: Like, how? I think people should try both. We're obviously very excited about the upcoming O1 that we shared the evals for. But we noticed that O1 Mini is very, very good at everything math, coding, everything STEM. If, for your kind of brainstorming or your kind of science part, you need some broader knowledge, then reaching for O1 Preview is better.[00:41:20] Romain Huet: But yeah, I used O1 Mini for my second demo. And it worked perfectly. All I needed was very much like something rooted in code, architecting and wiring up like a front end, a backend, some UDP packets, some web sockets, something very specific. And it did that perfectly.[00:41:35] swyx: And then maybe just talking about voice and Wanderlust, the app that keeps on giving, what's the backstory behind like preparing for all of that?[00:41:44] Romain Huet: You know, it's funny because when last year for Dev Day, we were trying to think about what could be a great demo app to show like an assistive experience. I've always thought travel is a kind of a great use case because you have, like, pictures, you have locations, you have the need for translations, potentially.[00:42:01] Romain Huet: There's like so many use cases that are bounded to travel that I thought last year, let's use a travel app. And that's how Wanderlust came to be. But of course, a year ago, all we had was a text based assistant. And now we thought, well, if there's a voice modality, what if we just bring this app back as a wink.[00:42:19] Romain Huet: And what if we were interacting better with voice? 
And so with this new demo, what I showed was the ability to like, So, we wanted to have a complete conversation in real time with the app, but also the thing we wanted to highlight was the ability to call tools and functions, right? So, like in this case, we placed a phone call using the Twilio API, interfacing with our AI agents, but developers are so smart that they'll come up with so many great ideas that we could not think of ourselves, right?[00:42:48] Romain Huet: But what if you could have like a, you know, a 911 dispatcher? What if you could have like a customer service? Like center, that is much smarter than what we've been used to today. There's gonna be so many use cases for real time, it's awesome.[00:43:00] swyx: Yeah, and sometimes actually you, you, like this should kill phone trees.[00:43:04] swyx: Like there should not be like dial one[00:43:07] Romain Huet: of course para[00:43:08] swyx: espanol, you know? Yeah, exactly. Or whatever. I dunno.[00:43:12] Romain Huet: I mean, even you starting speaking Spanish would just do the thing, you know you don't even have to ask. So yeah, I'm excited for this future where we don't have to interact with those legacy systems.[00:43:22] swyx: Yeah. Yeah. Is there anything, so you are doing function calling in a streaming environment. So basically it's, it's web sockets. It's UDP, I think. It's basically not guaranteed to be exactly once delivery. Like, is there any coding challenges that you encountered when building this?[00:43:39] Romain Huet: Yeah, it's a bit more delicate to get into it.[00:43:41] Romain Huet: We also think that for now, what we, what we shipped is a, is a beta of this API. I think there's much more to build onto it. It does have the function calling and the tools. 
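The function calling over a streaming connection described here can be sketched as plain JSON events sent over a WebSocket. The event names, field layout, and the `place_phone_call` tool below are assumptions for illustration, not the exact schema of the realtime API.

```python
# An illustrative sketch of driving tool/function calling over a streaming
# WebSocket session, as discussed in the conversation. Event names and the
# place_phone_call tool are made-up assumptions, not the real schema.

import json

def session_update(instructions: str, tools: list[dict]) -> str:
    """Serialize a session-configuration event that enables tools."""
    return json.dumps({
        "type": "session.update",
        "session": {"instructions": instructions, "tools": tools},
    })

def tool_result(call_id: str, output: dict) -> str:
    """Serialize the client's reply after executing a tool call locally."""
    return json.dumps({
        "type": "tool.output",  # assumed event name
        "call_id": call_id,
        "output": json.dumps(output),
    })

# Hypothetical tool mirroring the Twilio phone-call demo described above.
phone_tool = {
    "type": "function",
    "name": "place_phone_call",
    "description": "Place an outbound call via a telephony provider",
    "parameters": {
        "type": "object",
        "properties": {"to": {"type": "string"}},
        "required": ["to"],
    },
}

event = json.loads(session_update("You are a travel assistant.", [phone_tool]))
print(event["session"]["tools"][0]["name"])  # place_phone_call
```

In a real client, each serialized string would be sent over the open WebSocket, and the "not exactly once" delivery caveat raised in the question is why tool outputs carry a `call_id`: the client can correlate and deduplicate replies.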
But we think that for instance, if you want to have something very robust, On your client side, maybe you want to have WebRTC as a client, right?[00:43:58] Romain Huet: And, and as opposed to like directly working with the sockets at scale. So that's why we have partners like LiveKit and Agora if you want to, if you want to use them. And I'm sure we'll have many more in the future. But yeah, we keep on iterating on that, and I'm sure the feedback of developers in the weeks to come is going to be super critical for us to get it right.[00:44:16] swyx: Yeah, I think LiveKit has been fairly public that they are used in, in the ChatGPT app. Like, is it, it's just all open source, and we just use it directly with OpenAI, or do we use LiveKit Cloud or something?[00:44:28] Romain Huet: So right now we, we released the API, we released some sample code also, and referenced clients for people to get started with our API.[00:44:35] Romain Huet: And we also partnered with LiveKit and Agora, so they also have their own, like ways to help you get started that plugs natively with the real time API. So depending on the use case, people can, can can decide what to use. If you're working on something that's completely client or if you're working on something on the server side, for the voice interaction, you may have different needs, so we want to support all of those.[00:44:55] Alessio: I know you gotta run. Is there anything that you want the AI engineering community to give feedback on specifically, like even down to like, you know, a specific API end point or like, what, what's like the thing that you want? Yeah. I[00:45:08] Romain Huet: mean, you know, if we take a step back, I think Dev Day this year is all different from last year and, and in, in a few different ways.[00:45:15] Romain Huet: But one way is that we wanted to keep it intimate, even more intimate than last year. We wanted to make sure that the community is. 
In the spotlight. That's why we have community talks and everything. And the takeaway here is like learning from the very best developers and AI engineers.[00:45:31] Romain Huet: And so, you know we want to learn from them. Most of what we shipped this morning, including things like prompt caching, the ability to generate prompts quickly in the playground, or even things like vision fine tuning. These are all things that developers have been asking of us. And so, the takeaway I would, I would leave them with is to say like, Hey, the roadmap that we're working on is heavily influenced by them and their work.[00:45:53] Romain Huet: And so we love feedback. From high-level feature requests, as you say, down to, like, very intricate details of an API endpoint, we love feedback, so yes that's, that's how we, that's how we build this API.[00:46:05] swyx: Yeah, I think the, the model distillation thing as well, it might be, like, the, the most boring, but, like, actually used a lot.[00:46:12] Romain Huet: True, yeah. And I think maybe the most unexpected, right, because I think if I, if I read Twitter correctly the past few days, a lot of people were expecting us. To shape the real time API for speech to speech. I don't think developers were expecting us to have more tools for distillation, and we really think that's gonna be a big deal, right?[00:46:30] Romain Huet: If you're building apps that have you know, you, you want high, like like low latency, low cost, but high performance, high quality on the use case distillation is gonna be amazing.[00:46:40] swyx: Yeah. I sat in the distillation session just now and they showed how they distilled from four oh to four mini and it was like only like a 2% hit in the performance and 50 next.[00:46:49] swyx: Yeah,[00:46:50] Romain Huet: I was there as well for the Superhuman kind of use case, inspired from an email client. Yeah, this was really good. Cool, man! Thanks so much for having me. 
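The distillation workflow swyx describes, using a larger model's completions to fine tune a smaller one, can be sketched as a data-preparation step. The chat-format JSONL below follows the common chat fine tuning convention; treat the exact field layout as an assumption rather than the product's precise format.

```python
# A minimal sketch of preparing distillation data: take logged
# (prompt, larger-model completion) pairs and write them as chat-format
# JSONL for fine-tuning a smaller model. The message layout follows the
# common chat fine-tuning convention; field names are an assumption.

import json

def to_distillation_jsonl(logged: list[tuple[str, str]]) -> str:
    """One JSON object per line, each a single prompt/teacher-answer pair."""
    lines = []
    for prompt, completion in logged:
        lines.append(json.dumps({
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": completion},  # teacher output
            ]
        }))
    return "\n".join(lines)

# Toy logged completions standing in for stored production traffic.
logged = [
    ("Summarize: the sky is blue today.", "The sky is blue."),
    ("Translate 'bonjour' to English.", "Hello."),
]
jsonl = to_distillation_jsonl(logged)
print(len(jsonl.splitlines()))  # 2
```

The resulting file is what a fine tuning job would consume; the quality gate then comes from re-running the same eval set on the distilled model, as discussed earlier in the interview.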
Thanks again for being here today. It's always[00:47:00] AI Charlie: great to have you. As you might have picked up at the end of that chat, there were many sessions throughout the day focused on specific new capabilities.[00:47:08] Michelle Pokrass, Head of API at OpenAI ft. Simon Willison[00:47:08] AI Charlie: Like the new model distillation features combining evals and fine tuning. For our next session, we are delighted to bring back two former guests of the pod, which is something listeners have been greatly enjoying in our second year of doing the Latent Space podcast. Michelle Pokrass of the API team joined us recently to talk about structured outputs, and today gave an updated long form session at Dev Day, describing the implementation details of the new structured output mode.[00:47:39] AI Charlie: We also got her updated thoughts on the VoiceMode API we discussed in her episode, now that it is finally announced. She is joined by friend of the pod and super blogger, Simon Willison, who also came back as guest co host in our Dev Day 2023 episode.[00:47:56] Alessio: Great, we're back live at Dev Day, returning guest Michelle and then returning guest co host, for the, fourth?[00:48:03] Alessio: Fourth, yeah, I don't know. I've lost count. I think it's been a few. Simon Willison is back. Yeah, we just wrapped, we just wrapped everything up. Congrats on, on getting everything live. Simon did a great, like, blog, so if you haven't caught up, I[00:48:17] Simon Willison: wrote my, I implemented it. Now, I'm starting my live blog while waiting for the first talk to start, using, like, GPT 4 to write me the Javascript, and I got that live just in time and then, yeah, I was live blogging the whole day.[00:48:28] swyx: Are you a Cursor enjoyer?[00:48:29] Simon Willison: I haven't really gotten into Cursor yet to be honest. I just haven't spent enough time for it to click, I think. I'm more a copy and paste things out of Claude and ChatGPT. Yeah. 
It's interesting.[00:48:39] swyx: Yeah. I've converted to Cursor and O1 is so easy to just toggle on and off.[00:48:45] Alessio: What's your workflow?[00:48:46] Alessio: VS[00:48:48] Michelle Pokrass: Code Copilot, so Yep, same here. Team Copilot. Copilot is actually the reason I joined OpenAI. It was, you know, before ChatGPT, this is the thing that really got me. So I'm still into it, but I keep meaning to try out Cursor, and I think now that things have calmed down, I'm gonna give it a real go.[00:49:03] swyx: Yeah, it's a big thing to change your tool of choice.[00:49:06] swyx: Yes,[00:49:06] Michelle Pokrass: yeah, I'm pretty dialed, so.[00:49:09] swyx: I mean, you know, if you want, you can just fork VS Code and make your own. That's the dumb thing, right? We joked about doing a hackathon where the only thing you do is fork VS Code and the best fork wins.[00:49:20] Michelle Pokrass: Nice.[00:49:22] swyx: That's actually a really good idea. Yeah, what's up?[00:49:26] swyx: I mean, congrats on launching everything today. I know, like, we touched on it a little bit, but, like, everyone was kind of guessing that Voice API was coming, and, like, we talked about it in our episode. How do you feel going into the launch? Like, any design decisions that you want to highlight?[00:49:41] Michelle Pokrass: Yeah, super jazzed about it. The team has been working on it for a while. It's, like, a very different API for us. It's the first WebSocket API, so a lot of different design decisions to be made. It's, like, what kind of events do you send? When do you send an event? What are the event names? What do you send, like, on connection versus on future messages?[00:49:57] Michelle Pokrass: So there have been a lot of interesting decisions there. The team has also hacked together really cool projects as we've been testing it. One that I really liked is we had an internal hackathon for the API team. 
And some folks built like a little hack where you could use, like, vim with voice mode, so like, control vim, and you would tell it, like, write a file, and it would, you know, know all the vim commands and, and pipe those in.[00:50:18] Michelle Pokrass: So yeah, a lot of cool stuff we've been hacking on and really excited to see what people build with it.[00:50:23] Simon Willison: I've gotta call out a demo from today. I think it was Katja had a 3D visualization of the solar system, like WebGL solar system, you could talk to. That is one of the coolest conference demos I've ever seen.[00:50:33] Simon Willison: That was so convincing. I really want the code. I really want the code for that to get put out there. I'll talk[00:50:39] Michelle Pokrass: to the team. I think we can[00:50:40] Simon Willison: probably

FundraisingAI
Episode 38 - Making AI Accessible, Fun, and Impactful for Nonprofits with Tim Lockie


Oct 2, 2024 · 30:25


Tim Lockie is the CEO of The Human Stack, an organization dedicated to helping nonprofits and teams integrate technology, including AI, into their workflows. With a background in nonprofit work and tech implementation, Tim is passionate about making technology accessible and practical, especially in data-driven decision-making and fundraising.

In this episode, Tim shares the origin and development of the "Human Stack," a concept born from his nonprofit background and experience implementing Salesforce in 2009. Noticing that only 5% of organizations effectively use data for decision-making, Tim coined the term "Human Stack" to describe the collaborative nature of human teams, mirroring how tech stacks work together.

Tim recounts the launch of the Human Stack in 2022 after rebranding his previous firm, initially hesitant about incorporating AI. However, later, he recognized the importance of AI and created courses to help organizations embrace technology, including AI for fundraising. He elaborates on his "AI for Anyone" course, designed to make AI accessible and enjoyable, using a "trampoline model" to ease users into the experience.

Tim explains how the ethos of responsible AI, influenced by the fundraising AI framework, shapes his training, helping nonprofits practically integrate AI. As the episode progresses, Tim shares his excitement for the upcoming fundraising AI summit and the release of his new AI Playbook, a course aimed at helping teams leverage AI in their workflows. Throughout, the conversation emphasizes the shared goal of making AI a force for good, especially in the nonprofit sector.

HIGHLIGHTS
[02:06] Origin Story of the Human Stack
[09:09] AI for Anyone and Its Evolution
[20:54] AI for Anyone Course Details
[25:19] Integration of Fundraising AI Ethos
[26:19] Thoughts on Upcoming Summit and Tim's AI Playbook

TIPS AND TOOLS TO IMPLEMENT TODAY
Help team members see themselves as AI users, regardless of their experience level.
Join the 5% of organizations using data to drive decisions—start small with key metrics.
Overcome reluctance; AI can enhance both human and organizational efficiency.
Educate your team on AI and technology to reduce fear and foster adoption.
Use engaging models to make AI learning enjoyable.
Regularly collect feedback to ensure your courses or tools are user-friendly.
Build an ethos of responsible AI by integrating it into your training programs.
Participate in AI-focused summits or workshops to stay connected with innovations.
Help teams integrate AI in everyday workflows with practical, leader-oriented guides.

Resources:
Connect with Tim:
LinkedIn: linkedin.com/in/tlockie/
Mentioned in the episode:
AI for Anyone partner page: hubs.ly/Q02QRRVs0
Connect with Nathan and Scott:
LinkedIn (Nathan): linkedin.com/in/nathanchappell/
LinkedIn (Scott): linkedin.com/in/scott-rosenkrans
Website: fundraising.ai/

Hi 5
Making AI Transformations Stick

Hi 5

Play Episode Listen Later Sep 26, 2024 17:02


Organizational AI adoption has dramatically increased over the years, largely due to generative AI, leaving organizations to navigate challenges while rapidly scaling and adopting new technologies. In this episode, Jen is joined by Vynamic's Chris Wuenschel & Frank Longo, as well as AI Mobilization Offering Lead Todd Quartiere, to discuss their experiences with different factors that can make or break AI transformation programs.
To learn more about implementing AI within your organization, check out the AI At Vynamic page on Vynamic's website for more information.
Podcast Tags: health, healthcare, healthcare news, generative AI, innovation, responsible use, ChatGPT, tech transformation, digital transformation, life sciences
Source Links: The state of AI in early 2024: Gen AI adoption spikes and starts to generate value
Panel – Jen Burke, Todd Quartiere, Chris Wuenschel, Frank Longo
Research & Production – Everly Petruzzelli
Recording & Editing – Mike Liberto, Rachel Skonecki
For additional discussion, please contact us at TrendingHealth.com or share a voicemail at 1-888-VYNAMIC.

The Six Five with Patrick Moorhead and Daniel Newman
How HP is Making AI Real for the Enterprise - Six Five On The Road

The Six Five with Patrick Moorhead and Daniel Newman

Play Episode Listen Later Sep 24, 2024 11:12


On this episode of our Six Five On The Road series from HP Imagine 2024, Daniel Newman and Patrick Moorhead are joined by HP's Guayente Sanmartin, Senior Vice President and Division President, Commercial Systems and Displays Solutions, for a conversation on how HP is revolutionizing the enterprise world with artificial intelligence. Guayente Sanmartin delves into the innovative realm of AI PCs and shares how HP is making AI a tangible reality that is reshaping our work.
Their discussion covers:
HP's journey since launching AI PCs and the key learnings, challenges, and customer pain points encountered
How HP is leveraging AI to create new and valuable experiences for PC users
Insights into the commercial products HP launched, emphasizing their impact and innovation
What sets HP's newest next-gen AI PC apart from competitors, highlighting unique features and capabilities

The Daily Crunch – Spoken Edition
Waymo robotaxis in Austin & Atlanta soon, Spotify and parent-managed accounts for kids, Meta making AI info less visible, and Alternative app stores allowed on Apple iPad soon

The Daily Crunch – Spoken Edition

Play Episode Listen Later Sep 16, 2024 8:54


Uber riders in Austin and Atlanta will be able to hail a Waymo robotaxi through the app in early 2025 as part of an expanded partnership between the two companies. Waymo's autonomous vehicles have been available on the Uber app in Phoenix since October 2023. Following the moves of other tech giants, Spotify announced on Friday it's introducing in-app parental controls in the form of "managed accounts" for listeners under the age of 13; the new feature will initially be offered as a pilot program for parents or guardians on a Family plan in select markets, including Denmark, New Zealand, and others. With Meta making the AI info label harder to find, it may become easier for users to be deceived by content that was edited with AI, especially as editing tools become more and more advanced. It was a matter of time, but Apple is going to allow third-party app stores on the iPad starting next week, on September 16. This change will occur with the next major release of iPadOS, the operating system specifically designed for the iPad. Learn more about your ad choices. Visit podcastchoices.com/adchoices

AI Automation: Making AI Work for You

Play Episode Listen Later Aug 21, 2024 114:27


Nathan presents a comprehensive AI automation framework developed over three years, applicable to process automation and generative AI integration. This episode of The Cognitive Revolution offers critical insights for businesses looking to leverage AI effectively. Learn about choosing AI tasks, understanding work deeply, and optimizing AI performance in the wake of the GPT-4 fine-tuning launch.
Slide Deck
Waymark's case study on GPT-3 fine-tuning
Apply to join over 400 founders and execs in the Turpentine Network
RECOMMENDED PODCAST:
1 to 100 | Hypergrowth Companies Worth Joining
Every week we sit down with the founder of a hyper-growth company you should consider joining. Our goal is to give you the inside story behind breakout, early-stage companies potentially worth betting your career on. This season, discover how the founders of Modal Labs, Clay, Mercor, and more built their products, cultures, and companies.
Apple: https://podcasts.apple.com/podcast/id1762756034
Spotify: https://open.spotify.com/show/70NOWtWDY995C8qDqojxGw
History 102
Every week, creator of WhatifAltHist Rudyard Lynch and Erik Torenberg cover a major topic in history in depth -- in under an hour.
Spotify: https://open.spotify.com/show/36Kqo3BMMUBGTDo1IEYihm
Apple: https://podcasts.apple.com/us/podcast/history-102-with-whatifalthists-rudyard-lynch-and/id1730633913
YouTube: https://www.youtube.com/@History102-qg5oj
SPONSORS:
Building an enterprise-ready SaaS app? WorkOS has got you covered with easy-to-integrate APIs for SAML, SCIM, and more. Join top startups like Vercel, Perplexity, Jasper & Webflow in powering your app with WorkOS. Enjoy a free tier for up to 1M users! Start now at https://bit.ly/WorkOS-Turpentine-Network
Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds, offers one consistent price, and nobody does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive
The Brave Search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference, all while remaining affordable with developer-first pricing. Integrating the Brave Search API into your workflow translates to more ethical data sourcing and more human-representative data sets. Try the Brave Search API for free for up to 2,000 queries per month at https://bit.ly/BraveTCR
Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off at https://www.omneky.com/
80,000 Hours offers free one-on-one career advising for Cognitive Revolution listeners aiming to tackle global challenges, especially in AI. They connect high-potential individuals with experts, opportunities, and personalized career plans to maximize positive impact. Apply for a free call at https://80000hours.org/speak to accelerate your career and contribute to solving pressing AI-related issues.
CHAPTERS:
(00:00:00) About the Show
(00:00:22) Sponsor: WorkOS
(00:01:22) About the Episode
(00:04:20) Introduction to AI Automation
(00:13:43) Current AI Capabilities
(00:17:24) Sponsors: Oracle | Brave
(00:19:28) 3 Ways to Work with AI
(00:25:10) Choosing Work for AI to Do
(00:31:18) Sponsors: Omneky | 80,000 Hours
(00:33:20) What AI Can Do vs. Can't Do
(00:47:11) Understanding the Work
(00:52:58) Documenting the Work
(01:00:50) Optimising AI Performance
(01:02:14) Prompt Engineering Gold Standards
(01:07:05) Optimising Information: Retrieval Augmented Generation
(01:09:29) Optimising AI Behaviour: Fine-tuning AI Models
(01:50:38) Outro
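The episode's chapter on "Optimising Information: Retrieval Augmented Generation" refers to a common pattern worth sketching: retrieve the most relevant documents for a query, then prepend them to the prompt sent to a language model. This is a minimal illustrative sketch only, not code from the episode; the toy word-overlap scoring stands in for a real embedding model, and all names (`score`, `retrieve`, `build_prompt`) are hypothetical.

```python
# Minimal retrieval-augmented generation (RAG) sketch: pick the document
# most relevant to the query, then build a grounded prompt around it.
# Toy bag-of-words overlap stands in for real embedding similarity.

def score(query: str, doc: str) -> int:
    """Count how many query words appear in the document (toy relevance)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents that best match the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble a grounded prompt: retrieved context first, question last."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

docs = [
    "Fine-tuning adjusts model weights on task-specific examples.",
    "Retrieval augmented generation injects external documents into the prompt.",
]
print(build_prompt("What does retrieval augmented generation do?", docs))
```

In practice the prompt string would be sent to whatever model the team uses; the framework discussed in the episode treats retrieval as one lever (optimizing the information the model sees) alongside fine-tuning (optimizing its behavior).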

The MAD Podcast with Matt Turck
Making AI Work: Fine-Tuning, Inference, Memory | Sharon Zhou, CEO, Lamini

The MAD Podcast with Matt Turck

Play Episode Listen Later Jul 25, 2024 43:55


In this episode, we reconnect with Sharon Zhou, co-founder and CEO of Lamini, to dive deep into the ever-evolving world of enterprise AI. We discuss how the AI hype is evolving and what enterprises are doing to stay ahead, break down the different players in the inference market, explore how Memory Tuning reduces hallucinations in AI models, and consider the role of agents in enterprise AI and the challenges of making them real-time and reliable.
Lamini
Website - https://www.lamini.ai
Twitter - https://x.com/laminiai
Sharon Zhou
LinkedIn - https://www.linkedin.com/in/zhousharon
Twitter - https://x.com/realsharonzhou
FIRSTMARK
Website - https://firstmark.com
Twitter - https://twitter.com/FirstMarkCap
Matt Turck (Managing Director)
LinkedIn - https://www.linkedin.com/in/turck/
Twitter - https://twitter.com/mattturck
(00:00) Intro
(02:18) The state of the AI market in July 2024
(10:51) What is Lamini?
(11:43) What is inference?
(15:36) GPU shortage in the enterprise
(18:06) AMD vs Nvidia
(22:10) What is Lamini's final product?
(25:30) What is Memory Tuning?
(29:01) What is LoRA?
(32:39) More on Memory Tuning
(35:51) Sharon's perspective on AI agents
(40:01) What is next for Lamini?
(41:54) Reasoning vs pure compute in AI
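One chapter covers LoRA (low-rank adaptation), the fine-tuning technique that Memory Tuning builds on. The core idea can be sketched in a few lines: instead of updating a full weight matrix W, train two small factors A and B and use W + B @ A at inference. This is a generic illustration of LoRA, not Lamini's implementation; the dimensions and variable names are illustrative.

```python
# LoRA sketch: adapt a frozen weight matrix W with trainable low-rank
# factors B (d_out x r) and A (r x d_in), where r << min(d_out, d_in).
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 6, 8, 2

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weights
A = rng.normal(size=(r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))               # zero init: adapter starts as a no-op

W_eff = W + B @ A                      # adapted weights used at inference
assert np.allclose(W_eff, W)           # zero-init B leaves W unchanged

# Trainable parameters drop from d_out*d_in to r*(d_out + d_in).
full, lora = d_out * d_in, r * (d_out + d_in)
print(full, lora)  # 48 vs 28 in this toy case; the gap widens with scale
```

The zero initialization of B is the standard trick: training starts from the unmodified pretrained model and the adapter only gradually shifts its behavior.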

All JavaScript Podcasts by Devchat.tv
Making AI Accessible for Developers - JSJ 641

All JavaScript Podcasts by Devchat.tv

Play Episode Listen Later Jul 23, 2024 85:26


In this captivating episode, they dive deep into the world of AI, hands-on learning, and the evolving landscape of development with Steve Sewell from Builder.io. They explore the misconceptions about needing deep AI expertise to build AI products and highlight the importance of rapid iteration and practical experience. They discuss everything from the financial implications of AI and strategies to manage cost and value, to innovative tools like Micro Agent that are shaping the future of code generation and web design. Steve shares his insights on optimizing AI use in development, the rapid advancements in AI capabilities, and the critical role of integrating AI to enhance productivity without the fear of replacing jobs. Join them as they unravel the complexities of AI, its real-world applications, and how developers can leverage these powerful tools to stay ahead in a competitive market. Plus, stay tuned for personal updates, user interface innovations, and a glimpse into the future of AI-driven design processes at Builder.io.
Socials
LinkedIn: Steve Sewell
Picks
Charles - Mysterium | Board Game
Charles - TrainingPeaks | Trusted By the World's Best
Steve - Introducing Micro Agent
Steve - BuilderIO/micro-agent
Become a supporter of this podcast: https://www.spreaker.com/podcast/javascript-jabber--6102064/support.

CISO-Security Vendor Relationship Podcast
How About This? Only Attack the Endpoints We Configured

CISO-Security Vendor Relationship Podcast

Play Episode Listen Later Jun 25, 2024 40:19


All links and images for this episode can be found on CISO Series. This week's episode is hosted by me, David Spark (@dspark), producer of CISO Series, and Andy Ellis (@csoandy), operating partner, YL Ventures. Joining us is our guest and winner of Season 2 of Capture the CISO, Russell Spitler, CEO and co-founder, Nudge Security.
In this episode:
The Gordian knot of EDR
Can we keep up with patching?
Making AI practical
Standardization or granularity?
Thanks to our podcast sponsor, ThreatLocker! ThreatLocker® is a global leader in Zero Trust endpoint security, offering cybersecurity controls to protect businesses from zero-day attacks and ransomware. ThreatLocker operates with a default-deny approach to reduce the attack surface and mitigate potential cyber vulnerabilities. To learn more and start your free trial, visit ThreatLocker.com.

M&A Science
How to Make AI Practical in M&A

M&A Science

Play Episode Listen Later Jun 17, 2024 37:36


Artificial intelligence has taken the world by storm, and there seems to be no way of stopping it. Every industry in the world has adopted AI, and M&A is no different. The integration of AI is revolutionizing how deals are sourced, evaluated, and executed. In short, AI is becoming an indispensable tool for M&A professionals.
In this episode of the M&A Science Podcast, we discuss how to make AI practical in M&A, featuring two AI specialists from Boomi: Michael Bachman, Head of Research, Architecture, and AI Strategy, and Chris Cappetta, Principal Solutions Architect.
Things you will learn:
• Retrieval augmented generation
• Large language models
• Discriminative vs generative AI
• Fine-tuning
• Agents
This episode is sponsored by FirmRoom. FirmRoom provides 80% cost savings over VDRs that bill by page and delivers a far better user experience to boot. Sign up in under 2 minutes by going to https://firmroom.com
******************
Episode Timestamps
00:00 Intro
03:07 Making AI practical
04:36 Retrieval augmented generation
10:07 Large language models
13:15 Discriminative vs. generative AI
16:37 Fine-tuning
22:14 Agents
28:46 Real-life use cases of AI
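The "Agents" segment refers to a loop pattern: a model chooses a tool, the tool runs, and the result is fed back to the model until the task is done. The sketch below is a generic illustration of that loop, not anything from the episode; the rule-based `fake_model` stands in for a real LLM call, and the tool name and deal-sourcing example are invented for illustration.

```python
# Minimal agent-loop sketch: the "model" picks an action, a tool executes
# it, and the observation is appended to history until the model finishes.

def fake_model(task, history):
    """Pretend LLM: choose the next action from the task and prior results."""
    if not history:
        return ("lookup", task)       # first step: gather information
    return ("finish", history[-1])    # then: answer from what was found

TOOLS = {
    "lookup": lambda q: f"3 comparable deals found for '{q}'",
}

def run_agent(task, max_steps=5):
    history = []
    for _ in range(max_steps):        # cap steps so the loop always ends
        action, arg = fake_model(task, history)
        if action == "finish":
            return arg
        history.append(TOOLS[action](arg))
    return "step limit reached"

print(run_agent("mid-market SaaS targets"))
# → 3 comparable deals found for 'mid-market SaaS targets'
```

A production agent replaces `fake_model` with a model call and adds error handling around each tool, which is where the reliability challenges discussed in the episode tend to surface.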

EpochTV
China Making AI Chatbot Based on CCP Leader's Theory

EpochTV

Play Episode Listen Later May 24, 2024 22:47


Beijing is broadcasting the theories of Chinese regime leader Xi Jinping far and wide. Now, they're bringing his words to a new medium: AI chatbots. But unlike other generative AI that make mistakes, the Xi-Bot is “always right.” China is launching its biggest military drills in a year around Taiwan as a “strong punishment,” ramping up pressure on Taiwan's new president just days after he took office. Reports say one of the world's top microchip companies—and its biggest buyer—could shut down their machines remotely, in case China invades Taiwan. Leaders of South Korea, China, and Japan will meet next week in Seoul for their first three-way talks since 2019. What's on the agenda? ⭕️ Watch in-depth videos based on Truth & Tradition at Epoch TV

Daily Tech News Show
Nvidia Making AI Moves - DTNS 4729

Daily Tech News Show

Play Episode Listen Later Mar 19, 2024 32:14


Meta is reducing the fee for EU users to €5.99 a month plus €4 for additional accounts to assuage regulators' concerns about economic coercion. Plus we share the highlights from Nvidia's keynote at its GTC developer conference.
Starring Tom Merritt, Sarah Lane, Roger Chang, Joe.
Link to the Show Notes.