Podcasts about LLMs

  • 2,084 podcasts
  • 5,952 episodes
  • 40m average duration
  • 6 daily new episodes
  • Latest episode: Jul 11, 2025

POPULARITY

2017–2024 (chart)


Latest podcast episodes about LLMs

Slate Star Codex Podcast
Now I Really Won That AI Bet

Jul 11, 2025 · 15:51


In June 2022, I bet a commenter $100 that AI would master image compositionality by June 2025. DALL-E 2 had just come out, showcasing the potential of AI art. But it couldn't follow complex instructions; its images only matched the “vibe” of the prompt. For example, here were some of its attempts at “a red sphere on a blue cube, with a yellow pyramid on the right, all on top of a green table”. At the time, I wrote:

I'm not going to make the mistake of saying these problems are inherent to AI art. My guess is a slightly better language model would solve most of them…for all I know, some of the larger image models have already fixed these issues. These are the sorts of problems I expect to go away with a few months of future research.

Commenters objected that this was overly optimistic. AI was just a pattern-matching “stochastic parrot”. It would take a deep understanding of grammar to get a prompt exactly right, and that would require some entirely new paradigm beyond LLMs. For example, from Vitor:

Why are you so confident in this? The inability of systems like DALL-E to understand semantics in ways requiring an actual internal world model strikes me as the very heart of the issue. We can also see this exact failure mode in the language models themselves. They only produce good results when the human asks for something vague with lots of room for interpretation, like poetry or fanciful stories without much internal logic or continuity. Not to toot my own horn, but two years ago you were naively saying we'd have GPT-like models scaled up several orders of magnitude (100T parameters) right about now (https://readscottalexander.com/posts/ssc-the-obligatory-gpt-3-post#comment-912798). I'm registering my prediction that you're being equally naive now. Truly solving this issue seems AI-complete to me. I'm willing to bet on this (ideas on operationalization welcome).

So we made a bet!

All right. My proposed operationalization of this is that on June 1, 2025, if either of us can get access to the best image generating model at that time (I get to decide which), or convince someone else who has access to help us, we'll give it the following prompts:

1. A stained glass picture of a woman in a library with a raven on her shoulder with a key in its mouth
2. An oil painting of a man in a factory looking at a cat wearing a top hat
3. A digital art picture of a child riding a llama with a bell on its tail through a desert
4. A 3D render of an astronaut in space holding a fox wearing lipstick
5. Pixel art of a farmer in a cathedral holding a red basketball

We generate 10 images for each prompt, just like DALL-E 2 does. If at least one of the ten images has the scene correct in every particular on 3/5 prompts, I win, otherwise you do. Loser pays winner $100, and whatever the result is I announce it on the blog (probably an open thread). If we disagree, Gwern is the judge.

Some image models of the time refused to draw humans, so we agreed that robots could stand in for humans in pictures that required them. In September 2022, I got some good results from Google Imagen and announced I had won the three-year bet in three months. Commenters yelled at me, saying that Imagen still hadn't gotten them quite right and my victory declaration was premature. The argument blew up enough that Edwin Chen of Surge, an “RLHF and human LLM evaluation platform”, stepped in and asked his professional AI data labelling team. Their verdict was clear: the AI was bad and I was wrong.

Rather than embarrass myself further, I agreed to wait out the full length of the bet and re-evaluate in June 2025. The bet is now over, and official judge Gwern agrees I've won. Before I gloat, let's look at the images that got us here. https://www.astralcodexten.com/p/now-i-really-won-that-ai-bet
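For readers who want the scoring rule spelled out, here is a minimal sketch of the bet's arithmetic as described above: five prompts, ten images per prompt, and a win if at least one image is judged fully correct on at least three of the five prompts. The judgments data and helper names are hypothetical; the actual bet was settled by human judges (ultimately Gwern), not by code.

```python
# Sketch of the bet's scoring rule: 5 prompts x 10 images each.
# judgments[prompt] is a list of booleans: True means a human judge found
# that image correct "in every particular". All data here is hypothetical.

def prompt_passes(image_judgments: list[bool]) -> bool:
    """A prompt counts as solved if at least one of its images is fully correct."""
    return any(image_judgments)

def bet_won(judgments: dict[str, list[bool]], needed: int = 3) -> bool:
    """Scott wins if at least `needed` of the prompts are solved."""
    solved = sum(prompt_passes(images) for images in judgments.values())
    return solved >= needed

if __name__ == "__main__":
    judgments = {
        "stained glass woman/raven/key":   [False] * 9 + [True],
        "oil painting man/cat/top hat":    [True] + [False] * 9,
        "child riding llama with bell":    [False] * 10,
        "astronaut holding fox/lipstick":  [False] * 8 + [True, False],
        "farmer in cathedral/basketball":  [False] * 10,
    }
    print(bet_won(judgments))  # True: 3 of the 5 prompts have a fully correct image
```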

CHURN.FM
E295 | The Future of SEO in an AI-First world with Kevin Indig | HyperGrowth Partners

Jul 10, 2025 · 41:56


Today on the show we have Kevin Indig, Growth Advisor and Partner at HyperGrowth Partners, a stage accelerator that invests time into early-stage startups to help them achieve and sustain rapid growth post-Series A. Kevin is also the former Director of SEO at Shopify and VP of SEO & Content at G2, with a rich background advising top startups like Glean, Toast, and Reddit.

In this episode, Kevin breaks down how AI is fundamentally changing the world of SEO. We explore why 2024 might have been the last year of peak organic traffic, how AI is creating higher-intent traffic that converts better, and why brand trust matters more than ever in search results. Kevin also dives into how LLMs use search engine data to ground responses, why traditional content strategies are losing relevance, and how modern companies should pivot toward first-party data, robust documentation, and strong communities. We also discuss the evolving role of Chrome, why Reddit is having a moment, and why retention—not just clicks—is becoming the ultimate SEO metric.

As usual, I'm excited to hear what you think of this episode, and if you have any feedback, I would love to hear from you. You can email me directly on andrew@churn.fm. Don't forget to follow us on Twitter.

Key Resources: Website, LinkedIn, Growth Memo, HyperGrowth Partners, Google I/O, OpenAI, ChatGPT, Reddit, Perplexity, PostHog

Churn FM is sponsored by Vitally, the all-in-one Customer Success Platform.

Category Visionaries
Lucas Mendes, CEO of Revelo: $48.7 Million Raised to Build the Backbone of Tech Talent for the Age of AI

Jul 10, 2025 · 26:19


Revelo has emerged as a critical player in the intersection of talent acquisition and AI development, transforming from a Latin American job board to a comprehensive tech talent platform serving both traditional staffing needs and the booming human data market for LLM training. With $48.7 million raised and a network of 400,000 pre-vetted engineers, Revelo has positioned itself at the forefront of two massive trends: remote work acceleration and the AI revolution. In this episode, Lucas Mendes, Co-founder and CEO of Revelo, shares the company's evolution from a simple recruiting platform to becoming the backbone of tech talent for the age of AI, including their pivot during COVID that led to 6x growth in three years and their recent expansion into human data services for hyperscalers training large language models.

Topics Discussed:
• Revelo's origin story and pivot from a Brazilian job board to a nearshoring platform during COVID
• The dramatic revenue swings during the pandemic, from an 80% revenue drop to overwhelming demand
• The emergence of human data for LLM training as a new business line, growing from 0% to 25% of revenue in 18 months
• Building specialized platforms for code annotation and LLM training that differ from general-purpose data labeling tools
• The consulting layer required to serve hyperscalers and why workforce suppliers alone can't compete
• Revelo's M&A strategy, with five acquisitions completed and plans for more transformational deals
• The long-term vision of becoming the go-to destination for AI implementation talent across all engagement models

GTM Lessons for B2B Founders:
• Respond to market signals rather than forcing your vision: Lucas admits that both major pivots, the COVID nearshoring boom and the LLM training opportunity, came from inbound customer demand rather than proactive strategic decisions. He emphasizes being responsive to market signals: "I wish I could claim credit for that, but it was again, us responding to inbound interest from clients." B2B founders should remain agile and let customer demand guide major strategic decisions rather than forcing predetermined visions onto the market.
• Build deep expertise to differentiate from commodity suppliers: When serving hyperscalers, Revelo learned that being just a "workforce supplier" wasn't enough. Lucas explains: "There's too many of these companies out there for there to be any meaningful demand for somebody who's just a workforce supplier. You need to have done this before." The company invested heavily in developing consulting capabilities and domain expertise. B2B founders entering competitive markets should identify what specialized knowledge or capabilities will differentiate them from commodity providers.
• Leverage your founding team for new market exploration: When building the LLM training business, Lucas deployed his senior leadership team rather than hiring external executives. He explains: "You need to have a founding team for that phase... it's exhausting, it's excruciating, it's stressful, but it is very much an early stage startup." B2B founders should use their core team's entrepreneurial skills when exploring new markets, even if it means senior executives taking on hands-on roles outside their typical functions.
• Treat enterprise sales as a repeatable process across teams: Lucas discovered that selling to different teams within the same hyperscaler required starting from scratch each time. His solution: "Build a core corpus of sales collateral, like case studies and materials that they can socialize internally." B2B founders selling to large enterprises should systematize their sales process and create reusable materials that can be adapted for different internal stakeholders, treating each team as a separate sales opportunity.
• Use transparency to build trust with sophisticated buyers: When dealing with hyperscalers, Lucas found that honesty about capabilities was crucial: "You have to be really clear about what you can do and what you cannot... Some of these companies are saying, hey, we want to do projects where you'll do human data for code, but also some human data for video. We have to say no to that." B2B founders serving sophisticated enterprise clients should be transparent about their limitations, as attempting to oversell capabilities will ultimately damage relationships with buyers who can easily detect gaps in expertise.

Sponsors: Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io | The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co

Don't Miss: New Podcast Series — How I Hire: Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role. Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM

Joey Pinz Discipline Conversations
#659 Pax8 Beyond-Shlomi Gian: ✉️ Smarter Email Security for MSPs: The Block & Coach Revolution

Jul 9, 2025 · 38:09


In this episode recorded live at Pax8 Beyond 2025, Joey Pinz speaks with Shlomi Gian, a tech veteran driving the next chapter of email security with Inky. From anti-phishing to international expansion, Shlomi offers candid insight into how Inky is reshaping the MSP channel with a channel-only model and a coaching-first security approach.

We cover Inky's evolution from an enterprise solution to a platform built exclusively for MSPs, leveraging AI, QR-code detection, and banner-based user feedback to block threats and educate users in real time. Shlomi unpacks the “block and coach” model, Inky's proprietary LLM deployment, and why they run GenAI in-house for both privacy and cost control.

He also shares his view on pricing transparency, vendor consolidation, and why MSPs should stop buying “50-cent security.” It's a conversation packed with real value for tech leaders navigating modern threats.

Marketing Trends
The Secret To Scaling From $20 Million to $200 Million ARR (Extremely Fast)

Jul 9, 2025 · 65:03


Harmony Anderson didn't wait 90 days to make an impact at Superhuman — she launched a major campaign in her first five weeks. Harmony Anderson, Head of Marketing and Growth Product at Superhuman, breaks down why moving fast (and strategically) matters more than playing it safe, especially in high-growth startups. We dig into what it really takes to scale from $20M to $200M ARR, how to enter the enterprise market without abandoning your early adopters, and why traditional attribution models are falling behind in the age of AI and influencers. If you're navigating go-to-market pivots, building modern marketing infrastructure, or just trying to avoid another forgettable brand campaign — this episode is packed with insights. And congratulations to the Superhuman team for being acquired by Grammarly! Key Moments: 00:00 Harmony Anderson on Moving Upmarket and Scaling 01:35 Welcome to Marketing Trends 02:05 Harmony Anderson's Career Journey 08:33 Fast-Paced Marketing Strategies 13:20 Navigating the Dark Funnel 15:59 Balancing Brand and Attribution 16:47 The Role of Influencers in Modern Marketing 19:27 Positioning in the AI Market 24:41 Moving Up Market: Challenges and Strategies 35:06 Vision Setting and Company Evolution 36:11 Superhuman's Ambitious Roadmap 37:02 Unified Productivity and AI Integration 44:18 Scaling Operations for Rapid Growth 48:09 Innovative Tools and Harmony's Tech Stack 51:13 AI in Content Creation and Marketing 56:02 The Resurgence of Webinars 01:02:14 Superhuman for Startups Program Mission.org is a media studio producing content alongside world-class clients. Learn more at mission.org.

CFO Thought Leader
1112: The Value of Seeing Finance from the Front Lines | Nathan Winters, CFO, Zebra Technologies

Jul 9, 2025 · 36:51


When Nathan Winters led a supply chain team earlier in his career, he noticed something that would shape his leadership style: “The credibility you get by the operating leaders when they see you out in the field… is incredibly important.” Whether visiting customers, walking a manufacturing floor, or sitting in on operating meetings, Winters found that physical presence fostered trust—and that trust gave finance a real seat at the table.

Today, as CFO of Zebra Technologies, Winters continues to emphasize business partnership grounded in proximity to operations. In the four years since he stepped into the CFO seat, Zebra has weathered post-COVID surges, global supply chain disruptions, and enterprise restructuring. The company's product footprint—often “hidden in plain sight,” from grocery checkout scanners to hospital wristbands—has expanded to include robotics and machine vision, Winters tells us.

He's also broadened his own remit, taking on IT and cybersecurity leadership, including oversight of both the CIO and CISO. In that time, Zebra has reduced China-based production from 80% to 30% and introduced new AI capabilities like “Zebra Companion” to automate shelf management for retailers. Internally, Zebra launched a private LLM instance—“Z-GPT”—to streamline tasks from expense report queries to sales presentations.

“Your job isn't to just close the books,” Winters tells us. “If you're not analyzing… finding new ways to think about things… you're getting passed up.” At Zebra, finance is not just a control function—it's a strategic force embedded in every operational stride.

B The Way Forward
Maybe These Systems Shouldn't Exist - AI Researcher Dr. Timnit Gebru Asks the Questions Others Won't - Part II

Jul 9, 2025 · 44:49


Welcome to B The Way Forward Interludes - a series of conversations that don't necessarily fit in our regular season, but are just too good to not share. Dr. Timnit Gebru is one of the leading voices in AI research calling for more responsible and inclusive AI systems. Or, as she puts it - “a technological future that serves our communities instead of one that is used for surveillance, warfare, and the centralization of power by Silicon Valley.” As the Founder and Executive Director of the Distributed Artificial Intelligence Research Institute (DAIR), Timnit isn't just calling out the dangers of Big Tech's approach to AI - she and her colleagues are working to forge new approaches and new ways to imagine what our future can look like. In part 2 of our conversation, Timnit shares why the “Distributed” aspect of DAIR is so important, the kinds of projects they undertake that could never be done within the halls of Big Tech or Academia, and why the current method of designing LLMs flies in the face of core engineering principles. Plus, you all had a lot of questions about AI at our recent Responsible AI Forum by AnitaB.org - and Timnit has some answers for you.

At AnitaB.org, our mission is to enable and equip women technologists with the tools, resources, and knowledge they need to thrive. Through innovative programs and initiatives, we empower women to chart new paths, better prepared to lead, advance, and achieve equitable compensation. Because when women succeed, they uplift their communities and redefine success on their terms, both professionally and personally.

Connect with AnitaB.org: Instagram - @anitab_org | Facebook - /anitab.0rg | LinkedIn - /anitab-org | On the web - anitab.org

Our guests contribute to this podcast in their personal capacity. The views expressed in this interview are their own and do not necessarily represent the views of Anita Borg Institute for Women and Technology or its employees (“AnitaB.org”). AnitaB.org is not responsible for and does not verify the accuracy of the information provided in the podcast series. The primary purpose of this podcast is to educate and inform. This podcast series does not constitute legal or other professional advice or services.

B The Way Forward Is… Hosted and Executive Produced by Brenda Darden Wilkerson. Produced by Avi Glijansky. Associate Produced by Kelli Kyle. Sound design and editing by Ryan Hammond. Mixing and mastering by Julian Kwasneski. Additional producing help from Faith Krogulecki. Operations coordination for AnitaB.org by Quinton Sprull. Creative Director for AnitaB.org is Deandra Coleman. Executive Produced by Dominique Ferrari, Stacey Book, and Avi Glijansky for Frequency Machine. Photo of Brenda Darden Wilkerson by Mandisa Media Productions.

For more ways to be the way forward, visit AnitaB.org

VoxTalks
S8 Ep34: How good are LLMs at doing our jobs?

Jul 9, 2025 · 18:03


In the second of a special series recorded live at the PSE-CEPR Policy Forum 2025, we ask: how good is AI at doing real-world job tasks? And how can we measure its capability without resorting to technical benchmarks that may not mean much in the workplace? Ever since we all became aware of large language models, scientists have been attempting to evaluate how good LLMs are at performing expert tasks. The results of those tests can show us whether LLMs can be useful complements to our work, or even replacements for us, as many fear. But setting or grading a test to decide whether an LLM can do a problem-solving job task, rather than solve an abstract problem, isn't easy to do. Maria del Rio-Chanona, a computer scientist at UCL, tells Tim Phillips about her innovative work-in-progress, in which she asks an LLM to set a tricky workplace exam, then tells another LLM to take the test – which a third LLM evaluates.
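A minimal sketch of the three-model setup described above: one LLM writes a workplace exam, a second attempts it, and a third grades the attempt. The complete() helper, model names, and prompts are hypothetical stand-ins, not the actual pipeline from the research discussed in the episode.

```python
# Hypothetical three-stage pipeline: examiner -> candidate -> grader.
# complete(model, prompt) is a stand-in for any chat-completion API call.

def complete(model: str, prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider of choice")

def run_workplace_exam(occupation: str,
                       examiner="model-a", candidate="model-b", grader="model-c") -> str:
    # 1. One model sets a realistic, tricky job task for the occupation.
    exam = complete(examiner, f"Write one difficult, realistic work task for a {occupation}. "
                              "Include the context and the deliverable expected.")
    # 2. A second model attempts the task.
    answer = complete(candidate, f"You are a {occupation}. Complete this task:\n{exam}")
    # 3. A third model grades the attempt against the task description.
    verdict = complete(grader, "Grade the answer below on a 0-10 scale and justify briefly.\n"
                               f"TASK:\n{exam}\n\nANSWER:\n{answer}")
    return verdict
```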

tech 45'
Teaser - Romain Huet (OpenAI)

Jul 9, 2025 · 6:57


It's the most highly valued startup in the world! $300 billion: that's what the parent company of ChatGPT is worth in the summer of 2025.

Let's Know Things
Pay Per Crawl

Jul 8, 2025 · 17:56


This week we talk about crawling, scraping, and DDoS attacks. We also discuss Cloudflare, the AI gold rush, and automated robots.

Recommended Book: Annie Bot by Sierra Greer

Transcript

Alongside the many, and at times quite significant political happenings, the many, and at times quite significant military conflicts, and the many, at times quite significant technological breakthroughs—medical and otherwise—flooding the news these days, there's also a whole lot happening in the world of AI, in part because this facet of the tech sector is booming, and in part because while still unproven in many spaces, and still outright flubbing in others, this category of technology is already having a massive impact on pretty much everything, in some cases for the better, in some for the worse, and in some for better and worse, depending on your perspective.

Dis- and misinformation, for instance, is a bajillion times easier to create, distribute, and amplify, and the fake images and videos and audio being shared, alongside all the text that seems to be from legit people, but which may in fact be the product of AI run by malicious actors somewhere, is increasingly convincing and difficult to distinguish from real-deal versions of the same.

There's also a lot more of it, and the ability to very rapidly create pretty convincing stuff, and to very rapidly flood all available communication channels with that stuff, is fundamental to AI's impact in many spaces, not just the world of propaganda and misinformation. At times quantity has a quality all of its own, and that very much seems to be the case for AI-generated content as a whole.

Other AI- and AI-adjacent tools are being used by corporations to improve efficiency, in some cases helping automated systems like warehouse robots assist humans in sorting and packaging and otherwise getting stuff ready to be shipped, as is the case with Amazon, which is almost to the point that they'll have more robots in their various facilities than human beings. Amazon robots are currently assisting with about 75% of all the company's global deliveries, and a lot of the menial, repetitive tasks human workers would have previously done are now being accomplished by robotics systems they've introduced to their shipping chain.

Of course, not everyone is thrilled about this turn of events: while it's arguably wonderful that robots are being subbed-in for human workers who would previously have had to engage in the sorts of repetitive, physical tasks that can lead to chronic physical issues, in many cases this seems to be a positive side-benefit of a larger effort to phase-out workers whenever possible, saving the company money over time by employing fewer people.

If you can employ 100 people using robots instead of 1000 people sans-robots, depending on the cost of operation for those robots, that might save you money because each person, augmented by the efforts of the robots, will be able to do a lot more work and thus provide more value for the company.
Sometimes this means those remaining employees will be paid more, because they'll be doing more highly skilled labor, working with those bots, but not always.This is a component of this shift that for a long while CEOs were dancing around, not wanting to spook their existing workforce or lose their employees before their new robot foundation was in place, but it's increasingly something they're saying out loud, on investor calls and in the press, because making these sorts of moves are considered to be good for a company's outlook: they're being brave and looking toward a future where fewer human employees will be necessary, which implies their stock might be currently undervalued, because the potential savings are substantial, at least in theory.And it is a lot of theory at this point: there's good reason to believe that theory is true, at least to some degree, but we're at the very beginning phases of this seeming transition, and many companies that jumped too quickly and fired too many people found themselves having to hire them back, in some cases at great expense, because their production faltered under the weight of inferior automated, often AI-driven alternatives.Many of these tools simply aren't as reliable as human employees yet. And while they will almost certainly continue to become more powerful and capable—a recent estimate suggested that the current wave of large-language-model-based AI systems, for instance, are doubling in power every 7 months or so, which is wild—speculations about what that will mean, and whether that trend can continue, vary substantially, depending on who you talk to.Something we can say with relative certainty right now, though, is that most of these models, the LLM ones, at least, not the robot-driving ones, were built using content that was gathered and used in a manner that currently exists in a legal gray area: it was scraped and amalgamated by these systems so that they could be trained on a corpus of just a silly volume of human output, much of that output copyrighted or otherwise theoretically not-useable for this purpose.What I'd like to talk about today is a new approach to dealing with the potentially illegal scraping of copyrighted information by and for these systems, and a proposed new pricing scheme that could allow the creators of the content being scraped in this way to make some money from it.—Web scraping refers to the large-scale crawling of websites and collection of data from those websites.There are a number of methods for achieving this, including just manually visiting a bunch of websites and copying and pasting all the content from those sites into a file on your computer. But the large-scale version of that is something many companies, including entities like Google, do, and for various purposes: Google crawls the web to map it, basically, and then applies all sorts of algorithms and filters in order to build their search results. 
Other entities crawl the web to gather data, to figure out connections between different sorts of sites, and/or to price ads they sell on their own network of sites or the products they sell, and which they'd like to sell for a slightly lower price than their competition.Web scraping can be done neutrally, then, your website scraped by Google so it can add your site to its search results, the data it collects telling its algorithms where you should be in those results based on keywords and who links to your site and other such things, but it can also be done maliciously: maybe someone wants to duplicate your website and use it to get unsuspecting victims to install malware on their devices. Or maybe someone wants to steal your output: your writings, your flight pricing data, and so on.If you don't want these automated web-scrapers to use your data, or to access some portion or all of your site, you can put a file called robots.txt in your site's directory, and the honorable scrapers will respect that request: the googles of the world, for instance, have built their scrapers so that they look for a robots.txt file and read its contents before mapping out your website structure and soaking up your content to decide where to put you in their search results.Not all scrapers respect this request: the robots.txt standard relies on voluntary compliance. There's nothing forcing any scraper, or the folks running these scrapers, to look for or honor these files and what they contain.That said, we've reached a moment at which many scrapers are not just looking for keywords and linkbacks, but also looking to grab basically everything on a website so that the folks running the scrapers can ingest those images and that writing and anything else that's legible to their software into the AI systems they're training.As a result, many of these systems were trained on content that is copyrighted, that's owned by the folks who wrote or designed or photographed it, and that's created a legal quagmire that court systems around the world are still muddling through.There have been calls to update the robots.txt standard to make it clear what sorts of content can be scraped for AI-training purposes and what cannot, but the non-compulsory, not-legally-backed nature of such requests seem to make robots.txt an insufficient vehicle for this sort of endeavor: the land-grab, gold-rush nature of the AI industry right now suggests that most companies would not honor these requests, because it's generally understood that they're all trying to produce the most powerful AI possible as fast as possible, hoping to be at or near the top before the inevitable shakeout moment at which point most of these companies will go bankrupt or otherwise cease to exist.That's important context for understanding a recent announcement by internet infrastructure company Cloudflare, that said they would be introducing something along the lines of an enforceable robots.txt file for their customers called pay per crawl.Cloudflare is US-based company that provides all sorts of services, from domain registration to firewalls, but they're probably best known for their web security services, including their ability to block DDoS, or distributed denial of service attacks, where a hacker or other malicious actor will lash a bunch of devices they've compromised, through malware or otherwise, together, into what's called a botnet, and use those devices to send a bunch of traffic to a website or other web-based entity all at once.This can result in so 
much traffic, think millions or billions of visits per second—a recent attack that Cloudflare successfully ameliorated sent 7.3 terabytes per second against one of their customers, for instance—it can result in so much traffic that the targeted website becomes inaccessible, sometimes for long periods of time.So Cloudflare provides a service where they're basically like a firewall between a website and the web, and when something like a DDoS attack happens, Cloudflare's services go into action and the targeted website stays up, rather than being taken down.As a result of this and similarly useful offerings, Cloudflare security services are used by more than 19% of all websites on the internet, which is an absolutely stunning figure considering how big the web is these days—there are an estimated 1.12 billion websites, around 200 million of which are estimated to be active as of Q1 2025.All that said, Cloudflare recently announced a new service, called pay per crawl, that would use that same general principle of putting themselves between the customer and the web to actively block AI web scrapers that want to scrape the customer's content, unless the customer gives permission for them to do so.Customers can turn this service on or off, but they can also set a price for scraping their content—a paywall for automated web-scrapers and the AI companies running them, basically.The nature of these payments is currently up in the air, and it could be that content creators and owners, from an individual blogger to the New York Times, only earn something like a penny per crawl, which could add up to a lot of money for the Times but only be a small pile of pennies for the blogger.It could also be that AI companies don't play ball with Cloudflare and instead they do what many tech analysts expect them to do: they come up with ways to get around Cloudflare's wall, and then Cloudflare makes the wall taller, the tech companies build taller ladders, and that process just spirals ad infinitum.This isn't a new idea, and the monetization aspect of it is predicated on some early web conceptions of how micropayments might work.It's also not entirely clear whether the business model would make sense for anyone: the AI companies have long complained they would go out of business if they had to pay anything at all for the content they're using to train their AI models, big companies like the New York Times face possible extinction if everything they pay a lot of money to produce is just grabbed by AI as soon as it goes live, those AI companies making money from that content they paid nothing to make, and individual makers-of-things face similar issues as the Times, but without the leverage to make deals with individual AI companies, like the Times has.It also seems that AI chatbots are beginning to replace traditional search engines, so it's possible that anyone who uses this sort of wall will be excluded from the search of the future. 
Those whose content is gobbled up and used without payment will be increasingly visible, their ideas and products and so on more likely to pop up in AI-based search results, while those who put up a wall may be less visible; so there's a big potential trade-off there for anyone who decides to use this kind of paywall, especially if all the big AI companies don't buy into it.

Like everything related to AI right now, then, this is a wild west space, and it's not at all clear which concepts will win out and become the new default, and which will disappear almost as soon as they're proposed.

It's also not clear if and when the larger economic forces underpinning the AI gold rush will collapse, leaving just a few big players standing and the rest imploding, Dotcom Bubble style, which could, in turn, completely undo any defaults that are established in the lead-up to that moment, and could make some monetization approaches no longer feasible, while others, including possibly paywalls and micropayments, suddenly more thinkable and even desirable.

Show Notes:
https://www.wired.com/story/pro-russia-disinformation-campaign-free-ai-tools/
https://www.wsj.com/tech/amazon-warehouse-robots-automation-942b814f
https://www.wsj.com/tech/ai/ai-white-collar-job-loss-b9856259
https://w3techs.com/technologies/details/cn-cloudflare
https://www.demandsage.com/website-statistics/
https://blog.cloudflare.com/defending-the-internet-how-cloudflare-blocked-a-monumental-7-3-tbps-ddos/
https://en.wikipedia.org/wiki/Web_scraping
https://en.wikipedia.org/wiki/Robots.txt
https://developers.cloudflare.com/ai-audit/features/pay-per-crawl/use-pay-per-crawl-as-site-owner/set-a-pay-per-crawl-price/
https://techcrunch.com/2025/07/01/cloudflare-launches-a-marketplace-that-lets-websites-charge-ai-bots-for-scraping/
https://www.nytimes.com/2025/07/01/technology/cloudflare-ai-data.html
https://creativecommons.org/2025/06/25/introducing-cc-signals-a-new-social-contract-for-the-age-of-ai/
https://arstechnica.com/tech-policy/2025/07/pay-up-or-stop-scraping-cloudflare-program-charges-bots-for-each-crawl/
https://www.cloudflare.com/paypercrawl-signup/
https://www.cloudflare.com/press-releases/2025/cloudflare-just-changed-how-ai-crawlers-scrape-the-internet-at-large/
https://digitalwonderlab.com/blog/the-ai-paywall-era-a-turning-point-for-publishers-or-just-another-cat-and-mouse-game

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe
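To make the mechanics above concrete, here is a minimal sketch of a polite crawler: it consults robots.txt (the voluntary standard discussed in the episode) via Python's standard-library robotparser before fetching, and backs off when it receives an HTTP 402 Payment Required response, the status Cloudflare has described using for pay per crawl. The target URL and user-agent string are placeholders, and real pay-per-crawl negotiation happens through Cloudflare's own infrastructure rather than this bare status check.

```python
# Sketch only: a crawler that honors robots.txt and backs off when a
# bot paywall responds with HTTP 402. Placeholder URL and user agent.
import urllib.request
import urllib.error
from urllib.robotparser import RobotFileParser

USER_AGENT = "ExampleResearchBot/0.1"  # hypothetical crawler name

def polite_fetch(url: str) -> str | None:
    base = "/".join(url.split("/", 3)[:3])          # scheme://host
    robots = RobotFileParser()
    robots.set_url(base + "/robots.txt")
    robots.read()                                   # voluntary standard: we choose to honor it
    if not robots.can_fetch(USER_AGENT, url):
        print("robots.txt disallows this path; skipping")
        return None
    req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except urllib.error.HTTPError as err:
        if err.code == 402:                         # Payment Required: pay-per-crawl style block
            print("402 Payment Required: this site charges bots to crawl")
            return None
        raise

if __name__ == "__main__":
    polite_fetch("https://example.com/some-article")  # placeholder target
```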

Transform Your Workplace
Beyond the Chatbot: What AI Mascots Reveal About the Next Wave of Business Innovation

Jul 8, 2025 · 36:39


In this conversation, Brandon Laws sits down with Fritz Brumder, serial entrepreneur and co-founder of WiseOx, to explore how “AI mascots” are helping organizations move from one-off ChatGPT experiments to secure, organization-wide automation. Fritz explains why legacy chatbots struggled, how large-language-model (LLM) technology unlocks far richer support and HR workflows, and what an 80/20 security model looks like in practice. They also discuss practical first steps for leaders, new skills employees should cultivate, and a realistic one-, two-, and ten-year outlook on AI's impact on talent and trust at work. If you're AI-curious but cautious about compliance, this episode offers a roadmap—and a few reality checks—to get started.

Key Timestamps:
00:00 Welcome, show purpose, and Xenium HR sponsor read
02:00 Introducing Fritz Brumder and the concept of “AI mascots” at WiseOx
03:30 Fritz's startup journey, timing the leap from video tech to GenAI
07:00 Chatbots 1.0 vs. LLM-powered assistants—what changed and why it matters
11:00 Training an AI on SOPs, policies, and compliance docs: HR use cases
15:00 Internal roll-outs first, then customer-facing support and sales assistants
18:00 Beyond chat: email, voice, multilingual interaction, and task automation
22:00 Data privacy & the 80/20 security model (private vector store + LLM)
26:00 Pricing, five-step implementation path, and change-management tips
30:00 Future-proof skills: clear problem-solving, prompt design, creative thinking
33:00 1-, 2-, and 10-year AI forecasts—why pace feels fast but adoption is a decade-long shift
34:30 Workflow examples: AI-driven applicant screening and onboarding
35:30 How to connect with Fritz and learn more about WiseOx; episode close

A QUICK GLIMPSE INTO OUR PODCAST
Podcast: Transform Your Workplace, sponsored by Xenium HR
Host: Brandon Laws
“The Transform Your Workplace podcast is your go-to source for the latest workplace trends, big ideas, and time-tested methods straight from the mouths of industry experts and respected thought-leaders.” — Brandon Laws

About Xenium HR: Xenium HR is on a mission to transform workplaces by providing expert outsourced HR and payroll services for small and medium-sized businesses. With a people-first approach, Xenium helps organizations create thriving work environments where employees feel valued and supported. From navigating compliance to enhancing workplace culture, Xenium offers tailored solutions that empower growth and simplify HR. Whether managing employee relations, payroll processing, or implementing impactful training programs, Xenium is the trusted partner businesses rely on to elevate their workplace experience.

Connect with Brandon Laws: LinkedIn: https://www.linkedin.com/in/lawsbrandon | Instagram: https://www.instagram.com/lawsbrandon | About: https://xeniumhr.com/about-xenium/meet-the-team/brandon-laws

Connect with Xenium HR: Website: https://xeniumhr.com/ | LinkedIn: https://www.linkedin.com/company/xenium-hr | Facebook: https://www.facebook.com/XeniumHR | Twitter: https://twitter.com/XeniumHR | Instagram: https://www.instagram.com/xeniumhr | YouTube: https://www.youtube.com/user/XeniumHR
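The "80/20 security model (private vector store + LLM)" mentioned in the timestamps maps onto a familiar retrieval pattern: keep documents and their embeddings on your own infrastructure, and send only the few retrieved snippets to an external model. The sketch below is generic, with hypothetical embed() and ask_llm() helpers; it is not WiseOx's implementation.

```python
# Generic sketch of the "private vector store + external LLM" pattern:
# documents stay local; only retrieved snippets leave the security boundary.
import numpy as np

def embed(text: str) -> np.ndarray:
    raise NotImplementedError("plug in any embedding model")

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in any chat-completion API")

class PrivateVectorStore:
    def __init__(self, docs: list[str]):
        self.docs = docs
        self.vectors = np.stack([embed(d) for d in docs])  # stays on your infrastructure

    def top_k(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        sims = self.vectors @ q / (np.linalg.norm(self.vectors, axis=1) * np.linalg.norm(q))
        return [self.docs[i] for i in np.argsort(-sims)[:k]]

def answer(store: PrivateVectorStore, question: str) -> str:
    context = "\n---\n".join(store.top_k(question))   # only these snippets are sent out
    return ask_llm(f"Answer using only this internal context:\n{context}\n\nQuestion: {question}")
```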

Gradient Dissent - A Machine Learning Podcast by W&B
How DeepL Built a Translation Powerhouse with AI with CEO Jarek Kutylowski

Jul 8, 2025 · 42:42


In this episode of Gradient Dissent, Lukas Biewald talks with Jarek Kutylowski, CEO and founder of DeepL, an AI-powered translation company. Jarek shares DeepL's journey from launching neural machine translation in 2017 to building custom data centers, and how small teams can not only take on big players like Google Translate but win.

They dive into what makes translation so difficult for AI, why high-quality translations still require human context, and how DeepL tailors models for enterprise use cases. They also discuss the evolution of speech translation, compute infrastructure, training on curated multilingual datasets, hallucinations in models, and why DeepL avoids fine-tuning for each individual customer. It's a fascinating behind-the-scenes look at one of the most advanced real-world applications of deep learning.

Timestamps:
[00:00:00] Introducing Jarek and DeepL's mission
[00:01:46] Competing with Google Translate & LLMs
[00:04:14] Pretraining vs. proprietary model strategy
[00:06:47] Building GPU data centers in 2017
[00:08:09] The value of curated bilingual and monolingual data
[00:09:30] How DeepL measures translation quality
[00:12:27] Personalization and enterprise-specific tuning
[00:14:04] Why translation demand is growing
[00:16:16] ROI of incremental quality gains
[00:18:20] The role of human translators in the future
[00:22:48] Hallucinations in translation models
[00:24:05] DeepL's work on speech translation
[00:28:22] The broader impact of global communication
[00:30:32] Handling smaller languages and language pairs
[00:32:25] Multi-language model consolidation
[00:35:28] Engineering infrastructure for large-scale inference
[00:39:23] Adapting to evolving LLM landscape & enterprise needs
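The timestamps mention how DeepL measures translation quality without going into detail. As a point of reference, here is a minimal sketch of one common automatic proxy, a character n-gram F-score in the spirit of chrF; it is purely illustrative and is not DeepL's evaluation method, which also relies on human assessment.

```python
# Minimal chrF-style score: character n-gram precision/recall F-score between
# a machine translation and a human reference. Illustrative only; production
# evaluation typically uses libraries such as sacrebleu or learned metrics.
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    text = " ".join(text.split())          # normalize whitespace
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf_like(hypothesis: str, reference: str, max_n: int = 4, beta: float = 2.0) -> float:
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        overlap = sum((hyp & ref).values())          # matched n-gram counts
        precisions.append(overlap / max(sum(hyp.values()), 1))
        recalls.append(overlap / max(sum(ref.values()), 1))
    p, r = sum(precisions) / max_n, sum(recalls) / max_n
    if p + r == 0:
        return 0.0
    return (1 + beta**2) * p * r / (beta**2 * p + r)   # recall-weighted F-score

print(chrf_like("The cat sits on the mat.", "The cat is sitting on the mat."))
```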

Experiencing Data with Brian O'Neill
173 - Pendo's CEO on Monetizing an Analytics SAAS Product, Avoiding Dashboard Fatigue, and How AI is Changing Product Work

Jul 8, 2025 · 43:49


Todd Olson joins me to talk about making analytics worth paying for and relevant in the age of AI. The CEO of Pendo, an analytics SAAS company, Todd shares how the company evolved to support a wider audience by simplifying dashboards, removing user roadblocks, and leveraging AI to both generate and explain insights. We also talked about the roles of product management at Pendo. Todd views AI product management as a natural evolution for adaptable teams and explains how he thinks about hiring product roles in 2025. Todd also shares how he thinks about successful user adoption of his product around “time to value” and “stickiness” over vanity metrics like time spent.    Highlights/ Skip to: How Todd has addressed analytics apathy over the past decade at Pendo (1:17) Getting back to basics and not barraging people with more data and power (4:02) Pendo's strategy for keeping the product experience simple without abandoning power users (6:44) Whether Todd is considering using an LLM (prompt-based) answer-driven experience with Pendo's UI (8:51) What Pendo looks for when hiring product managers right now, and why (14:58) How Pendo evaluates AI product managers, specifically (19:14) How Todd Olson views AI product management compared to traditional software product management (21:56) Todd's concerns about the probabilistic nature of AI-generated answers in the product UX (27:51) What KPIs Todd uses to know whether Pendo is doing enough to reach its goals (32:49)   Why being able to tell what answers are best will become more important as choice increases (40:05)   Quotes from Today's Episode “Let's go back to classic Geoffrey Moore Crossing the Chasm, you're selling to early adopters. And what you're doing is you're relying on the early adopters' skill set and figuring out how to take this data and connect it to business problems. So, in the early days, we didn't do anything because the market we were selling to was very, very savvy; they're hungry people, they just like new things. They're getting data, they're feeling really, really smart, everything's working great. As you get bigger and bigger and bigger, you start to try to sell to a bigger TAM, a bigger audience, you start trying to talk to the these early majorities, which are, they're not early adopters, they're more technology laggards in some degree, and they don't understand how to use data to inform their job. They've never used data to inform their job. There, we've had to do a lot more work.” Todd (2:04 - 2:58) “I think AI is amazing, and I don't want to say AI is overhyped because AI in general is—yeah, it's the revolution that we all have to pay attention to. Do I think that the skills necessary to be an AI product manager are so distinct that you need to hire differently? No, I don't. That's not what I'm seeing. If you have a really curious product manager who's going all in, I think you're going to be okay. Some of the most AI-forward work happening at Pendo is not just product management. Our design team is going crazy. And I think one of the things that we're seeing is a blend between design and product, that they're always adjacent and connected; there's more sort of overlappiness now.” Todd (22:41 - 23:28) “I think about things like stickiness, which may not be an aggregate time, but how often are people coming back and checking in? 
And if you had this companion or this agent that you just could not live without, and it caused you to come into the product almost every day just to check in, but it's a fast check-in, like, a five-minute check-in, a ten-minute check-in, that's pretty darn sticky. That's a good metric. So, I like stickiness as a metric because it's measuring [things like], “Are you thinking about this product a lot?” And if you're thinking about it a lot, and like, you can't kind of live without it, you're going to go to it a lot, even if it's only a few minutes a day. Social media is like that. Thankfully I'm not addicted to TikTok or Instagram or anything like that, but I probably check it nearly every day. That's a pretty good metric. It gets part of my process of any products that you're checking every day is pretty darn good. So yeah, but I think we need to reframe the conversation not just total time. Like, how are we measuring outcomes and value, and I think that's what's ultimately going to win here.” Todd (39:57) Links LinkedIn: https://www.linkedin.com/in/toddaolson/  X: https://x.com/tolson  todd@pendo.io 
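Todd discusses stickiness qualitatively; one common (though not Pendo-specific) way to put a number on it is the DAU/MAU ratio. The sketch below assumes you already have per-user activity dates and illustrates that formulation, not how Pendo computes the metric.

```python
# Stickiness as DAU/MAU: of the users active at some point in the last 30 days,
# what share were active today? Illustrative formulation with made-up data.
from datetime import date, timedelta

def stickiness(activity: dict[str, set[date]], today: date) -> float:
    window_start = today - timedelta(days=29)
    mau = {u for u, days in activity.items() if any(window_start <= d <= today for d in days)}
    dau = {u for u, days in activity.items() if today in days}
    return len(dau) / len(mau) if mau else 0.0

activity = {
    "ana":   {date(2025, 7, 7), date(2025, 7, 8)},
    "bruno": {date(2025, 6, 20)},
    "carla": {date(2025, 7, 8)},
}
print(stickiness(activity, date(2025, 7, 8)))  # 2 of 3 monthly actives were active today
```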

Training Data
Mapping the Mind of a Neural Net: Goodfire's Eric Ho on the Future of Interpretability

Jul 8, 2025 · 47:07


Eric Ho is building Goodfire to solve one of AI's most critical challenges: understanding what's actually happening inside neural networks. His team is developing techniques to understand, audit and edit neural networks at the feature level. Eric discusses breakthrough results in resolving superposition through sparse autoencoders, successful model editing demonstrations and real-world applications in genomics with Arc Institute's DNA foundation models. He argues that interpretability will be critical as AI systems become more powerful and take on mission-critical roles in society. Hosted by Sonya Huang and Roelof Botha, Sequoia Capital Mentioned in this episode: Mech interp: Mechanistic interpretability, list of important papers here Phineas Gage: 19th century railway engineer who lost most of his brain's left frontal lobe in an accident. Became a famous case study in neuroscience. Human Genome Project: Effort from 1990-2003 to generate the first sequence of the human genome which accelerated the study of human biology Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs Zoom In: An Introduction to Circuits: First important mechanistic interpretability paper from OpenAI in 2020 Superposition: Concept from physics applied to interpretability that allows neural networks to simulate larger networks (e.g. more concepts than neurons) Apollo Research: AI safety company that designs AI model evaluations and conducts interpretability research Towards Monosemanticity: Decomposing Language Models With Dictionary Learning. 2023 Anthropic paper that uses a sparse autoencoder to extract interpretable features; followed by Scaling Monosemanticity Under the Hood of a Reasoning Model: 2025 Goodfire paper that interprets DeepSeek's reasoning model R1 Auto-interpretability: The ability to use LLMs to automatically write explanations for the behavior of neurons in LLMs Interpreting Evo 2: Arc Institute's Next-Generation Genomic Foundation Model. (see episode with Arc co-founder Patrick Hsu) Paint with Ember: Canvas interface from Goodfire that lets you steer an LLM's visual output  in real time (paper here) Model diffing: Interpreting how a model differs from checkpoint to checkpoint during finetuning Feature steering: The ability to change the style of LLM output by up or down weighting features (e.g. talking like a pirate vs factual information about the Andromeda Galaxy) Weight based interpretability: Method for directly decomposing neural network parameters into mechanistic components, instead of using features The Urgency of Interpretability: Essay by Anthropic founder Dario Amodei On the Biology of a Large Language Model: Goodfire collaboration with Anthropic
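Several of the ideas above (superposition, sparse autoencoders, feature steering) are easier to picture with a toy example. Below is a minimal sparse autoencoder in the spirit of the dictionary-learning papers listed in the show notes: reconstruct activations through an overcomplete hidden layer with an L1 penalty so that each input activates only a few features. The dimensions, penalty weight, and random activations are arbitrary placeholders; this is not Goodfire's or Anthropic's code.

```python
# Toy sparse autoencoder over model activations: an overcomplete dictionary
# with an L1 penalty so each activation vector uses only a few features.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 256, d_features: int = 2048):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, x):
        features = torch.relu(self.encoder(x))   # sparse, non-negative feature activations
        return self.decoder(features), features

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
l1_weight = 1e-3                                  # arbitrary sparsity strength

for _ in range(100):                              # stand-in for a real training loop
    acts = torch.randn(64, 256)                   # placeholder for residual-stream activations
    recon, feats = sae(acts)
    loss = nn.functional.mse_loss(recon, acts) + l1_weight * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```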

Deep Questions with Cal Newport
Ep. 360: One-Page Productivity

Jul 7, 2025 · 57:29


Trying to stick to complicated time management systems without any breaks can eventually lead to burnout. But if you stop organizational efforts altogether, your life can become a stressful mess. In this episode, Cal taps the wisdom of an elite running coach to devise what he calls one-page productivity: a minimum time management system, meant to be run for limited periods to help you recharge, but that also maintains just enough organization that you can avoid disaster. He argues such maintenance modes should be an important part of any time management practice. Cal then answers listener questions and concludes by reviewing the books he read in June 2025.

Below are the questions covered in today's episode (with their timestamps). Get your questions answered by Cal! Here's the link: bit.ly/3U3sTvo

Video from today's episode: youtube.com/calnewportmedia

Deep Dive: One-Page Productivity [1:52]
How do you approach decisions when you're torn between two reasonable options? [18:17]
How can I navigate teaching with phone-addicted teenagers? [22:06]
Have you considered using LLMs to assist in your writing? [28:03]
How many “thinking” walks do you take each week? [32:31]
Do you have any recommendations for learning new material outside of the structured framework of a course? [35:00]
CASE STUDY: A son explains his parents' lifestyle engineering [37:21]
CALL: Setting up workflows as a manager [41:09]

JUNE BOOKS: The 5 Books Cal Read in June 2025 [52:05]
The Magic of Code (Samuel Arbesman)
In the Swarm (Byung-Chul Han)
The Fear Index (Robert Harris)
The Explorer's Gene (Alex Hutchinson)
Skywalking (Dale Pollock)

Links:
Buy Cal's latest book, “Slow Productivity,” at calnewport.com/slow
Get a signed copy of Cal's “Slow Productivity” at peoplesbooktakoma.com/event/cal-newport/
Cal's monthly book directory: bramses.notion.site/059db2641def4a88988b4d2cee4657ba
nytimes.com/athletic/6453809/2025/06/27/bruce-bochy-walking-exercise-creativity/

Thanks to our Sponsors:
cozyearth.com (Use code “DEEP”)
oracle.com/deepquestions
vanta.com/deepquestions
shopify.com/deep

Thanks to Jesse Miller for production, Jay Kerstens for the intro music, and Mark Miles for mastering.

Unleashed - How to Thrive as an Independent Professional
613. Brian Stollery, AlphaSense's AI Market Intel for Consulting

Jul 7, 2025 · 51:44


Show Notes: Brian Stollery talks about AlphaSense, an information provider that independent consultants and boutique firms are using to gain an edge over those who rely on chat GPT or consumer LLM tools. AlphaSense is built for this kind of work, pulling in verified content such as industry reports, broker research filings, earnings calls, expert calls, news, and internal research and internal content. It layers this with market-leading AI functionality that can read and synthesize all of it to deliver consulting-grade insights at scale. AlphaSense Explained Brian clarifies that AlphaSense is not primarily an expert network like AlphaSights, but rather a market company and enterprise intelligence search engine for the AI generation. It offers the depth and breadth of authoritative data that would be obtained from a legacy research platform with the intuitive user experience of modern AI tools. The value of AlphaSense lies in the deep, authoritative content set that is the foundation of AlphaSense, along with the speed and accuracy of the AI that allows users to quickly surface relevant insights. Brian also talks about the major categories of sources of proprietary information that feed into AlphaSense. The AlphaSense Platform The AlphaSense platform features an index where users can go to different things, such as portfolio monitors, research topics, expert insights, news, risk signals on consumer tech growth investment strategy, events, company documents, and talent job executive movements. The dashboard includes eight or nine widgets that provide a list of seven or eight articles on various topics. These articles are sourced from various sources, such as news articles or interviews with experts. The platform also has over 200,000 free recorded, transcribed expert calls, which are added to the library for analysis by the AI. How AlphaSense Gathers Information The interviewers are usually conducted by-side analysts, corporate users, and experts in respective fields. They work with corporate development teams and head of corporate strategy to conduct these interviews. The platform believes that a rising tide lifts all boats, and every expert call that happens throughout the AlphaSense is published back in the platform to further enhance and grow its library of expert calls from subject matter experts who are currently active in their industry.   AlphaSense Use Cases In management consulting, AlphaSense may not be suitable for calls that would be better suited to AlphaSights where the information is sensitive or should have restricted access.  However, the use case for AlphaSense is for commercial due diligence for private equity, where it allows users to get up to speed for engagement and quickly search across benchmark expert perspectives. This allows them to bolster their expertise within the management consulting space. AlphaSense is an institutional grade content engine that consolidates information from various sources, including expert calls, news, research reports, broker research, and more. It offers over 6000 vetted business and market news sources and trade journals, most of which require paywalls. AlphaSense allows users to bypass these paywalls and provides real-time insights from over 700 partners.   The AlphaSense Dashboard The dashboard includes relevant documents related to executive movements, risk signals, growth, and investment strategies. Users can explore the dashboard by searching for trigger words related to their watchlist of consumer tech companies. 
The AI can then pull relevant documents, such as expert insights, event transcripts, press releases, and news, to provide valuable insights for business development or due diligence. The Executive Search Function The document search module within AlphaSense allows users to get forensic insights from relevant documents, such as executive search, talent, and hiring practices. The AI can also generate summary responses, which are useful for top-tier consulting use cases. However, the AI may sometimes make a guess or hallucination if an answer is not available. This is why the Big Three and Big Four rely on AlphaSense for their consulting use cases. The AlphaSense Research Tool The AlphaSense generative search tool is a research analyst team in a box. The tool is designed to answer macro business questions, such as market size or pricing trends. Brian checks McKinsey, Bain and BCG's performance in 2025, including their revenue, talent, hiring, and growth areas. The AI agent breaks down these questions into subquestions and finds 3000 documents across the content library. It then extracts documents from expert calls, press releases, investor relations presentations, research reports, and sustainability reports. The AI outputs a summary of the documents. The tool is particularly useful for understanding the performance of consulting firms like McKinsey Bain and BCG. Quality Sources and Quantitative Data AlphaSense provides bullet points on McKinsey, revenue, growth, talent, and hiring, with links to expert calls and other sources of data. The AI outputs are deep linked and cited to the source, ensuring accuracy. For instance, McKinsey Sciences for Growth, a 2025 focus, integrates tech-enabled capabilities and AI. BCG reported $13.5 billion in 2024 revenue, achieving 10% global growth and expanding its workforce to 33,000 employees. AlphaSense also has sentence-level citations, ensuring every sentence is deep linked and cited to its source. AlphaSense uses various models from partners like open AI, sonnet four, and Gemini 2.5, all grounded in high-quality, relevant documents. The tool's intelligence selects the best model based on the use case, whether it's reasoning-based or quantitative or qualitative. The AI is a comprehensive market-leading library of authoritative content that consultants care about. Modes of Research and Meeting Prep for Management Consultants Brian shares the typical use cases for management consultants using generative search platforms. He highlights two modes: think longer and deep research. Brian used generative search to prepare for a meeting with a client at a mid-sized consulting firm, focusing on digital strategy. The AI summarized transcripts, expert calls, earnings calls, and press releases from iHeart, highlighting the company's focus on technology, digitization, and AI-enabled automation as the key to cost savings and digital revenue acceleration. The platform also offers an iPhone app for on-the-go access to insights. The AI analyzed bullet points and planned insights on every section, creating a comprehensive competitive intelligence report. The report includes chatter on core service offerings, engagement models, pricing structures, sector specialization, news partnerships, partnerships, and tech bets.    AlphaSense's Generative Grid Brian talks about using AlphaSense's generative grid, which is a generative AI-powered spreadsheet to aggregate documents and interrogate them. 
This is useful for tracking executive compensation and performance components for target accounts. The grid lets consulting users analyze past performance and understand the current climate. Another use case is connecting consulting, transformation, and strategic advisory services to key performance indicators, such as free cash flow, human capital, strategic objectives, or EBITDA. By attaching value drivers directly to performance components, consultants can focus on adjusted EBITDA growth, cost optimization, integration execution, adjusted ROTC, and revenue growth tied to executive compensation.

AlphaSense for Understanding Business Development
Brian explains that the use cases and projects of consultants using AlphaSense vary, but one major use case is business development: identifying companies' propensity for M&A or divestitures, signaled by changes in management or new strategic initiatives. AlphaSense also offers a deal scanner for M&A consultants looking at acquisitions or private equity deals across a portfolio of companies or industries, and supports due diligence work such as meeting prep, company research, trend analysis, market assessment, client benchmarking, and sentiment analysis.

AlphaSense's Access to Information Providers
AlphaSense has access to SEC filings, newspapers, trade journals, investment bank coverage, and reports. It also connects to other information providers such as Crunchbase, Capital IQ, PitchBook, and Morningstar, and the coverage overlaps: if a company's revenue or employee count is in Crunchbase, it can be accessed via AlphaSense.

AlphaSense vs. Capital IQ
The conversation turns to the differences between AlphaSense and Capital IQ, a financial reporting platform. Capital IQ offers valuable structured data, is great for downloading industry reports, serves as a strategic database of financials and filings, and is useful for importing statistical or financial models into Excel; however, investment banking reports must be downloaded one by one, and it is not possible to search across all content sets at once. AlphaSense, on the other hand, is an end-to-end intelligence engine that provides decision-ready insights across billions of data points.

Timestamps:
03:23: Overview of AlphaSense's Content and AI Capabilities
07:27: Detailed Walkthrough of AlphaSense Dashboard
12:38: Exploring Different Categories of Information Sources
16:36: Generative Search and Deep Research Capabilities
26:05: Use Cases for Management Consultants
42:50: Comparison with Other Information Providers
49:22: Pricing and Accessibility

Links:
Website: https://www.alpha-sense.com/
Recent feature on AlphaSense on CNBC with more insight on our Deep Research differentiation: https://www.youtube.com/watch?v=0HJ8Egisg-w
If folks want to reach out directly for their own personalized demo:
Email: bstollery@alpha-sense.com
LinkedIn: https://www.linkedin.com/in/briancity/

Unleashed is produced by Umbrex, which has a mission of connecting independent management consultants with one another, creating opportunities for members to meet, build relationships, and share lessons learned. Learn more at www.umbrex.com.

Flying Cat Marketing Podcast
Aligning marketing, sales and CS with Janet Jaiswal

Flying Cat Marketing Podcast

Play Episode Listen Later Jul 7, 2025 23:40


Welcome to Executive Conversations, where we dig into the gritty realities of leading modern marketing teams. In this episode, Maeva Cifuentes sits down with Janet Jaiswal, chief marketing officer at Blueshift and long-time marketing advisor. Janet unpacks why her team now owns 90 percent of pipeline, how she killed the vanity of MQLs in favour of BANT-qualified “stage 1” leads, and what it really takes to align marketing, sales and CS around the same revenue target. She explains the hidden CRM and training work that comes with that shift, the dangers of chasing efficiency before effectiveness, and why AI-powered search is rewriting the SEO rulebook. Janet also shares practical steps for surfacing in LLM results—from tweaking robots.txt to publishing Q&A-style content—and reveals how Blueshift is already closing deals that start with ChatGPT queries.

airhacks.fm podcast with adam bien
TornadoVM: The Need for GPU Speed

airhacks.fm podcast with adam bien

Play Episode Listen Later Jul 6, 2025 59:41


An airhacks.fm conversation with Michalis Papadimitriou (@mikepapadim) about: starting with Java 8, first computer experiences with Pentium 2, Doom 2 and Microsoft Paint, university introduction to object-oriented programming using Objects First and the BlueJ IDE, Monte Carlo simulations for financial portfolio optimization in Java, porting Java applications to OpenCL for GPU acceleration achieving 20x speedup, working at Huawei on GPU hardware, writing unit tests as introduction to TornadoVM, working on FPGA integration and Graal compiler optimizations, experience at OctoAI startup doing AI compiler optimizations for TensorFlow and PyTorch models, understanding model formats evolution from ONNX to GGUF, standardization of LLM inference through Llama models, implementing GPU-accelerated Llama 3 inference in pure Java using TornadoVM, achieving 3-6x speedup over CPU implementations, supporting multiple models including Mistral and working on Qwen 3 and DeepSeek, differences between models mainly in normalization layers, GGUF becoming quasi-standard for LLM model distribution, TornadoVM's Consume and Persist API for optimizing GPU data transfers, challenges with OpenCL deprecation on macOS and plans for Metal backend, importance of developer experience and avoiding Python dependencies for Java projects, runtime and compiler optimizations for GPU inference, kernel fusion techniques, upcoming integration with langchain4j, potential of Java ecosystem with GraalVM and Project Panama FFM for high-performance inference, advantages of Java's multi-threading capabilities for inference workloads Michalis Papadimitriou on Twitter: @mikepapadim
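For readers unfamiliar with TornadoVM, here is a minimal, hedged sketch of what offloading a plain Java loop to the GPU looks like with its TaskGraph API. It is based on TornadoVM's public documentation, not the Llama 3 code discussed in the episode; class and method names follow the 1.x docs but may differ between releases, the SAXPY kernel and class name are illustrative choices, and recent versions also offer off-heap array types in place of the primitive arrays used here for brevity.

```java
// Illustrative TornadoVM sketch (API shape per the TornadoVM 1.x docs; not code from the episode).
import uk.ac.manchester.tornado.api.ImmutableTaskGraph;
import uk.ac.manchester.tornado.api.TaskGraph;
import uk.ac.manchester.tornado.api.TornadoExecutionPlan;
import uk.ac.manchester.tornado.api.annotations.Parallel;
import uk.ac.manchester.tornado.api.enums.DataTransferMode;

public class SaxpyExample {

    // Plain Java kernel; @Parallel marks the loop TornadoVM may compile for the GPU backend.
    public static void saxpy(float alpha, float[] x, float[] y, float[] out) {
        for (@Parallel int i = 0; i < x.length; i++) {
            out[i] = alpha * x[i] + y[i];
        }
    }

    public static void main(String[] args) {
        int n = 1 << 20;
        float[] x = new float[n];
        float[] y = new float[n];
        float[] out = new float[n];
        java.util.Arrays.fill(x, 1.0f);
        java.util.Arrays.fill(y, 2.0f);

        TaskGraph graph = new TaskGraph("s0")
                // Inputs are copied to the device only on the first execution...
                .transferToDevice(DataTransferMode.FIRST_EXECUTION, x, y)
                .task("t0", SaxpyExample::saxpy, 2.0f, x, y, out)
                // ...while the result is copied back after every execution.
                .transferToHost(DataTransferMode.EVERY_EXECUTION, out);

        ImmutableTaskGraph snapshot = graph.snapshot();
        TornadoExecutionPlan plan = new TornadoExecutionPlan(snapshot);
        plan.execute();

        System.out.println("out[0] = " + out[0]); // expect 4.0
    }
}
```

The transfer modes above are the simplest form of the data-movement control the episode touches on; keeping buffers resident on the device instead of copying them repeatedly is exactly the problem the Consume and Persist API mentioned in the conversation is aimed at, with exact method names best taken from the TornadoVM documentation.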

GigaBoots Podcasts
We Were Right about Xbox; It Was Obvious Part 2 | Big Think Dimension #330

GigaBoots Podcasts

Play Episode Listen Later Jul 4, 2025 216:22


We also watched Birdemic: https://youtu.be/Hz_8Q_3fw4Q Follow us on BlueSky! https://bsky.app/profile/gigaboots.com Podlord Song: https://youtu.be/jdkTdaNJsvs Industry Burning Down Song: https://youtu.be/6XJmalxng0Q Become a podlord or normal patron today! http://www.patreon.com/GBPodcasts RSS Feed: https://gbpods.podbean.com/ Kris' BlueSky: https://bsky.app/profile/kriswolfhe.art.social Dr. Aggro's BlueSky: https://bsky.app/profile/draggro.bsky.social Bob's BlueSky: https://bsky.app/profile/gigabob.bsky.social GB Main Patreon: http://www.patreon.com/gigaboots GB Fan Discord: https://discord.gg/XAGcxBk #PhilSpencer #Xbox #ThePlan   Tags: gigaboots,big think dimension,btd,weekly gaming news podcast,subway slammer,persona 5X,P5X,Phil Spencer,LLM,Xbox game studios,Perfect Dark,Fable,Forza Motorsport,Microsoft,Satya Nadella,Everwild,Halo Studios,The Initiative,EA,Battlefield

B2B Vault: The Payment Technology Podcast
Genetica is the Future for Business! Meet Sarah Kabakoff | Biz To Biz Podcast

B2B Vault: The Payment Technology Podcast

Play Episode Listen Later Jul 4, 2025 35:41


In this episode of the Biz To Biz Podcast, we dive into the future of business intelligence with Sarah Kabakoff, CEO of Genetica, and a driving force behind intelligent automation in modern industries. With a deep background leading revenue and product strategy at AI-driven SaaS startups, Sarah and her team have spent the last decade transforming operations in restaurants, retail, and regulated markets through cutting-edge technology. At Genetica, Sarah is leading the charge on ServeAI, a data intelligence platform that merges enriched SQL layers, LLM-powered agents, and real-time business logic. The result? A revolutionary tool that eliminates the need for static dashboards, analysts, or complex BI systems. Follow Us On These Social Media Platforms

Eat Blog Talk | Megan Porta
713: Will Google Kill Your Blog? The New Rules of SEO, Content, and Audience Growth with Ryan Robinson

Eat Blog Talk | Megan Porta

Play Episode Listen Later Jul 3, 2025 52:35


Worried about SEO and AI changes in 2025? In episode 713, Ryan Robinson shares how food bloggers can adapt their strategy in a rapidly evolving digital landscape—one shaped by search shifts, AI tools, and new audience expectations. Ryan teaches 500,000 monthly readers how to start a blog and grow a profitable online business at ryrob.com. Co-founder of RightBlogger, a suite of 80+ powerful marketing tools for bloggers and small business owners. Recovering side project addict. In this episode, you'll learn why clinging to old SEO rules could be holding you back—and how to future-proof your blog with smarter content, evolving business models, and a mindset built for long-term success. Key points discussed include: - SEO is evolving—embrace the change: Expect SEO to shift as large language models (LLMs) redefine how people find content. - Focus on quality, not just traffic: Your audience may shrink, but the visitors who do find you will be more engaged. - Experiment constantly: Try new formats, topics, and tools to keep learning and adapting your content strategy. - Video is a huge opportunity: Real, authentic video content helps build deeper relationships that AI can't replicate. - Long-tail content is powerful: Optimize for specific, niche queries to stand out in AI-driven search results. - Diversify your income streams: Don't rely solely on ad revenue—explore memberships, products, and courses. - Your blog is your LLM training guide: AI models use your blog to understand your expertise—make it count. - Mindset matters most: View change as an opportunity to grow, experiment, and redefine success on your terms. If You Loved This Episode… You'll love Episode 708: AI is NOT a Threat - How to Use It to Revolutionize Your Blogging Workflow With Hanelore Dumitrache & Mariska Ramondino Connect with Ryan Robinson Website | Instagram  

Sweat Equity Podcast® Law Smith + Eric Readinger
How To Seriously Milk Entrepreneurial Comedy Like Nathan Fielder On The Rehearsal Season 2 | ROI Podcast™ ep. 488 | Law Smith @LawSmithWorks & Eric Readinger

Sweat Equity Podcast® Law Smith + Eric Readinger

Play Episode Listen Later Jul 3, 2025 32:30 Transcription Available


ROI Podcast™ hosted by Law Smith @LawSmithWorks and Eric Readinger! ROI is... Revenue Optimization Initiative? Results Oriented Innovation? Return On Investment? Who knows? ROI Podcast™! the #1 business/comedy podcast on earth! Entrepreneurship via guest interviews and generally chewing the comedy cud... Here's the episode description no LLM or LLAMA can create: Ever wondered how Nathan Fielder blends absurd comedy with serious business lessons? In ROI Podcast™ episode 488, Law Smith (@LawSmithWorks) and Eric Readinger break down Nathan's bizarre brilliance from "The Rehearsal," exploring why comedy often tells truths traditional methods can't. Dive into unique strategies of marketing through comedy, leveraging authentic connections, and unconventional problem-solving that'll make your business (and your brain) thrive. Also, comedy business strategy, authentic marketing, creative entrepreneurship, business podcast, comedy lessons, unconventional marketing, Andy Kaufman, Jackass comedy, ROI Podcast, marketing insights, business humor, comedic genius, and avoiding burnout. Chapters cover: Nathan Fielder's comedic genius; Why authenticity beats traditional marketing; Lessons from "The Rehearsal" on communication and business; Andy Kaufman-level dedication to the bit. Hit subscribe for more comedy-infused business insights—ROI Podcast-style. #NathanFielder #BusinessComedy #CreativeMarketing #Entrepreneurship #Authenticity #ComedyPodcast #ROIshow #MarketingStrategy #BusinessInsights #TheRehearsal #FunnyPodcast #ComedyBusiness #ComedyLessons Episode sponsored by @ZUPYAK https://www.Zupyak.com → promo code → SWEAT @Flodesk -50% off https://flodesk.com/c/AL83FF @Incogni remove your personal data from public websites 50% off https://get.incogni.io/SH3ve @SQUARESPACE website builder → https://squarespacecircleus.pxf.io/sweatequity @CALL RAIL call tracking → https://bit.ly/sweatequitycallrail @LINKEDIN PREMIUM - 2 months free! → https://bit.ly/sweatequity-linkedin-premium @OTTER.ai → https://otter.ai/referrals/AVPIT85N Hosts: Eric Readinger & Law Smith

Supermanagers
AI Becomes a Board Member and Thought Partner with Greg Shove, CEO at Section AI

Supermanagers

Play Episode Listen Later Jul 3, 2025 44:47


In this episode, Aydin sits down with Greg Shove, CEO of Section, to unpack how AI isn't just a productivity tool—it's a new cognitive layer for modern organizations. Greg shares how Section pivoted from executive education to AI enablement after a single eye-opening session with ChatGPT. He dives deep into what it really takes to embed AI into workflows, culture, and decision-making—and why "talking to AI" is now mandatory at his company. From building a company-wide second brain with Claude to simulating board meetings with GPT, Greg offers a masterclass in practical AI integration.

Timestamps:
1:35 – Greg's background: from flameouts to $250M in exits
2:00 – Section's pivot from exec ed to AI enablement
3:01 – The 6-month internal resistance to AI
4:50 – Why training isn't enough: the real AI challenge is change management
6:07 – Why treating AI like regular software is a strategic mistake
8:26 – What successful AI deployments have in common
10:02 – Lessons from Shopify, Duolingo, and Fiverr on AI expectations
11:45 – The price of AI is too low—why that might change
14:03 – AI vs. analyst time: "an hour becomes a minute"
15:31 – Section's 25% productivity gain with AI
18:58 – Measuring productivity impact without perfect data
21:24 – Clever metrics: output per headcount, OKRs, AI shoutouts
24:51 – Using Claude as a company "second brain"
26:11 – Greg's AI desktop setup: Perplexity, GPT, Claude
27:43 – The Section Expert: maintaining company context for AI
29:27 – "Working with Greg" manual: how to humanize your AI input
31:00 – The difference between values and operating principles
34:42 – Roleplaying board members with AI before real board meetings
36:05 – Claude vs. ChatGPT vs. humans: who gave better board insights?
41:00 – AI for owner-operators: create your own board
42:26 – What Greg's most excited about: how AI unlocks new opportunities
44:08 – Where to find Greg & Section + listener discount

Tools & Technologies Mentioned:
Claude (Anthropic): Used to build a company-wide second brain and simulate board member personas
GPT (OpenAI): Used as a daily thought partner and board advisor
Perplexity: A go-to AI for fast, accurate information lookups
Section Expert (Claude project): A centralized AI project workspace housing all of Section's key documents for brainstorming
ProfAI (Section's product): An AI-powered coach designed to teach people how to use AI effectively
ChatGPT for Teams: Mentioned as a better, paid alternative to free-tier tools
Gemini Pro: Noted for its screen-sharing and future context-awareness potential
Copilot (Microsoft): One of several LLM tools tested during board simulations

Subscribe at thisnewway.com to get the step-by-step playbooks, tools, and workflows.
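As a purely illustrative aside (the episode describes Section's setup only at a high level): the "second brain" pattern Greg describes amounts to grounding each model call in a curated set of company documents. Below is a minimal sketch of that idea in Java against Anthropic's public Messages REST endpoint; the model ID, the document snippets, and the prompt are assumptions to replace with your own, and a real deployment would use a proper JSON library and the official SDK.

```java
// Hypothetical sketch of grounding an LLM call in company documents ("second brain" pattern).
// Not Section's actual implementation; endpoint and headers follow Anthropic's public Messages API.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SecondBrainSketch {
    public static void main(String[] args) throws Exception {
        String apiKey = System.getenv("ANTHROPIC_API_KEY"); // must be set for the request to succeed

        // In practice this would be assembled from strategy memos, operating principles, OKRs, etc.
        String companyContext = "Operating principles: ... Strategy memo: ... Current OKRs: ...";

        String body = """
            {
              "model": "claude-sonnet-4-20250514",
              "max_tokens": 1024,
              "system": %s,
              "messages": [
                {"role": "user", "content": "Acting as a skeptical board member, critique our Q3 plan."}
              ]
            }
            """.formatted(jsonString(
                "You are a thought partner grounded in these company documents:\n" + companyContext));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.anthropic.com/v1/messages"))
                .header("x-api-key", apiKey)
                .header("anthropic-version", "2023-06-01")
                .header("content-type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }

    // Minimal JSON string escaping for the sketch; use a JSON library in real code.
    private static String jsonString(String s) {
        return "\"" + s.replace("\\", "\\\\").replace("\"", "\\\"").replace("\n", "\\n") + "\"";
    }
}
```

Swapping the user message for a board-meeting agenda is essentially the board-simulation exercise discussed around the 34-minute mark; the grounding lives entirely in the system prompt, so the same pattern works with any of the other models mentioned above.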

Manufacturing Hub
Ep. 213 - The AI Revolution in Manufacturing: Productivity Gains and Cultural Shifts

Manufacturing Hub

Play Episode Listen Later Jul 3, 2025 70:28


In this episode of Manufacturing Hub, we welcome back Billy Albritton for a deep dive into the evolving world of artificial intelligence in manufacturing. Billy first joined us on Episode 23, and now almost 200 episodes later, he returns to share his perspective on how far the space has come and what the future holds.

Billy walks us through his journey from the military to advanced manufacturing and ultimately to becoming a leading voice in the AI and digital transformation space. We explore how large language models like ChatGPT are already changing how we write code, design solutions, and even train junior engineers. He offers real-world use cases of generative and agentic AI in industrial contexts and explains how tools like Cursor are already being used to automate everything from software development to curriculum generation.

We unpack the cultural barriers that prevent AI from being adopted on the plant floor and how forward-looking companies can implement AI safely and ethically. From internal teams building custom tools to small agile firms delivering big results, the conversation highlights the shift in power and opportunity across the ecosystem.

Billy also gives us a glimpse into the near future of AI-enhanced humanoid robots, local LLM deployments on the shop floor, and what might happen to traditional job roles as these technologies scale. Whether you are an engineer, developer, plant manager, or simply curious about how AI is impacting the real world, this episode will give you both insights and practical strategies to consider.

Stick around until the end to hear Billy's predictions for the next five, ten, and twenty years in manufacturing. And if you're wondering how to get started with AI tools, Billy offers concrete advice and resources you can begin exploring today.

Timestamps:
00:00 Welcome back Billy Albritton
02:00 Billy's path into manufacturing and robotics
06:00 How ChatGPT shifted Billy's perspective
10:00 What is agentic AI and why it matters
15:00 The changing role of junior developers
20:00 AI in traditional enterprise software vs the real factory floor
27:00 Challenges with industrial AI adoption
32:00 Internal vs external development strategies
37:00 Billy's go-to AI tools and workflows
45:00 Real-time AI assistants and the new software paradigm
53:00 Is there a ceiling to generative AI?
59:00 The future of robotics and humanoids
1:03:00 What happens to work in a post-AI world?
1:06:00 Advice for anyone looking to start with AI today

Connect with Billy: https://www.linkedin.com/in/billyalbritton/
Follow us:
Host: https://www.linkedin.com/in/vladromanov/
Show: https://www.manufacturinghub.live/
Joltek: https://www.joltek.com/

Recommended Tools Mentioned:
Cursor: https://www.cursor.so/
OpenAI ChatGPT: https://chat.openai.com/
Claude by Anthropic: https://claude.ai/
Synthesia: https://www.synthesia.io/
Hugging Face: https://huggingface.co/
Leonardo AI: https://leonardo.ai/

Subscribe for more conversations with the most innovative minds in manufacturing and industrial automation. New episodes every Thursday.

Generative Now | AI Builders on Creating the Future
Julie Bornstein: Building the Future of Fashion with AI

Generative Now | AI Builders on Creating the Future

Play Episode Listen Later Jul 3, 2025 43:46


In this episode of Generative Now, Lightspeed partner Michael Mignano sits down with the former Stitch Fix COO and founder of The Yes, Julie Bornstein. They talk about Julie's latest venture: Daydream, an AI-powered fashion discovery engine built for the LLM era. Julie shares how her decades at Nordstrom, Sephora, and Pinterest shaped her vision, why now is the moment for natural language search in shopping, and how AI will transform fashion.

Episode Chapters:
00:00 Introduction to the Interview
01:06 Julie Bornstein's New Venture: Daydream
02:44 The Evolution of E-commerce and AI
03:28 From Nordstrom to Daydream
05:35 Technological Innovations in Fashion
12:02 The Yes: Launching During a Pandemic
15:15 Acquisition by Pinterest and Future Plans
17:49 The Vision for Daydream
22:57 Introduction to Style Passport
23:27 Iterative Shopping Experience
24:02 Bringing Brands Together
24:45 Technical Implementation of Models
25:24 Challenges with Large Models
26:12 Building Mini Models for Fashion
27:28 Competition from Large Model Providers
30:25 Video Shopping and Social Media Integration
31:43 The Role of Agents in Shopping
35:20 Future of Shopping Interfaces
37:13 Being a Serial Founder
41:48 Launch and Future Plans
43:09 Conclusion and Farewell

Stay in touch:
Website: www.lsvp.com
X: https://twitter.com/lightspeedvp
LinkedIn: https://www.linkedin.com/company/lightspeed-venture-partners/
Instagram: https://www.instagram.com/lightspeedventurepartners/
Subscribe on your favorite podcast app: generativenow.co
Email: generativenow@lsvp.com

The content here does not constitute tax, legal, business or investment advice or an offer to provide such advice, should not be construed as advocating the purchase or sale of any security or investment or a recommendation of any company, and is not an offer, or solicitation of an offer, for the purchase or sale of any security or investment product. For more details please see lsvp.com/legal.

Radical Candor
Humanizing AI: Meet the Kim Scott Google Portrait 7 | 27

Radical Candor

Play Episode Listen Later Jul 2, 2025 43:53


Most leaders learn on the fly—and Kim knows the bruises that come with it. In this episode she joins longtime Google Distinguished Designer Ryan Germick to discuss the innovative "Kim Scott Portrait," an AI-powered tool designed by Google Labs (and trained by the real Kim) to scale Kim's expertise and deliver Radically Candid advice 24/7. Discover how this new technology aims to humanize AI, free authors from the burden of answering repetitive questions, and foster more productive communication in the workplace. Get all of the show notes at RadicalCandor.com/podcast.

Episode Links:
Transcript
Now You Can Talk Radical Candor 24/7 With the Kim Scott Portrait
Google Portrait | Kim Scott
Ryan Germick - Google | LinkedIn

Connect:
Website
Instagram
TikTok
LinkedIn
YouTube
Bluesky

Chapters:
(00:00:00) Introduction Kim and Ryan Germick introduce the "Portrait" collaboration—an AI version of Kim designed to scale her coaching.
(00:01:33) Live Coaching Demo Kim's Portrait answers a tough management question.
(00:03:36) Why the Portrait Matters How the Portrait helps Kim reach more people and free up time for writing.
(00:05:38) Kim's Next Book A look into Kim's upcoming optimistic novel set in 2070.
(00:06:30) Family Interactions with the Portrait Funny and revealing story of Kim's son debating the AI.
(00:08:10) The "Automated Kim" Origin Story How a team joke at Google inspired the Portrait concept.
(00:09:29) Coaching at Scale Why books and AI scale Kim's message better than 1:1 coaching.
(00:11:41) Personalized vs Generic AI The value of expert-driven Portraits over average LLM responses.
(00:12:57) Training the Portrait Kim explains her hands-on role in fine-tuning its responses.
(00:14:44) Solving Repetitive Questions How Portraits provide patient, consistent answers to FAQs.
(00:16:07) Productive Disagreement Through Portraits The vision for AI-facilitated, respectful debates.
(00:17:26) Expanding Globally Plans for multi-language and international Portrait availability.
(00:17:48) Real-World Use Cases The ways Portraits support work, life, and social media decisions.
(00:20:23) Empathy-Driven AI AI as a personal board of directors, with lived-experience expertise.
(00:23:51) Empowering Creators Portraits can be embedded on creators' own platforms—no lock-in.
(00:26:19) Lived Experience as Research Kim defends storytelling as a valid path to truth and insight.
(00:28:24) Supporting New Managers Portraits offer guidance during the lonely transition into leadership.
(00:31:11) Navigating Difficult Bosses Portraits can help employees manage up with empathy and agency.
(00:33:30) Changing Workplace Culture Helping people shift from silence or aggression to Radical Candor.
(00:36:17) Personality Extenders Portraits as scalable human touchpoints for the future.
(00:38:51) Creating Your Own Portrait How to create your own Portrait and scale your voice.
(00:39:48) Conclusion

Learn more about your ad choices. Visit megaphone.fm/adchoices

Social Media News Live
AI Secrets Every Small Business Must Know

Social Media News Live

Play Episode Listen Later Jul 2, 2025 58:02


Trying to keep up with AI while running a business? You're not alone. In this episode of Social Media News Live, we're joined by Phil Pallen, branding strategist, keynote speaker, and author of AI for Small Business, to talk about how solopreneurs and small teams can use AI to boost productivity without losing the personal touch.

Phil shares his practical framework for using AI in ways that actually make sense for your business, starting with tracking your time and identifying the tasks you should keep, delegate, or automate. He also walks us through how to train AI to sound like you, how he uses ChatGPT and Adobe Acrobat's AI tools in real client workflows, and why embracing AI doesn't mean sacrificing human connection, it means amplifying what makes you unique.

We cover everything from chatbots to creative automation, ad management to brand voice, with real examples from Phil's own work as a creator and consultant. If you want to work smarter, not harder, this episode is packed with takeaways to help you get started or go further with AI.

Key Points:
A simple framework to decide what to keep, delegate, or automate using AI
How to track your working time and use that data to build efficient systems
Why brand voice matters more than ever in the age of AI, and how to train your LLM to use yours
How Phil uses "don't take action yet" to structure better ChatGPT conversations
The difference between AI assistants and agentic AI (and why it matters for customer service)
Tools for smarter advertising, including Otis and Magai, and what to know about data security
Why branding is evolving, but your personality still sets you apart

Resources:
Phil Pallen's Website – philpallen.co
Book: AI for Small Business – (affiliate link) Available on Amazon
Follow Phil on YouTube & Instagram – @philpallen
Explore Phil's Recommended Tools – philpallen.co/tools

----------------------
Ecamm - Your go-to solution for crafting outstanding live shows and podcasts. Get 15% off your first payment with promo code JEFF15
SocialMediaNewsLive.com - Dive into our website for comprehensive episode breakdowns.
Youtube.com - Tune in live, chat with us directly, and be part of the conversation. Or, revisit our archive of past broadcasts to stay updated.
Facebook - Stream our show live and chat with us in real time. Connect, engage, and be a part of our community.
Email - Subscribe and never miss a live show reminder.
----------------------
JeffSieh.com - Unlock the power of authentic storytelling with me! With over 20 years of marketing experience, I'm here to elevate your brand's narrative in an ever-competitive market. My...

Cloud Wars Live with Bob Evans
Slack API Terms Update Restricts Data Exports and LLM Usage

Cloud Wars Live with Bob Evans

Play Episode Listen Later Jul 2, 2025 2:02


Welcome to the Cloud Wars Minute — your daily cloud news and commentary show. Each episode provides insights and perspectives around the "reimagination machine" that is the cloud.

In today's Cloud Wars Minute, I dive into Slack's bold move to restrict API access to bulk data exports, effectively blocking the use of its platform data for LLM training and signaling a strategic pivot toward proprietary AI control and heightened data security.

Highlights
00:03 — Salesforce has changed the API Terms of Service for Slack, which will stop companies from using LLMs to ingest data from the platform. Ultimately, the new policy prohibits the bulk export of Slack data via the API and confirms that data access through Slack APIs cannot be used for LLM training.
00:21 — From now on, companies will have to use Slack's new real-time search API. In a blog post by the Slack developer team, the company states that this new API eliminates the need for large data exports from Slack, keeping customer data secure while maintaining support for key use cases like permission-based search.
00:56 — Now, while Salesforce and Slack say the focus is on security, there is another angle being discussed: that this move encourages a shift toward proprietary technologies. It's difficult to pinpoint this trend. On one hand, we see a push for interoperability across the industry, while on the other, Slack's announcement of the real-time search API coincided with support for the Model Context Protocol.
01:25 — Data is still the currency that drives AI, and sharing it recklessly with any LLM that requires access can be counterproductive from a business standpoint. Companies like Salesforce don't want to be liable for data used by third-party applications, and none of the major tech companies want to stifle innovation with overly restrictive policies.

Visit Cloud Wars for more.

Careers in Data Privacy
Kartikeya Raman: Associate Partner and DPO at Forvis Mazars

Careers in Data Privacy

Play Episode Listen Later Jul 2, 2025 29:56


Kartikeya has an MBA, LLM, and law degree.
At Deloitte, he got his start in privacy.
Kartikeya is an associate partner at Forvis Mazars.
We will chat about how he became a privacy star!

Bio from the Bayou
Episode 93: How Startups and Universities Can Support Each Other and Thrive in Biotech

Bio from the Bayou

Play Episode Listen Later Jul 2, 2025 16:18


Bridging the gap between academia and industry isn't easy—but it's essential for innovation. In this episode, hosts Elaine Hamm, PhD, and James Zanewicz, JD, LLM, RTTP, explore how biotech startups and academic institutions can break down silos and build stronger partnerships. From shared resources and mutual funding opportunities to culture shifts and advisory support, they reveal how deeper engagement between innovation hubs and industry players can lead to better science, better business, and better outcomes. In this episode, you'll learn: Why understanding each other's goals and processes is key to successful startup–university collaborations. How universities can help startups find funding, credibility, and critical talent—and what startups offer in return. Actionable tips for building long-term, win-win partnerships that drive innovation forward. Whether you're spinning out of a lab or investing in university research, this episode will give you new strategies to connect and collaborate with purpose. Links: Connect with Elaine Hamm, PhD, and James Zanewicz, JD, LLM, RTTP, and learn about Tulane Medicine Business Development, the School of Medicine, and the National Primate Research Center. Connect with Katie Acuff, JD, MBA, Lauren Jardell, and Ellen Palmintier, JD, RN. Tune in to our previous episode on Boards of Directors and Scientific Advisory Boards. Connect with Ian McLachlan, BIO from the BAYOU producer. Check out BIO on the BAYOU and make plans to attend October 28 & 29, 2025. And click here to apply for a startup pitch slot. Learn more about BIO from the BAYOU - the podcast. Bio from the Bayou is a podcast that explores biotech innovation, business development, and healthcare outcomes in New Orleans & The Gulf South, connecting biotech companies, investors, and key opinion leaders to advance medicine, technology, and startup opportunities in the region.

Paul's Security Weekly
Simple Patterns for Complex Secure Code Reviews - Louis Nyffenegger - ASW #337

Paul's Security Weekly

Play Episode Listen Later Jul 1, 2025 38:26


Manual secure code reviews can be tedious and time intensive if you're just going through checklists. There's plenty of room for linters and compilers and all the grep-like tools to find flaws. Louis Nyffenegger describes the steps of a successful code review process. It's a process that starts with understanding code, which can even benefit from an LLM assistant, and then applies that understanding to a search for developer patterns that lead to common mistakes like mishandling data, not enforcing a control flow, or not defending against unexpected application states. He explains how finding those kinds of more impactful bugs are rewarding for the reviewer and valuable to the code owner. It involves reading a lot of code, but Louis offers tips on how to keep notes, keep an app's context in mind, and keep code secure. Segment Resources: https://pentesterlab.com/live-training/ https://pentesterlab.com/appsecschool https://deepwiki.com https://daniel.haxx.se/blog/2025/05/29/decomplexification/ Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-337

Thinking Elixir Podcast
259: Chris McCord on phoenix.new

Thinking Elixir Podcast

Play Episode Listen Later Jul 1, 2025 73:14


News includes the public launch of Phoenix.new - Chris McCord's revolutionary AI-powered Phoenix development service with full browser IDE and remote runtime capabilities, Ecto v3.13 release featuring the new transact/1 function and built-in JSON support, Nx v0.10 with improved documentation and NumPy comparisons, Phoenix 1.8 getting official security documentation covering OWASP Top 10 vulnerabilities, Zach Daniel's new "evals" package for testing AI language model performance, and ElixirConf US speaker announcements with keynotes from José Valim and Chris McCord. Saša Jurić shares his comprehensive thoughts on Elixir project organization and structure, Sentry's Elixir SDK v11.x adding OpenTelemetry-based tracing support, and more! Then we dive deep with Chris McCord himself for an exclusive interview about his newly launched phoenix.new service, exploring how AI-powered code generation is bringing Phoenix applications to people from outside the community. We dig into the technology behind the remote runtime and what it means for the future of rapid prototyping in Elixir. Show Notes online - http://podcast.thinkingelixir.com/259 (http://podcast.thinkingelixir.com/259) Elixir Community News https://www.honeybadger.io/ (https://www.honeybadger.io/utm_source=thinkingelixir&utm_medium=podcast) – Honeybadger.io is sponsoring today's show! Keep your apps healthy and your customers happy with Honeybadger! It's free to get started, and setup takes less than five minutes. https://phoenix.new/ (https://phoenix.new/?utm_source=thinkingelixir&utm_medium=shownotes) – Chris McCord's phoenix.new project is open to the public https://x.com/chris_mccord/status/1936068482065666083 (https://x.com/chris_mccord/status/1936068482065666083?utm_source=thinkingelixir&utm_medium=shownotes) – Phoenix.new was opened to the public - a service for building Phoenix apps with AI runtime, full browser IDE, and remote development capabilities https://github.com/elixir-ecto/ecto (https://github.com/elixir-ecto/ecto?utm_source=thinkingelixir&utm_medium=shownotes) – Ecto v3.13 was released with new features including transact/1, schema redaction, and built-in JSON support https://github.com/elixir-ecto/ecto/blob/v3.13.2/CHANGELOG.md#v3132-2025-06-24 (https://github.com/elixir-ecto/ecto/blob/v3.13.2/CHANGELOG.md#v3132-2025-06-24?utm_source=thinkingelixir&utm_medium=shownotes) – Ecto v3.13 changelog with detailed list of new features and improvements https://github.com/elixir-nx/nx (https://github.com/elixir-nx/nx?utm_source=thinkingelixir&utm_medium=shownotes) – Nx v0.10 was released with documentation improvements and floating-point precision enhancements https://github.com/elixir-nx/nx/blob/main/nx/CHANGELOG.md (https://github.com/elixir-nx/nx/blob/main/nx/CHANGELOG.md?utm_source=thinkingelixir&utm_medium=shownotes) – Nx v0.10 changelog including new advanced guides and NumPy comparison cheatsheets https://paraxial.io/blog/phoenix-security-docs (https://paraxial.io/blog/phoenix-security-docs?utm_source=thinkingelixir&utm_medium=shownotes) – Phoenix 1.8 gets official security documentation covering OWASP Top 10 vulnerabilities https://github.com/phoenixframework/phoenix/pull/6295 (https://github.com/phoenixframework/phoenix/pull/6295?utm_source=thinkingelixir&utm_medium=shownotes) – Pull request adding comprehensive security guide to Phoenix documentation https://bsky.app/profile/zachdaniel.dev/post/3lscszxpakc2o (https://bsky.app/profile/zachdaniel.dev/post/3lscszxpakc2o?utm_source=thinkingelixir&utm_medium=shownotes) – 
Zach Daniel announces new "evals" package for testing and comparing AI language models https://github.com/ash-project/evals (https://github.com/ash-project/evals?utm_source=thinkingelixir&utm_medium=shownotes) – Evals project for evaluating AI model performance on coding tasks with structured testing https://bsky.app/profile/elixirconf.bsky.social/post/3lsbt7anbda2o (https://bsky.app/profile/elixirconf.bsky.social/post/3lsbt7anbda2o?utm_source=thinkingelixir&utm_medium=shownotes) – ElixirConf US speakers beginning to be announced including keynotes from José Valim and Chris McCord https://elixirconf.com/#keynotes (https://elixirconf.com/#keynotes?utm_source=thinkingelixir&utm_medium=shownotes) – ElixirConf website showing keynote speakers and initial speaker lineup https://x.com/sasajuric/status/1937149387299316144 (https://x.com/sasajuric/status/1937149387299316144?utm_source=thinkingelixir&utm_medium=shownotes) – Saša Jurić shares collection of writings on Elixir project organization and structure recommendations https://medium.com/very-big-things/towards-maintainable-elixir-the-core-and-the-interface-c267f0da43 (https://medium.com/very-big-things/towards-maintainable-elixir-the-core-and-the-interface-c267f0da43?utm_source=thinkingelixir&utm_medium=shownotes) – Saša Jurić's article on organizing Elixir projects with core and interface separation https://medium.com/very-big-things/towards-maintainable-elixir-boundaries-ba013c731c0a (https://medium.com/very-big-things/towards-maintainable-elixir-boundaries-ba013c731c0a?utm_source=thinkingelixir&utm_medium=shownotes) – Article on using boundaries in Elixir applications for better structure https://medium.com/very-big-things/towards-maintainable-elixir-the-anatomy-of-a-core-module-b7372009ca6d (https://medium.com/very-big-things/towards-maintainable-elixir-the-anatomy-of-a-core-module-b7372009ca6d?utm_source=thinkingelixir&utm_medium=shownotes) – Deep dive into structuring core modules in Elixir applications https://github.com/sasa1977/mixphxalt (https://github.com/sasa1977/mix_phx_alt?utm_source=thinkingelixir&utm_medium=shownotes) – Demo project showing alternative Phoenix project structure with core/interface organization https://github.com/getsentry/sentry-elixir/blob/master/CHANGELOG.md#1100 (https://github.com/getsentry/sentry-elixir/blob/master/CHANGELOG.md#1100?utm_source=thinkingelixir&utm_medium=shownotes) – Sentry updates Elixir SDK to v11.x with tracing support using OpenTelemetry Do you have some Elixir news to share? Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com (mailto:show@thinkingelixir.com) Discussion Resources https://phoenix.new/ (https://phoenix.new/?utm_source=thinkingelixir&utm_medium=shownotes) – The Remote AI Runtime for Phoenix. Describe your app, and watch it take shape. Prototype quickly, experiment freely, and share instantly. 
https://x.com/chris_mccord/status/1936074795843551667 (https://x.com/chris_mccord/status/1936074795843551667?utm_source=thinkingelixir&utm_medium=shownotes) – You can vibe code on your phone https://x.com/sukinoverse/status/1936163792720949601 (https://x.com/sukinoverse/status/1936163792720949601?utm_source=thinkingelixir&utm_medium=shownotes) – Another success example - Stripe integrations https://openai.com/index/openai-codex/ (https://openai.com/index/openai-codex/?utm_source=thinkingelixir&utm_medium=shownotes) – OpenAI Codex, Open AI's AI system that translates natural language to code https://devin.ai/ (https://devin.ai/?utm_source=thinkingelixir&utm_medium=shownotes) – Devin is an AI coding agent and software engineer that helps developers build better software faster. Parallel cloud agents for serious engineering teams. https://www.youtube.com/watch?v=ojL_VHc4gLk (https://www.youtube.com/watch?v=ojL_VHc4gLk?utm_source=thinkingelixir&utm_medium=shownotes) – Chris McCord's ElixirConf EU Keynote talk titled "Code Generators are Dead. Long Live Code Generators" Guest Information - https://x.com/chris_mccord (https://x.com/chris_mccord?utm_source=thinkingelixir&utm_medium=shownotes) – on X/Twitter - https://github.com/chrismccord (https://github.com/chrismccord?utm_source=thinkingelixir&utm_medium=shownotes) – on Github - http://chrismccord.com/ (http://chrismccord.com/?utm_source=thinkingelixir&utm_medium=shownotes) – Blog Find us online - Message the show - Bluesky (https://bsky.app/profile/thinkingelixir.com) - Message the show - X (https://x.com/ThinkingElixir) - Message the show on Fediverse - @ThinkingElixir@genserver.social (https://genserver.social/ThinkingElixir) - Email the show - show@thinkingelixir.com (mailto:show@thinkingelixir.com) - Mark Ericksen on X - @brainlid (https://x.com/brainlid) - Mark Ericksen on Bluesky - @brainlid.bsky.social (https://bsky.app/profile/brainlid.bsky.social) - Mark Ericksen on Fediverse - @brainlid@genserver.social (https://genserver.social/brainlid) - David Bernheisel on Bluesky - @david.bernheisel.com (https://bsky.app/profile/david.bernheisel.com) - David Bernheisel on Fediverse - @dbern@genserver.social (https://genserver.social/dbern)

What Gets Measured
Make Better Bets with Causal AI

What Gets Measured

Play Episode Listen Later Jul 1, 2025 60:14


Discover how causal AI transforms marketing analytics by solving the correlation vs. causation dilemma. Learn why outdated Marketing Mix Modeling (MMM) can't keep up, and how causal AI provides actionable, real-time insights for CMOs and CFOs. SHOWPAGE:  https://www.ninjacat.io/blog/wgm-podcast-make-better-bets-with-causal-ai  © 2025, NinjaCat

Le rendez-vous Tech
L'important c'est le chemin ET la destination – RDV Tech

Le rendez-vous Tech

Play Episode Listen Later Jul 1, 2025 83:39


On the program:
AI vs. authors: a first victory for the LLMs
Fairphone Gen 6, a successful first attempt?
TikTok and Instagram coming soon to your TV?
The rest of the news

Info:
Hosted by Patrick Beja (Bluesky, Instagram, Twitter, TikTok)
Co-hosted by Jérôme Keinborg (Bluesky)
Co-hosted by Cédric de Luca
Produced by Patrick Beja (LinkedIn) and Fanny Cohen Moreau (LinkedIn)
Royalty-free music by Daniel Beja

Le Rendez-vous Tech episode 625 – "L'important c'est le chemin ET la destination" – Legal victory for the LLMs, Fairphone Gen 6, TikTok & Insta on TV
---
Links:

Application Security Weekly (Audio)
Simple Patterns for Complex Secure Code Reviews - Louis Nyffenegger - ASW #337

Application Security Weekly (Audio)

Play Episode Listen Later Jul 1, 2025 38:26


Manual secure code reviews can be tedious and time intensive if you're just going through checklists. There's plenty of room for linters and compilers and all the grep-like tools to find flaws. Louis Nyffenegger describes the steps of a successful code review process. It's a process that starts with understanding code, which can even benefit from an LLM assistant, and then applies that understanding to a search for developer patterns that lead to common mistakes like mishandling data, not enforcing a control flow, or not defending against unexpected application states. He explains how finding those kinds of more impactful bugs are rewarding for the reviewer and valuable to the code owner. It involves reading a lot of code, but Louis offers tips on how to keep notes, keep an app's context in mind, and keep code secure. Segment Resources: https://pentesterlab.com/live-training/ https://pentesterlab.com/appsecschool https://deepwiki.com https://daniel.haxx.se/blog/2025/05/29/decomplexification/ Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-337

Basketball Coach Unplugged ( A Basketball Coaching Podcast)
Ep 2362 How SportsVisio Delivers Advanced Basketball Stats and Highlights Using Just Your Phone ( Part 2)

Basketball Coach Unplugged ( A Basketball Coaching Podcast)

Play Episode Listen Later Jun 30, 2025 32:50


GET SPECIAL DISCOUNT HERE www.sportsvisio.ai/coachmode?utm_source=cyh&utm_medium=partner&utm_campaign=codemodesummer How SportsVisio Delivers Advanced Basketball Stats and Highlights Using Just Your Phone ( Part 2) Do you really need expensive hardware to get pro-level basketball stats and highlights? Think tracking and film breakdown has to be a weekend-long headache? Think again! In this episode, hosts Steve Collins and Bill Flitter welcome Sean O'Connor from SportsVisio—a coach, dad, and tech leader—who shares how any coach can turn basic game footage into advanced stats and instant highlights. How tech-savvy is your current coaching approach? You'll learn: How to use just your phone for reliable stats and highlights. Why empowering players with data deepens engagement. The secret to reclaiming hours from film breakdown. Let's change the game together! If you enjoyed this episode, please leave us a 5-star review. SportsVisio, AI sports tracking, computer vision, basketball stats, player highlights, coaching analytics, mobile sports app, video highlights, box score, advanced statistics, shot charts, coach mode, player minutes, possessions tracking, game flow analysis, LLM (large language models), video annotation, game summaries, lineup efficiency, rotation efficiency, hardware agnostic, video upload, 24-hour film turnaround, player engagement, scouting reports, export stats, season analytics, mobile and desktop access, AI assistant coach Learn more about your ad choices. Visit podcastchoices.com/adchoices

Daily Tech News Show (Video)
No Digital Tax on US, Eh? – DTNS Live 5050

Daily Tech News Show (Video)

Play Episode Listen Later Jun 30, 2025


What does Canada rescinding its digital tax on American tech companies mean? Microsoft has created an LLM that can diagnose diseases 400% more accurately than human physicians. Is there a catch? Does watching or listening to videos and podcasts too fast inhibit your ability to retain what was discussed in them? And Google rolls out a suite of AI-powered education tools for teachers. Will it help? Starring Sarah Lane, Robb Dunewood, Justin Robert Young, Roger Chang, Joe. To read the show notes in a separate page click here! Support the show on Patreon by becoming a supporter!

Supra Insider
#64: How top companies evaluate PM candidates in 2025 | Nickey Skarstad (Duolingo), Stephanie J. Neill (Stripe), and Chantal Cox (LTK)

Supra Insider

Play Episode Listen Later Jun 30, 2025 56:02


If you're navigating today's brutally competitive PM job market and wondering what "good" looks like, this episode will become your playbook. In this episode, Ben moderates a candid panel with three veteran product hiring managers:
Nickey Skarstad – Director of Product, Duolingo (now leading Duolingo Math)
Stephanie J. Neill – Head of Product, Stripe Tax
Chantal Cox – Director of Product, LTK (LiketoKnow.it creator platform)
Together, they reverse-engineer every stage of their 2025 hiring funnels—from 30-second resume scans to offer debriefs—and spell out the signals that turn an applicant into a hire. You'll hear why product-sense interviews have moved earlier in the process, how AI prototypes are becoming table stakes, and the red flags (LLM-generated answers, recycled stories, etc.) that get instant no-hires. Whether you're an aspiring PM or a manager revamping your own process, you'll leave with concrete, immediately applicable tactics for landing—or giving—an offer.
All episodes of the podcast are also available on Spotify, Apple, and YouTube.
New to the pod? Subscribe below to get the next episode in your inbox.

Tech Lead Journal
#222 - Closing the Knowledge Gap in Your Legacy Code with AI - Omer Rosenbaum

Tech Lead Journal

Play Episode Listen Later Jun 30, 2025 60:55


What if your most critical systems run on code that no one fully understands?

In this episode, Omer Rosenbaum, CTO and co-founder of Swimm, explains how to use AI to close the knowledge gap in your legacy codebase. Discover the limitations of AI in understanding legacy code and learn novel approaches to automatically document complex systems, ensuring their critical business logic is preserved and understood within the organization. Beyond legacy systems, Omer also shares practical advice for how junior developers can thrive in the AI era and how teams and organizations can conduct more effective research.

Key topics discussed:
How junior developers can thrive in the age of AI
The danger of shipping code you don't fully understand
Why AI can't deduce everything from your code alone
How writing documentation becomes more critical now with AI
How to analyze code that even LLMs struggle to read, like COBOL
How to keep your organization's knowledge base trustworthy and up to date
The real danger of letting AI agents run unchecked
A practical approach to conducting more effective research

Timestamps:
(00:00) Trailer & Intro
(02:10) Career Turning Points
(05:24) What Juniors Should Do in the Age of AI
(11:05) Junior Developer's Responsibility When Using AI
(14:50) AI and Critical Thinking
(16:20) Understanding & Preserving Domain Knowledge
(18:11) The Importance of Written Knowledge for AI Usage
(21:51) Limitations of AI in Understanding Knowledge Base
(26:34) The Limitations of LLM in Navigating Legacy Codebases (e.g. COBOL)
(32:38) Effective Knowledge Sharing Culture in the Age of AI
(34:54) Keeping Knowledge Base Up-to-Date
(36:55) Keeping the Organization Knowledge Base Accurate
(39:08) Fact Checking and Preventing AI Hallucination
(41:24) The Potential of MCP
(43:24) The Danger of AI Agents Hallucinating with Each Other
(45:00) How to Get Better at Research
(53:41) The Importance of Investing in Research
(57:18) 3 Tech Lead Wisdom

_____

Omer Rosenbaum's Bio
Omer Rosenbaum is the CTO and co-founder of Swimm, a platform reinventing the way engineering organizations manage internal knowledge about their code base. Omer founded the Check Point Security Academy and was the Cyber Security Lead at ITC, an educational organization that trains talented professionals to develop careers in technology. Omer has an MA in Linguistics from Tel Aviv University and is the creator behind the Brief YouTube Channel.

Follow Omer:
LinkedIn – linkedin.com/in/omer-rosenbaum-034a08b9
Twitter – x.com/Omer_Ros
Swimm – swimm.io
Email – omer@swimm.io

Basketball Coach Unplugged ( A Basketball Coaching Podcast)
Ep 2361 How SportsVisio Delivers Advanced Basketball Stats and Highlights Using Just Your Phone ( Part 1)

Basketball Coach Unplugged ( A Basketball Coaching Podcast)

Play Episode Listen Later Jun 29, 2025 36:18


Get Discount HERE www.sportsvisio.ai/coachmode?utm_source=cyh&utm_medium=partner&utm_campaign=codemodesummer Do you really need expensive hardware to get pro-level basketball stats and highlights? Think tracking and film breakdown has to be a weekend-long headache? Think again! In this episode, hosts Steve Collins and Bill Flitter welcome Sean O'Connor from SportsVisio—a coach, dad, and tech leader—who shares how any coach can turn basic game footage into advanced stats and instant highlights. How tech-savvy is your current coaching approach? You'll learn: How to use just your phone for reliable stats and highlights. Why empowering players with data deepens engagement. The secret to reclaiming hours from film breakdown. Let's change the game together! If you enjoyed this episode, please leave us a 5-star review. KEYWORDS:SportsVisio, AI sports tracking, computer vision, basketball stats, player highlights, coaching analytics, mobile sports app, video highlights, box score, advanced statistics, shot charts, coach mode, player minutes, possessions tracking, game flow analysis, LLM (large language models), video annotation, game summaries, lineup efficiency, rotation efficiency, hardware agnostic, video upload, 24-hour film turnaround, player engagement, scouting reports, export stats, season analytics, mobile and desktop access, AI assistant coach How SportsVisio Delivers Advanced Basketball Stats and Highlights Using Just Your Phone Learn more about your ad choices. Visit podcastchoices.com/adchoices

Infinitum
Ovo je idealno za ovog malog od palube

Infinitum

Play Episode Listen Later Jun 28, 2025 116:54


Ep 262
Bumerang
A1 Slovenija iPhone 16 + AirPods 4
More than One Million Anker Power Banks Recalled Due to Fire and Burn Hazards; Manufactured by Anker Innovations
Colin Cornaby: Another interesting tidbit from "Apple in China"
Terence Eden: I've locked myself out of my digital life
How Kagi is building a better search for teams
Riccardo Mori: In case of emergency, break glass
Rose-Gold-Tinted Liquid Glasses
Tuomas Hämäläinen: Mockup time!
Interview: Craig Federighi Opens Up About iPadOS, Its Multitasking Journey, and the iPad's Essence – MacStories
iPhone Reportedly Moving to All-Screen Design in Two Stages
Apple to finally let iPhone games offer promo codes for IAP - 9to5Mac
Apple Again Changes EU App Store Rules and Fees to Comply With DMA
Steve Troughton-Smith: So, upfront CTF cost is gone…
Jeff Tyrrill: What Apple should do
Gary Marcus: LLMs don't do formal reasoning - and that is a HUGE problem
Chuck Darwin: After nuclear weapons testing began in 1945, atmospheric radiation contaminated new steel production worldwide. Graham-Cumming sees a parallel with today's web, where AI-generated content increasingly mingles with human-created material and contaminates it.
Anthropic wins a major fair use victory for AI — but it's still in trouble for stealing books
Joe Fabisevich: That's the ballgame, copyright is effectively done for.
Alek: AI sceptic in LLM adventure land
Jay Menna: a website built in three and a half hours, plus another of his sites generated using N8N
I Convinced HP's Board to Buy Palm for $1.2B. Then I Watched Them Kill It in 49 Days

Acknowledgements
Recorded June 28, 2025.
Intro music by Vladimir Tošić; his old site is here.
Logo by Aleksandra Ilić.
Episode artwork by Saša Montiljo; his corner on DeviantArt.

Unsolicited Feedback
AI's Distribution Shift: The Land Grab Ahead - Unsolicited Feedback S3E8

Unsolicited Feedback

Play Episode Listen Later Jun 27, 2025 43:23


Get ready for a crash course in the next great distribution shift. In this episode, Brian Balfour and Fareed Mosavat pull back the curtain on why AI's real battleground isn't the tech itself—it's the fight to be the next distribution platform. Fareed and Brian dissect the playbooks and cycles that crowned Facebook, Google, Apple, and LinkedIn as the winners of their categories and turned them from open platforms into toll booths. The key question is who is going to be next, and what you need to know to play the game.
We cover:
Brian's prediction on which LLM will create the platform first—and exactly how to ride that wave before the gates slam shut
How startups need to play the game differently vs. larger companies
A candid debate on platform moats, memory vs. action, and whether PLG just made a roaring comeback
If you build, invest, or obsess over AI products, this 45-minute sprint will hand you the hard truths and the hidden opportunities shaping the next couple of years. Plug in, level up—and learn how to play the game before the game plays you.

Breaking Battlegrounds
Siding with Iran Is Insane, Hollywood's Wake-Up Call, and the Path Forward for America

Breaking Battlegrounds

Play Episode Listen Later Jun 27, 2025 81:32


This week on Breaking Battlegrounds, Chuck Warren is joined by guest co-host Shay Khatari for a compelling lineup of guests and conversations. Former British soldier and Middle East strategist Andrew Fox kicks things off, diving into his article, "The Moronic Obscenity of Siding with Iran." With three tours in Afghanistan and firsthand experience with Iranian interference, Andrew explains why Western appeasement isn't just misguided—it's dangerous. Next, Hollywood executive and author Chris Fenton joins the show to discuss his RealClearPolitics piece, "Why This Lifelong Democrat Voted for Trump," sharing how his global media career, stand against Chinese censorship, and new American-made film Bad Counselors reflect his deeper concern for freedom, fairness, and national sovereignty. Then, Sarah Hunt, President of the Joseph Rainey Center for Public Policy, breaks down why smart energy policy rooted in national security and innovation is essential in the global AI race—especially against China—and how her organization is working to revive the American Dream by empowering emerging leaders. Don't miss this impactful episode—and as always, stick around for Kiley's Corner, where Kiley gives an update on the Karen Read trial and shares the shocking story of four fifth graders who were plotting to stab a classmate.

www.breakingbattlegrounds.vote
Twitter: www.twitter.com/Breaking_Battle
Facebook: www.facebook.com/breakingbattlegrounds
Instagram: www.instagram.com/breakingbattlegrounds
LinkedIn: www.linkedin.com/company/breakingbattlegrounds
Truth Social: https://truthsocial.com/@breakingbattlegrounds

Show sponsors:
Invest Yrefy - investyrefy.com
Old Glory Depot
Support American jobs while standing up for your values. OldGloryDepot.com brings you conservative pride on premium, made-in-USA gear. Don't settle—wear your patriotism proudly.
Learn more at: OldGloryDepot.com
Dot Vote
With a .VOTE website, you ensure your political campaign stands out among the competition while simplifying how you reach voters.
Learn more at: dotvote.vote
4Freedom Mobile
Experience true freedom with 4Freedom Mobile, the exclusive provider offering nationwide coverage on all three major US networks (Verizon, AT&T, and T-Mobile) with just one SIM card. Our service not only connects you but also shields you from data collection by network operators, social media platforms, government agencies, and more.
Use code 'Battleground' to get your first month for $9 and save $10 a month every month after.
Learn more at: 4FreedomMobile.com

About our guest:
Andrew Fox is a former soldier; research fellow specialising in the Middle East, Defence, and how Western societies are under attack from authoritarian regimes. I served in the RWF and the Parachute Regiment; three tours of Afghanistan (including one with US Special Forces), as well as the Middle East, Bosnia and N Ireland. Bachelor's in Law & Politics. War Studies MA, dissertation on strategy in the Middle East. Psychology MSc study on leadership and the psychology of disinformation. Level 7 qualifications in education; leadership & strategic management. PhD study, ongoing. Follow him on X @Mr_Andrew_Fox.
Read: The moronic obscenity of siding with Iran
-
Company Founder, Chris Fenton, served as GM of DMG North America & President of DMG Entertainment Motion Picture Group, internationally orchestrating the creative, investment, and business activities of a multi-billion-dollar global media company headquartered in Beijing.
During his tenure he served on the board of Valiant Entertainment, directing its eventual acquisition, and he worked closely with both Marvel and Hasbro, executing various projects to monetize their IP globally. As an author, Fenton chronicled much of his time at DMG in FEEDING THE DRAGON: Inside the Trillion Dollar Dilemma Facing Hollywood, the NBA, & American Business (Simon & Schuster).Most recently, and after three years of serving as President and CEO of Media Capital Technologies (MCT), a specialty finance company focused on strategic investments in premium content, Fenton stepped down to focus on formally advising companies, investors, brands, and Congress on how to best navigate sector disruptions and optimize America's complicated relationship with China and other challenging markets...AND HE LOVES IT!!! Follow him on X @TheDragonFeeder.-Sarah E. Hunt is a globally focused leader in climate advocacy, technology, and democracy. Her expertise is regularly sought by national publications such as The Wall Street Journal and The New York Times. As President of the Joseph Rainey Center for Public Policy, a think tank and leadership community in Washington D.C., Ms. Hunt leads her team to generate new solutions to some of our nation's most critical challenges and then cultivates a new generation of leaders to actually implement them.Prior to founding the Rainey Center, much of Hunt's background centered in the areas of climate change and election law. She launched a clean energy program at the American Legislative Exchange Council and a climate change program at the Niskanen Center. Before that, she managed state issues and ethics for a political consulting firm and practiced political law at a boutique law firm in the Pacific Northwest.She currently also serves as Director, Policy & Strategy at the Rob and Melani Walton Sustainability Solutions Service at Arizona State University.Ms. Hunt holds a BA in political science from the University of New Mexico, a JD from Willamette University College of Law, an LLM in international environmental law from Georgetown University Law Center, and an MPS in global advocacy from the George Washington University Graduate School of Political Management. She is admitted to the bar in Washington, DC, Oregon, and the 9th Circuit. Follow her on X @sarahehunt01. Get full access to Breaking Battlegrounds at breakingbattlegrounds.substack.com/subscribe

MLOps.community
AI Reliability, Spark, Observability, SLAs and Starting an AI Infra Company

MLOps.community

Play Episode Listen Later Jun 27, 2025 97:22


LLMs are reshaping the future of data and AI, and ignoring them might just be career malpractice. Yoni Michael and Kostas Pardalis unpack what's breaking, what's emerging, and why inference is becoming the new heartbeat of the data pipeline.
// Bio
Kostas Pardalis
Kostas is an engineer-turned-entrepreneur with a passion for building products and companies in the data space. He's currently the co-founder of Typedef. Before that, he worked closely with the creators of Trino at Starburst Data on some exciting projects. Earlier in his career, he was part of the leadership team at Rudderstack, helping the company grow from zero to a successful Series B in under two years. He also founded Blendo in 2014, one of the first cloud-based ELT solutions.
Yoni Michael
Yoni is the co-founder of Typedef, a serverless data platform purpose-built to help teams process unstructured text and run LLM inference pipelines at scale. With a deep background in data infrastructure, Yoni has spent over a decade building systems at the intersection of data and AI, including leading infrastructure at Tecton and engineering teams at Salesforce. Yoni is passionate about rethinking how teams extract insight from massive troves of text, transcripts, and documents, and believes the future of analytics depends on bridging traditional data pipelines with modern AI workflows. At Typedef, he's working to make that future accessible to every team, without the complexity of managing infrastructure.
// Related Links
Website: https://www.typedef.ai
https://techontherocks.show
https://www.cpard.xyz
~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Kostas on LinkedIn: /kostaspardalis/
Connect with Yoni on LinkedIn: /yonimichael/
Timestamps:
[00:00] Breaking Tools, Evolving Data Workloads
[06:35] Building Truly Great Data Teams
[10:49] Making Data Platforms Actually Useful
[18:54] Scaling AI with Native Integration
[24:04] Empowering Employees to Build Agents
[28:17] Rise of the AI Sherpa
[36:09] Real AI Infrastructure Pain Points
[38:05] Fixing Gaps Between Data, AI
[46:04] Smarter Decisions Through Better Data
[50:18] LLMs as Human-Machine Interfaces
[53:40] Why Summarization Still Falls Short
[01:01:15] Smarter Chunking, Fixing Text Issues
[01:09:08] Evaluating AI with Canary Pipelines
[01:11:46] Finding Use Cases That Matter
[01:17:38] Cutting Costs, Keeping AI Quality
[01:25:15] Aligning MLOps to Business Outcomes
[01:29:44] Communities Thrive on Cross-Pollination
[01:34:56] Evaluation Tools Quietly Consolidating
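For readers skimming the timestamps, the chunking discussion around [01:01:15] concerns preprocessing long unstructured text before LLM inference. Below is a minimal, generic sketch of overlapping chunking; it is an illustration only, not Typedef's implementation, and the transcript variable is a stand-in.

# Minimal sketch of overlapping text chunking ahead of LLM inference
# (generic illustration, not Typedef's implementation).
def chunk_text(text: str, max_chars: int = 2000, overlap: int = 200) -> list[str]:
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # overlap preserves context across chunk boundaries
    return chunks

transcript = "..." * 10_000  # stand-in for a long transcript or document
pieces = chunk_text(transcript)
# Each piece would then flow through an inference step (summarize, classify, extract).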

FYI - For Your Innovation
From Customer Service To The Classroom – AI Agents Are Coming For It All With Alan Bekker

FYI - For Your Innovation

Play Episode Listen Later Jun 26, 2025 55:31


Brett Winton and ARK analyst Jozef Soja dive deep into the rapidly evolving world of AI agents: software entities that are increasingly automating enterprise functions like customer support. They explore why AI agents are gaining traction, how they're priced, and the potential for a new kind of agent-versus-agent arms race between companies and consumers. Later in the episode, they're joined by Dr. Alan Bekker, founder of eSelf.ai and former Head of Conversational AI at Snap, who shares his journey from building voice agents for call centers to launching a real-time, face-to-face AI tutoring platform. Alan offers insights into how the rise of large language models (LLMs) is reshaping education, what makes a great AI tutor, and why a visual, embodied presence is crucial for learning.
Key Points From This Episode:
00:00:00 What enterprise AI agents actually do and how companies like Salesforce are pricing them
00:03:41 Why $2 per AI conversation may already undercut human support costs
00:05:04 The return on investment (ROI) model behind agent adoption and enterprise productivity
00:06:41 Why agent-based software may retain higher pricing power than other AI tools
00:09:11 The coming arms race: AI agents negotiating with other AI agents
00:12:30 Scaling demand for customer service with intelligent automation
00:15:04 Vertical vs. horizontal Software as a Service (SaaS) in the AI agent ecosystem
00:16:43 AI's impact across the software stack: SaaS, Platform as a Service (PaaS), and Infrastructure as a Service (IaaS)
00:17:56 Why building your own AI apps may soon be cheaper than onboarding SaaS
00:20:01 ARK's internal hackathon and how non-engineers are becoming developers
00:20:29 Guest: Dr. Alan Bekker joins to discuss the evolution of conversational AI
00:22:04 The journey from decision trees to LLMs: lessons from Snap's AI team
00:27:32 Seeing GPT's impact from inside: OpenAI's early partner outreach
00:31:47 Why face-to-face AI tutors found strong product-market fit in education
00:33:59 eSelf's go-to-market strategy: partnering with publishers as a business-to-business-to-consumer (B2B2C) wedge
00:36:24 Pricing real-time AI tutoring tools in a margin-conscious market
00:40:00 Business-to-consumer (B2C) aspirations: moving toward a direct-to-student tutoring product
00:44:56 What's still missing for real-time AI to match human-level teaching
00:48:03 The psychological impact of avatars: building trust through embodied agents
00:51:43 Why personalization, not just LLM knowledge, matters in tutoring
00:54:20 Democratizing learning: LLMs as the end of expert-driven education

The Divorce Survival Guide Podcast
Episode 329: Divorcing with ADHD: Tracy Otsuka on Trauma, Misdiagnosis, and Mental Overload

The Divorce Survival Guide Podcast

Play Episode Listen Later Jun 26, 2025 41:02


The brilliant Tracy Otsuka is back on the show for another rich conversation about why ADHD so often gets misdiagnosed (or completely missed) in women, how trauma can mimic or amplify ADHD symptoms, and what you can actually do to function and advocate for yourself if you're dealing with either (or both) during divorce. We also dig into the very real challenges of trying to function while your brain is in a constant state of overwhelm: whether that's from trauma, ADHD, or the mental chaos that comes when the lines between them blur. Tracy breaks down the importance of understanding how your brain is wired, why traditional systems so often fail neurodivergent women, and how to build supports that actually work for you. Whether you're navigating ADHD or the aftermath of trauma, reclaiming your own narrative isn't just important, it's necessary, especially if someone else is trying to write it for you.
Here's what else we discuss in this episode:
How ADHD presents differently in women than men and why so many of us go undiagnosed (3:23)
The difference between a trauma state and ADHD and why knowing the distinction matters (10:15)
What to do when your ADHD diagnosis is used against you by your partner or ex in a weaponized or manipulative way (22:10)
How gender roles and stereotypes create additional shame and pressure for neurodivergent women (26:16)
Tracy's brilliant tip for using ChatGPT as a digital support tool in divorce (30:58)
Learn more about Tracy Otsuka:
Tracy Otsuka, JD, LLM, AACC, ACC, is a certified ADHD coach and the host of the ADHD for Smart Ass Women podcast. Her book of the same name, published by Harper Collins - William Morrow, was named one of Amazon Editors' Top 20 Best Nonfiction Books of 2024. Over the past decade, she has empowered thousands of clients (from doctors and therapists to C-suite executives and entrepreneurs) to see their neurodivergence as a strength, not a weakness. Leveraging her analytical skills from her time as lead counsel at the U.S. Securities and Exchange Commission, she helps clients boost productivity, improve finances, save relationships, and live happier lives. Tracy's expertise and experience as an adult living with ADHD are regularly sought out by top-tier media including Bloomberg, CBS Mornings, ABC News Live, Forbes, Inc, Prevention, ADDitude magazine, and The Goal Digger Podcast. When she's not sharing her thought leadership around ADHD on other platforms, she hosts her own podcast, which ranks #1 in its category and has over 7 million downloads across 160 countries. She also moderates a Facebook group with over 100,000 members. A married mother of two, Tracy lives in Sonoma County outside of San Francisco.
Resources & Links:
The Divorce Survival Guide Resource Bundle
Phoenix Rising: A Divorce Empowerment Collective
Focused Strategy Sessions with Kate
Episode 287: ADHD for Smart Ass Women with Tracy Otsuka (Neurodivergence in Relationships)
Tracy's book, ADHD for Smart Ass Women
Tracy's podcast
ADHD 2.0: New Science and Essential Strategies for Thriving with Distraction--from Childhood through Adulthood, Edward M. Hallowell, M.D.
ChatGPT
Aimee Says AI
===================
DISCLAIMER: THE COMMENTARY AND OPINIONS AVAILABLE ON THIS PODCAST ARE FOR INFORMATIONAL AND ENTERTAINMENT PURPOSES ONLY AND NOT TO PROVIDE LEGAL OR PSYCHOLOGICAL ADVICE. YOU SHOULD CONTACT AN ATTORNEY, COACH, OR THERAPIST IN YOUR STATE TO OBTAIN ADVICE WITH RESPECT TO ANY PARTICULAR ISSUE OR PROBLEM.
Episode Link: https://kateanthony.com/podcast/episode-329-divorcing-with-adhd-tracy-otsuka-on-trauma-misdiagnosis-and-mental-overload/  

Sharp Tech with Ben Thompson
A Big Ruling on LLM Training and Midsummer Mail on NBA Salaries in Tech, Starting from Scratch in 2025, and More

Sharp Tech with Ben Thompson

Play Episode Listen Later Jun 26, 2025 76:50


On today's show Andrew and Ben begin by breaking down a favorable ruling for Anthropic in a case concerning copyrighted material, the fair use doctrine, and LLM training. Then: a midsummer mailbag with questions on huge salaries for big names in tech that may be past their prime, waiting for AI to suggest software solutions, starting careers from scratch in 2025, Huawei's ascent and China's commitment to Apple, Taylor Swift, shortform video regulation, recommendations for would-be watch collectors, and more.

Voices of Search // A Search Engine Optimization (SEO) & Content Marketing Podcast

The AI vs. traditional search optimization debate isn't as binary as it seems. Tyler Einberger from Momentic explains why fundamental SEO principles work effectively across both environments. He discusses prioritizing technical implementations like server-side rendering for LLM visibility, addressing measurement challenges in AI search environments, and adapting strategies based on how different platforms crawl and index content.
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
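The server-side rendering point comes down to the fact that many AI crawlers appear to read only the initial HTML response rather than executing JavaScript. A minimal Flask sketch contrasting the two approaches follows; it is illustrative only, and the routes and content store are hypothetical, not from the episode.

# Illustrative contrast: server-rendered HTML stays visible to crawlers that skip JavaScript.
from flask import Flask

app = Flask(__name__)

ARTICLES = {"seo-vs-ai-search": "Full article text lives here..."}  # hypothetical content store

@app.route("/client/<slug>")
def client_rendered(slug: str) -> str:
    # A JS-rendered shell: a crawler that never runs the bundle sees an empty page.
    return f'<div id="root" data-slug="{slug}"></div><script src="/static/app.js"></script>'

@app.route("/server/<slug>")
def server_rendered(slug: str) -> str:
    # Server-side rendering: the full content is already in the HTML response.
    body = ARTICLES.get(slug, "Not found")
    return f"<main><h1>{slug}</h1><article>{body}</article></main>"

if __name__ == "__main__":
    app.run(debug=True)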

Supermanagers
AI Agents Run Your Inbox, Calendar & Socials with Sam Partee

Supermanagers

Play Episode Listen Later Jun 26, 2025 40:46


What if your AI agent could send emails, check your calendar, and even text people on your behalf, all securely and with your permission? In this episode, Aydin and guest co-host Alexandra from Fellow talk with Sam Partee, co-founder of Arcade, about how AI agents are actually becoming useful in the real world. Sam breaks down how Arcade enables LLM-powered agents to act on your behalf across tools like Gmail, Slack, Salesforce, and more, without sacrificing security. He also shows us how he automates his own workflows, from email triage to iMessage replies, and shares how tools like Cursor and Claude are reshaping how engineers work day-to-day. Whether you're technical or not, this episode is packed with actionable insights on what it means to work in an AI-native company, and how to start doing it yourself.
Timestamps
0:00 – The future of agents impersonating people
01:20 – Meet Sam Partee and his background in high-performance computing
02:50 – What Arcade is and how it powers AI agents
05:10 – Use case: ambient social media agents
06:50 – “YOLO mode” vs. human-in-the-loop agent workflows
07:30 – Building a lean AI-native company
08:00 – Engineers are now 1.5x more productive, with caveats
12:00 – Why the whole team (PMs, QA, etc.) should use tools like Cursor
14:00 – How Markdown became the LLM-native format
17:00 – Sam's iMessage agent and calendar automation
18:45 – His AI-powered inbox (email triage + drafting)
21:00 – Live demo: using Slack assistant “Archer” built with Arcade
24:00 – How non-technical people can use these tools too
27:00 – Cursor vs. Copilot: What's better?
30:00 – Cursor agent mode and example developer workflows
34:00 – Vector databases and prompt design
35:00 – Using LLMs to redesign error handling and generate docs
38:00 – Advice for teams adopting AI: start by building
Tools and Technologies:
Arcade – Let AI agents act on your behalf (email, Slack, calendar, etc.) with secure OAuth.
Cursor – LLM-native IDE with full-codebase context. Ideal for AI-assisted development.
Claude – Chat interface + agent orchestration, paired with Arcade.
LangGraph – Multi-agent orchestration framework with human-in-the-loop support.
TailScale – Secure remote networking; enables Sam to access agents from anywhere.
Twilio – Used for SMS reminders and notifications.
Obsidian + Markdown – Sam uses Markdown + AI for personal notes and research.
GitHub Copilot – Used in tandem with Cursor for inline suggestions and PR reviews.
Subscribe to the channel for more behind-the-scenes looks at how top teams are rethinking work with AI.
Subscribe at thisnewway.com to get the step-by-step playbooks, tools, and workflows.
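The “YOLO mode” vs. human-in-the-loop distinction in the timestamps comes down to whether an agent's tool call runs immediately or waits for user approval. Here is a minimal, generic sketch of that pattern; this is hypothetical code, not Arcade's API, and the tool registry and approval callback are made up for illustration.

# Minimal human-in-the-loop tool execution sketch (hypothetical, not Arcade's API).
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolCall:
    name: str
    args: dict

def send_email(to: str, subject: str, body: str) -> str:
    # Placeholder side effect; a real agent would call an email API here.
    return f"sent '{subject}' to {to}"

TOOLS: dict[str, Callable[..., str]] = {"send_email": send_email}
REQUIRES_APPROVAL = {"send_email"}  # "YOLO mode" would leave this set empty.

def execute(call: ToolCall, approve: Callable[[ToolCall], bool]) -> str:
    if call.name in REQUIRES_APPROVAL and not approve(call):
        return "skipped: user declined"
    return TOOLS[call.name](**call.args)

if __name__ == "__main__":
    call = ToolCall("send_email", {"to": "a@example.com", "subject": "Hi", "body": "Draft reply"})
    # A console prompt stands in for whatever approval UI (Slack button, SMS, etc.) an agent uses.
    print(execute(call, approve=lambda c: input(f"Allow {c.name} {c.args}? [y/N] ").lower() == "y"))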

Wow If True
117: AI Boyfriends

Wow If True

Play Episode Listen Later Jun 25, 2025 63:14


Isabel and Amanda throw out their other plans and instead talk about the subreddit “myboyfriendisAI,” which is about, you guessed it, AI boyfriends and other types of digital partners. We also discuss Meta's not-amazing practices and the LLM paper doing the rounds on the AI hater part of the internet, with Iz's human partner weighing in. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.