0:00 - It's a Playoff Double Diparoo tomorrow which means we need a SUPER SPECIAL KEYS TO THE GAME.
14:11 - We still don't know if Landeskog's playing tomorrow or what his exact usage will be. But let's dream a little bit...
32:15 - It's time for our favorite part of the week DRUNK TAKESSSSSSSSSS
@PermissionToStanPodcast on Instagram (DM us & Join Our Broadcast Channel!) & TikTok!
NEW Podcast Episodes every THURSDAY! Please support us by Favoriting, Following, Subscribing, & Sharing for more KPOP talk!
Coachella Weekend 2 schedule with BLACKPINK members going solo JENNIE, LISA, along with ENHYPEN and XG
Comebacks: ME:I, MOONBIN Tribute, IVE, KAI (EXO), TWS, ONEW (SHINEE), MEOVV, NEXZ, FIFTY FIFTY, KEP1ER, BOYNEXTDOOR
Music Videos: KAI (EXO), NEXZ, &TEAM, ENHYPEN, XG, UNIS
Coachella Weekend 1 Recap:
LISA's outfits & stages were fucking cool!
ENHYPEN gets new fans to google them up
JENNIE surprisingly has 1 main outfit & kills it
XG gets umbrellas wrecked by the wind
TWICE opens for COLDPLAY in Korea
TWICE/MISAMO SANA x MLB Korea Baseball Interview
LE SSERAFIM does the Pepero/Pocky challenge, who gets the closest?
LE SSERAFIM reveals their phone usage and more
BTS JIN announces solo comeback 'Echo'
JHOPE butchers Filipino food name in the Philippines
STRAY KIDS BANGCHAN warns rowdy fans in Peru to back up as some STAY collapse
Not listening, STRAY KIDS HYUNJIN gets upset and has them walk off stage & cuts the light
Support this podcast at — https://redcircle.com/permission-to-stan-podcast-kpop-multistans-andamp-weebs/donations
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
Imagine losing 170 freaking pounds! We share Gary's story, cover usage of creatine, loading, maintenance and higher dosages, major update at the US Southern Border and more.
Membership Specials: https://swolenormousx.com/memberships
Download The Swolenormous App: https://swolenormousx.com/swolenormousapp
MERCH - https://papaswolio.com/
Watch the full episodes here: https://rumble.com/thedailyswole
Submit A Question For The Show: https://swolenormousx.com/aps
Get On Papa Swolio's Email List: https://swolenormousx.com/email
Download The 7 Pillars Ebook: https://swolenormousx.com/7-Pillars-Ebook
Try A Swolega Class From Inside Swolenormous X: https://www.swolenormousx.com/swolega
Get Your Free $10 In Bitcoin: https://www.swanbitcoin.com/papaswolio/
Questions? Email Us: Support@Swolenormous.com
Laurence Holmes and Ryan McGuffey were joined by Cubs manager Craig Counsell to discuss the latest storylines surrounding the team, including center fielder Pete Crow-Armstrong's recent power surge.
In this episode, hosts Ash and Dusty tackle the pervasive issue of phone addiction, especially as it relates to individuals with ADHD. They explore how the constant stimulation of smartphones can lead to feelings of guilt and shame, as well as frustration from lost time. The conversation emphasizes the importance of understanding individual motivations behind phone usage and how it serves different needs, from distraction to connection. Both hosts share personal anecdotes and strategies for navigating phone usage while maintaining a healthy relationship with technology. The discussion also highlights that not all phone usage is inherently negative; it can be a valuable tool for engagement and connection when used mindfully. Ash and Dusty encourage listeners to shift their focus from guilt over phone habits to understanding the underlying reasons for their behavior. By linking phone usage to personal intentions and desired outcomes, individuals can find a balance that works for them, ensuring that their relationship with technology enhances rather than detracts from their lives. Episode links + resources: Join the Community | Become a Patron Our Process: Understand, Own, Translate. About Asher and Dusty For more of the Translating ADHD podcast: Episode Transcripts: visit TranslatingADHD.com and click on the episode Follow us on Twitter: @TranslatingADHD Visit the Website: TranslatingADHD.com
Today we are talking about Drupal Forge, how it works, and why it's changing Drupal with guest Darren Oh. We'll also cover ECA VBO as our module of the week. For show notes visit: https://www.talkingDrupal.com/497 Topics Elevator pitch for Drupal forge What is Drupal Forge built on What is the pricing model Does Drupal Forge only allow you to install Drupal CMS Drupal Forge and templates, was there an influence on Site Templates Why offer templates for Drupal Forge Camps Is Drupal Forge open source What is on the Roadmap How can people get involved Resources Drupal Forge on Drupal.org Drupal Forge DevPanel Guests Darren Oh - drupalforge.org Darren Oh Hosts Nic Laflin - nLighteneddevelopment.com nicxvan John Picozzi - epam.com johnpicozzi Kathy Beck - kbeck303 MOTW Correspondent Martin Anderson-Clutz - mandclu.com mandclu Brief description: Have you ever wanted a powerful and flexible way to create views bulk operations without writing code? There's a module for that. Module name/project name: ECA VBO Brief history How old: created in May 2022 by mxh, a prolific maintainer in his own right, and an active member of the group that has made the ECA ecosystem so far-reaching Versions available: 1.1.1 and 2.1.1, the latter of which supports ^10.3 || ^11 Maintainership Actively maintained Security coverage Documentation: sort of. The README has step-by-step instructions, and the project page has links to both an example model and a tutorial video Number of open issues: 7 open issues, 1 of which is a bug against the current branch Usage stats: 320 sites Module features and usage With the module installed, your site will have a number of Events available within ECA, specifically for defining models that can perform bulk actions on the selected items in a view. In my own experience the most useful event is VBO: Execute Views bulk operation (one by one). From there, you can define the logic of what needs to happen to the selected items. I've used it for fairly simple operations like changing content to a specific moderation state, but you could define complex logic that is conditional on field values, site configuration, or even global factors like the time of day With one or more models defined, you can now add a field to your view for ECA bulk operations and then select which eligible models you want available in that specific view It's worth adding that the ECA model can also include logic to define who should have access to perform a particular operation, which could be as simple as checking the role of the current user, but can be as complex as you need I came across ECA VBO during some recent work on the Drupal Event Platform, which is already available to try out on Drupal Forge, but there should be a more formal announcement on that front soon
"Laisse-la observer cet usage en vue du jour de mon ensevelissement !" Méditation de l'évangile (Jn 12, 1-11) par Monique BaujardChant Final : "Je veux n'être qu'à toi" par ExoDistribué par Audiomeans. Visitez audiomeans.fr/politique-de-confidentialite pour plus d'informations.
Are the Knicks ready to unleash PJ Tucker?
The boys discuss what to do about bees in your office, the origin of the Seven Seas and why we can't have more public bathrooms.
Pass protection can make or break an NFL offense. Craig and Logan break down how teams teach protection responsibilities — from linemen to backs — and how a player's college role can impact their ability to thrive at the next level. Follow Craig on Instagram: @craig_hoffman Follow Logan on Instagram: @logan_paulsen82 To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
The Cognitive Crucible is a forum that presents different perspectives and emerging thought leadership related to the information environment. The opinions expressed by guests are their own, and do not necessarily reflect the views of or endorsement by the Information Professionals Association. During this episode, Carrick Longley discusses Large Language Models (LLM) and influence. Key topics include: LLM 101 Usage and changes in prompt engineering Improving influence resonance and speed The recent DeepSeek model controversy Bias in foundational models and Software development Recording Date: 26 Mar 2025 Research Question: Guest suggests an interested student or researcher examine: Resources: ZenithFlow Company of One by Paul Jarvis Reddit: Local LLama Link to full show notes and resources Guest Bio: Dr. Carrick Longley is the Founder and CEO of ZenithFlow, a company pioneering privacy-first AI solutions for strategic communications. A former Marine Corps SIGINT and Technical IO Officer with a Ph.D. in Information Sciences, he leads the development of StoryForge, an advanced platform that transforms raw data into compelling narratives. Through ZenithFlow's local-first AI approach, Dr. Longley is revolutionizing how organizations leverage artificial intelligence to create impactful messaging while maintaining complete data privacy and control. About: The Information Professionals Association (IPA) is a non-profit organization dedicated to exploring the role of information activities, such as influence and cognitive security, within the national security sector and helping to bridge the divide between operations and research. Its goal is to increase interdisciplinary collaboration between scholars and practitioners and policymakers with an interest in this domain. For more information, please contact us at communications@information-professionals.org. Or, connect directly with The Cognitive Crucible podcast host, John Bicknell, on LinkedIn. Disclosure: As an Amazon Associate, 1) IPA earns from qualifying purchases, 2) IPA gets commissions for purchases made through links in this post.
JUDGE JESS: My Husband's Towel Usage Is Tearing Us Apart. Highlights from the Kramer & Jess Show (Kramer & Jess On Demand Podcast, Mon, 07 Apr 2025).
Today we are talking about Drupal Basics, Why we got away from them, and what we do to bring them back with guest Mike Anello. We'll also cover Entity Reference Override as our module of the week. For show notes visit: https://www.talkingDrupal.com/496 Topics Where did this idea come from Why do you feel more basic content is necessary How did Drupal get away from the basics How can we get more basic talks into Drupal events How do we balance basic content with new topics like recipes or Drupal CMS How do we entice speakers to take these talks Could this adversely affect attendance Question from Stephen: How do we address virtual events and that they are preferred by a younger crowd Will Florida Drupal Camp have a track Guests Mike Anello - drupaleasy.com ultimike Hosts Nic Laflin - nLighteneddevelopment.com nicxvan John Picozzi - epam.com johnpicozzi Kathy Beck - kbeck303 MOTW Correspondent Martin Anderson-Clutz - mandclu.com mandclu Brief description: Have you ever wanted to replace a text field on entities you reference in your Drupal site? There's a module for that. Module name/project name: Entity Reference Override Brief history How old: created in Sep 2016 by Jeff Eaton, though recent releases are by Benjamin Melançon (mlncn) of Agaric Versions available: 2.0.0-beta3 which works with Drupal 10.1 or 11 Maintainership Actively maintained Security coverage, yes but needs a stable release Test coverage Documentation - user guide Number of open issues: 13 open issues, 2 of which are bugs against the 2.0.x branch Usage stats: 2,004 sites Module features and usage The module defines a new field type, with associated widgets and formatters. Your site editors will see a normal entity reference field (autocomplete or select) with an additional text field. Text provided in that additional field can be used to override a specific field in the referenced entity's display, or add a class to its rendered markup. This could be handy in use cases like showing people with project-specific roles, or showing related articles with the summary tweaked to be more relevant to the main content being viewed. It's not a super-common need, but if you need this capability, it can save having to set up a more complicated content architecture with some kind of intermediary entity I thought this module would be interesting because today's guest, Mike Anello, is listed as one of the maintainers. Mike, what can you tell us about your history with the module and how you've used it?
In this episode of “Waking Up With AI,” Katherine Forrest and Anna Gressel revisit how companies should brief their boards of directors about AI in 2025, offering insights into AI usage, risks and strategic considerations for senior management. Learn More About Paul, Weiss's Artificial Intelligence Practice: https://www.paulweiss.com/practices/litigation/artificial-intelligence
In this episode, we speak with Kathryn Clay and Mark Muriello, our partners from the International Bridge, Tunnel and Turnpike Association (IBTTA), and AAMVA Vehicle Program Manager Marcy Coleman about the upcoming IBTTA Road Usage Charging Summit.
Host: Ian Grossman
Producer: Claire Jeffrey and Chelsey Hadwin
Music: Gibson Arthur
This episode is brought to you by CHAMP. CHAMP's government suite modernizes DMVs with a secure, configurable platform that replaces or enhances existing systems. Say goodbye to paperwork and delays—CHAMP streamlines operations, accelerates transactions, and simplifies workflows so your team can focus on serving constituents efficiently. Learn more at CHAMPtitles.com.
Dean continues to figure out what's causing the ceiling to sag. Dean says to avoid painting a concrete patio due to the moisture it creates and wear and tear, recommending an exterior epoxy concrete coating instead. Lastly, Dean advises a caller on their broken toilet tank and replacing their laminate floor.
In this episode, host Jim Love discusses a rise in unauthorized network scans targeting Juniper and Palo Alto devices, raising concerns about espionage and botnet activities. The podcast also delves into the controversial use of the Signal app by National Security Advisor Mike Waltz's team for sensitive communications, sparking debates on security and legality. Additionally, the episode highlights the potential misuse of OpenAI's advanced image generation tool for creating fraudulent documents. Finally, it covers the mysterious disappearance of cybersecurity professor JF Wang and his wife, following an FBI and Homeland Security investigation. 00:00 Introduction and Overview 00:23 Unauthorized Scans on Network Devices 02:01 National Security Concerns with Signal App 05:21 Risks of AI-Generated Images 07:44 The Disappearance of a Cybersecurity Professor 09:57 Conclusion and Upcoming Events
Matt Pauley breaks down the latest news across the St. Louis area from 6-8PM: - Is the Cardinals attendance issue becoming a large enough issue to be addressed? - You can't deny this past week of baseball has been FUN - John Kelly breaks down another gritty win for the Blues as they notch their 10th straight - Thoughts on the Cardinals pitching after Sonny was left in a little - Ryan Fagan on a roller-coaster start to the Cardinals season - Your calls on Sports Open Line! - Earl Austin Jr. on the newest Billikens & how the staff is navigating a new era of CBB - the special chance that fans are getting with the weekly Oli Marmol interview - Jen Siess joins in studio talking CITY's offensive struggles
In this episode, the Red Tractor assurance scheme responds to criticism by pledging to make farm audits easier for farmers. We speak to the Red Tractor interim chairman Alistair Mackintosh and operations director Philippa Wiltshire. Also in this episode – the government unveils long-awaited plans to reduce pesticide usage by 10% within five years. We look at the likely impact on your farm business – and on food production. And a medical microbiologist says farmers need to remain vigilant after the first case of bird flu is confirmed in a sheep in Yorkshire. This episode of the Farmers Weekly Podcast is co-hosted by Johann Tasker, Louise Impey and Hugh Broom.
Contact or follow Johann (X): @johanntasker
Contact or follow Louise (X): @louisearable
Contact or follow Hugh (X): @sondesplacefarm
For Farmers Weekly, visit fwi.co.uk or follow @farmersweekly
To contact the Farmers Weekly Podcast, email podcast@fwi.co.uk. In the UK, you can also text the word FARM followed by your message to 88 44 0.
Adam and Thomas discuss various economic themes, including the current state of inflation expectations in the US, consumer sentiment, and the implications of rising tariffs. They explore how these factors interact with political strategies and the broader economy, emphasising the historical context of tariffs and their impact on manufacturing. The conversation highlights the uncertainty in the economic landscape and the potential consequences for consumers and businesses alike. In this episode, the hosts delve into various economic themes, including inflation and consumer expectations, defence spending dynamics, and trends in antidepressant use across different countries. They explore how inflation affects pricing strategies, the implications of defence budgets on global security, and the surprising statistics surrounding mental health medication usage. The conversation also touches on unique economic indicators that may signal broader economic trends.
If your life isn't complete without charts, then you need to follow the Comedian V Economist Instagram. Comments on the show? Send them to cve@equitymates.com
*****
In the spirit of reconciliation, Equity Mates Media and the hosts of Comedian V Economist acknowledge the Traditional Custodians of country throughout Australia and their connections to land, sea and community. We pay our respects to their elders past and present and extend that respect to all Aboriginal and Torres Strait Islander people today.
*****
Comedian V Economist is a product of Equity Mates Media. This podcast is intended for education and entertainment purposes. Any advice is general advice only, and has not taken into account your personal financial circumstances, needs or objectives. Before acting on general advice, you should consider if it is relevant to your needs and read the relevant Product Disclosure Statement. And if you are unsure, please speak to a financial professional. Equity Mates Media operates under Australian Financial Services Licence 540697. For more information head to the disclaimer page on the Equity Mates website where you can find ASIC resources and find a registered financial professional near you.
Comedian V Economist is part of the Acast Creator Network. Hosted on Acast. See acast.com/privacy for more information.
Labour's admitted the gang patch ban hasn't turned out as badly as they feared. The Deputy Police Commissioner has confirmed staff are pleasantly surprised at the ban's effectiveness, saying it's brought more control to the situation. It comes as Gisborne police are given more powers to deal with gangs. Labour police spokesperson Ginny Andersen told Mike Hosking the fact it's gone well is a good thing. She says given fears haven't eventuated of frontline officers getting hurt, she can recognise it's gone better than everyone thought it would. LISTEN ABOVE See omnystudio.com/listener for privacy information.
Adam Krellenstein is one of the co-founders of Counterparty and the current maintainer of the protocol. In this episode, he talks about his plan to improve Bitcoin native tokens and why you should consider giving XCP a chance. Time stamps: Introducing Adam Krellenstein (00:00:50) Counterparty's Unique Features (00:01:30) Historical Significance of Counterparty (00:02:09) Comparison with Mastercoin/Omni (00:03:19) The Birth of Bitcoin Maximalism (00:04:48) Vitalik's Influence and Ethereum (00:08:43) Pushback from Bitcoin Core Developers (00:09:16) Concerns about Data Storage on Bitcoin (00:10:36) Counterparty Software and Decentralization (00:12:51) Art and NFTs in Counterparty (00:14:51) Airdrop Announcement (00:14:24) Off-Chain Data Storage Challenges (00:16:00) Counterparty's Original Vision (00:18:19) The Popularity of Digital Art (00:18:51) The Role of Digitally Native Assets (00:19:32) Article about Counterparty on Bitcoin Takeover Website (00:21:08) The Role of Bitcoin in Counterparty (00:22:30) Decentralization of Counterparty Launch (00:23:12) XCP Token Creation and Burning Mechanism (00:24:27) Value Creation vs. Destruction (00:26:00) Onboarding Users with Dispensers (00:29:41) Counterparty's Development History (00:31:01) Challenges in Running Counterparty Nodes (00:35:30) Counterparty 2.0 and Ledger Fork? (00:39:17) Community Dynamics and Development (00:41:00) Future of Smart Contracts on Counterparty (00:42:58) Counterparty Ecosystem Health (00:43:46) Atomic Swaps Feature (00:44:18) Layer Two Proposals (00:45:18) Comparison with Ordinals & BRC20 (00:47:44) Counterparty's Functional Advantages (00:48:30) Sponsors (00:49:29) Expectations at Launch (00:52:31) Future of Counterparty (00:53:39) Cultural Shifts in Bitcoin (00:55:23) Collaboration with Ordinals (00:58:54) Differences Between Runes and Counterparty (01:00:09) Encoding Data in Bitcoin Transactions (01:01:07) Future Improvements for Counterparty (01:03:11) Community Dynamics (01:03:58) Counterparty Wallet Development (01:05:36) Bitcoin Head Cards (01:07:04) Community Engagement and Nostalgia (01:08:29) Bitcoin's Security Budget (01:09:21) Transaction Fees and Counterparty (01:10:33) Recommended Counterparty Wallets (01:11:30) Data Storage on Bitcoin (01:12:32) Ecosystem and Community Reception (01:15:08) Pepe Cash and Market Trends (01:16:45) Current Developments in Counterparty (01:18:10) Counterparty Roadmap (01:21:14) Use Cases of Counterparty (01:23:35) Stablecoin Possibilities (01:24:30) Can Bitcoin Privacy Kill Counterparty? (01:25:38) Counterparty's Resilience (01:26:26) Discussion on Counterparty's Usage (01:26:46) Closing Remarks and Where to Follow Adam Krellenstein (01:27:02) Counterparty Communication Channels (01:27:18) Airdrop Announcement (01:27:46) Shoutout to Chris Derose (01:28:25)
Today we are talking about AI in EDU, how it can provide efficiencies, and how you might start using it today with guests Brian Piper & Mike Miles. We'll also cover External Entities as our module of the week. For show notes visit: https://www.talkingDrupal.com/494 Topics How are you using AI with your team at Rochester How are you using AI with your team at MIT What are the AI policies at your institutions On the ingestion side how do you manage consumption Tips and tricks to incorporate AI into your work Can you talk more about using AI to distribute content outside the web Do you have tips for managers How have you seen EDUs using AI other than as assistive technology What are your favorite tools Have you done adversarial testing How does AI in Drupal impact EDU Where do you see AI in EDU in the future Resources Crawler rate limit Externalizing costs AI for U MidCamp 2024 session about YaleSites Tools Element 451 Builder io Deque Axe Devtools Descript Opus clips Kapwing HeyGen Synthesia Text to video Sora Veo Guests Brian Piper - brianwpiper.com Mike Miles - Mike-miles.com mikemiles86 Hosts Nic Laflin - nLighteneddevelopment.com nicxvan John Picozzi - epam.com johnpicozzi Andrew Berry - lullabot.com deviantintegral MOTW Correspondent Martin Anderson-Clutz - mandclu.com mandclu Brief description: Have you ever wanted to connect your Drupal website to an external data source, to include their datasets into the presentation of your Drupal-managed content? There's a module for that. Module name/project name: External Entities Brief history How old: created in May 2015 by attiks, though the most recent release is by Colan Schwartz (colan), a fellow Canadian Versions available: 8.x-2.0-beta1 and 3.0.0-beta4, the latter of which supports Drupal 10 and 11 Maintainership Actively maintained, latest release was less than a month ago Security coverage (though technically needs a stable release) Test coverage Documentation: user guide Number of open issues: 77 open issues, 3 of which are bugs against the 3.x branch, though one is marked fixed now Usage stats: 679 sites Module features and usage The External Entities module lets you map fields from external data sources to fields on a “virtual” entity in Drupal. This allows for external data to be used with Drupal's powerful features like Views, Entity Queries, or Search API as well as use your local Drupal site's theme to theme data from an external source The module does provide a time-based caching layer for external entities, but you can also implement a more custom cache expiration logic through custom code External entities can also have annotations, essentially Drupal-managed information that will be associated with the external entity, and accessed as a normal field through all Drupal field operations. This could allow you to have Drupal-based comments on information from a different website, for example There is a sizeable ecosystem of companion modules, to help you connect to different kinds of external storage, as well as to help you aggregate data from multiple sources In my Drupal career I've worked on a number of higher ed websites, and the ability to display externally-managed data is a pretty common requirement, either from an HRIS system to show staff and faculty data, or a courseware solution like Banner. I thought this would be an interesting tangent to today's topic
I'd love to hear from you! Send a text message.
DOWNLOAD FREE GUIDE: https://www.intentionaleaders.com/manage-time-maximize-life
The quest for perfect time management has led many of us down endless paths searching for the ideal system or tool that will magically organize our chaotic lives. But what if we've been approaching the problem entirely wrong? What if time management isn't about managing time at all, but rather about managing our brains?
Time itself is merely a social construct—a way to organize our existence on this planet. The real challenge lies in how we think about time and the choices we make regarding its use. Oliver Burkeman's perspective in "4,000 Weeks" (the approximate lifespan of someone who lives to 80) offers a sobering reminder that our time is finite, forcing us to consider whether we're spending our limited weeks on what truly matters.
The Eisenhower Time Matrix provides a practical framework for evaluating our activities through two crucial lenses: urgency and importance. While many of us excel at handling urgent and important matters (Quadrant 1), we often neglect important but not urgent activities (Quadrant 2) like strategic planning, relationship building, and health maintenance—until they become crises. Meanwhile, we waste countless hours on activities that are neither important nor urgent (Quadrant 4), or become addicted to other people's urgencies (Quadrant 3), mistaking busyness for productivity and importance.
Perhaps most destructive is our tendency to multitask. Despite what many believe about their abilities, research consistently shows that multitasking damages our cognitive capital both short and long term. Instead, single-tasking with full presence—whether in work or with loved ones—proves far more effective. Techniques like the Pomodoro method can help train our brains to focus intently for short periods before taking earned breaks, dramatically improving both productivity and presence.
The path forward involves small, consistent habit changes that gradually transform our relationship with time. By becoming more intentional about our attention, establishing healthy boundaries around urgency, and aligning our daily actions with our core values, we can reclaim control over our 4,000 weeks and live with greater purpose and less stress. Share this episode with someone who needs to hear this message, and remember: you have complete control over your most valuable resource—your attention.
Be the Best Leader You Know. Perform with Power, Lead with Impact, Inspire Growth.
To sharpen your skills and increase your confidence, check out the Confident Leader Course: https://www.intentionaleaders.com/confident-leader
A massive spike in meth use is being linked to a change in global shopping habits. Christopher Luxon has asked ministers to look into meth use, after annual wastewater results show a 96% increase in consumption last year compared to 2023. Massey University drug researcher Chris Wilkins told Mike Hosking it's likely a case of both people using more, and more people using. He says the increase represents the changes to the drug market, which is moving from a brick-and-mortar store, to a global online platform. LISTEN ABOVE See omnystudio.com/listener for privacy information.
Ian Hamilton and David Heaney discuss:
Varjo XR-4 To Require Subscription For Some Features
Metallica On Apple Vision Pro
Quest Thumb Microgestures & Improved Audio To Expression
Quest Passthrough Camera API
Snap Spectacles See Peridots Play Together & Now Use GPS
LG Ceases XR Product Efforts, But Will Continue R&D
Meta CTO Responds To Controversy Around Using "MR" To Mean Both MR And VR
Meta's CTO Says He "Loved And Hoped To Keep" The Oculus Brand
Meta Claims Open Store & Worlds Push Didn't Affect Spending
Meta Says Quest Headset Usage Up 30%
Meta "Still Investing Massively In VR Gaming"
Bigscreen Beyond 2 & Beyond 2e
In this episode, we sit with security leader and venture investor Sergej Epp to discuss the Cloud-native Security Landscape. Sergej currently serves as the Global CISO and Executive at Cloud Security leader Sysdig and is a Venture Partner at Picus Capital. We will dive into some insights from Sysdig's recent "2025 Cloud-native Security and Usage Report."
Big shout out to our episode sponsor, Yubico!
Passwords aren't enough. Cyber threats are evolving, and attackers bypass weak authentication every day. YubiKeys provide phishing-resistant security for individuals and businesses—fast, frictionless, and passwordless.
Upgrade your security: https://yubico.com
Sergej and I dove into a lot of great topics related to Cloud-native Security, including:
Some of the key trends in the latest Sysdig 2025 Cloud-native Security Report and trends that have stayed consistent YoY. Sergej points out that while attackers have stayed consistent, organizations have and continue to make improvements to their security.
Sergej elaborated on his current role as Sysdig's internal CISO and his prior role as a field CISO and the differences between the two roles in terms of how you interact with your organization, customers, and the community.
We unpacked the need for automated Incident Response, touching on how modern cloud-native attacks can happen in as little as 10 minutes and how organizations can and do struggle without sufficient visibility and the ability to automate their incident response.
The report points out that machine identities, or Non-Human Identities (NHI), are 7.5 times riskier than human identities and that there are 40,000 times more of them to manage. This is a massive problem and gap for the industry, and Sergej and I walked through why this is a challenge and its potential risks.
Vulnerability prioritization continues to be crucial, with the latest Sysdig report showing that just 6% of vulnerabilities are “in-use”, or reachable. Still, container bloat has ballooned, quintupling in the last year alone. This presents real problems as organizations continue to expand their attack surface with expanded open-source usage but struggle to determine what vulnerabilities truly present risks and need to be addressed.
We covered the challenges with compliance, as organizations wrestle with multiple disparate compliance frameworks, and how compliance can drive better security but also can have inverse impacts when written poorly or not keeping pace with technologies and threats.
We rounded out the conversation with discussing AI/ML packages and the fact they have grown by 500% when it comes to usage, but organizations have decreased public exposure of AI/ML workloads by 38% since the year prior, showing some improvements are being made to safeguarding AI workloads from risks as well.
At Enterprise Connect 2025, David Sundstrom, Regional Sales Manager at LogiSense, highlighted how usage-based billing is transforming the way companies monetize software, AI, and connectivity services. Moving Beyond Seat-Based Pricing Traditionally, software and communications platforms relied on seat-based licensing, where businesses would buy a fixed number of user licenses. But as finance teams scrutinize renewals, questioning unused licenses and looking for cost efficiencies, enterprises are shifting to pay-as-you-go models. “A much more elegant way to monetize your product is what we call a usage-based or consumption-based model, where companies pay for what they use,” Sundstrom explained. LogiSense provides the billing infrastructure that enables businesses to move from static pricing to dynamic, real-time billing models. The shift comes with complexity—from tracking usage data to implementing flexible pricing structures—but LogiSense simplifies the process, making it easier for enterprises to roll out consumption-based services. Flexible Pricing Models Drive Growth A key advantage of LogiSense's platform is its ability to support diverse pricing models, including: Pay-per-use – Customers pay a fixed rate per event or interaction. Tiered pricing – Volume discounts reduce per-unit costs as usage increases. Pooled usage – Shared usage across an organization without individual licenses. Wallet-based billing – Prepaid drawdown models, commonly used in IoT and AI, where businesses commit to a spend level and use credits as needed. The AI Monetization Opportunity AI is reshaping industries, and LogiSense is positioned at the intersection of AI and monetization. According to Sundstrom, there are three primary ways companies are integrating AI into their revenue models: Enhancing existing services – AI is being used to automate processes and improve efficiency within existing platforms. Monetizing AI-driven tasks – Businesses are charging per AI-generated action, such as automated customer responses or data processing. Outcome-based billing – AI is delivering measurable results, and companies are shifting from activity-based billing to billing for successful outcomes. A leading example of outcome-based billing is Salesforce, which is monetizing AI-assisted tasks based on results rather than usage. LogiSense's billing platform supports this shift, allowing companies to charge based on the tangible value AI delivers. Why MSPs, MSSPs, and Carriers Should Pay Attention Managed service providers (MSPs), managed security service providers (MSSPs), and carrier service providers are also prime candidates for usage-based billing models. MSPs and MSSPs can bill clients based on service consumption, reducing waste and improving cost alignment. Carrier service providers can bill on behalf of their partners, allowing them to adopt flexible models without building custom billing solutions in-house. Where to Learn More For those interested in learning more about usage-based billing and LogiSense's solutions, the company's website, logisense.com, offers additional resources. Additionally, LogiSense will host the second annual Usage Economy Summit on November 5, 2025, in San Francisco. Following a successful inaugural event, this conference will bring together industry leaders to discuss the future of monetization, AI-driven billing models, and the impact of flexible pricing strategies. 
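For readers who want to see what the pricing models described above look like mechanically, here is a toy Python sketch of pay-per-use and tiered rating. It is purely illustrative; it is not LogiSense's engine or API, and the rates and tier breakpoints are invented for the example.

```python
# Toy illustration of two usage-based pricing models (rates are made up).

def pay_per_use(events: int, rate_per_event: float) -> float:
    """Flat rate per event or interaction."""
    return events * rate_per_event

def tiered(usage: int, tiers: list[tuple[int, float]]) -> float:
    """Volume pricing: each tier (upper_bound, unit_price) applies only to the
    units that fall within it, so the effective per-unit cost drops as usage grows."""
    total, previous_bound = 0.0, 0
    for upper_bound, unit_price in tiers:
        if usage <= previous_bound:
            break
        units_in_tier = min(usage, upper_bound) - previous_bound
        total += units_in_tier * unit_price
        previous_bound = upper_bound
    return total

if __name__ == "__main__":
    print(pay_per_use(events=12_000, rate_per_event=0.002))  # 24.0
    # 25,000 units: first 10k at 0.003, next 15k at 0.002 -> 60.0
    print(tiered(usage=25_000, tiers=[(10_000, 0.003), (50_000, 0.002), (10**9, 0.001)]))
```

Pooled and wallet-based models work the same way in principle: usage is metered first, then rated against a shared allowance or a prepaid balance instead of a per-seat license.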
As enterprises rethink how they charge for their services, LogiSense is helping businesses adapt, scale, and grow with smarter, usage-based billing solutions. #EnterpriseConnect #LogiSense #UsageBasedBilling #SubscriptionEconomy #AI #Monetization #MSP #IoT
Wednesday's second hour.
Episode 26: Parents Defending Education v. Olentangy Local School District, et al.
Parents Defending Education v. Olentangy Local School District, argued before the en banc U.S. Court of Appeals for the Sixth Circuit on March 19, 2025. Argued by Cameron Norris (on behalf of Parents Defending Education); Elliott Gaiser, Solicitor General of Ohio (on behalf of Ohio and 22 other states as amici curiae); and Jaime Santos (on behalf of the Olentangy Local School District Board of Education, et al.).
Background of the case, from the Institute for Free Speech's second amicus brief (in support of reversal): While students may freely identify as having genders that do not correspond to their biological sex, other students enjoy the same right to credit their own perceptions of reality—and to speak their minds when addressing their classmates. Students cannot be compelled to speak in a manner that confesses, accommodates, and conforms to an ideology they reject—even if that ideology's adherents are offended by any refusal to agree with them or endorse their viewpoint. Yet that is what the Olentangy school district's speech code does.
“Pronouns are political.” Dennis Baron, What's Your Pronoun? 39 (2020). History shows that people have long used pronouns to express messages about society and its structure—often in rebellion against the prevailing ideology. And the same is true today. Choosing to use “preferred” or “non-preferred” pronouns often “advance[s] a viewpoint on gender identity.” Meriwether v. Hartop, 992 F.3d 492, 509 (6th Cir. 2021). So mandating that students use “preferred” pronouns or none at all elevates one viewpoint while silencing the other. It compels students to adopt the district's ideology on gender identity while at school, and in doing so, “invades the sphere of intellect and spirit which it is the purpose of the First Amendment to our Constitution to reserve from all official control.” W. Va. Bd. of Educ. v. Barnette, 319 U.S. 624, 642 (1943).
Statement of the Issues, from the Brief of Appellant Parents Defending Education:
The use of gender-specific pronouns is a “hot issue” that “has produced a passionate political and social debate” across the country. Meriwether v. Hartop, 992 F.3d 492, 508-09 (6th Cir. 2021). One side believes that gender is subjective and so people should use others' “preferred pronouns”; the other side believes that sex is immutable and so people should use pronouns that correspond with biological sex. Id. at 498. Like the general public, students have varying views on this important subject, and the Supreme Court has long recognized that students don't “shed their constitutional rights to freedom of speech or expression at the schoolhouse gate.” Tinker v. Des Moines Indep. Cmty. Sch. Dist., 393 U.S. 503, 506 (1969). Yet the Olentangy Local School District has adopted policies that punish speech expressed by one side of the debate—the use of pronouns that are contrary to another student's identity.
The district court upheld the Policies as consistent with the First Amendment and denied PDE's preliminary-injunction motion.
The issues presented in this appeal are:
Whether the District's speech policies likely violate the First Amendment because they compel speech, discriminate based on viewpoint, prohibit speech based on content without evidence of a substantial disruption, or are overbroad.
Whether, if PDE is likely to succeed on the merits, the remaining preliminary-injunction criteria favor issuing a preliminary injunction.
Resources:
CourtListener docket page for Parents Defending Education v. Olentangy Local School Dist, et al.
Brief for Appellant Parents Defending Education
Brief for Appellee Olentangy Local School Dist, et al.
Supplemental En Banc Brief of Plaintiff-Appellant Parents Defending Education
Institute for Free Speech first amicus brief (in support of rehearing en banc)
Institute for Free Speech second amicus brief (in support of reversal)
In this Ask Us Anything episode we answer 4 common questions that we get asked:
Should you have a $ minimum for jobs?
Should you connect with your customers on social media?
What frequency should you meet with your crew leads and what should be discussed?
Should you connect with your homeowners before an estimate if they book an appointment online?
Need someone to bounce questions with and help you navigate your business in 2025? Schedule a free business analysis meeting with us at www.elitebusinessadvisors.com!
In this episode of HSS Presents, Dr. Peter Sculco is joined by his father, Dr. Thomas Sculco—a renowned expert in joint replacement and director of the Complex Joint Reconstruction Center at HSS—to discuss the evolving role of cones in revision knee arthroplasty. They review how bone loss was managed prior to the advent of cones, the resurgence of impaction grafting, and the shift from hybrid to fully cemented stems for improved outcomes. Dr. Thomas Sculco shares his philosophy on the appropriate use of cones, cautioning against overuse due to cost and potential for unnecessary bone removal. The conversation also addresses complex re-revision scenarios and challenges with cone fixation in sclerotic bone. Blending decades of surgical expertise with insights on innovation, cost-effectiveness, and patient-centered care, this episode offers a thoughtful look at current trends in revision arthroplasty. To learn more about The Stavros Niarchos Complex Joint Reconstruction Center at HSS, visit our HSS webpage and our X account.
Today we are talking about The Drupal Developer Survey, Last year's results, and How it helps Drupal with guest Mike Richardson. We'll also cover HTMX as our module of the week. For show notes visit: https://www.talkingDrupal.com/493 Topics What is the Drupal Developer Survey How often does it come out How did it come to be What type of information does it collect Do you look at other surveys What were some of the most interesting stats last year Core contributors How do you expect last year to compare to this year Do you think the outlook will be more positive with Drupal CMS Drop off in Drupal 7 Home users DDEV usage AI questions Security questions Resources Drupal Developer Survey 2024 Results 2025 Drupal Developer Survey HTMX Sucks Guests Mike Richardson - Ironstar Dev Survey richo_au Hosts Nic Laflin - nLighteneddevelopment.com nicxvan John Picozzi - epam.com johnpicozzi Andrew Berry - lullabot.com deviantintegral MOTW Correspondent Martin Anderson-Clutz - mandclu.com mandclu Brief description: Have you ever wanted to replace Drupal's AJAX capabilities with a lightweight library that has no additional dependencies? There's a module for that. Module name/project name: HTMX Brief history How old: created in May 2023 by wouters_f, though recent releases are by fathershawn of Memorial Sloan Kettering Cancer Center Versions available: 1.3.5 and 1.4.0, both of which support Drupal 10.3 and 11 Maintainership Actively maintained, latest release less than a month ago Security coverage Test coverage Documentation included in the repo as well as online Number of open issues: 3 open issues, 1 of which is a bug Usage stats: 92 sites Module features and usage To use HTMX, you need to attach the library to the render array of one or more elements where you want to use it, and then add data attributes to your render array that indicate how you want HTMX to react to user behaviour HTMX can help make your Drupal sites more interactive by dynamically loading or reloading parts of a page, giving it a more "application-like" user experience There is a planning issue to discuss gradually replacing Drupal's current AJAX system with HTMX, and a related Proof Of Concept showing how that could work with an existing Drupal admin form A number of elements in the current AJAX system also rely on jQuery, so adopting HTMX would also help to phase out jQuery in core. HTMX is also significantly more lightweight than JS frameworks like React HTMX is really a developer-oriented project, which is why I thought it would be appropriate for this week's episode
Streamline daily admin tasks with AI-powered insights, natural language queries, and automation using Microsoft 365 Admin Copilot. Quickly recap key updates, monitor service health, and track important changes—all in one place. No more digging through multiple pages—just ask Copilot for the answers you need, grounded in real-time data from your tenant. From finding users and managing licenses to generating visual insights and automating tasks with PowerShell, use Copilot to simplify complex admin workflows and save valuable time. For Copilot in the admin center to light up, all you need is one active Microsoft 365 Copilot license for any user in your tenant and from the Microsoft 365 admin center, you can get started right away. Jeremy Chapman, Director of Microsoft 365, demonstrates how to leverage Copilot for proactive guidance, whether in the Microsoft 365 admin center or directly within Copilot Chat.
► QUICK LINKS:
00:00 - Copilot in Microsoft 365
00:42 - Use Copilot for change management
02:13 - Stay ahead of upcoming changes
03:31 - User and licensing queries
04:21 - Generate Visual Insights for Licensing and Usage
04:50 - Author PowerShell scripts for bulk operations
06:07 - Copilot Chat using Microsoft 365 Admin agent
07:37 - Copilot admin coming soon
07:51 - Wrap up
► Link References
For more information, check out https://aka.ms/CopilotinMAC
Start using Microsoft 365 Copilot in the Microsoft 365 admin center at https://admin.microsoft.com
► Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.
• Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries
• Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog
• Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast
► Keep getting this insider knowledge, join us on social:
• Follow us on Twitter: https://twitter.com/MSFTMechanics
• Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/
• Enjoy us on Instagram: https://www.instagram.com/msftmechanics/
• Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
Sara emphasized the importance of maximizing CRM usage for effective conversions, discussed the benefits of using a CRM system in real estate, and shared strategies for tracking and organizing leads. The team also discussed strategies for improving agent performance and customer engagement, with a focus on personalizing communication and empathizing with clients' situations.
In today's episode of the podcast, brought to you by Sound Agriculture, listen to a conversation between Rockwell City, Iowa grower James Hepp and Carlisle County, Ky., grower Joel Reddick, as they discuss their on-farm cover crop strategies during the Young Farmer Panel at the 2025 National No-Tillage Conference.
While everyone is now repeating that 2025 is the "Year of the Agent", OpenAI is heads down building towards it. In the first 2 months of the year they released Operator and Deep Research (arguably the most successful agent archetype so far), and today they are bringing a lot of those capabilities to the API:
* Responses API
* Web Search Tool
* Computer Use Tool
* File Search Tool
* A new open source Agents SDK with integrated Observability Tools
We cover all this and more in today's lightning pod on YouTube! More details here:
Responses API
In our Michelle Pokrass episode we talked about the Assistants API needing a redesign. Today OpenAI is launching the Responses API, "a more flexible foundation for developers building agentic applications". It's a superset of the chat completion API, and the suggested starting point for developers working with OpenAI models. One of the big upgrades is the new set of built-in tools for the Responses API: Web Search, Computer Use, and Files.
Web Search Tool
We previously had Exa AI on the podcast to talk about web search for AI. OpenAI is also now joining the race; the Web Search API is actually a new "model" that exposes two 4o fine-tunes: gpt-4o-search-preview and gpt-4o-mini-search-preview. These are the same models that power ChatGPT Search, and are priced at $30/1000 queries and $25/1000 queries respectively. The killer feature is inline citations: you do not only get a link to a page, but also a deep link to exactly where your query was answered in the result page.
Computer Use Tool
The model that powers Operator, called Computer-Using-Agent (CUA), is also now available in the API. The computer-use-preview model is SOTA on most benchmarks, achieving 38.1% success on OSWorld for full computer use tasks, 58.1% on WebArena, and 87% on WebVoyager for web-based interactions. As you will notice in the docs, `computer-use-preview` is both a model and a tool through which you can specify the environment. Usage is priced at $3/1M input tokens and $12/1M output tokens, and it's currently only available to users in tiers 3-5.
File Search Tool
File Search was also available in the Assistants API, and it's now coming to Responses too. OpenAI is bringing search + RAG all under one umbrella, and we'll definitely see more people trying to find new ways to build all-in-one apps on OpenAI. Usage is priced at $2.50 per thousand queries and file storage at $0.10/GB/day, with the first GB free.
Agent SDK: Swarms++!
https://github.com/openai/openai-agents-python
To bring it all together, after the viral reception to Swarm, OpenAI is releasing an officially supported agents framework (which was previewed at our AI Engineer Summit) with 4 core pieces:
* Agents: Easily configurable LLMs with clear instructions and built-in tools.
* Handoffs: Intelligently transfer control between agents.
* Guardrails: Configurable safety checks for input and output validation.
* Tracing & Observability: Visualize agent execution traces to debug and optimize performance.
Multi-agent workflows are here to stay! OpenAI now explicitly designs for a set of common agentic patterns: Workflows, Handoffs, Agents-as-Tools, LLM-as-a-Judge, Parallelization, and Guardrails.
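To make those primitives concrete, here is a minimal Python sketch of a handoff between two agents, modeled on the patterns in the openai-agents-python repository at launch. Treat the exact class and method names as a snapshot of the launch-day README rather than a stable contract; check the repo for the current API.

```python
# pip install openai-agents   (assumes OPENAI_API_KEY is set in the environment)
from agents import Agent, Runner

# A specialist agent the triage agent can hand control to.
spanish_agent = Agent(
    name="Spanish agent",
    instructions="You only speak Spanish.",
)

# The entry-point agent: its handoffs list tells the SDK which agents
# it may transfer the conversation to mid-run.
triage_agent = Agent(
    name="Triage agent",
    instructions="Hand off to the Spanish agent if the user writes in Spanish.",
    handoffs=[spanish_agent],
)

# Runner drives the agent loop (tool calls, handoffs) until a final output
# is produced; runs are traced automatically in the OpenAI dashboard.
result = Runner.run_sync(triage_agent, "Hola, ¿cómo estás?")
print(result.final_output)
```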
OpenAI previewed this in part 2 of their talk at NYC:
Further coverage of the launch from Kevin Weil, WSJ, and OpenAIDevs, AMA here.
Show Notes
* Assistants API
* Swarm (OpenAI)
* Fine-Tuning in AI
* 2024 OpenAI DevDay Recap with Romain
* Michelle Pokrass episode (API lead)
Timestamps
* 00:00 Intros
* 02:31 Responses API
* 08:34 Web Search API
* 17:14 Files Search API
* 18:46 Files API vs RAG
* 20:06 Computer Use / Operator API
* 22:30 Agents SDK
And of course you can catch up with the full livestream here:
Transcript
Alessio [00:00:03]: Hey, everyone. Welcome back to another Latent Space Lightning episode. This is Alessio, partner and CTO at Decibel, and I'm joined by Swyx, founder of Small AI.
swyx [00:00:11]: Hi, and today we have a super special episode because we're talking with our old friend Roman. Hi, welcome.
Romain [00:00:19]: Thank you. Thank you for having me.
swyx [00:00:20]: And Nikunj, who is most famously, if anyone has ever tried to get any access to anything on the API, Nikunj is the guy. So I know your emails because I look forward to them.
Nikunj [00:00:30]: Yeah, nice to meet all of you.
swyx [00:00:32]: I think that we're basically convening today to talk about the new API. So perhaps you guys want to just kick off. What is OpenAI launching today?
Nikunj [00:00:40]: Yeah, so I can kick it off. We're launching a bunch of new things today. We're going to do three new built-in tools. So we're launching the web search tool. This is basically chat GPD for search, but available in the API. We're launching an improved file search tool. So this is you bringing your data to OpenAI. You upload it. We, you know, take care of parsing it, chunking it. We're embedding it, making it searchable, give you this like ready vector store that you can use. So that's the file search tool. And then we're also launching our computer use tool. So this is the tool behind the operator product in chat GPD. So that's coming to developers today. And to support all of these tools, we're going to have a new API. So, you know, we launched chat completions, like I think March 2023 or so. It's been a while. So we're looking for an update over here to support all the new things that the models can do. And so we're launching this new API. It is, you know, it works with tools. We think it'll be like a great option for all the future agentic products that we build. And so that is also launching today. Actually, the last thing we're launching is the agents SDK. We launched this thing called Swarm last year where, you know, it was an experimental SDK for people to do multi-agent orchestration and stuff like that. It was supposed to be like educational experimental, but like people, people really loved it. They like ate it up. And so we are like, all right, let's, let's upgrade this thing. Let's give it a new name. And so we're calling it the agents SDK. It's going to have built-in tracing in the OpenAI dashboard. So lots of cool stuff going out. So, yeah.
Romain [00:02:14]: That's a lot, but we said 2025 was the year of agents. So there you have it, like a lot of new tools to build these agents for developers.
swyx [00:02:20]: Okay. I guess, I guess we'll just kind of go one by one and we'll leave the agents SDK towards the end. So responses API, I think the sort of primary concern that people have and something I think I've voiced to you guys when, when, when I was talking with you in the, in the planning process was, is chat completions going away?
So I just wanted to let it, let you guys respond to the concerns that people might have.
Romain [00:02:41]: Chat completion is definitely like here to stay, you know, it's a bare metal API we've had for quite some time. Lots of tools built around it. So we want to make sure that it's maintained and people can confidently keep on building on it. At the same time, it was kind of optimized for a different world, right? It was optimized for a pre-multi-modality world. We also optimized for kind of single turn. It takes two problems. It takes prompt in, it takes response out. And now with these agentic workflows, we, we noticed that like developers and companies want to build longer horizon tasks, you know, like things that require multiple returns to get the task accomplished. And computer use is one of those, for instance. And so that's why the responses API came to life to kind of support these new agentic workflows. But chat completion is definitely here to stay.
swyx [00:03:27]: And assistance API, we've, uh, has a target sunset date of first half of 2020. So this is kind of like, in my mind, there was a kind of very poetic mirroring of the API with the models. This, I kind of view this as like kind of the merging of assistance API and chat completions, right. Into one unified responses. So it's kind of like how GPT and the old series models are also unifying.
Romain [00:03:48]: Yeah, that's exactly the right, uh, that's the right framing, right? Like, I think we took the best of what we learned from the assistance API, especially like being able to access tools very, uh, very like conveniently, but at the same time, like simplifying the way you have to integrate, like, you no longer have to think about six different objects to kind of get access to these tools with the responses API. You just get one API request and suddenly you can weave in those tools, right?
Nikunj [00:04:12]: Yeah, absolutely. And I think we're going to make it really easy and straightforward for assistance API users to migrate over to responsive. Right. To the API without any loss of functionality or data. So our plan is absolutely to add, you know, assistant like objects and thread light objects to that, that work really well with the responses API. We'll also add like the code interpreter tool, which is not launching today, but it'll come soon. And, uh, we'll add async mode to responses API, because that's another difference with, with, uh, assistance. I will have web hooks and stuff like that, but I think it's going to be like a pretty smooth transition. Uh, once we have all of that in place. And we'll be. Like a full year to migrate and, and help them through any issues they, they, they face. So overall, I feel like assistance users are really going to benefit from this longer term, uh, with this more flexible, primitive.
Alessio [00:05:01]: How should people think about when to use each type of API? So I know that in the past, the assistance was maybe more stateful, kind of like long running, many tool use kind of like file based things. And the chat completions is more stateless, you know, kind of like traditional completion API. Is that still the mental model that people should have? Or like, should you buy the.
Nikunj [00:05:20]: So the responses API is going to support everything that it's at launch, going to support everything that chat completion supports, and then over time, it's going to support everything that assistance supports. So it's going to be a pretty good fit for anyone starting out with open AI.
Uh, they should be able to like go to Responses. Responses, by the way, also has a stateless mode, so you can pass in store: false and that'll make the whole API stateless, just like chat completions. We're really trying to like get this unification story in so that people don't have to juggle multiple endpoints. That being said, chat completions is just like the most widely adopted API, it's so popular. So we're still going to like support it for years with like new models and features. But if you're a new user, or if you're an existing user and you want to tap into some of these like built-in tools or something, you should feel totally fine migrating to Responses, and you'll have more capabilities and performance than chat completions.swyx [00:06:16]: I think the messaging that, I agree, resonated the most when I talked to you was that it is a strict superset, right? Like you should be able to do everything that you could do in chat completions and with assistants. And the thing is, I just assumed that because you're now, you know, by default stateful, you're actually storing the chat logs or the chat state. I thought you'd be charging me for it. So, you know, to me, it was very surprising that you figured out how to make it free.Nikunj [00:06:43]: Yeah, it's free. We store your state for 30 days. You can turn it off. But yeah, it's free. And the interesting thing on state is that it just like makes, particularly for me, it makes like debugging things and building things so much simpler, where I can like create a responses object that's like pretty complicated and part of this more complex application that I've built, and I can just go into my dashboard and see exactly what happened: did I mess up my prompt, did it not call one of these tools, did I misconfigure one of the tools. Like the visual observability of everything that you're doing is so, so helpful. So I'm excited, like, about people trying that out and getting benefits from it, too.swyx [00:07:19]: Yeah, it's really, I think, a really nice to have. But all I'll say is that my friend Corey Quinn says that anything that can be used as a database will be used as a database. So be prepared for some abuse.Romain [00:07:34]: All right. Yeah, that's a good one. We've seen some of that with the metadata. Some people are very, very creative at stuffing data into an object. Yeah.Nikunj [00:07:44]: And we do have metadata with responses. Exactly. Yeah.Alessio [00:07:48]: Let's get through all of these. So, web search. I think when I first saw web search, I thought you were going to just expose an API that then returns kind of like a nice list of things. But the way it's named is like GPT-4o search preview. So I'm guessing you're using basically the same model that is in ChatGPT search, which is fine-tuned for search. I'm guessing it's a different model than the base one. And it's impressive, the jump in performance. So just to give an example, in SimpleQA, GPT-4o is 38% accuracy, GPT-4o search is 90%. But we always talk about how models are not everything you need, like the tools around them are just as important. So, yeah, maybe give people a quick review on like the work that went into making this special.Nikunj [00:08:29]: Should I take that?Alessio [00:08:29]: Yeah, go for it.Nikunj [00:08:30]: So firstly, we're launching web search in two ways. One in the Responses API, which is our API for tools.
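A rough sketch of the "two ways" to get web search that Nikunj describes above, assuming the launch-day tool and model names (web_search_preview, gpt-4o-search-preview); treat the exact names as assumptions to check against the docs.

```python
from openai import OpenAI

client = OpenAI()

# 1) Responses API: web search as a built-in tool. store=False keeps the call
#    fully stateless, just like chat completions.
resp = client.responses.create(
    model="gpt-4o",
    tools=[{"type": "web_search_preview"}],
    input="What did OpenAI announce for developers this week?",
    store=False,
)
print(resp.output_text)

# 2) Chat Completions: no built-in tools, so you switch to the search-tuned model.
chat = client.chat.completions.create(
    model="gpt-4o-search-preview",
    messages=[{"role": "user", "content": "What did OpenAI announce for developers this week?"}],
)
print(chat.choices[0].message.content)
```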
It's going to be available as a web search tool itself. So you'll be able to go tools, turn on web search, and you're ready to go. We still wanted to give chat completions people access to real-time information. So in the chat completions API, which does not support built-in tools, we're launching direct access to the fine-tuned model that ChatGPT search uses, and we call it GPT-4o search preview. And how is this model built? Basically, our search research team has been working on this for a while. Their main goal is to, like, get information, like get a bunch of information from all of our data sources that we use to gather information for search and then pick the right things and then cite them as accurately as possible. And that's what the search team has really focused on. They've done some pretty cool stuff. They use like synthetic data techniques. They've done like o-series model distillation to, like, make these 4o fine-tunes really good. But yeah, the main thing is, like, can it remain factual? Can it answer questions based on what it retrieves and get cited accurately? And that's what this like fine-tuned model really excels at. And so, yeah, so we're excited that, like, it's going to be directly available in chat completions along with being available as a tool. Yeah.Alessio [00:09:49]: Just to clarify, if I'm using the Responses API, this is a tool. But if I'm using chat completions, I have to switch model. I cannot use o1 and call search as a tool. Yeah, that's right. Exactly.Romain [00:09:58]: I think what's really compelling, at least for me and my own uses of it so far, is that when you use, like, web search as a tool, it combines nicely with every other tool and every other feature of the platform. So think about this for a second. For instance, imagine you have, like, a Responses API call with the web search tool, but suddenly you turn on function calling. You also turn on, let's say, structured outputs. So you can have, like, the ability to structure any data from the web in real time in the JSON schema that you need for your application. So it's quite powerful when you start combining those features and tools together. It's kind of like an API for the Internet almost, you know, like you get, like, access to the precise schema you need for your app. Yeah.Alessio [00:10:39]: And then just to wrap up on the infrastructure side of it, I read in the post that publishers can choose to appear in the web search. So are people in it by default? Like, how can we get Latent Space in the web search API?Nikunj [00:10:53]: Yeah. Yeah. I think we have some documentation around how websites, publishers can control, like, what shows up in a web search tool. And I think you should be able to, like, read that. I think we should be able to get Latent Space in for sure. Yeah.swyx [00:11:10]: You know, I think so. I compare this to a broader trend that I started covering last year of online LLMs. Actually, Perplexity, I think, was the first to offer an API that is connected to search, and then Gemini had the sort of search grounding API. And I think you guys, I actually didn't, I missed this in the original reading of the docs, but you even give like citations with like the exact sub-paragraph that is matching, which I think is the standard nowadays. I think my question is, how do we think about what a knowledge cutoff is for something like this, right?
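A hedged sketch of Romain's "API for the Internet" point above: web search plus structured outputs in a single Responses call. The text.format shape and field names below are assumptions based on the structured-outputs documentation and may differ in the current SDK.

```python
from openai import OpenAI

client = OpenAI()

# JSON Schema the model's answer must conform to.
schema = {
    "type": "object",
    "properties": {
        "headline": {"type": "string"},
        "source_url": {"type": "string"},
        "published": {"type": "string"},
    },
    "required": ["headline", "source_url", "published"],
    "additionalProperties": False,
}

# One call: search the live web, then return data shaped to our schema.
resp = client.responses.create(
    model="gpt-4o",
    tools=[{"type": "web_search_preview"}],
    input="Find one recent news story about AI agents and return it as JSON.",
    text={"format": {"type": "json_schema", "name": "news_item", "schema": schema, "strict": True}},
)
print(resp.output_text)  # JSON string matching the schema above
```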
Because like now, basically there's no knowledge cutoff, it's always live, but then there's a difference between what the model has sort of internalized in its backpropagation and what it's searching up with its RAG.Romain [00:11:53]: I think it kind of depends on the use case, right? And what you want to showcase as the source. Like, for instance, you take a company like Hebbia that has used this like web search tool. For credit firms or law firms, they can find like, you know, public information from the internet with the live sources and citations that sometimes you do want to have access to, as opposed to like the internal knowledge. But if you're building something different, well, like, you just want to have the information. If you want to have an assistant that relies on the deep knowledge that the model has, you may not need to have these like direct citations. So I think it kind of depends on the use case a little bit, but there are many, uh, many companies like Hebbia that will need that access to these citations to precisely know where the information comes from.swyx [00:12:34]: Yeah, yeah, uh, for sure. And then one thing on the, on like the breadth, you know, I think a lot of the deep research, open deep research implementations have this sort of hyperparameter about, you know, how deep they're searching and how wide they're searching. I don't see that in the docs. But is that something that we can tune? Is that something you recommend thinking about?Nikunj [00:12:53]: Super interesting. It's definitely not a parameter today, but we should explore that. It's very interesting. I imagine like how you would do it with the web search tool and Responses API is you would have some form of like, you know, agent orchestration over here where you have a planning step and then each like web search call that you do like explicitly goes a layer deeper and deeper and deeper. But it's not a parameter that's available out of the box. But it's a cool thing to think about. Yeah.swyx [00:13:19]: The only guidance I'll offer there is a lot of these implementations offer top K, which is like, you know, top 10, top 20, but actually you don't really want that. You want like sort of some kind of similarity cutoff, right? Like some matching score cutoff, because if there's only five documents that match, fine; if there's 500 that match, maybe that's what I want. Right. Yeah. But also that might, that might make my costs very unpredictable, because the costs are something like $30 per thousand queries, right? So yeah. Yeah.Nikunj [00:13:49]: I guess you could, you could have some form of like a context budget and then you're like, go as deep as you can and pick the best stuff and put it into like X number of tokens. There could be some creative ways of, of managing cost, but yeah, that's a super interesting thing to explore.Alessio [00:14:05]: Do you see people using the files and the search API together, where you can kind of search and then store everything in the files so the next time I'm not paying for the search again? And like, yeah, how should people balance that?Nikunj [00:14:17]: That's actually a very interesting question.
And let me first tell you about a really cool way I've seen people use files and search together: they put their user preferences or memories in the vector store, and so a query comes in, you use the file search tool to like get someone's like reading preferences or like fashion preferences and stuff like that, and then you search the web for information or products that they can buy related to those preferences, and you then render something beautiful to show them, like, here are five things that you might be interested in. So that's how I've seen like file search, web search work together. And by the way, that's like a single Responses API call, which is really cool. So you just like configure these things, go boom, and like everything just happens. But yeah, that's how I've seen like files and web work together.Romain [00:15:01]: But I think that what you're pointing out is like interesting, and I'm sure developers will surprise us as they always do in terms of how they combine these tools and how they might use file search as a way to have memory and preferences, like Nikunj says. But I think like zooming out, what I find very compelling and powerful here is like when you have these like neural networks that have like all of the knowledge that they have today, plus real-time access to the Internet for like any kind of real-time information that you might need for your app, and file search, where you can have a lot of company private documents, private details. You combine those three, and you have like very, very compelling and precise answers for any kind of use case that your company or your product might want to enable.swyx [00:15:41]: It's a difference between sort of internal documents versus the open web, right? Like you're going to need both. Exactly, exactly. I never thought about it doing memory as well. I guess, again, you know, anything that's a database, you can store it in and you will use it as a database. That sounds awesome. But I think also you've been, you know, expanding the file search. You have more file types. You have query optimization, custom re-ranking. So it really seems like, you know, it's been fleshed out. Obviously, I haven't been paying a ton of attention to the file search capability, but it sounds like your team has added a lot of features.Nikunj [00:16:14]: Yeah, metadata filtering was like the main thing people were asking us for for a while. And I'm super excited about it. I mean, it's just so critical once your, like, vector store size goes over, you know, more than like, you know, 5,000, 10,000 records, you kind of need that. So, yeah, metadata filtering is coming, too.Romain [00:16:31]: And for most companies, it's also not like a competency that you want to rebuild in-house necessarily, you know, like, you know, thinking about embeddings and chunking and, you know, all of that, like, it sounds very complex for something very, like, obvious to ship for your users. Companies like Navan, for instance, were able to build with file search: like, you know, take all of the FAQs and travel policies that you have, you put that in the file search tool, and then you don't have to think about anything. Now your assistant becomes naturally much more aware of all of these policies from the files.swyx [00:17:03]: The question is, like, there's a very, very vibrant RAG industry already, as you well know. So there's many other vector databases, many other frameworks.
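A minimal sketch of the managed file search flow described above: upload documents into a vector store once, then query them as a tool, optionally alongside web search, in one Responses call. The vector store helper shown (upload_and_poll) may live under client.beta depending on SDK version; treat the exact names as assumptions.

```python
from openai import OpenAI

client = OpenAI()

# One-time setup: create a vector store and add a company document to it.
# OpenAI handles parsing, chunking, and embedding behind the scenes.
store = client.vector_stores.create(name="travel-policies")
client.vector_stores.files.upload_and_poll(
    vector_store_id=store.id,
    file=open("travel_policy.pdf", "rb"),
)

# At query time: private docs via file search, plus live info via web search.
resp = client.responses.create(
    model="gpt-4o",
    tools=[
        {"type": "file_search", "vector_store_ids": [store.id]},
        {"type": "web_search_preview"},
    ],
    input="Per our travel policy, can I book business class to Tokyo, and roughly what would flights cost this month?",
)
print(resp.output_text)
```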
Probably if it's an open source stack, I would say like a lot of the AI engineers that I talk to want to own this part of the stack. And it feels like, you know, like, when should we DIY and when should we just use whatever OpenAI offers?Nikunj [00:17:24]: Yeah. I mean, like, if you're doing something completely from scratch, you're going to have more control, right? Like, so super supportive of, you know, people trying to, like, roll up their sleeves, build their, like, super custom chunking strategy and super custom retrieval strategy and all of that. And those are things that, like, will be harder to do with OpenAI tools. With the OpenAI tool, like, we have an out-of-the-box solution. We give you the tools, we give you some knobs to customize things, but it's more of, like, a managed RAG service. So my recommendation would be, like, start with the OpenAI thing, see if it, like, meets your needs. And over time, we're going to be adding more and more knobs to make it even more customizable. But, you know, if you want, like, the completely custom thing, you want control over every single thing, then you'd probably want to go and hand-roll it using other solutions. So we're supportive of both, like, engineers should pick. Yeah.Alessio [00:18:16]: And then we've got computer use, which I think Operator was obviously one of the hot releases of the year. And we're only two months in. Let's talk about that. And that's also, it seems like a separate model that has been fine-tuned for Operator that has browser access.Nikunj [00:18:31]: Yeah, absolutely. I mean, the computer use models are exciting. The cool thing about computer use is that we're just so, so early. It's like the GPT-2 of computer use or maybe GPT-1 of computer use right now. But it is a separate model that, you know, the computer use team has been working on: you send it screenshots and it tells you what action to take. So the outputs of it are almost always tool calls, and you're inputting screenshots based on whatever computer you're trying to operate.Romain [00:19:01]: Maybe zooming out for a second, because like, I'm sure your audience is like super, super like AI native, obviously. But like, what is computer use as a tool, right? And what's Operator? So the idea for computer use is like, how do we let developers also build agents that can complete tasks for the users, but using a computer or a browser instead? And so how do you get that done? And so that's why we have this custom model, like, optimized for computer use that we use like for Operator ourselves. But the idea behind like putting it as an API is that imagine like now you want to automate some tasks for your product or your own customers. Then now you can have like the ability to spin up one of these agents that will look at the screen and act on the screen. So that means the ability to click, the ability to scroll, the ability to type, and to report back on the action. So that's what we mean by computer use and wrapping it as a tool also in the Responses API. So now like that gives a hint also at the multi-turn thing that we were hinting at earlier, the idea that like, yeah, maybe one of these actions can take a couple of minutes to complete because there's maybe like 20 steps to complete that task. But now you can.swyx [00:20:08]: Do you think computer use can play Pokemon?Romain [00:20:11]: Oh, interesting. I guess we should try it. You know?swyx [00:20:17]: Yeah. There's a lot of interest.
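A sketch of the computer use loop Nikunj and Romain describe above: you ask for a task, the model replies with actions as tool calls, you execute them against a real browser or VM, send back a fresh screenshot, and repeat. The model, tool, and field names are assumptions based on the preview documentation, not a definitive integration.

```python
from openai import OpenAI

client = OpenAI()

resp = client.responses.create(
    model="computer-use-preview",
    tools=[{
        "type": "computer_use_preview",
        "display_width": 1024,
        "display_height": 768,
        "environment": "browser",
    }],
    input="Open example.com and find the pricing page.",
    truncation="auto",
)

# The model replies with tool calls describing actions (click, scroll, type...).
for item in resp.output:
    if item.type == "computer_call":
        print("requested action:", item.action)
        # In a real agent you would perform the action in your browser/VM,
        # capture a new screenshot, send it back as the tool call's output,
        # and loop until the model reports the task is complete.
```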
I think Pokemon really is a good agent benchmark, to be honest. Like it seems like Claude is, Claude is running into a lot of trouble.Romain [00:20:25]: Sounds like we should make that a new eval, it looks like.swyx [00:20:28]: Yeah. Yeah. Oh, and then one more, one more thing before we move on to the Agents SDK. I know you have a hard stop. There's all these, you know, blah, blah, dash preview, right? Like search preview, computer use preview, right? And they're all like fine-tunes of 4o. I think the question is, are we, are they all going to be merged into the main branch or are we basically always going to have subsets of these models?Nikunj [00:20:49]: Yeah, I think in the early days, research teams at OpenAI like operate with like fine-tuned models. And then once the thing gets like more stable, we sort of merge it into the main line. So that's definitely the vision, like going out of preview as we get more comfortable with and learn about all the developer use cases and we're doing a good job at them. We'll sort of like make them part of like the core models so that you don't have to like deal with the bifurcation.Romain [00:21:12]: You should think of it this way as exactly what happened last year when we introduced vision capabilities, you know. Yes. Vision capabilities were in like a vision preview model based off of GPT-4, and then vision capabilities now are like obviously built into GPT-4o. You can think about it the same way for like the other modalities like audio, and those kind of like models, like, optimized for search and computer use.swyx [00:21:34]: Agents SDK, we have a few minutes left. So let's just assume that everyone has looked at Swarm. Sure. I think that Swarm has really popularized the handoff technique, which I thought was like, you know, really, really interesting for sort of a multi-agent. What is new with the SDK?Nikunj [00:21:50]: Yeah. Do you want to start? Yeah, for sure. So we've basically added support for types. We've added support for guardrails, which is a very common pattern. So in the guardrail example, you basically have two things happen in parallel. The guardrail can sort of block the execution. It's a type of like optimistic generation that happens. And I think we've added support for tracing. So I think that's really cool. So you can basically look at the traces that the Agents SDK creates in the OpenAI dashboard. We also like made this pretty flexible. So you can pick any API from any provider that supports the ChatCompletions API format. So it supports Responses by default, but you can like easily plug it in to anyone that uses the ChatCompletions API. And similarly, on the tracing side, you can support like multiple tracing providers. By default, it sort of points to the OpenAI dashboard. But, you know, there's like so many tracing providers, so many tracing companies out there. And we'll announce some partnerships on that front, too. So just like, you know, adding lots of core features and making it more usable, but it's still centered around handoffs as like the main, main concept.Romain [00:22:59]: And by the way, it's interesting, right? Because Swarm just came to life out of like learning from customers directly that like orchestrating agents in production was pretty hard. You know, simple ideas could quickly turn very complex. Like what are those guardrails? What are those handoffs, et cetera? So that came out of like learning from customers.
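A minimal sketch of the triage-and-handoff pattern discussed here, using the Agents SDK (pip install openai-agents); class and function names follow the launch docs' examples and should be verified against the current release.

```python
from agents import Agent, Runner

billing_agent = Agent(
    name="Billing agent",
    instructions="You handle billing and refund questions.",
)
support_agent = Agent(
    name="Support agent",
    instructions="You handle technical support questions.",
)

# The triage agent routes each request to the right specialist via handoffs;
# by default, traces for every run show up in the OpenAI dashboard.
triage_agent = Agent(
    name="Triage agent",
    instructions="Decide whether this is a billing or a support question and hand off.",
    handoffs=[billing_agent, support_agent],
)

result = Runner.run_sync(triage_agent, "I was charged twice for my subscription this month.")
print(result.final_output)
```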
And it was initially shipped as, like, a low-key experiment, I'd say. But we were kind of like taken by surprise at how much momentum there was around this concept. And so we decided to learn from that and embrace it, to be like, okay, maybe we should just embrace that as a core primitive of the OpenAI platform. And that's kind of what led to the Agents SDK. And I think now, as Nikunj mentioned, it's like adding all of these new capabilities to it, like leveraging the handoffs that we had, but tracing also. And I think what's very compelling for developers is like instead of having one agent to rule them all and you stuff like a lot of tool calls in there that can be hard to monitor, now you have the tools you need to kind of like separate the logic, right? And you can have a triage agent that based on an intent goes to different kind of agents. And then on the OpenAI dashboard, we're releasing a lot of new user interface logs as well. So you can see all of the tracing UIs. Essentially, you'll be able to troubleshoot like what exactly happened in that workflow, when the triage agent did a handoff to a secondary agent and a third, and see the tool calls, et cetera. So we think that the Agents SDK combined with the tracing UIs will definitely help users and developers build better agentic workflows.Alessio [00:24:28]: And just before we wrap, are you thinking of connecting this with also the RFT API? Because I know you already have, you kind of store my text completions and then I can do fine-tuning of that. Is that going to be similar for agents, where you're storing kind of like my traces and then help me improve the agents?Nikunj [00:24:43]: Yeah, absolutely. Like you've got to tie the traces to the evals product so that you can generate good evals. Once you have good evals and graders and tasks, you can use that to do reinforcement fine-tuning. And, you know, lots of details to be figured out over here. But that's the vision. And I think we're going to go after it like pretty hard and hope we can like make this whole workflow a lot easier for developers.Alessio [00:25:05]: Awesome. Thank you so much for the time. I'm sure you'll be busy on Twitter tomorrow with all the developer feedback. Yeah.Romain [00:25:12]: Thank you so much for having us. And as always, we can't wait to see what developers will build with these tools and how we can like learn as quickly as we can from them to make them even better over time.Nikunj [00:25:21]: Yeah.Romain [00:25:22]: Thank you, guys.Nikunj [00:25:23]: Thank you.Romain [00:25:23]: Thank you both. Awesome. Get full access to Latent.Space at www.latent.space/subscribe
Credits: 0.25 AMA PRA Category 1 Credit™ CME/CE Information and Claim Credit: https://www.pri-med.com/online-education/podcast/frankly-speaking-cme-423 Overview: The use of electronic devices has increased across all ages, cultures, and socio-economic levels. Usage was also affected by the COVID-19 pandemic. There is a growing body of evidence that screen time can impact cognition and executive function in developing minds, and both the American Academy of Pediatrics (AAP) and the WHO have recommendations on screen time exposure for children. Join us as we discuss recent evidence looking at the impact of screentime on toddlers' cognition and executive function over time. Episode resource links: Fitzpatrick, C., Florit, E., Lemieux, A., Garon-Carrier, G, Mason, L. Associations between preschooler screentime trajectories and executive function. Academic Pediatrics, Volume 0, Issue 0, 102603. Schmidt-Persson J, Rasmussen MGB, Sørensen SO, et al. Screen Media Use and Mental Health of Children and Adolescents: A Secondary Analysis of a Randomized Clinical Trial. JAMA Netw Open. 2024;7(7):e2419881. doi:10.1001/jamanetworkopen.2024.19881 AAP Screen Time Guidelines Raj D, Ahmad N, Mohd Zulkefli NA, Lim PY. Stop and Play Digital Health Education Intervention for Reducing Excessive Screen Time Among Preschoolers From Low Socioeconomic Families: Cluster Randomized Controlled Trial. J Med Internet Res. 2023;25:e40955. Published 2023 May 4. doi:10.2196/40955 Guest: Susan Feeney, DNP, FNP-BC, NP-C Music Credit: Matthew Bugos Thoughts? Suggestions? Email us at FranklySpeaking@pri-med.com
Today we are talking about Pantheon Content Publisher, how it brings Google Docs to Drupal, and why you might want to use it with guests Chris Reynolds & John Money. We'll also cover QR Code Fields as our module of the week. For show notes visit: https://www.talkingDrupal.com/492 Topics What is Pantheon Content Publisher Why was Pantheon Content Publisher created How does it work with Google Docs How do you handle revisions How do you target environments Can you do structured content How do you reference existing content How does this use GraphQL What are some of the use cases you are seeing Who should not use Pantheon Content Publisher Can I develop the SDCs locally with Pantheon Content Publisher What is the ingestion layer like AI layer Talking Drupal workflow Do you have a process for bulk publishing How does startup look Is it PCC or PCP Can Pantheon Content Publisher customers push their own non-Google content Is Pantheon Content Publisher open source Is there a cost Can you translate content Resources Pantheon Content Publisher docs Pantheon Content Publisher module Pantheon Content Publisher Roadmap Guests Chris Reynolds - jazzsequence.com jazzsequence John Money - john.money Hosts Nic Laflin - nLighteneddevelopment.com nicxvan John Picozzi - epam.com johnpicozzi Andrew Berry - lullabot.com deviantintegral MOTW Correspondent Martin Anderson-Clutz - mandclu.com mandclu Brief description: Have you ever wanted your Drupal site to generate various kinds of QR codes? There's a module for that. Module name/project name: QR Code Fields Brief history How old: created in Nov 2023 by Sujan Shrestha of Nepal Versions available: 1.1.1 and 2.1.3, the latter of which works with Drupal 10 and 11 Maintainership Actively maintained Number of open issues: 4 open issues, none of which are bugs Usage stats: 134 sites Module features and usage This module defines not just one but 9 new fields for generating QR codes, including for URLs, vCards, MeCards, Events, and more. Each QR code field accepts inputs based on the associated information that should be exposed. So a URL QR Code field only accepts an input for the URL destination, while an Event QR Code has inputs for a summary, description, location, start, and end. The module also provides a custom block plugin for each type of QR code, to make it easier to display your QR codes wherever you need for your specific use case. The QR Code Fields module also defines a service for generating QR code images, which could also be useful for more custom implementations.
B2B SaaS Pricing models have evolved over the past few years, with 67% of SaaS companies now saying they have introduced at least one element of Usage-Based Pricing. Though this benchmark does not tell the whole story of pricing, as the primary pricing model is still based upon a subscription fee per user or a subscription fee per user + a Usage-Based or Value-Based pricing variable. During this episode, CAC and Growth cover a wide range of current pricing trends and benchmarks including: Subscription vs Usage vs Value vs Hybrid pricing model adoption Adoption of AI and the associated pricing strategies Impact on Growth Rates based upon pricing model used Utilization of "Usage CAPS" and how they are charged Treatment of Usage-Based Revenue when calculating ARR If you are a student of the SaaS industry, and/or are evaluating if your current pricing model is optimized for your customers and your company's financial performance - this episode is chock-full of unique insights and thought-provoking commentary from Dave "CAC" Kellogg and Ray "Growth" Rike. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Hugh Millen joins Dave Softy Mahler and Dick Fain to talk about DK Metcalf reportedly requesting a trade from the Seahawks, how the situation all might have gotten out, his usage with Seattle since 2019, how other teams would have used him, and his value.
Today we are talking about OpenY, a distribution for YMCAs, why it was created, and how it's used today with guests Avi Schwab and Brent Wilker. We'll also cover AI Media Image as our module of the week. For show notes visit: https://www.talkingDrupal.com/491 Topics What is OpenY Why is it important to the YMCA How many Y's use it Is each Y independent technologically Why doesn't the Y create a platform as a service How do you get the message out about OpenY What does a Y pay for and how do they pay What is the governance layer like Any thoughts on recipes How does theming work New features to come How does ImageX support OpenY Resources MOTW FLDC session: From Chatbots to Content Magic: The AI-Driven Future of Drupal YMCA Website Services (OpenY) Glossary YMCA Sandboxes https://sandboxes.y.org/ https://sandbox-carnation-std.y.org/ Get in touch with ImageX about Open Y Avi's sourdough recipe base and flour https://tartinebakery.com/stories/country-bread https://www.janiesmill.com/ Guests Brent Wilker - ImageX.co brent.wilker Hosts Nic Laflin - nLighteneddevelopment.com nicxvan John Picozzi - epam.com johnpicozzi Avi Schwab - froboy.org froboy MOTW Correspondent Martin Anderson-Clutz - mandclu.com mandclu Brief description: Have you ever wanted to use AI to generate images, and save them directly into the Drupal media library once you have the result you want? There's a module for that. Module name/project name: AI Media Image Brief history How old: created in Feb 2025 by coffeymachine Versions available: 1.0.0-alpha2 Maintainership Actively maintained Security coverage: technically, but needs a stable release Number of open issues: 2 open issues, neither of which are bugs Usage stats: 9 sites Module features and usage We have talked before on the podcast about a couple of ways you could use AI to generate images directly within a Drupal website. One used all the latest OpenAI APIs and the other had media library integration, though it only worked through its own admin form. Both were built specifically for DALL•E, OpenAI's image generation service. This new module is a big leap forward because it's based on Drupal's powerful and rapidly innovating AI module, so it can work with multiple AI image generation services. What's more, AI Media Image plugs into the Drupal core media system, so you can use the tool to generate images directly within the media library, including when you open it up in a modal to populate an entity reference field. This makes it significantly more intuitive to use this capability as part of a normal content creation flow. There are a couple of things that may not be intuitive when you first start using AI Media Image. For example, by default it uses the prompt you used to create the image as the alt text that will be saved to the media library. That seems unexpected to me, but if the prompt exceeds the max alt text length of 255 characters then it will throw an error, and then you can overwrite the value of the prompt field to contain proper alt text before saving the image to the media library. This is one of the open issues mentioned earlier, and resolving it would really improve the experience of using this module. I got to play around with this module while preparing a demo for a session about AI I delivered with Mike Anello at Florida Drupalcamp on the weekend, so we'll try to include a link in the notes so you can also watch for that recording and see this module in action.
In a new study, researchers found that pornography use is prevalent among all demographics, and Christian men and women surveyed, sadly, feel ok about their habit. Nick Stumbo, Executive Director of Pure Desire Ministries, joins us to review the study results. Pure Desire Ministries – A safe place for men, women, and young adults to find hope and healing from the effects of sexual brokenness. (https://puredesire.org/)
We're saying goodbye to February with a Friday news roundup! This week, the team is talking about transportation in its many forms. Executive producer Hayley Sperling is on the Metro beat, giving details on an east side bus crash and the latest on contract negotiations between Metro workers and the city. Meanwhile, newsletter editor Rob Thomas digs into why the Dane County Regional Airport had a near-record-breaking year in 2024. Finally, host Bianca Martin has the details on the money flying into our state's Supreme Court race. Mentioned on the show: Services impacted as Madison Metro workers decline overtime, cite ongoing contract negotiations [Channel 3000] The Mayor's (Ambitious) Plan For Addressing Madison's Housing Crisis [City Cast Madison] Wanna talk to us about an episode? Leave us a voicemail at 608-318-3367 or email madison@citycast.fm. We're also on Instagram! Want more Madison news delivered right to your inbox? Subscribe to the Madison Minutes morning newsletter. Looking to advertise on City Cast Madison? Check out our options for podcast and newsletter ads. Learn more about the sponsors of this February 28th episode here: Dane County Humane Society Learn more about your ad choices. Visit megaphone.fm/adchoices
Send Everyday AI and Jordan a text message. What happens when... AI agents are everywhere? To learn, we tapped into the insights from one of the leading voices in AI, Babak Hodjat, whose resume includes helping create the tech behind the original AI agents like Siri. So, how do enterprises prepare for a multi-agent environment? Tune in and find out. Newsletter: Sign up for our free daily newsletter. More on this Episode: Episode Page. Join the discussion: Ask Jordan and Babak questions on AI agents. Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup. Website: YourEverydayAI.com Email The Show: info@youreverydayai.com Connect with Jordan on LinkedIn. Topics Covered in This Episode: 1. Understanding Agents and Large Language Models 2. Implementing Multi-Agent Systems 3. Hallucinations and Errors in AI Systems 4. Usage and Organization within Multi-Agent Environments Timestamps: 00:00 "Rethinking Enterprise with Multi-AI Agents" 05:33 AI Agents Buzz at Davos 07:57 Code Execution via Agent Tools 10:03 Emerging Trend: Multi-Agent AI Integration 14:40 Responsible Multi-Agent System Design 19:35 Multi-Agent System Alignment Challenges 21:19 Resilient AI Through Redundancy 26:26 Generative AI Business Strategies 27:45 Rethinking Human-Device Interaction 31:16 Multi-Agent Enterprise Integration Keywords: Everyday AI, podcast, generative AI, agents, large language models, enterprise companies, multi agent environments, decision making process, Cognizant, Neuro AI, startup culture, agentic AI environments, technology services, AI first company, natural language processing, decision systems, agentification, POC (proof of concept), modular software, agent alignment, AI ethics, human in the loop, multi agent systems, organizational decision making, enterprise productivity, knowledge worker, conversational systems, AI strategy, AI safety, organizational agility. Ready for ROI on GenAI? Go to youreverydayai.com/partner
On today's show, Dane is joined by Wolves beat writer Chris Hine from the Minnesota Star Tribune. Chris was on site and in the locker room for the Wolves comeback win in Oklahoma City on Monday night, and described the vibe in the locker room after the game and what the players had to say. Also some conversation on Anthony Edwards, Naz Reid and the young vets now serving as mentors. Specific topics and timestamps below... - The vibe in the locker room after the win in OKC (1:00) - The Wolves' defensive formula against SGA (12:00) - Ant's usage + the fatigue factor (21:00) - Naz Reid's player option and what his next contract might look like (34:00) - The young vets showing the rookies the way (46:00) - Chris' rankings of the airports around the NBA (52:00) If you'd like to support our partners... -- Try out our new sponsor WtrMln Wtr at Whole Foods or Target: https://drinkwtrmln.com/ -- Contact Adrianna Lonick with Coldwell Banker Realty for a free consultation at: https://www.thedancingrealtor.com/ or call/text 715-304-9920 -- For more information on Treasure Island Watch Parties, visit https://www.ticasino.com -- Try out SKIMS Men's underwear: https://skims.com/collections/menswear -- This episode is brought to you by BetterHelp. Give online therapy a try at https://www.betterhelp.com/DANEMOORE and get on your way to being your best self. -- Get yourself a pair of Duer jeans for 20% by going to: https://www.shopduer.com/danemoore -- Contact Your Home Improvement Company: https://www.yourhomeimprovementco.com/ -- Follow Falling Knife on Instagram for weekly schedule updates: https://www.instagram.com/fallingknifebc/ -- Sign up for Prize Picks, promo code "DANE" for a signup bonus: https://www.prizepicks.com/ -- Want to advertise on the show? Reach out to DaneMooreProductions@gmail.com -- Support the show by subscribing for $5 a month: https://www.patreon.com/DaneMooreNBA -- #BlueWireVideo Learn more about your ad choices. Visit podcastchoices.com/adchoices
Connected 541: The Suspicious Shape of Spaghetti (26 Feb 2025), with Federico Viticci, Stephen Hackett, and Myke Hurley. http://relay.fm/connected/541 Myke is away, so Stephen and Federico are relaunching every shirt from the show's archives. Then, the guys talk through iOS 18.4 and Ticci's new WiFi. Lastly, Stephen chats with Connected headphone expert Merri about her new Powerbeats Pro 2. This episode of Connected is sponsored by: Google Gemini: Supercharge your creativity and productivity. Incogni: Take your personal data back with Incogni! Use code CONNECTED with this link and get 60% off an annual plan. Ecamm: Powerful live streaming platform for Mac. Get one month free. Guest Starring: Merri Hackett Links and Show Notes: Thank you Discord user The E.J.J. for the intro clip. Get Connected Pro: Preshow, postshow, no ads. Submit Feedback
Zack Abbott is the CEO and Co-founder of ZBiotics, and the inventor of ZBiotics' proprietary technology. He has a PhD in microbiology & immunology from the University of Michigan where he studied bacterial gene regulation. Prior to starting ZBiotics, Zack worked in clinical trial design as well as researching HIV vaccines and pursuing novel antibiotics in both academia and industry. Learn more about Zack on Episode 183 where we dive into the pre-alcohol probiotic he created. In this episode, Zack discusses the significance of fiber in gut health and introduces his new product, Sugar to Fiber. The conversation highlights the common fiber deficiency among Americans, the various benefits of fiber, and the innovative approach of using probiotics to convert sugar into fiber, enhancing dietary diversity and gut health. Episode Highlights: • Most people are fiber-deficient.