From May 2, 2023: Generative AI products have been tearing up the headlines recently. Among the many issues these products raise is whether or not their outputs are protected by Section 230, the foundational statute that shields websites from liability for third-party content. On this episode of Arbiters of Truth, Lawfare's occasional series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic and Matt Perault, Director of the Center on Technology and Policy at UNC-Chapel Hill, talked through this question with Senator Ron Wyden and Chris Cox, formerly a U.S. congressman and SEC chairman. Cox and Wyden drafted Section 230 together in 1996—and they're skeptical that its protections apply to generative AI. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
On this episode of Inside the Firm: does density ever create affordable housing? Then some hot takes from Tyler Suomala, and finally, what is generative engine optimization and why should it be your next focus? Join us as we go back Inside the Firm!
How can marketers stay discoverable as AI reshapes the search landscape? This special Hard Corps Marketing Show takeover episode features an episode from the Connect To Market podcast, hosted by Casey Cheshire. In this conversation, Casey sits down with Al Sargent, Senior Director, Product, Solution & Partner Marketing at ZEDEDA, to explore the evolving field of Generative Engine Optimization (GEO). Al delivers a strategic and practical breakdown of how marketers can adapt to the rise of AI tools like ChatGPT, Perplexity, and Gemini: tools that are reshaping how content is discovered and consumed. He explains how GEO differs from traditional SEO, why documentation matters more than ever, and how to leverage content repetition and high-authority sites to maintain discoverability. Al also shares his personal journey in tech and marketing, emphasizing the role of empathy, mentorship, and community-building in professional growth.
In this episode, we cover:
• How to use LLM.txt files and structured content to improve visibility
• The importance of repurposing and repeating content across platforms
• Leveraging high-authority sites like Wikipedia to support GEO
• Why competitive comparison content is crucial in the AI search era
Two service pros, same service, same price, same city… so why does AI recommend them and not you? Here's the truth. AI doesn't guess. It chooses the clearer, safer, more repeated name. This is where answer engine optimization (AEO) meets your daily reality as a service pro. In 2025, SEO alone won't save you. Generative engine optimization, AI discovery, and digital networking are now the trails that decide who gets chosen. In this episode of the Branding Momentum podcast, you'll learn how AI makes the tie-break between you and a competitor:
• How to make your niche and specialty crystal clear so both AI search and humans know exactly what you do
• Why consistency across LinkedIn, your website, podcasts, and transcripts teaches AI to connect the dots
• Where to show up through digital networking—guest podcasts, collaborations, newsletters, articles, and communities—so your name travels further
• Why one collaboration multiplies your visibility across feeds, YouTube clips, blogs, and transcripts faster than twenty solo posts in your own bubble
If you've been visible but not chosen, this is the missing piece. I'm Veronica Di Polo, a marketing strategist based in Moraira, Spain, helping service-based business owners get leads with words that sell.
If you didn't have the chance to attend Content Marketing World 2025, give this episode a listen! I talk with Lee Chapman and Morgan Norris about their time there, and we unpack the event's key takeaways, from storytelling and AI to the future of platforms like LinkedIn and Reddit.
In this episode Wendy Covey welcomes Lee Chapman, President of TREW, and Morgan Norris, Senior Brand Strategist, to discuss their experiences at Content Marketing World 2025. Morgan Norris presented a manufacturing masterclass, focusing on executing a campaign in 30 days. This session provided attendees with actionable strategies to build brand awareness quickly, a skill increasingly necessary in today's fast-paced market. Lee Chapman participated in the mentor-mentee program, emphasizing the value of community and shared learning within the industry. The episode delves into key themes from the conference, including storytelling as a strategy, AI and search disruption, and innovative content systems. AI and search were hot topics, with discussions on generative engine optimization (GEO) and the evolving role of AI in content creation. The speakers emphasized the need for unique content to stand out in a sea of AI-generated material. The episode also explored the potential of platforms like LinkedIn and Reddit for industrial marketing, highlighting the importance of thought leadership and community engagement.
Takeaways:
• Storytelling is crucial for aligning sales and marketing, as highlighted by Pam Didner's session.
• Farah Kober demonstrated the impact of storytelling on brand perception and purchase intent.
• Generative engine optimization (GEO) is becoming a key focus in AI and search strategies.
• Unique content is essential to stand out in a market saturated with AI-generated material.
• Interactive content like configurators and ROI generators can enhance user engagement.
• Creating comparative content can enhance your brand's visibility and engagement.
• Users on Reddit are highly intentional, often seeking specific answers or engaging in community discussions.
Resources:
• Connect with Lee on LinkedIn
• Connect with Morgan on LinkedIn
• Connect with Wendy on LinkedIn
• Register for the Industrial Marketing Summit
Before Siri had sass and Alexa started judging your music taste, the original virtual assistant was quietly revolutionizing the '90s—powered by many patents and a whole lot of foresight. Now, as AI goes from buzzword to boss, we ask, will it transform your job, your home… or just steal your knowledge? This week, Dave, Esmee and Rob speak with Kevin Surace, Futurist, Inventor & "Father" of the Virtual Assistant, about the evolution of AI, what the future might hold, and how disruptive innovation can shake up your organization in ways you might not expect.
TLDR:
00:40 – Introduction of Kevin Surace
05:12 – Rob gets confused by Google Maps reviews and selfies
08:15 – Deep dive into the evolution of AI with Kevin
52:00 – How intelligent agents can help manage digital noise and support mental well-being
1:07:30 – Wrapping up the book The Joy Success Cycle and heading to a concert
Guest
Kevin Surace: https://www.linkedin.com/in/ksurace/
Hosts
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/
Esmee van de Giessen: https://www.linkedin.com/in/esmeevandegiessen/
Production
Marcel van der Burg: https://www.linkedin.com/in/marcel-vd-burg/
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Sound
Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/
Louis Corbett: https://www.linkedin.com/in/louis-corbett-087250264/
'Cloud Realities' is an original podcast from Capgemini
The future of learning isn’t static — it’s responsive, reflective, and built for capability. In this follow-up to our first AI episode, Zoe, Michelle, and Ausmed CEO Will Egan explore the next phase of AI in learning: generative learning — and the governance frameworks that make it safe and useful in real-world healthcare settings. From rubrics and role-based prompts to closed systems and guardrails, this episode takes a look into how generative AI can personalise learning, support performance, and finally help us bridge the gap between theory and practice. Contact the show at podcast@ausmed.com.au. Follow Ausmed on LinkedIn, Facebook & Instagram.
Resources:
• AI and Privacy | Guide
• AI Bias and Cultural Safety in Aged Care | Guide
• Using AI in Healthcare Education | Guide
• How AI and Machine Learning Is Impacting Nursing | Guide
• What does AI mean for healthcare in Australia? | Guide
• Why Reflection Matters | Guide
• Thinking Differently About Change (Pt. 1) | Thought Leadership
• Smart Strategies for Healthcare Education | Thought Leadership
Learn More About Ausmed: https://lnk.bio/ausmed/organisations
See omnystudio.com/listener for privacy information.
In this episode, we explore the evolving impact of generative artificial intelligence (GAI) on the workforce, with a focus on how GAI can affect Asian American professionals. Drawing from recent research, we highlight how tasks requiring human agency—such as interpersonal communication and organizing—are gaining value, while roles centered on data processing and analysis face increasing automation. Tune in for strategies on up-skilling and re-skilling, plus a few alter ego career pivots as we imagine our lives beyond AI.Link to article about GAI.
Can we have a normal conversation about AI? Brian talks with Meghan Sullivan about the effect of rapidly advancing technology on human dignity and our understanding of the imago Dei. Dr. Brian Doak is an Old Testament scholar and professor. Meghan Sullivan is a decorated scholar and teacher at the University of Notre Dame, where she is professor of philosophy. Check out the opening ND Summit Keynote on the DELTA Framework and the Institute for Ethics and the Common Good. New York Times article: Finding God in the App Store. If you enjoy listening to the George Fox Talks podcast and would like to watch, too, check out our channel on YouTube! We also have a web page that features all of our podcasts, a sign-up for our weekly email update, and publications from the George Fox University community.
AI is reshaping industries at a rapid pace, but as its influence grows, so do the ethical concerns that come with it. This episode examines how AI is being applied across sectors such as healthcare, finance, and retail, while also exploring the crucial issue of ensuring that these technologies align with human values. In this conversation, Lois Houston and Nikita Abraham are joined by Hemant Gahankari, Senior Principal OCI Instructor, who emphasizes the importance of fairness, inclusivity, transparency, and accountability in AI systems. AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ---------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hey everyone! In our last episode, we spoke about how Oracle integrates AI capabilities into its Fusion Applications to enhance business workflows, and we focused on Predictive, Generative, and Agentic AI. Lois: Today, we'll discuss the various applications of AI. This is the final episode in our AI series, and before we close, we'll also touch upon ethical and responsible AI. 01:01 Nikita: Taking us through all of this is Senior Principal OCI Instructor Hemant Gahankari. Hi Hemant! AI is pretty much everywhere today. So, can you explain how it is being used in industries like retail, hospitality, health care, and so on? Hemant: AI isn't just for sci-fi movies anymore. It's helping doctors spot diseases earlier and even discover new drugs faster. Imagine an AI that can look at an X-ray and say, hey, there is something sketchy here before a human even notices. Wild, right? Banks and fintech companies are all over AI. Fraud detection. AI has got it covered. Those robo advisors managing your investments? That's AI too. Ever noticed how e-commerce companies always seem to know what you want? That's AI studying your habits and nudging you towards that next purchase or binge watch. Factories are getting smarter. AI predicts when machines will fail so they can fix them before everything grinds to a halt. Less downtime, more efficiency. Everyone wins. Farming has gone high tech. Drones and AI analyze crops, optimize water use, and even help with harvesting. Self-driving cars get all the hype, but even your everyday GPS uses AI to dodge traffic jams. And if AI can save me from sitting in bumper-to-bumper traffic, I'm all for it. 02:40 Nikita: Agreed! Thanks for that overview, but let's get into specific scenarios within each industry. Hemant: Let us take a scenario in the retail industry-- a retail clothing line with dozens of brick-and-mortar stores. Maintaining proper inventory levels in stores and regional warehouses is critical for retailers. In this low-margin business, being out of a popular product is especially challenging during sales and promotions. 
Managers want to delight shoppers and increase sales but without overbuying. That's where AI steps in. The retailer has multiple information sources, ranging from point-of-sale terminals to warehouse inventory systems. This data can be used to train a forecasting model that can make predictions, such as demand increase due to a holiday or planned marketing promotion, and determine the time required to acquire and distribute the extra inventory. Most ERP-based forecasting systems can produce sophisticated reports. A generative AI report writer goes further, creating custom plain-language summaries of these reports tailored for each store, instructing managers about how to maximize sales of well-stocked items while mitigating possible shortages. 04:11 Lois: Ok. How is AI being used in the hospitality sector, Hemant? Hemant: Let us take an example of a hotel chain that depends on positive ratings on social media and review websites. One common challenge they face is keeping track of online reviews, leading to missed opportunities to engage unhappy customers complaining on social media. Hotel managers don't know what's being said fast enough to address problems in real time. Here, AI can be used to create a large data set from the tens of thousands of previously published online reviews. A textual language AI system can perform a sentiment analysis across the data to determine a baseline that can be periodically re-evaluated to spot trends. Data scientists could also build a model that correlates these textual messages and their sentiments against specific hotel locations and other factors, such as weather. Generative AI can extract valuable suggestions and insights from both positive and negative comments. 05:27 Nikita: That's great. And what about Financial Services? I know banks use AI quite often to detect fraud. Hemant: Unfortunately, fraud can creep into any part of a bank's retail operations. Fraud can happen with online transactions, from a phone or browser, and offsite ATMs too. Without trust, banks won't have customers or shareholders. Excessive fraud and delays in detecting it can violate financial industry regulations. Fraud detection combines AI technologies, such as computer vision to interpret scanned documents, document verification to authenticate IDs like driver's licenses, and machine learning to analyze patterns. These tools work together to assess the risk of fraud in each transaction within seconds. When the system detects a high risk, it triggers automated responses, such as placing holds on withdrawals or requesting additional identification from customers, to prevent fraudulent activity and protect both the business and its clients. 06:42 Nikita: Wow, interesting. And how is AI being used in the health industry, especially when it comes to improving patient care? Hemant: Medical appointments can be frustrating for everyone involved—patients, receptionists, nurses, and physicians. There are many time-consuming steps, including scheduling, checking in, interactions with the doctors, checking out, and follow-ups. AI can fix this problem by using electronic health records to analyze lab results, paper forms, scans, and structured data, summarizing insights for doctors with the latest research and patient history. This helps practices reduce costs, boost earnings, and deliver faster, more personalized care. 07:32 Lois: Let's take a look at one more industry. How is manufacturing using AI? 
Hemant: A factory that makes metal parts and other products uses both visual inspections and electronic means to monitor product quality. A part that fails to meet the requirements may be reworked or repurposed, or it may need to be scrapped. The factory seeks to maximize profits and throughput by shipping as much good material as possible, while minimizing waste by detecting and handling defects early. The way AI can help here is with the quality assurance process, which creates X-ray images. This data can be interpreted by computer vision, which can learn to identify cracks and other weak spots, after being trained on a large data set. In addition, problematic or ambiguous data can be highlighted for human inspectors. 08:36 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest tech. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025. 09:20 Nikita: Welcome back! AI can be used effectively to automate a variety of tasks to improve productivity, efficiency, and cost savings. But I'm sure AI has its constraints too, right? Can you talk about what happens if AI isn't able to echo human ethics? Hemant: AI can fail due to lack of ethics. AI can spot patterns, not make moral calls. It doesn't feel guilt, understand context, or take responsibility. That is still up to us. Decisions are only as good as the data behind them. For example, health care AI underdiagnosing women because research data was mostly male. Artificial narrow intelligence tends to automate discrimination at scale. Recruiting AI downgraded resumes just because they contained the word "women's" (for example, women's chess club). Who is responsible when AI fails? For example, if a self-driving car hits someone, we cannot blame the car. Then who owns the failure? The programmer? The CEO? Can we really trust corporations or governments to have programmed AI not to be evil? So, it's clear that AI needs oversight to function smoothly. 10:48 Lois: So, Hemant, how can we design AI in ways that respect and reflect human values? Hemant: Think of ethics like a tree. It needs all parts working together. Roots represent intent. That is our values and principles. The trunk stands for safeguards, our systems, and structures. And the branches are the outcomes we aim for. If the roots are shallow, the tree falls. If the trunk is weak, damage seeps through. The health of roots and trunk shapes the strength of our ethical outcomes. Fairness means nothing without ethical intent behind it. For example, a bank promotes its loan algorithm as fair. But it uses zip codes in decision-making, effectively penalizing people based on race. That's not fairness. That's harm disguised as data. Inclusivity depends on the intent of sustainability. Inclusive design isn't just a check box. It needs a long-term commitment. For example, controllers for gamers with disabilities are only possible because of sustained R&D and intentional design choices. Without investment in inclusion, accessibility is left behind. Transparency depends on the safeguard of robustness. Transparency is only useful if the system is secure and resilient. 
For example, a medical AI may be explainable, but if it is vulnerable to hacking, transparency won't matter. Accountability depends on the safeguards of privacy and traceability. You can't hold people accountable if there is no trail to follow. For example, after a fatal self-driving car crash, deleted system logs meant no one could be held responsible. Without auditability, accountability collapses. So remember, outcomes are what we see, but they rely on intent to guide priorities and safeguards to support execution. That's why humans must have a final say. AI has no grasp of ethics, but we do. 13:16 Nikita: So, what you're saying is ethical intent and robust AI safeguards need to go hand in hand if we are to truly leverage AI we can trust. Hemant: When it comes to AI, preventing harm is a must. Take self-driving cars, for example. Keeping pedestrians safe is absolutely critical, which means the technology has to be rock solid and reliable. At the same time, fairness and inclusivity can't be overlooked. If an AI system used for hiring learns from biased past data, say, mostly male candidates being hired, it can end up repeating those biases, shutting out qualified candidates unfairly. Transparency and accountability go hand in hand. Imagine a loan rejection where the AI's decision isn't clear or explainable. It becomes impossible for someone to challenge or understand why they were turned down. And of course, robustness supports fairness too. Loan approval systems need strong security to prevent attacks that could manipulate decisions and undermine trust. We must build AI that reflects human values and has safeguards. This makes sure that AI is fair, inclusive, transparent, and accountable. 14:44 Lois: Before we wrap, can you talk about why AI can fail? Let's continue with your analogy of the tree. Can you explain how AI failures occur and how we can address them? Hemant: Root elements like do not harm and sustainability are fundamental to ethical AI development. When these roots fail, the consequences can be serious. For example, a clear failure of do not harm is AI-powered surveillance tools misused by authoritarian regimes. This happens because there were no ethical constraints guiding how the technology was deployed. The solution is clear-- implement strong ethical use policies and conduct human rights impact assessments to prevent such misuse. On the sustainability front, training AI models can consume massive amounts of energy. This failure occurs because environmental costs are not considered. To fix this, organizations are adopting carbon-aware computing practices to minimize AI's environmental footprint. By addressing these root failures, we can ensure AI is developed and used responsibly with respect for human rights and the planet. An example of a robustness failure can be a chatbot hallucinating nonexistent legal precedents used in court filings. This could be due to training on unverified internet data and no fact-checking layer. This can be fixed by grounding in authoritative databases. An example of a privacy failure can be an AI facial recognition database created without user consent. The reason being no consent was taken for data collection. This can be fixed by adopting privacy-preserving techniques. An example of a fairness failure can be generated images of CEOs as white men and nurses as women or minorities. The reason being training on imbalanced internet images reflecting societal stereotypes. And the fix is to use a diverse set of images. 
17:18 Lois: I think this would be incomplete if we don't talk about inclusivity, transparency, and accountability failures. How can they be addressed, Hemant? Hemant: An example of an inclusivity failure can be a voice assistant not understanding accents. The reason being training data lacked diversity. And the fix is to use inclusive data. An example of a transparency and accountability failure can be teachers being unable to challenge AI-generated performance scores due to opaque calculations. The reason being no explainability tools were used. The fix being high-impact AI needs human review pathways and explainability built in. 18:04 Lois: Thank you, Hemant, for a fantastic conversation. We got some great insights into responsible and ethical AI. Nikita: Thank you, Hemant! If you're interested in learning more about the topics we discussed today, head over to mylearn.oracle.com and search for the AI for You course. Until next time, this is Nikita Abraham…. Lois: And Lois Houston, signing off! 18:26 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Welcome to the CanadianSME Small Business Podcast, hosted by Maheen Bari. In this episode, we explore how businesses can harness data, cloud, and AI to unlock growth, improve efficiency, and transform insights into strategic action. Joining us is Omar Kazi, Senior AI and Enterprise Solution Architect at MIR Digital Solutions, who brings over 25 years of experience in enterprise technology and AI. Omar shares how mid-sized businesses are adopting cloud and AI, the role of analytics in differentiation, and his vision for the future of business technology.
Key Highlights:
1. AI Adoption for SMEs: Practical cloud AI use cases boosting growth and efficiency.
2. Generative AI in Action: How tools like ChatGPT and CoPilot enhance research and creativity.
3. Analytics for Differentiation: Turning data into predictive insights with Excel and Power BI.
4. The Role of Partners: Why a single AI services partner ensures clarity, growth, and ROI.
5. Future Outlook: Omar's vision for technology and AI in shaping competitive advantage.
Special Thanks to Our Partners:
RBC: https://www.rbcroyalbank.com/dms/business/accounts/beyond-banking/index.html
UPS: https://solutions.ups.com/ca-beunstoppable.html?WT.mc_id=BUSMEWA
Google: https://www.google.ca/
A1 Global College: https://a1globalcollege.ca/
ADP Canada: https://www.adp.ca/en.aspx
For more expert insights, visit www.canadiansme.ca and subscribe to the CanadianSME Small Business Magazine. Stay innovative, stay informed, and thrive in the digital age!
Disclaimer: The information shared in this podcast is for general informational purposes only and should not be considered as direct financial or business advice. Always consult with a qualified professional for advice specific to your situation.
Alexander Schlager, CEO of Aiceberg.ai, discusses the intersection of artificial intelligence (AI) and cybersecurity, emphasizing the importance of securing AI-powered workflows. Aiceberg employs traditional machine learning techniques to safeguard generative AI systems, providing a deterministic and explainable approach to security. This method allows organizations to understand how their AI systems operate and ensures that they can trace and audit the decisions made by these systems, which is crucial in an era where AI incidents may lead to legal challenges.
The conversation highlights the need for organizations to establish robust governance frameworks as they adopt AI technologies. Schlager points out that many businesses are still grappling with basic cybersecurity measures, which complicates their ability to implement effective AI governance. He stresses that organizations must assess their existing security postures and ensure that they are prepared for the rapid deployment of agentic AI, which allows non-technical users to create and manage AI workflows independently.
Schlager provides concrete examples of how Aiceberg's technology is integrated into real-world applications, such as in the banking sector, where AI workflows may involve third-party interactions. He explains that Aiceberg monitors these interactions to classify and respond to potential security threats, ensuring that organizations can demonstrate compliance and safety in the event of an incident. This proactive approach to security is essential for maintaining trust and accountability in AI systems.
Finally, the discussion touches on the broader implications of AI adoption, including the potential for improved customer experiences across various industries. Schlager notes that while AI can enhance service delivery, organizations must navigate the challenges of user expectations and the maturity of their AI implementations. By focusing on customer service and experience, companies can unlock significant value from their AI investments, but they must also prioritize security and governance to mitigate risks.
All our Sponsors: https://businessof.tech/sponsors/
Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/
Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/
Support the show on Patreon: https://patreon.com/mspradio/
Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech
Want our stuff? Cool Merch? Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com
Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Bluesky: https://bsky.app/profile/businessof.tech
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Making a Scene Presents Suno Studio: A Deep Dive into the First True Generative Audio Workstation
The world of music production has always lived on the edge of technology. From the tape machines of the 1950s to the MIDI revolution of the 1980s and the digital audio workstations (DAWs) of the 2000s, each leap forward has reshaped how artists create. Now, in the mid-2020s, we're entering a new era: the rise of the Generative Audio Workstation—a platform where artificial intelligence is not just an assistant but an active collaborator. http://www.makingascene.org
Summary In this episode of the AI for Sales podcast, Chad Burmeister speaks with Ali Parandeh about the evolution of AI, particularly focusing on generative AI and its transformative impact on customer experience and sales processes. They discuss the advancements in AI tools, the importance of context in AI applications, and the skills sales professionals need to thrive in an AI-driven landscape. The conversation also touches on innovative technologies that are shaping the future of AI and how businesses can leverage these tools for efficiency and personalization. Takeaways Generative AI is revolutionizing how we interact with technology. AI can create personalized customer experiences in real-time. Advanced AI applications can automate complex sales processes. AI tools are becoming essential for modern businesses. Context engineering will be crucial for effective AI deployment. Sales professionals should embrace AI tools to enhance productivity. Automation can free up time for more creative tasks. AI can assist in content creation and documentation. Understanding AI's capabilities is key for future success. Curiosity and experimentation with AI tools can lead to innovation. Chapters 00:00 Introduction to AI and Generative AI 02:30 Transforming Customer Experience with AI 04:26 Advanced AI Applications in Sales 10:21 Innovative AI Tools and Success Stories 13:11 Navigating AI's Impact on Creativity 17:54 The Future of AI Technologies 21:47 Essential Skills for Sales Professionals The AI for Sales Podcast is brought to you by BDR.ai, Nooks.ai, and ZoomInfo—the go-to-market intelligence platform that accelerates revenue growth. Skip the forms and website hunting—Chad will connect you directly with the right person at any of these companies.
Security leaders from CyberArk, Fortra, and Sysdig share actionable strategies for securely implementing generative AI and reveal real-world insights on data protection and agent management.
Topics Include:
• Panel explores practical security approaches for GenAI from prototype to production
• Three-phase framework discussed: planning, pre-production, and production security considerations
• Security must be built-in from start - data foundation is critical
• Understanding data location, usage, transformation, and regulatory requirements is essential
• Fortra's security conglomerate approach integrates with AWS native tools and partners
• Machine data initially easier for compliance - no PII or HIPAA concerns
• Identity paradigm shift: agents can dynamically take human and non-human roles
• 97% of organizations using AI tools lack identity and access policies
• Security responsibility increases as you move up the customization stack
• OWASP Top 10 for GenAI addresses prompt injection and data poisoning
• Rigorous model testing including adversarial attacks before deployment is crucial
• Sysdig spent 6-9 months stress testing their agent before production release
• Tension exists between moving fast and implementing proper security controls
• Different security approaches needed based on data sensitivity and model usage
• Zero-standing privilege and intent-based policies critical for agent management
• Multi-agent systems create "Internet of Agents" with exponentially multiplying risks
• Discovery challenge: finding where GenAI is running across enterprise environments
• API security and gateway protection becoming critical with acceptable latency
• Top customer need: translating written AI policies into actionable controls
• Threat modeling should focus on impact rather than just vulnerability severity
Participants:
• Prashant Tyagi - Go-To-Market Identity Security Technology Strategy Lead, CyberArk
• Mike Reed – Field CISO, Cloud Security & AI, Fortra
• Zaher Hulays – Vice President Strategic Partnerships, Sysdig
• Matthew Girdharry - WW Leader for Observability & Security Partnerships, Amazon Web Services
Further Links:
• CyberArk: Website – LinkedIn – AWS Marketplace
• Fortra: Website – LinkedIn – AWS Marketplace
• Sysdig: Website – LinkedIn – AWS Marketplace
See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
In this episode, host Paige West speaks with Matthias Huber, Sr. Director, Solutions Manager, IoT/Embedded & Edge Computing, Supermicro, all about accelerating Edge AI infrastructure for predictive and Generative AI.
We're back! In this Season 5 premiere, the team reunites after their summer break to kick off an exciting new chapter. Join us as we catch up, share bold predictions for the year ahead, and explore big questions, like whether 2026 will be the year of the autonomous organization. Expect candid reflections, lively discussion, and a sneak peek at what's coming up this season. We are very keen this season to establish a feedback loop with listeners, so will be doing shows exploring listener questions and challenges - something we are really looking forward to. Please get in touch with us, via LinkedIn, Substack or cloudrealities@capgemini.com, if you have questions or challenges for us, we'd love to hear from you!
TLDR:
00:20 – We're back!
00:35 – Catching up on what we did during the summer break
10:48 – Planning ahead until Christmas: Microsoft Ignite, AWS re:Invent, an AI mini-series and cool guests
20:27 – Tech talk: iPhone 17, deep democracy training, and the human impact of innovation
32:10 – Will autonomous organizations powered by agents emerge within 12–18 months?
40:45 – Reflections inspired by Jaws, climbing adventures, and Bruce Springsteen
Hosts
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/
Esmee van de Giessen: https://www.linkedin.com/in/esmeevandegiessen/
Production
Marcel van der Burg: https://www.linkedin.com/in/marcel-vd-burg/
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Sound
Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/
Louis Corbett: https://www.linkedin.com/in/louis-corbett-087250264/
'Cloud Realities' is an original podcast from Capgemini
Yinon Costica is the co-founder and VP of product at Wiz, which sold to Google for $32 billion in cash. Costica joins Big Technology Podcast to discuss the extent of the cybersecurity threats that generative AI is creating, from vulnerabilities in AI software to the risks involved in “vibe coding.” Tune in to hear how attackers are using AI, why defenders face new asymmetries, and what guardrails organizations need now. We also cover Google's $32 billion acquisition of Wiz, the DeepSeek controversy, post-quantum cryptography, and the future risks of autonomous vehicles and humanoid robots. Hit play for a sharp, accessible look at the cutting edge of AI and cybersecurity.
---
Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b
Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
00:00 Opening and guest intro
01:05 AI as a new software stack
04:25 Core AI tools with RCE flaws
06:18 Cloud infrastructure risks
09:20 How secure is AI-written code
13:54 Agents and security reviewers
17:38 How attackers use AI today
22:09 Asymmetry: attackers vs. defenders
32:36 What Wiz actually does
40:11 DeepSeek case and media spin
Get ready to discover a whole new way for people to find your business online at the forefront of the AI revolution in this insight-packed episode. Joining the conversation is Dom Wells, Founder & CEO of Onfolio Holdings (NASDAQ: ONFO), a public holding company for profitable online businesses. After acquiring an SEO agency, Dom turned it into a smart new service called Generative Engine Optimization (GEO) — helping businesses show up inside answers, tips, and conversations from tools like ChatGPT, Bing Copilot, and Claude. GEO is changing how brands get noticed in 2025 and beyond. Instead of just showing links, AI tools give people clear answers. GEO makes sure your business is part of those answers, so you’re seen where customers are already looking. In this episode, you’ll find out: How GEO is different from old-school SEO. Why AI search feels more helpful and personal than regular search engines. Simple ways to check and boost your visibility in tools like ChatGPT and Bing Copilot. Real stories of businesses getting fast results with GEO — plus how to grab free audits and tips shared during the show.
In this episode of Founded & Funded, Madrona Investor Joe Horsman sits down with Jeff Leek and Rob Bradley, co-founders of Synthesize Bio, a foundation model company for biology that's unlocking experiments researchers could never run in the lab. Jeff, chief data officer at the Fred Hutch Cancer Center, and Rob, the McIlwain Family endowed chair in data science, share:
• Why a startup is the right fit for generative genomics
• How generative genomics could reshape research, drug trials, and more
• Why RNA is the right starting point for a generative AI model in biology
• What this breakthrough means for the future of drug development
• Why now is biology's “ChatGPT moment”
• What makes Synthesize a true foundation model for biology (not a point solution)
Whether you're a founder, biotech innovator, or AI researcher, this is a must-listen conversation about the intersection of AI, biology, and the future of medicine.
Transcript: https://www.madrona.com/the-future-of-biology-is-generative-inside-synthesize-bios-rna-ai-model
Chapters: (2:00) – Why a Foundation Model for Biology? (5:00) – The Case for RNA (7:00) – Biology's Large Language Model Analogy (9:00) – Solving Impossible Problems (11:00) – Validation & Testing (14:00) – Balancing Big Picture & Specific Biology (15:00) – Why a Company, Not Just a Lab (16:00) – Has Biology Had Its “ChatGPT Moment”? (19:00) – The Data Challenge (23:00) – Real-World Use Cases (26:00) – How Research Will Look in 10 Years (28:00) – Increasing the Odds of Discovery (29:00) – Clinical Trials & Precision Medicine (31:00) – Access & Next Steps
The entry-level project management job isn't what it used to be. With AI automating many of the classic coordinator tasks, the ground floor seems to have disappeared—leaving aspiring PMs wondering how to even get started. In this episode, Galen Low sits down with Benjamin Chan, Founder of CLYMB Consulting, to unpack what this shift really means for junior PMs, hiring managers, and the next generation of project leaders.
Together, they explore how AI is reshaping the role of the project coordinator, what skills and traits are most valuable in today's job market, and how organizations can reimagine career paths to make sure talent isn't left behind. Whether you're breaking in, hiring, or mentoring, this conversation is full of real-world perspective and actionable ideas for navigating the new career landscape.
Resources from this episode:
• Join DPM Membership
• Subscribe to the newsletter to get our latest articles and podcasts
• Connect with Ben on LinkedIn
• Check out CLYMB Consulting
In this powerful episode, Yvette Bethel and Edwin Clamp explore one of the most urgent challenges facing organizations, institutions, and societies today: How do we shift from degenerative systems—those that deplete trust, extract value, and reinforce corruption—toward generative ecosystems that create value, restore integrity, and serve the whole? Join us as we unpack: What makes a system degenerative or corrupt (and how to recognize the signals) The hidden costs of institutional decay, distrust, and misaligned incentives The principles of generative systems — those built on trust, reciprocity, transparency, and sustainability Whether you're a leader, change agent, team member, or simply someone trying to work with integrity in a system that feels out of sync, this episode offers a clear-eyed perspective on what's possible—and where to start. Want to connect with Yvette? You can reach out to her at: LinkedIn Want to connect with Edwin? You can find him at: LinkedIn Want to go deeper? Explore related themes and leadership development tools at the IFB Academy — where future-forward leaders grow.
Aditya Vasudevan, Cohesity's cyber recovery expert, shares battle-tested insights from defending Fortune 100 companies against AI-powered cyberattacks.
Topics Include:
• Cohesity protects 85% of Fortune 100 data with battle-tested cyber recovery experience
• Top 10 cyber adversaries target organizations; Cohesity has defended against most major threats
• GenAI adopted by 100 million users in two months, creating unprecedented security challenges
• New AI threats include prompt injection, synthetic identities, shadow AI, and supply vulnerabilities
• Attackers now use AI for sophisticated phishing, automated malware, and accelerated attack chains
• Real companies completely banned AI after code leaks, misuse incidents, and data concerns
• Three-pillar security approach: fight AI with AI, enhanced training, and automated workflows
• Secure AI design requires private deployments, complete traceability, and role-based access controls
• Amazon Bedrock offers built-in guardrails, private VPCs, and enterprise monitoring capabilities
• Cohesity's Gaia demonstrates secure AI with RAG architecture and permission-aware data access
• Resilience strategy combines immutable backups, anomaly detection, and recovery automation for incidents
• Proper AI security reduces cyber insurance premiums and prevents costly downtime disasters
Participants:
• Aditya Vasudevan - GVP of Cyber Resiliency, Cohesity
Further Links:
• Cohesity: Website | LinkedIn | AWS Marketplace
See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
Our ongoing discussion of the difficulty in integrating the journey on the way of the life of faithfulness to the evangelion of Jesus, The Christ, the Orthodox Christian way of life, into contemporary western culture continues as Jim & Fr Symeon talk about the Tower of Babel as the proto-story of humanity's quest for control over our own destiny, & God, repeatedly, making us aware we're not in control; He is.
Our culture is in the grip of two competing civil religions, two ideologies, which manifest themselves as politics. One is openly anti-Christian; the other uses Christian language, but is in no way advocating actual traditional Christian culture.
Let us know what you think! Particularly let us know if you prefer this video format to the side by side.
This is, again, part one of two. The second half of this conversation will go live in two weeks, on October 2nd.
Reference materials for this episode:
- Saints Who Show Sympathy to Communism - https://mountthabor.com/blogs/the-professors-blog/question-27-orthodox-fathers-on-communism?srsltid=AfmBOopmpsDSl8enxknhEb-3mzRs589BiTPq7agS0YoTbJp13OMtRTy1
Scripture citations for this episode:
- Tower of Babel - Genesis 11
The Christian Saints Podcast is a joint production of Generative Sounds & Paradosis Pavilion with oversight from Fr Symeon Kees
Paradosis Pavilion - https://youtube.com/@paradosispavilion9555
https://www.instagram.com/christiansaintspodcast
https://twitter.com/podcast_saints
https://www.facebook.com/christiansaintspodcast
https://www.threads.net/@christiansaintspodcast
Iconographic images used by kind permission of Nicholas Papas, who controls distribution rights of these images
Prints of all of Nick's work can be found at Saint Demetrius Press - http://www.saintdemetriuspress.com
All music in these episodes is a production of Generative Sounds
https://generativesoundsjjm.bandcamp.com
Distribution rights of this episode & all music contained in it are controlled by Generative Sounds
Copyright 2021 - 2023
Dave, Esmee, and Rob are strapping in for another season of bold, brain-bending conversations—and they're bringing the flux capacitor with them from Back to the Future.
Season 5 beams in global leaders and innovators who challenge how we think about technology, business, and humanity. From AI disruption to digital sovereignty, from leadership to culture—this season's guests are ready to shake things up.
Our first full episode drops on September 25, but before we hit 88 miles per hour, here's a quick trailer to set the timeline straight, or at least bend it a little.
Hosts
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/
Esmee van de Giessen: https://www.linkedin.com/in/esmeevandegiessen/
Production
Marcel van der Burg: https://www.linkedin.com/in/marcel-vd-burg/
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Sound
Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/
Louis Corbett: https://www.linkedin.com/in/louis-corbett-087250264/
'Cloud Realities' is an original podcast from Capgemini
Ioana explores how AI is transforming the role of designers from creators to curators and editors of machine-generated outputs. This episode was recorded in partnership with Wix Studio.
In this episode:
• When did you first notice AI design outputs starting to look the same?
• If our role shifts to curation—similar to art curators—what qualities define a design curator?
• Are we still truly creating, or have we become editors and directors of AI-generated work? Are we building solutions as before, or primarily guiding and shaping AI outputs?
Check out these links:
• Join Anfi's Job Search community. The community includes 3 courses, 12 live events and workshops, and a variety of templates to support you in your job search journey.
• Ioana's AI Goodies Newsletter
• Ioana's Domestika course Create a Learning Strategy
• Enroll in Ioana's AI course "AI-Powered UX Design: How to Elevate Your UX Career" on Interaction Design Foundation with a 25% discount.
• Into UX design online course by Anfisa
❓Next topic ideas: Submit your questions or feedback anonymously here
Follow us on Instagram to stay tuned for the next episodes.
The three largest movie studios in the US allege brazen copyright infringement across dozens of protected works. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Organizations across various industries are increasingly seeking to harness generative AI's capabilities to enhance productivity and operational efficiency. But while generative AI technologies can provide organizations with multiple opportunities, there are also risks unless integration is carried out with caution. In this episode of Risk in Context, Marsh's Gregory Eskins and Mercer's Adriana O'Kain discuss the multiple opportunities that generative AI presents, look at how the use of these technologies is evolving, and provide actions that senior leaders should consider to address technical, process, and people implications. You can access a transcript of the episode here. Read our series on debunking AI-generated myths. For more insights and insurance and risk management solutions, follow Marsh on LinkedIn and X and visit marsh.com.
Podcast: Pipeliners Podcast
Episode: Episode 404: Combining Gamification and Generative AI to Improve Training (with Survey) with Clint Bodungen
Pub date: 2025-09-02
In this episode of the Pipeliners Podcast, we revisit our conversation with Clint Bodungen of ThreatGEN. The discussion focuses on the application of gamification and generative AI in professional training, specifically for enhancing cybersecurity and incident response exercises. The episode also explores a PHMSA-sponsored R&D project that is adapting these advanced technologies for the unique operational needs of the pipeline industry, highlighting the development of AI-driven, multiplayer training environments. Visit PipelinePodcastNetwork.com for a full episode transcript, as well as detailed show notes with relevant links and insider term definitions.
The podcast and artwork embedded on this page are from Russel Treat, which is the property of its owner and not affiliated with or endorsed by Listen Notes, Inc.
On Saturday, tens of thousands of people rallied in Auckland calling for the government to sanction Israel for its actions and violence in Gaza. While politicians across the spectrum have responded to the protests with a mix of support and disapproval, the New Zealand government is not expected to announce its official decision on the recognition of a Palestinian state until the General Assembly in New York next week. Following the government's announcement to get rid of NCEA in favour of a new system, Education Minister Erica Stanford has revealed numerous subjects to join the senior school curriculum, which will include the incorporation of generative AI. Finally, the New Zealand Herald published information last week pertaining to MP Carl Bates' failure to disclose 25 properties to Parliament, in what has been argued to be a ‘breach of public trust'. Wire Host Sara spoke with National MP Bates about all of these topics, starting with the pro-Palestine rally.
Sunday edition: The Book of Revelation
In this episode, Pavlé Sabic, Senior Director in Generative AI Solutions and Strategy at Moody's, joins Emerj Editorial Director Matthew DeMello to discuss how agentic AI is redefining workflows in financial institutions. Pavlé explains why large enterprises are turning to AI-driven automation to overcome persistent challenges — from fragmented data and manual inefficiencies to evolving regulatory demands. He shares practical examples, including credit memo automation that reduces production time by 60%, portfolio monitoring tools that detect emerging risks earlier, and sales intelligence workflows that deliver highly targeted client insights. Pavlé also outlines why proprietary data is a strategic advantage in regulated industries and how leaders can implement agentic AI without losing human oversight. Want to share your AI adoption story with executive peers? Click emerj.com/expert2 for more information and to be a potential future guest on the ‘AI in Business' podcast! This episode is sponsored by Moody's. Learn how brands work with Emerj and other Emerj Media options at emerj.com/ad1.
In this episode of The Responsive Lab, we go deeper into the real, raw questions nonprofit leaders are asking about artificial intelligence, from strategy and governance to privacy, personalization, and predictive modeling. Join Virtuous' Chief AI Officer, Nathan Chappell, alongside co-hosts Carly Berna and Scott Holthaus as they unpack:
• What AI really means for nonprofit teams
• How predictive and generative AI work together to drive giving
• Why most AI initiatives fail and how to beat the odds
• Ethical and environmental concerns (and how to respond to them)
• How Virtuous is building the future of fundraising through AI
Nathan brings years of experience in machine learning, innovation, and responsible AI use in philanthropy. This session is a roadmap for nonprofit teams looking to move from AI curiosity to AI clarity. Watch the webinar that prompted this conversation here: 5 Essentials for AI Success: What Today's Fundraising Teams Need to Know (and Do) to Thrive. Learn more about Virtuous at virtuous.org/learnmore and download your free Nonprofit CRM Checklist at virtuous.org/crmchecklist
The AI landscape in recruiting is evolving rapidly, with vendors racing to add AI features and many employers eager to embrace transformation. But navigating this shift successfully requires understanding what questions to ask and which foundations to build. From vendor transparency to compliance, from bias auditing to data governance, the path to effective AI implementation is not a simple one. What do TA teams need to consider to adopt a responsible approach to AI? My guest this week is Martyn Redstone, a highly experienced advisor on AI governance for HR and Recruitment. Martyn has spent the last 9 years working with AI in recruiting and has some incredibly valuable advice to share. In the interview, we discuss: Getting the foundations right Why false AI confidence is dangerous Four key vendor evaluation areas Third-party auditing Shadow AI and data breaches Generative versus decision-based AI Global regulatory landscape challenges Why guardrails actually accelerate innovation The task-based future of work Follow this podcast on Apple Podcasts. Follow this podcast on Spotify.
In this episode of People of AI, we take you behind the scenes of "ANCESTRA," a groundbreaking film that integrates generative artificial intelligence into its core. Hear from the director Eliza McNitt and key collaborators from the Google DeepMind team about how they leveraged AI as a new creative tool, navigated its capabilities and limitations, and ultimately shaped a unique cinematic experience. Understand the future role of AI in filmmaking and its potential for developers and storytellers.
Chapters:
0:00 - Introduction to Ancestra: AI in filmmaking
3:38 - The Origin Story of ANCESTRA
5:35 - Google DeepMind and Primordial Soup collaboration
11:47 - Veo and the creative process
20:21 - Behind the scenes: Making the film
28:47 - Generating videos: Gemini and Veo tools
38:11 - AI as a creative tool, not a replacement
47:41 - AI's impact and the future of the film industry
53:51 - Generative models: A new kind of camera
57:46 - Rapid fire & conclusion
Resources:
Ancestra → https://goo.gle/4mVScNW
Making of ANCESTRA → https://goo.gle/3JVJil1
Veo 3 → https://goo.gle/4mWn3Kz
Veo 3 Documentation → https://goo.gle/46qqFOV
Veo 3 Cookbook → https://goo.gle/3VMVFSZ
Google Flow → https://goo.gle/3VMVR4F
Watch more People of AI → https://goo.gle/PAI
Subscribe to Google for Developers → https://goo.gle/developers
#PeopleofAI
Speakers: Christina Warren, Ashley Oldacre, Eliza McNitt, Ben Wiley, Corey Matthewson
Products Mentioned: Google AI, Gemini, Veo 2, Veo 3
Querious utilizes the power of generative AI to listen in on a conversation and deliver real-time insights based on what it hears. Essentially, it's like having another person in the room with you—only one who knows all the answers and can access information faster than anyone else. Learn more about your ad choices. Visit megaphone.fm/adchoices
Episode 275 of the Hotel Marketing Podcast uncovers the secrets of Generative Engine Optimization, or GEO. Hoteliers have three primary customers online: people, search engines, and AI. Today we delve into how you can succeed in optimizing for AI engines such as ChatGPT, Gemini, Claude, Copilot, and others. Get the full show notes at www.TravelBoomMarketing.com/podcast
Generative AI and Agentic AI are now raising the stakes on how companies deliver customer value and innovate to stay one step ahead. Accelerating with AI comes down to the long game: technology infrastructure, data, and talent. This episode will focus on practical insights into how top enterprise leaders should approach AI as a long-term investment, including building a modern tech stack, creating enterprise platform capabilities, developing proprietary data and AI solutions, and building and cultivating world-class AI talent. These best practices will be shared through the lens of Capital One's AI journey.
Generative AI may be rewriting the rules, but Felicia Shakiba and Claurelle Rakipovic, Chief Product Officer at Pipe, break down what real product leadership looks like. From customer empathy to AI missteps, Claurelle shares sharp lessons on building products that can actually hold their ground. Chapters 00:00 Building Defensible Products in the Age of AI 01:50 Understanding Defensibility Beyond Automation 04:30 Learning from AI Missteps 09:26 Internal Testing and AI Adoption 14:39 New Competencies for Product Leaders 19:47 The Importance of Unique Data 23:49 Empathy in AI Product Development 27:39 Auditing AI for Value Delivery
AI seems to be the thing of the moment. Generative models are creating everything from pictures and movies to full songs. But is it any good? Turns out, not really. I tried my hand at making some different "AI Music" to see just how awful it is. And let me tell you, it's really bad. Join me as I take a look at the future of music and why AI-generated songs are total garbage.
I'm comparing three popular generative AI tools and their ability to create unique designs based on best-selling Amazon print-on-demand products.
In today's episode, I'm building on what I started discussing last week as I continue to make a dark synth track to accompany one of the many hunting scenes in The Thirteenth Hour prequel, A Shadow in the Moonlight, about a cursed hunter who has to spend eternity hunting an enchanted deer. I've wanted to learn to use a desktop-based DAW to make and edit music, so I am using this track as a way to do that. I've settled on the free web-based program Bandlab, which is supposedly the easiest one to start using, though I will say that I have found none of the ones I have tried intuitive or especially user friendly. That said, connecting a keyboard to the computer has helped a great deal, and I expect that the initial hassle will have longer-term payoffs in flexibility and the range of tools at my fingertips when making new tracks, compared to doing it all analog. So I'm trying not to throw my hands up in frustration and go back to what I know, since the whole point was to learn how to use a DAW in order to make this track. In addition, I have been experimenting with another digital tool called Nauk Nauk to make short videos of the toys I've made. The app is basically generative AI specific to action figures and making them move. I'm not strongly for or against this kind of technology, and while I touch on some of the operational pros and cons of using this kind of tech (at least from what I can see), this is one use I can get behind. Who doesn't want to see their toys come to life? Especially ones you've made! Case in point - this one of Beverly Switzler is my favorite so far: https://www.tiktok.com/@13thhr/video/7547184629434813710?is_from_webapp=1&sender_device=pc&web_id=7547583918360708621 Thanks for listening! ∞∞∞∞∞∞∞ Once Upon a Dream, the second Thirteenth Hour soundtrack, is now out in digital form and on CD! It is out on most major streaming services such as Bandcamp, Spotify, and YouTube Music. (If you have no preference, I recommend Bandcamp since there is a bonus track there and you will eventually be able to find tapes and special editions of the album there as well.) The CDs are out now! Check out the pixel art music videos that are out so far from the album: --> Logan's Sunrise Workout: www.youtube.com/watch?v=K7SM1RgsLiM --> Forward: www.youtube.com/watch?v=Z9VgILr1TDc --> Nightsky Stargazing: www.youtube.com/watch?v=2S0p3jKRTBo --> Aurora's Rainy Day Mix: https://youtu.be/zwqPmypBysk ∞∞∞∞∞∞∞∞ Sign up for the mailing list for a free special edition podcast, a demo copy of The Thirteenth Hour, and access to a retro 80s soundtrack! Like what you see or hear? Consider supporting the show over at Thirteenth Hour Arts on Patreon or adding to my virtual tip jar over at Ko-fi. Join the Thirteenth Hour Arts Group over on Facebook, a growing community of creative people. Have this podcast conveniently delivered to you each week on Spotify, iTunes, Stitcher, Player FM, TuneIn, and Google Play Music. Follow The Thirteenth Hour's Instagram pages: @the13thhr for your random postings on ninjas, martial arts, archery, flips, breakdancing, and fantasy art, and @the13thhr.ost for more 80s music, movies, and songs from The Thirteenth Hour books and soundtrack. Listen to Long Ago Not So Far Away, the Thirteenth Hour soundtrack, online at https://joshuablum.bandcamp.com/ or Spotify. Join the mailing list for a free digital copy. You can also get it on CD or tape. Website: https://13thhr.wordpress.com Book trailer: http://bit.ly/1VhJhXY Interested in reading and reviewing The Thirteenth Hour for a free book?
Just email me at writejoshuablum@gmail.com for more details!
Today we are joined by Gorkem and Batuhan from Fal.ai, the fastest-growing generative media inference provider. They recently raised a $125M Series C and crossed $100M ARR. We covered how they pivoted from dbt pipelines to diffusion model inference, which models really changed the trajectory of image generation, and the future of AI video. Enjoy! 00:00 - Introductions 04:58 - History of Major AI Models and Their Impact on Fal.ai 07:06 - Pivoting to Generative Media and Strategic Business Decisions 10:46 - Technical discussion on CUDA optimization and kernel development 12:42 - Inference Engine Architecture and Kernel Reusability 14:59 - Performance Gains and Latency Trade-offs 15:50 - Discussion of model latency importance and performance optimization 17:56 - Importance of Latency and User Engagement 18:46 - Impact of Open Source Model Releases and Competitive Advantage 19:00 - Partnerships with closed source model developers 20:06 - Collaborations with Closed-Source Model Providers 21:28 - Serving Audio Models and Infrastructure Scalability 22:29 - Serverless GPU infrastructure and technical stack 23:52 - GPU Prioritization: H100s and Blackwell Optimization 25:00 - Discussion on ASICs vs. General Purpose GPUs 26:10 - Architectural Trends: MMDiTs and Model Innovation 27:35 - Rise and Decline of Distillation and Consistency Models 28:15 - Draft Mode and Streaming in Image Generation Workflows 29:46 - Generative Video Models and the Role of Latency 30:14 - Auto-Regressive Image Models and Industry Reactions 31:35 - Discussion of OpenAI's Sora and competition in video generation 34:44 - World Models and Creative Applications in Games and Movies 35:27 - Video Models' Revenue Share and Open-Source Contributions 36:40 - Rise of Chinese Labs and Partnerships 38:03 - Top Trending Models on Hugging Face and ByteDance's Role 39:29 - Monetization Strategies for Open Models 40:48 - Usage Distribution and Model Turnover on FAL 42:11 - Revenue Share vs. Open Model Usage Optimization 42:47 - Moderation and NSFW Content on the Platform 44:03 - Advertising as a key use case for generative media 45:37 - Generative Video in Startup Marketing and Virality 46:56 - LoRA Usage and Fine-Tuning Popularity 47:17 - LoRA ecosystem and fine-tuning discussion 49:25 - Post-Training of Video Models and Future of Fine-Tuning 50:21 - ComfyUI Pipelines and Workflow Complexity 52:31 - Requests for startups and future opportunities in the space 53:33 - Data Collection and RedPajama-Style Initiatives for Media Models 53:46 - RL for Image and Video Models: Unknown Potential 55:11 - Requests for Models: Editing and Conversational Video Models 57:12 - VO3 Capabilities: Lip Sync, TTS, and Timing 58:23 - Bitter Lesson and the Future of Model Workflows 58:44 - FAL's hiring approach and team structure 59:29 - Team Structure and Scaling Applied ML and Performance Teams 1:01:41 - Developer Experience Tools and Low-Code/No-Code Integration 1:03:04 - Improving Hiring Process with Public Challenges and Benchmarks 1:04:02 - Closing Remarks and Culture at FAL
Imagine unlocking a world where designing custom proteins is not only feasible - but faster, smarter, and more powerful than ever before, thanks to artificial intelligence. As the promise of programmable biology takes center stage, AI-driven protein engineering is rapidly moving from theoretical dream to industry standard. In this episode, David Brühlmann sits down with Elise de Reus, co-founder of Cradle, whose ground-breaking platform has become a go-to for luminaries at pharma giants, as well as leaders in agriculture and industrial biotech. Elise's journey bridges hands-on scientific discovery with entrepreneurial vision, and she's on a mission to make biology a foundational pillar of a more sustainable and equitable world. Here are three reasons you can't miss this episode: AI Unmasks the Black Box: Elise explains how Cradle's platform isn't just about predictive power - it offers transparency and actionable insights, allowing scientists to see why models perform as they do, and how to interpret and trust their outputs. From Hype to Real-World Impact: You'll hear real examples of AI accelerating everything from pandemic preparedness and antivenom development, to sustainable raw materials - plus candid advice on scaling AI adoption across diverse project teams. Blueprints for Biotech Founders: Elise shares hard-won lessons on transforming scientific innovation into a thriving company, including the counterintuitive power of targeting “mission impossible” challenges and the priceless value of early, unfiltered feedback. Ready to future-proof your protein engineering with generative AI? Press play to take home Elise's practical framework for evaluating tools, overcoming organizational hurdles, and crafting a winning adoption strategy, before the innovation curve pulls away. Wondering how Generative AI is transforming the biotech landscape? From accelerating regulatory workflows to reinventing protein purification, AI is driving breakthroughs across the industry. Explore these standout episodes to hear from experts leading the charge: Episodes 77-78: Cell Factories Explained: How Synthetic Biology and AI Revolutionize Protein Production with Mauro Torres. Episodes 119-120: Innovating Protein Purification Using Synthetic Organelles and AI with Haotian Guo. Episodes 123-124: Manufacturability: Why Most Protein Candidates Fail (And How to Pick Winners Early) with Susan Sharfstein. Episodes 167-168: How Generative AI Is Revolutionizing Biotech Regulatory Compliance with Abhijeet Satwekar. Connect with Elise de Reus: LinkedIn: www.linkedin.com/in/elise-de-reus-77b83a24 Website: www.cradle.bio Next step: Book a free consultation to help you get started on any questions you may have about bioprocess development: https://bruehlmann-consulting.com/call
In this episode of the Pipeliners Podcast, we revisit our conversation with Clint Bodungen of ThreatGEN. The discussion focuses on the application of gamification and generative AI in professional training, specifically for enhancing cybersecurity and incident response exercises. The episode also explores a PHMSA-sponsored R&D project that is adapting these advanced technologies for the unique operational needs of the pipeline industry, highlighting the development of AI-driven, multiplayer training environments. Visit PipelinePodcastNetwork.com for a full episode transcript, as well as detailed show notes with relevant links and insider term definitions.
For decades, protein design has hinged on painstaking rounds of wet lab mutagenesis and trial-and-error, a process limited not by human ingenuity, but by time and complexity. Yet as the biotech field seeks faster, greener, and more effective solutions for therapeutics and industrial applications, the next leap might not come solely from the lab bench. In this episode, host David Brühlmann explores the frontiers of AI-driven protein engineering with Elise de Reus, co-founder of Cradle. Elise's journey weaves together a passion for DNA, real-world impact from the dairy industry to synthetic biology, and high-throughput experience at Zymergen. Now, at the helm of Cradle, she is bridging cutting-edge computational models with experimental validation, making biology programmable and accelerating the path from idea to functional protein. Here are three reasons this episode is essential listening: AI Makes Biology Programmable: Generative AI platforms like Cradle can intelligently design protein variants by learning from evolutionary trends and your own project data, enabling smarter, faster iterations and optimizing for stability, activity, and manufacturability - simultaneously. Limited Data? No Problem: While traditional approaches have stumbled over small, “short and fat” datasets, new generative AI models can effectively update their understanding with as little as a 96-well plate per round, democratizing high-impact protein design for startups, academia, and industry alike. The Future is Human + Machine: Even as algorithms advance, Elise stresses the irreplaceable value of wet lab validation. Cradle's hybrid approach, integrating experimental feedback with algorithmic prediction, ensures reliable, scalable results and empowers multidisciplinary teams to unlock first- and best-in-class solutions. Want to see how you can leverage AI to simplify and accelerate your own protein engineering projects? Don't miss this episode to hear how Elise and Cradle are transforming the pace - and possibilities - of biotech. Curious how Generative AI is reshaping biotech? From streamlining regulatory compliance to transforming protein purification, AI is changing the game. Dive into these standout episodes to explore the cutting edge of innovation in biotech: Episodes 77-78: Cell Factories Explained: How Synthetic Biology and AI Revolutionize Protein Production with Mauro Torres. Episodes 119-120: Innovating Protein Purification Using Synthetic Organelles and AI with Haotian Guo. Episodes 123-124: Manufacturability: Why Most Protein Candidates Fail (And How to Pick Winners Early) with Susan Sharfstein. Episodes 167-168: How Generative AI Is Revolutionizing Biotech Regulatory Compliance with Abhijeet Satwekar. Connect with Elise de Reus: LinkedIn: www.linkedin.com/in/elise-de-reus-77b83a24 Website: www.cradle.bio Next step: Book a free consultation to help you get started on any questions you may have about bioprocess development: https://bruehlmann-consulting.com/call
SHOW SCHEDULE 8-20-25 GOOD EVENING. THE SHOW BEGINS IN ARTIFICIAL GENERATIVE INTELLIGENCE DEBATE OVER REGULATION AGI STATE BY STATE... CBS EYE ON THE WORLD WITH JOHN BATCHELOR FIRST HOUR 9:00-9:15 AI: REGULATING LLM - KEVIN FRAZIER, CIVITAS INSTITUTE 9:15-9:30 AI: REGULATING LLM - KEVIN FRAZIER, CIVITAS INSTITUTE CONTINUED 9:30-9:45 #UKRAINE: GUARANTEES AND CEASEFIRE - COLONEL JEFF MCCAUSLAND, USA (RETIRED) @MCCAUSLJ @CBSNEWS @DICKINSONCOL 9:45-10:00 #UKRAINE: TRILATERAL AND NATO - COLONEL JEFF MCCAUSLAND, USA (RETIRED) @MCCAUSLJ @CBSNEWS @DICKINSONCOL SECOND HOUR 10:00-10:15 PRC: EARTH-MOON SYSTEM LANDING 2030 - BRANDON WEICHERT, GORDON CHANG 10:15-10:30 INDIA: WANG YI IN DELHI - BLAINE HOLT, GORDON CHANG 10:30-10:45 PRC: CAPITAL FLIGHT - ANDREW COLLIER, GORDON CHANG 10:45-11:00 PRC: DRONE SUBS - RICK FISHER THIRD HOUR 11:00-11:15 INDIA: DC BREAK - SADANAND DHUME, WSJ 11:15-11:30 INDIA: CHINA ARRIVES 11:30-11:45 MAMET - EMINA MELONIC 11:45-12:00 RUSSIA: GAS TANK EMPTYING - MICHAEL BERNSTAM, HOOVER FOURTH HOUR 12:00-12:15 FRANCE: HEAT WAVE LIFTS - SIMON CONSTABLE 12:15-12:30 UK: MANSION TAX 12:30-12:45 CHICAGO: UNDERWATER - THOMAS SAVIDGE 12:45-1:00 AM CHICAGO: UNDERWATER - THOMAS SAVIDGE CONTINUED