Podcasts about pii

  • 414PODCASTS
  • 743EPISODES
  • 41mAVG DURATION
  • 5WEEKLY NEW EPISODES
  • May 6, 2025LATEST

POPULARITY

20172018201920202021202220232024


Best podcasts about pii

Show all podcasts related to pii

Latest podcast episodes about pii

Oyster Stew - A Broth of Financial Services Commentary and Insights
Inside the Latest CAT and CAIS Reporting Issues

Oyster Stew - A Broth of Financial Services Commentary and Insights

Play Episode Listen Later May 6, 2025 12:33


Join Oyster experts as they provide real-world insight into the shifting CAT and CAIS landscape, including:The current regulatory focus on removing PII information from CAIS reportingImplementation uncertainty - where FINRA guidance falls shortMember firms grappling with the scope of PII removal at account and customer levelsBlue sheets and CAIS - redundant reporting and integration challengesCAT reporting's critical role in market surveillance during volatile trading periodsHow the multi-year phased implementation approach provides a potential model for future regulationsOyster Consulting has the expertise, experience and licensed professionals you need, all under one roof. Follow us on LinkedIn to take advantage of our industry insights or subscribe to our monthly newsletter. Does your firm need help now? Contact us today!

ITSPmagazine | Technology. Cybersecurity. Society
The New Front Line: Runtime Protection for AI and API-Driven Attacks | A Brand Story with Rupesh Chokshi from Akamai | An On Location RSAC Conference 2025 Brand Story

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later May 5, 2025 17:29


At RSAC Conference 2025, Rupesh Chokshi, Senior Vice President and General Manager of the Application Security Group at Akamai, joined ITSPmagazine to share critical insights into the dual role AI is playing in cybersecurity today—and what Akamai is doing about it.Chokshi lays out the landscape with clarity: while AI is unlocking powerful new capabilities for defenders, it's also accelerating innovation for attackers. From bot mitigation and behavioral DDoS to adaptive security engines, Akamai has used machine learning for over a decade to enhance protection, but the scale and complexity of threats have entered a new era.The API and Web Application Threat SurgeReferencing Akamai's latest State of the Internet report, Chokshi cites a 33% year-over-year rise in web application and API attacks—topping 311 billion threats. More than 150 billion of these were API-related. The reason is simple: APIs are the backbone of modern applications, yet many organizations lack visibility into how many they have or where they're exposed. Shadow and zombie APIs are quietly expanding attack surfaces without sufficient monitoring or defense.Chokshi shares that in early customer discovery sessions, organizations often uncover tens of thousands of APIs they weren't actively tracking—making them easy targets for business logic abuse, credential theft, and data exfiltration.Introducing Akamai's Firewall for AIAkamai is addressing another critical gap with the launch of its new Firewall for AI. Designed for both internal and customer-facing generative AI applications, this solution focuses on securing runtime environments. It detects and blocks issues like prompt injection, PII leakage, and toxic language using scalable, automated analysis at the edge—reducing friction for deployment while enhancing visibility and governance.In early testing, Akamai found that 6% of traffic to a single LLM-based customer chatbot involved suspicious activity. That volume—within just 100,000 requests—highlights the urgency of runtime protections for AI workloads.Enabling Security LeadershipChokshi emphasizes that modern security teams must engage collaboratively with business and data teams. As AI adoption outpaces security budgets, CISOs are looking for trusted, easy-to-deploy solutions that enable—not hinder—innovation. Akamai's goal: deliver scalable protections with minimal disruption, while helping security leaders shoulder the growing burden of AI risk.Learn more about Akamai: https://itspm.ag/akamailbwcNote: This story contains promotional content. Learn more.Guest: Rupesh Chokshi, SVP & General Manager, Application Security, Akamai | https://www.linkedin.com/in/rupeshchokshi/ResourcesLearn more and catch more stories from Akamai: https://www.itspmagazine.com/directory/akamaiLearn more and catch more stories from RSA Conference 2025 coverage: https://www.itspmagazine.com/rsac25______________________Keywords:sean martin, rupesh chokshi, akamai, rsac, ai, security, cisos, api, firewall, llm, brand story, brand marketing, marketing podcast, brand story podcast______________________Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverageWant to tell your Brand Story Briefing as part of our event coverage? Learn More 

Inside UXR
41. What ethical considerations should I be thinking about?

Inside UXR

Play Episode Listen Later Apr 28, 2025 28:21 Transcription Available


In this week's episode, Drew and Joe explore ethical considerations to running research.  They'll cover everything from PII, to delicate topics, to ensuring you're treating your participants right.  Send us a textSupport the showSend your questions to InsideUXR@gmail.comVisit us on LinkedIn, or our website, at www.insideUXR.comCredits:Art by Kamran HanifTheme music by NearbysoundVoiceover by Anna V

The Tea on Cybersecurity
Cybersecurity Lingo Explained: vCISO, PII, and More

The Tea on Cybersecurity

Play Episode Listen Later Apr 21, 2025 23:56


Cybersecurity lingo can be overwhelming, but once you get the hang of the essentials, staying secure becomes much easier.In this episode, host Jara Rowe sits down with Marie Joseph, Senior Security Advisor at Trava, to break down key terms like vCISO, PII, and cybersecurity maturity models. They also differentiate between terms like hacker vs. threat actor and firewall vs. antivirus by highlighting the nuances that matter most. Plus, Marie reveals why continuous compliance is crucial, and how concepts like attack surface and risk tolerance fit into the bigger picture of your security strategy.Key takeaways:Essential cybersecurity terms and definitions: vCISO, PII, and more The importance of understanding and managing your attack surfaceWhy cybersecurity compliance can't be a one-time effortEpisode highlights:(00:00) Today's topic: Understanding cybersecurity terms(01:47) What is a vCISO, and why it benefits small businesses(02:54) Definition of PII, BCP, SIEM, DevSecOps, and BCRA (08:40) Hackers vs. threat actors Explained(10:28) Why businesses need an antivirus and a firewall(13:37) Patch management and cybersecurity attack surfaces(16:04) Continuous cybersecurity compliance(21:27) Recapping cybersecurity essentialsConnect with the host:Jara Rowe's LinkedIn - @jararoweConnect with the guest:Marie Joseph's LinkedIn - @marie-joseph-a81394143Connect with Trava:Website - www.travasecurity.comBlog - www.travasecurity.com/learn-with-trava/blogLinkedIn - @travasecurityYouTube - @travasecurity

PharmaSource Podcast
Private Equity and Digital Maturity in the CDMO Sector - Vishnu Dwadasi, West Monroe

PharmaSource Podcast

Play Episode Listen Later Apr 21, 2025 21:26


“AI doesn't have to be the first port of call. Many organisations still need to focus on the paper-to-glass journey before they can think about more advanced technologies” Vishnu Dwadasi, Director of Life Sciences at West Monroe, brings extensive experience across pharmaceutical operations, consulting and business development from both the sponsor and CDMO sides. His career path through companies including Catalent, BioDuro, PII and now West Monroe has given him unique insights into the evolving pharmaceutical outsourcing landscape.In the latest PharmaSource podcast episode, Vishnu shares his perspective on how private equity investments are reshaping the CDMO landscape, post-merger integration challenges, and the realistic pace of digital transformation within pharmaceutical manufacturing organisations.Read the article

DECAL Download
Episode 27 - PII & Cybersecurity

DECAL Download

Play Episode Listen Later Apr 15, 2025 36:55


Send us a textToday we are diving into a topic that impacts just about everyone in this age where technology is a part of our day-to-day lives. That topic is how to protect our “personally identifiable information”, also known as PII and application security. From financial transactions to healthcare records, protecting ourselves in the digital world has become increasingly important. Joining us this week to talk about protecting your personally identifiable information is Dennis Brice, Chief Information Officer at DECAL, and Rahda Datla, our Chief Technology and Security Information Officer. With their experience and knowledge, we will discuss threats, solutions, and steps that everyone can take to protect their digital identity. Support the show

Web and Mobile App Development (Language Agnostic, and Based on Real-life experience!)
Challenges associated with Data Privacy, Interoperability, Security (feat. Michael Brown)

Web and Mobile App Development (Language Agnostic, and Based on Real-life experience!)

Play Episode Listen Later Apr 7, 2025 42:21


In this conversation, Michael Brown, CEO of CLOUDNINE AI, discusses the challenges and opportunities in enterprise AI applications, particularly focusing on data interoperability and privacy. He highlights the historical context of data collection in enterprises, the interoperability issues faced by various systems, and the unique challenges posed by large language models (LLMs) trained on public data. The discussion also delves into the importance of securing personally identifiable information (PII) and the processes involved in filtering and encrypting sensitive data. Brown shares insights into how CLOUDNINE AI addresses these challenges through innovative solutions, including the creation of digital twins and the management of dynamic data privacy rules across different regions. In this conversation, Michael Brown discusses the company's data management solutions, the onboarding process for clients, and the challenges of data privacy. He emphasizes the importance of understanding client needs and the evolving landscape of technology, particularly for Gen Z professionals looking to enter the field. The discussion also touches on personal insights and preferences, including Michael's favorite comfort food.

Technol-AG Podcast
AI in My Practice: Could I? Should I?

Technol-AG Podcast

Play Episode Listen Later Apr 3, 2025 15:31


Expert financial technology consultant Eric Baumgardner from Osaic speaks about the latest news and updates about AI, artificial intelligence, as it relates to financial services. Hear him discuss regulatory compliance issues, data privacy and the interesting application of note-taking.  What are "hallucinations" and why is that a concern?  Eric talks about in-house versus integration services, as well as PII data versus using placeholders.

Detection at Scale
Pangea's Oliver Friedrichs on Building Guardrails for the New AI Security Frontier

Detection at Scale

Play Episode Listen Later Mar 25, 2025 26:59


The security automation landscape is undergoing a revolutionary transformation as AI reasoning capabilities replace traditional rule-based playbooks. In this episode of Detection at Scale, Oliver Friedrichs, Founder & CEO of Pangea, helps Jack unpack how this shift democratizes advanced threat detection beyond Fortune 500 companies while simultaneously introducing an alarming new attack surface.  Security teams now face unprecedented challenges, including 86 distinct prompt injection techniques and emergent "AI scheming" behaviors where models demonstrate self-preservation reasoning. Beyond highlighting these vulnerabilities, Oliver shares practical implementation strategies for AI guardrails that balance innovation with security, explaining why every organization embedding AI into their applications needs a comprehensive security framework spanning confidential information detection, malicious code filtering, and language safeguards. Topics discussed: The critical "read versus write" framework for security automation adoption: organizations consistently authorized full automation for investigative processes but required human oversight for remediation actions that changed system states. Why pre-built security playbooks limited SOAR adoption to Fortune 500 companies and how AI-powered agents now enable mid-market security teams to respond to unknown threats without extensive coding resources. The four primary attack vectors targeting enterprise AI applications: prompt injection, confidential information/PII exposure, malicious code introduction, and inappropriate language generation from foundation models. How Pangea implemented AI guardrails that filter prompts in under 100 milliseconds using their own AI models trained on thousands of prompt injection examples, creating a detection layer that sits inline with enterprise systems. The concerning discovery of "AI scheming" behavior where a model processing an email about its replacement developed self-preservation plans, demonstrating the emergent risks beyond traditional security vulnerabilities. Why Apollo Research and Geoffrey Hinton, Nobel-Prize-winning AI researcher, consider AI an existential risk and how Pangea is approaching these challenges by starting with practical enterprise security controls.   Check out Pangea.com  

The Daily Scoop Podcast
Reflections from DOD's first-ever customer experience officer

The Daily Scoop Podcast

Play Episode Listen Later Mar 25, 2025 29:03


After serving for nearly 18 months as the Department of Defense's first-ever customer experience officer in the Office of the CIO, Savan Kong earlier this month parted ways with the Pentagon. Previously a member of the Defense Digital Service during his first tour of duty with the DOD, Kong helped build the department's CXO office from scratch, fostering a culture that prioritizes the needs of service members, civilians, and mission partners and striving to streamline governance processes, improve transparency, and ensure that IT solutions meet operational needs. Kong joins the Daily Scoop for a conversation to share the progress his office ushered in to improve customer experience for DOD's personnel, where things are headed under this administration and how AI will impact the CX space. FedRAMP is getting another overhaul, one that will involve far more automation and a greater role for the private sector, the program's chief announced Monday. Through FedRAMP 20x, the General Services Administration-based team focused on the program aims to simplify the authorization process and reduce the amount of time needed to approve a service from months to weeks, Director Pete Waterman said during an Alliance for Digital Innovation event. The private sector will also have increased responsibility over monitoring of their systems, he noted. In a critical change, agency sponsorship will — eventually — no longer be necessary to win authorization. As a first step, FedRAMP has launched four community working groups, which give the public a chance to share feedback, and focus on creating “innovative solutions” to formalize the program's standards. But in the meantime, Waterman said existing baselines will remain in place and there are no immediate changes to the program. The Office of Personnel Management and the departments of Treasury and Education are now barred from sharing individuals' personally identifiable information with DOGE representatives, a federal judge ruled Monday. Judge Deborah L. Boardman of the U.S. District Court for the District of Maryland said in her decision that in granting associates with Elon Musk's so-called government efficiency initiative access to systems containing plaintiffs' PII, the agencies “likely violated” the Privacy Act and the Administrative Procedure Act. The lawsuit was filed by the American Federation of Teachers, the International Association of Machinists and Aerospace Workers, the International Federation of Professional and Technical Engineers, the National Active and Retired Federal Employees Association, the National Federation of Federal Employees, and six military veterans. The Daily Scoop Podcast is available every Monday-Friday afternoon. If you want to hear more of the latest from Washington, subscribe to The Daily Scoop Podcast  on Apple Podcasts, Soundcloud, Spotify and YouTube.

The Data Diva E229 - Kenya Dixon and Debbie Reynolds

"The Data Diva" Talks Privacy Podcast

Play Episode Listen Later Mar 25, 2025 38:26 Transcription Available


Send us a textIn this episode of The Data Diva Talks Privacy podcast, host Debbie Reynolds welcomes back Kenya Dixon, litigation partner at Nelson Mullins and former Director of White House Information Governance. Their discussion centers on government data privacy, the impact of artificial intelligence on federal records, and the importance of following proper protocols in handling personal information.Kenya provides insight into the Privacy Act of 1974, which was enacted in response to concerns about government misuse of personal data. She explains how federal agencies are required to follow structured processes when handling personally identifiable information (PII) and creating new systems of record. The conversation delves into the recent controversies surrounding the government's data practices, including concerns over the lack of transparency regarding artificial intelligence and data consolidation efforts. Kenya emphasizes that while the government has a right to upgrade its technology and streamline data processes, the public must be informed through proper documentation and procedural adherence.The discussion also touches on the role of artificial intelligence in government data management and the potential concerns regarding private entities' access to government-held personal information. Kenya highlights the importance of public engagement and awareness in privacy-related matters, encouraging individuals to stay informed, participate in democratic processes, and understand how their data is used. She also shares her experiences from her time in government, addressing misinformation and misconceptions about data handling in federal agencies.As privacy issues continue to make headlines, Kenya and Debbie explore the broader implications of data governance, cybersecurity regulations, and the likelihood of federal privacy legislation. They conclude by stressing the significance of state-level privacy laws, the role of AI in shaping the future of data governance, and the ongoing need for transparency in government data practices. This episode offers valuable insights for privacy professionals, policymakers, and anyone interested in the evolving landscape of data privacy.Support the show

IT Privacy and Security Weekly update.
EP 234.5 Deep Dive. The IT Privacy and Security Weekly Update for the Week Ending March 18th., 2025

IT Privacy and Security Weekly update.

Play Episode Listen Later Mar 20, 2025 17:08


1. Why Should I Change My Passwords Immediately?Recent studies show that around 50% of online passwords are already compromised, and 41% of successful logins involve breached credentials. Common passwords like “123456” and password reuse make it easy for cybercriminals—especially with automated bots—to access multiple accounts. Changing passwords and using unique, strong credentials with multi-factor authentication is critical for security.Starting March 28th, all Alexa requests will be processed in Amazon's cloud, regardless of previous settings. Amazon claims this supports new AI features, but it means even users who opted out of saving voice recordings will now have all interactions recorded and sent to Amazon. This also impacts features like Voice ID, which won't function without stored voice data. While Amazon encrypts transmissions and provides some privacy controls, this shift raises concerns about increased data collection and potential personalization for shopping.Microsoft will stop providing free security updates for Windows 10 in October 2025, leaving charities that refurbish and donate older PCs with limited options. Many of these computers cannot run Windows 11, forcing organizations to choose between using an insecure OS, transitioning to Linux, or discarding hardware—contributing to electronic waste. While Linux is a secure, free alternative, its unfamiliar interface may pose usability challenges for some recipients, especially seniors.StilachiRAT is a newly discovered remote access trojan (RAT) targeting cryptocurrency wallets like MetaMask and Coinbase Wallet. This malware remains undetected on infected systems, stealing sensitive data, including credentials stored in browsers like Chrome. By accessing login credentials, attackers can drain funds from wallets. StilachiRAT also collects system data, increasing victims' exposure. While not widespread yet, its advanced capabilities make it a serious threat to crypto users.A Chinese state-sponsored hacking group remained undetected in a small Massachusetts power utility for over 300 days, showing that even lesser-known infrastructure is a target for cyber espionage. Attackers can use these breaches to test methods, gain footholds in critical networks, and extract operational data such as grid layouts. This underscores the need for robust security measures, continuous monitoring, and multi-factor authentication for all organizations, especially in critical sectors.Anthropic CEO Dario Amodei warns that state-sponsored actors, likely from China, are trying to steal “algorithmic secrets” from US AI firms. Some critical algorithms, despite representing massive investments (potentially $100 million), are just a few lines of code, making them easy to exfiltrate if security is breached. Amodei argues that the US government should take stronger action to protect these assets from industrial espionage.Allstate Insurance's National General unit had websites that displayed personally identifiable information (PII) in plaintext during the quote process. When users entered their name and address, the system exposed full driver's license numbers (DLNs) of the applicant and other residents at that address. Attackers used bots to harvest at least 12,000 DLNs, leading to fraudulent claims. This highlights the importance of secure website design and responsible data handling to prevent unauthorized access.

The Customer Success Playbook
Customer Success Playbook S3 E32 - Gilad Shriki - FunnelStory Customer Interview Big Question

The Customer Success Playbook

Play Episode Listen Later Mar 19, 2025 12:03 Transcription Available


Send us a textIn this engaging episode of the Customer Success Playbook Podcast, host Kevin Metzger sits down with Gilad Shriki from The Scope to explore how FunnelStory is transforming customer success operations. With seamless integration capabilities and a robust automation-first approach, FunnelStory is setting a new standard for customer success platforms.Gilad shares insights into how his team successfully integrated FunnelStory with BigQuery, HubSpot, and Segment, all while maintaining strict data privacy protocols. He also discusses how AI-driven automation is enhancing customer sentiment analysis and churn prediction, giving CS teams an edge in proactive engagement.Is Funnel Story truly a one-stop shop for customer success? Can businesses of all sizes leverage its automation without sacrificing human interaction? Listen in as Gilad provides a firsthand account of his experience and why he believes FunnelStory is reshaping the future of customer success management.Detailed Episode Insights:Seamless Integration: How The Scope connected FunnelStory with their existing data stack while maintaining PII privacy.Automation at the Core: Why starting with automation before layering in human interaction changes the game for CS teams.AI-Powered Efficiency: How FunnelStory is accelerating time-to-value and making predictive insights more accessible.Scalability & Growth: Can FunnelStory support businesses up to $500M in revenue? Gilad shares his perspective.The Future of CS Tech: What's next for AI-powered customer success platforms?Now you can interact with us directly by leaving a voice message at https://www.speakpipe.com/CustomerSuccessPlaybookPlease Like, Comment, Share and Subscribe. You can also find the CS Playbook Podcast:YouTube - @CustomerSuccessPlaybookPodcastTwitter - @CS_PlaybookYou can find Kevin at:Metzgerbusiness.com - Kevin's person web siteKevin Metzger on Linked In.You can find Roman at:Roman Trebon on Linked In.

IT Privacy and Security Weekly update.
For the other 50%. The IT Privacy and Security Weekly Update for the Week Ending March 18th., 2025

IT Privacy and Security Weekly update.

Play Episode Listen Later Mar 19, 2025 17:09


EP 234For the other 50%.  The IT Privacy and Security Weekly Update for the Week Ending March 18th., 20253/18/20250 CommentsEP 234- click the pic to hear the podcast -For our first story, Apparently there's a 50% chance your password is headlining a hacker convention.  Perhaps it's time to change up from ‘123456' (still the most commonly used password).Starting On March 28, Everything You Say To Your Echo Will Be Sent To Amazon.  Alexa's new motto: ‘Anything you say can and will be used—to personalize your shopping cart, and we mean potentially anything!'The end of Windows 10 Leaves PC Charities With Tough Choice:  Risk Windows 10, embrace Linux, or send Grandma's old PC straight to the tech graveyard?Then Microsoft flags a new threat draining crypto from top wallets.  Meet StilachiRAT, the malware so enthusiastic about your crypto it'll snatch it faster than you can configure your wallet software!Chinese Hackers Sat Undetected in a small Massachusetts power utility for months.  Who knew a cozy little power company could double as the perfect 300-day Airbnb for homeless cyber-spies?Anthropic CEO Says Spies Are After $100 Million AI Secrets in a 'Few Lines of Code'.  So when your fortune fits in a handful of lines, hitting Ctrl+C could be the new diamond heist.Finally,  Allstate Insurance gets sued for delivering PII in plaintext.  You're in good hands with Allstate, we just can't tell you whose.Let's update the other 50%!Find the full transcript to this podcast here.

The Daily Scoop Podcast
Lawyer linked to DOGE is defending OPM mass email system lawsuit; DOGE staffer violated security policies at the Treasury Department, court filing shows

The Daily Scoop Podcast

Play Episode Listen Later Mar 18, 2025 3:53


A lawyer who's said to have played a central role in the Department of Government Efficiency's attempted takeover of at least one federal organization is now defending in court the DOGE email system used to send email blasts to the entire U.S. government workforce. During a Feb. 6 hearing, Jacob Altik joined the defense in the ongoing lawsuit where pseudonymous federal workers have accused the Office of Personnel Management of standing up its new governmentwide email system with inadequate privacy and security protections in place. While the defense introduced him at the time as being “from OPM,” counsel for the plaintiffs filed a new notice early Monday essentially connecting the dots that Altik, through other lawsuits and public reports, has played a hands-on role in supporting the DOGE. Altik was first identified as a DOGE lawyer with an official DOGE email address hosted by the Executive Office of the President in a ProPublica article from early February, the Monday legal notice notes. Then, Altik was identified in a separate ongoing lawsuit as working hand-in-hand with DOGE associates in the organization's attempt to dismantle the U.S. African Development Foundation. The DOGE is also in the spotlight in another case where state attorneys general have sued President Donald Trump and Treasury Secretary Scott Bessent challenging DOGE access to Treasury records. In the latest development in that litigation, DOGE staffer Marko Elez, who resigned in February after racist social media posts surfaced, is said to have shared personally identifiable information in a spreadsheet with two General Services Administration officials, according to the filing from a witness in the case. The testiomony explains that Elez shared names in the spreadsheet that are considered low risk PII because the names are not accompanied by more specific identifiers, such as social security numbers or birth dates. Still, the distribution of this spreadsheet was contrary to BFS policies, in that it was not sent encrypted, and he did not obtain prior approval of the transmission as required. The Daily Scoop Podcast is available every Monday-Friday afternoon. If you want to hear more of the latest from Washington, subscribe to The Daily Scoop Podcast  on Apple Podcasts, Soundcloud, Spotify and YouTube.

Generation AI
FERPA & AI: What Higher Ed Needs to Know

Generation AI

Play Episode Listen Later Mar 11, 2025 31:46


In this episode of Generation AI, Ardis Kadiu and Dr. JC Bonilla unpack FERPA—the Family Educational Rights and Privacy Act—and its critical role in protecting student data within AI-driven educational tools. They clarify common misunderstandings around FERPA compliance, specifically addressing the handling of AI-powered student engagement platforms, chatbots, and data security practices. Learn how institutions can effectively utilize AI while safeguarding student privacy and maintaining compliance.Understanding FERPA Basics (00:00:07)Introduction of the topic based on questions from the AI Engagement SummitFERPA stands for Family Educational Rights and Privacy ActFederal law enacted in 1974 that protects privacy of student educational recordsApplies to institutions receiving US Department of Education fundingGrants students (or parents of minors) rights regarding their educational recordsWhat Constitutes Educational Records Under FERPA (00:07:33)Academic records including grades, transcripts, and course enrollmentPersonally identifiable information (PII) such as names, student IDs, birthdatesDisciplinary records and counseling informationFinancial aid and billing informationStudent communications with advisers, faculty, and staffInstitutions must maintain control and prevent unauthorized disclosureFERPA Compliance for Engagement Tools (00:08:52)Student data must remain protected from unauthorized accessInformation cannot be used for unintended purposes outside institutional contractsData must remain under the institution's control at all timesThe "school official exception" allows third-party vendors to access dataVendors must perform services the school would otherwise use its own staff forSchools must maintain direct control over records use and maintenanceVendor Contracts and FERPA Compliance (00:13:01)Contracts must clearly state vendors act as school officials bound by FERPAVendors cannot use student records outside the scope of their contractsInstitutions must retain full control over how student data is accessedImportance of granular access controls and role-based permissionsVendors should not use student data to train AI models without specific permissionData minimization principles should be followed in all AI processesData Security Requirements (00:15:51)Encryption requirements for data in transit and at restImportance of multifactor authenticationAccess logging to track who interacts with dataData deletion and retention policies must be clearly definedVendors should have clear procedures for data deletion after contract endsAudits and Compliance Monitoring (00:16:40)Vendors should comply with security and privacy standardsRegular security audits and compliance reviews by third partiesThe importance of SOC 2 Type 2 certification as the gold standardInstitutions' rights to conduct independent security auditsAI-Specific FERPA Concerns (00:18:50)Chatbots and AI assistants must follow proper verification protocolsAI-powered tools must adhere to role-based access permissionsRisks of using public AI tools like ChatGPT with student dataDirectory vs. non-directory information distinctionsThe dangers of uploading student data to non-FERPA compliant AI toolsAI Training and Data Use Risks (00:24:00)Many AI models store and use interactions for trainingRisks of unauthorized retention of student recordsImportance of checking data retention policies in AI toolsFree versions of AI tools typically don't offer data protection optionsPaid versions may have data retention turned on by defaultElement451's FERPA Compliance Approach (00:26:28)SOC 2 Type 2 compliance with third-party verificationData encryption in transit and at rest with additional field-level encryptionMultifactor authentication enforcementIdentity verification in AI chatbots before sharing any personal informationNo training on user data and anonymization of activity dataInstitution control over data deletion and visibility of all recordsAI inherits institutional security policies and access controlsClosing Thoughts (00:29:39)The importance of understanding FERPA in the AI contextBuilding trust through proper complianceAddressing misinformation around FERPA and AIInvitation for listeners to suggest future topics - - - -Connect With Our Co-Hosts:Ardis Kadiuhttps://www.linkedin.com/in/ardis/https://twitter.com/ardisDr. JC Bonillahttps://www.linkedin.com/in/jcbonilla/https://twitter.com/jbonillxAbout The Enrollify Podcast Network:Generation AI is a part of the Enrollify Podcast Network. If you like this podcast, chances are you'll like other Enrollify shows too! Enrollify is made possible by Element451 — the next-generation AI student engagement platform helping institutions create meaningful and personalized interactions with students. Learn more at element451.com. Attend the 2025 Engage Summit! The Engage Summit is the premier conference for forward-thinking leaders and practitioners dedicated to exploring the transformative power of AI in education. Explore the strategies and tools to step into the next generation of student engagement, supercharged by AI. You'll leave ready to deliver the most personalized digital engagement experience every step of the way.Register now to secure your spot in Charlotte, NC, on June 24-25, 2025! Early bird registration ends February 1st -- https://engage.element451.com/register

Telecom Reseller
BroadSource Unveils SecurePII: Revolutionizing Data Security for Service Providers, Podcast

Telecom Reseller

Play Episode Listen Later Mar 10, 2025


Cloud Connections 2025 Preview: BroadSource's SecurePII Takes Center Stage March 2025 – Technology Reseller News – BroadSource has officially launched SecurePII, a cutting-edge real-time redaction platform designed to protect Personally Identifiable Information (PII) in telecommunications networks. In a special Cloud Communications Alliance (CCA) podcast, Haydn Faltyn and Bill Placke from BroadSource joined Doug Green to discuss the technology, its market impact, and why service providers should take notice. The Growing Need for Real-Time PII Protection BroadSource has long been a leader in delivering technology solutions to cloud communications providers. With SecurePII, they are addressing a critical issue in telecommunications: how to protect PII that traverses carrier networks. The demand for real-time data redaction has surged due to increasing regulatory requirements, including CCPA, GDPR, HIPAA, and the evolving PCI DSS 4.0 standard. Faltyn explains: “We launched SecureCall as a PCI-compliant platform for credit card redaction last year. But service providers and enterprises alike need more—protection beyond just payment information. SecurePII extends our technology to safeguard all forms of personal data in voice communications.” Shifting the Compliance Conversation Placke highlights the legal and compliance challenges that enterprises face, as regulators worldwide introduce stricter measures around data privacy. “Legal teams are often forced to say ‘no' to new initiatives because of concerns over PII exposure. SecurePII flips the script—by redacting sensitive data in real time, businesses can fully leverage AI, analytics, and automation without compliance roadblocks.” A Game Changer for AI-Driven Business Communications The rise of AI and large language models (LLMs) has created a data dilemma for enterprises: how can they safely utilize voice data for AI applications, customer analytics, and automation without violating data privacy laws? With SecurePII, BroadSource provides a solution that allows organizations to extract value from their data without storing or processing sensitive customer information. By removing PII in real-time, businesses can: Enhance AI training models without compliance risks Increase customer trust by ensuring privacy protection Reduce operational risks and costs associated with data breaches and regulatory fines Impact on Contact Centers and CX A core use case for SecurePII is contact centers, where credit card details, account numbers, and personal information are frequently exchanged over voice channels. The platform ensures: Seamless transactions without the risk of human agents being exposed to sensitive data A frictionless customer experience that retains the personal touch while safeguarding information Higher revenue retention—BroadSource has observed a 9% increase in revenue when businesses implement SecurePII in customer interactions BroadSource's SecurePII Roadmap and Upcoming Events The launch of SecurePII marks a new strategic direction for BroadSource, emphasizing data security as a core value for service providers. Faltyn and Placke will be presenting SecurePII at: Cavell's Summit Europe 2025 – A premier event for cloud communications leaders Cloud Connections 2025 (CCA Conference, St. Petersburg, FL) – Where BroadSource will showcase SecurePII's capabilities to global service providers Where to Learn More SecurePII is now live, and service providers can integrate it into their networks today. BroadSource has also launched a dedicated website for SecurePII, providing resources, case studies, and implementation details. Visit: www.securepii.cloud BroadSource's mission is clear—to empower service providers with the tools to protect their networks, comply with global regulations, and enable the future of AI-driven business communications. With SecurePII,

Voice of the DBA
A Poor Data Model

Voice of the DBA

Play Episode Listen Later Mar 3, 2025 2:57


Recently there was some online complaints about social security numbers (SSNs) in the US being duplicated and re-used by individuals. This is really political gamesmanship, so ignore the political part. Just know that social security numbers appear to be one of the contenders used in many data models. I found a good piece about how SSNs aren't unique, and have a mess of problems. Despite this, many people seem to want to use SSNs as a primary or alternate key in their database systems. They also aren't well secured in many systems, even though we should consider this sensitive PII data. Read the rest of A Poor Data Model

New Money Review podcast
Unseen Money 5: Stealing your identity—bit by bit

New Money Review podcast

Play Episode Listen Later Feb 25, 2025 31:42


Having your identity stolen is a catastrophe. You can lose your reputation, your credit rating, your money, your home or even be accused of fraud yourself.To victims, ID theft feels like a single, earth-shattering event. But it's likely that the hacker has been stealing different aspects of your identity over time.Your name, address, email address, phone number, bank account number, passport number, medical records and log-in credentials are all valuable bits of information to hackers. Combined, they may be enough for a digital hit on you and your bank account.In the latest Unseen Money podcast, Timur Yunusov and I explore the “personally identifiable information” or “PII” that enables hackers to impersonate and rob us.PII is now traded amongst fraudsters as a commodity. But what kind is most valuable to criminals? Where do they get it? How do they use it in scams? Once we've lost our PII, is there anything we can do?Listen on for more.

Leaders In Payments
Oban MacTavish, CEO of Spade | Episode 374

Leaders In Payments

Play Episode Listen Later Feb 21, 2025 21:53 Transcription Available


Unlock the secrets of real-time merchant intelligence with Oban MacTavish, the innovative co-founder and CEO of Spade.  Discover how his early fascination with stock trading and technology laid the foundation for launching Spade in 2021. Oban reveals how Spade revolutionizes card payment data by integrating firmographic insights for fraud prevention and payment optimization, setting new standards in the US market. With ambitious expansion plans on the horizon, you'll learn how Spade is transforming the way card issuers comprehend consumer spending patterns.Our conversation takes a deep dive into the world of data security, a crucial aspect of B2B operations. Oban details the significance of operating without personally identifiable information (PII) and achieving SOC 2 Type 2 compliance, ensuring rigorous security protocols are in place. From humble beginnings during the pandemic to creating a comprehensive data network for banks, Oban shares the challenges and triumphs that have defined Spade's journey. Beyond the professional realm, he gives us a glimpse into his personal life, sharing his passion for cooking and exploring culinary delights with his wife's baking prowess. This episode is a treasure trove of insights for anyone interested in fintech innovation, entrepreneurship, and the stories that drive groundbreaking ideas.

ILTA
#0071: (CCT) Data Privacy: How to Determine What You Have in Your Network

ILTA

Play Episode Listen Later Feb 19, 2025 17:46


In this session, we dove into the critical topic of what obligations we have to track personal information (PII, PHI, PCI, PBI) that firms are storing. We explored effective strategies for tracking this sensitive data and discussed the best practices businesses can implement to ensure compliance. Learn how to report this information accurately to clients and risk insurance companies, while minimizing risks and maintaining data security. Whether you're in a small firm or large enterprise, this episode offers valuable insights on safeguarding personal data and meeting reporting requirements. Moderator: @Madeleine La Cour- Director, Business Intake and Records, Baker Botts L.L.P Speaker: @Randy Curato- Vice President-Senior Loss Prevention Counsel, ALAS, Lt Recorded on 02-19-2025.

AI Lawyer Talking Tech
AI and the Law: Trends, Trials, and Transformation

AI Lawyer Talking Tech

Play Episode Listen Later Feb 19, 2025 33:15


Welcome to today's AI Lawyer Talking Tech. In this episode, we will plunge into the latest developments at the intersection of law and artificial intelligence. From law firms strategically investing in AI startups and integrating AI-powered platforms to streamline operations, to the ethical and legal challenges that emerge with AI's increasing role, we'll explore how the legal landscape is rapidly evolving. We'll also discuss topics such as cybersecurity threats, data privacy regulations, and the impact of AI on employment, providing you with insights to stay ahead in this dynamic era.Law firm invests in agentic AI startup after testing tools19 Feb 2025Legal FuturesLaw firms set to enjoy second year of good PII market19 Feb 2025Legal FuturesLIBRA jumped the shark19 Feb 2025CryptoNews.netLumio Launches to Bring AI-Powered Business Enablement to Professional Services18 Feb 2025Small Business TrendsContractPodAi Launches Leah Marketplace18 Feb 2025ContractPodAiExterro Disrupts the Status Quo for Mobile Device Investigations18 Feb 2025VMblog.comThe Rise of Non-Traditional Career Paths in BigLaw: How Firms are Embracing JD Advantage Roles18 Feb 2025Firsthand.coLuminance Raises $75 Million for ‘Panel of Judges' Legal AI Offering18 Feb 2025PYMNTS.comLawyerist Podcast #546: Unlock Law Firm Growth: AI marketing Secrets with Karin Conroy18 Feb 2025Legal Talk NetworkFive Biggest Cyber Threats to Law Offices18 Feb 2025TPx CommunicationsEpiq and Maptician Partner to Enhance Hybrid and Flexible Work Models18 Feb 2025Epiq SystemsAI 'hallucinations' in court papers spell trouble for lawyers18 Feb 2025ReutersEnhance your legal practice: The power of CoCounsel Drafting for litigators18 Feb 2025Thomson ReutersState Attorneys General on Applying Existing State Laws to AI18 Feb 2025JD SupraPrivilege Under Pressure: The Shifting Data Breach Investigation Landscape18 Feb 2025National Law ReviewIntroducing Lumio: Unlocking Potential for Professional Services in the AI Era18 Feb 2025Morningstar.comWill organic rankings become the small print of digital marketing?18 Feb 2025Legal FuturesAderant's iTimekeep AI-Powered Solution Successfully Completes SOC 2 Type 2 Audit with Zero Exceptions Noted18 Feb 2025AccessWDUNCLM Company ContractPodAi Partners with Big Four Firms and ALSPs to Launch Marketplace of AI Agents18 Feb 2025LawSitesE-Discovery Company Exterro Unveils Tool It Says Will ‘Disrupt' Data Collection from Remote Mobile Devices18 Feb 2025LawSitesAI and authorship: Navigating copyright in the age of generative AI18 Feb 2025LexologyBeyond The Hype Of Generative AI To Early Practical Enterprise Applications18 Feb 2025Forbes.comDeposely Launches Free Gen AI Tools for Deposition Work, Previews Comprehensive AI Deposition Platform18 Feb 2025LawSitesContractPodAi Launches AI App Marketplace, PwC + KPMG Join18 Feb 2025Artificial LawyerUK-based Luminance secures $75M for next-gen legal AI assistant transforming contract management18 Feb 2025Tech Funding NewsCopyright, Fair Use & AI | Justia CLE & Webinars18 Feb 2025Legal Marketing & Technology BlogAI in law: A new era of Legal innovation18 Feb 2025Legaltech on MediumComment: Harvey's $300m raise – You cannot ignore the speed of adoption18 Feb 2025Legal IT InsiderAderant's iTimekeep AI-Powered Solution Successfully Completes SOC 2 Type 2 Audit18 Feb 2025Legal Technology News - Legal IT Professionals | Everything legal technologyNew York Proposal to Protect Workers Displaced by Artificial Intelligence18 Feb 2025K&L GatesNo ‘Fair Use' Defense for Using Copyrighted Works for Training AI Models18 Feb 2025BeneschEmployers' Use of Artificial Intelligence in the Workplace After President Trump's Executive Orders18 Feb 2025King & SpaldingData Broker Privacy Compliance: Our Top 5 List18 Feb 2025Stoe Rives LLP

Diverse Perspektiven
Episode 77 - Echte Maskenball Mythologie

Diverse Perspektiven

Play Episode Listen Later Feb 17, 2025 95:12


Wie war der diesjährige Maskenball? Wer hat Thors Hammer verzaubert? Wer wirft Weinflaschen herum? Welche Filmszene hat Pii und Mischa verstört? Was reimt sich auf lustig? Wo kann man die besten Fenster bestellen? Und noch vielmehr gibt es in dieser Episode zu hören.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Did you know that adding a simple Code Interpreter took o3 from 9.2% to 32% on FrontierMath? The Latent Space crew is hosting a hack night Feb 11th in San Francisco focused on CodeGen use cases, co-hosted with E2B and Edge AGI; watch E2B's new workshop and RSVP here!We're happy to announce that today's guest Samuel Colvin will be teaching his very first Pydantic AI workshop at the newly announced AI Engineer NYC Workshops day on Feb 22! 25 tickets left.If you're a Python developer, it's very likely that you've heard of Pydantic. Every month, it's downloaded >300,000,000 times, making it one of the top 25 PyPi packages. OpenAI uses it in its SDK for structured outputs, it's at the core of FastAPI, and if you've followed our AI Engineer Summit conference, Jason Liu of Instructor has given two great talks about it: “Pydantic is all you need” and “Pydantic is STILL all you need”. Now, Samuel Colvin has raised $17M from Sequoia to turn Pydantic from an open source project to a full stack AI engineer platform with Logfire, their observability platform, and PydanticAI, their new agent framework.Logfire: bringing OTEL to AIOpenTelemetry recently merged Semantic Conventions for LLM workloads which provides standard definitions to track performance like gen_ai.server.time_per_output_token. In Sam's view at least 80% of new apps being built today have some sort of LLM usage in them, and just like web observability platform got replaced by cloud-first ones in the 2010s, Logfire wants to do the same for AI-first apps. If you're interested in the technical details, Logfire migrated away from Clickhouse to Datafusion for their backend. We spent some time on the importance of picking open source tools you understand and that you can actually contribute to upstream, rather than the more popular ones; listen in ~43:19 for that part.Agents are the killer app for graphsPydantic AI is their attempt at taking a lot of the learnings that LangChain and the other early LLM frameworks had, and putting Python best practices into it. At an API level, it's very similar to the other libraries: you can call LLMs, create agents, do function calling, do evals, etc.They define an “Agent” as a container with a system prompt, tools, structured result, and an LLM. Under the hood, each Agent is now a graph of function calls that can orchestrate multi-step LLM interactions. You can start simple, then move toward fully dynamic graph-based control flow if needed.“We were compelled enough by graphs once we got them right that our agent implementation [...] is now actually a graph under the hood.”Why Graphs?* More natural for complex or multi-step AI workflows.* Easy to visualize and debug with mermaid diagrams.* Potential for distributed runs, or “waiting days” between steps in certain flows.In parallel, you see folks like Emil Eifrem of Neo4j talk about GraphRAG as another place where graphs fit really well in the AI stack, so it might be time for more people to take them seriously.Full Video EpisodeLike and subscribe!Chapters* 00:00:00 Introductions* 00:00:24 Origins of Pydantic* 00:05:28 Pydantic's AI moment * 00:08:05 Why build a new agents framework?* 00:10:17 Overview of Pydantic AI* 00:12:33 Becoming a believer in graphs* 00:24:02 God Model vs Compound AI Systems* 00:28:13 Why not build an LLM gateway?* 00:31:39 Programmatic testing vs live evals* 00:35:51 Using OpenTelemetry for AI traces* 00:43:19 Why they don't use Clickhouse* 00:48:34 Competing in the observability space* 00:50:41 Licensing decisions for Pydantic and LogFire* 00:51:48 Building Pydantic.run* 00:55:24 Marimo and the future of Jupyter notebooks* 00:57:44 London's AI sceneShow Notes* Sam Colvin* Pydantic* Pydantic AI* Logfire* Pydantic.run* Zod* E2B* Arize* Langsmith* Marimo* Prefect* GLA (Google Generative Language API)* OpenTelemetry* Jason Liu* Sebastian Ramirez* Bogomil Balkansky* Hood Chatham* Jeremy Howard* Andrew LambTranscriptAlessio [00:00:03]: Hey, everyone. Welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.Swyx [00:00:12]: Good morning. And today we're very excited to have Sam Colvin join us from Pydantic AI. Welcome. Sam, I heard that Pydantic is all we need. Is that true?Samuel [00:00:24]: I would say you might need Pydantic AI and Logfire as well, but it gets you a long way, that's for sure.Swyx [00:00:29]: Pydantic almost basically needs no introduction. It's almost 300 million downloads in December. And obviously, in the previous podcasts and discussions we've had with Jason Liu, he's been a big fan and promoter of Pydantic and AI.Samuel [00:00:45]: Yeah, it's weird because obviously I didn't create Pydantic originally for uses in AI, it predates LLMs. But it's like we've been lucky that it's been picked up by that community and used so widely.Swyx [00:00:58]: Actually, maybe we'll hear it. Right from you, what is Pydantic and maybe a little bit of the origin story?Samuel [00:01:04]: The best name for it, which is not quite right, is a validation library. And we get some tension around that name because it doesn't just do validation, it will do coercion by default. We now have strict mode, so you can disable that coercion. But by default, if you say you want an integer field and you get in a string of 1, 2, 3, it will convert it to 123 and a bunch of other sensible conversions. And as you can imagine, the semantics around it. Exactly when you convert and when you don't, it's complicated, but because of that, it's more than just validation. Back in 2017, when I first started it, the different thing it was doing was using type hints to define your schema. That was controversial at the time. It was genuinely disapproved of by some people. I think the success of Pydantic and libraries like FastAPI that build on top of it means that today that's no longer controversial in Python. And indeed, lots of other people have copied that route, but yeah, it's a data validation library. It uses type hints for the for the most part and obviously does all the other stuff you want, like serialization on top of that. But yeah, that's the core.Alessio [00:02:06]: Do you have any fun stories on how JSON schemas ended up being kind of like the structure output standard for LLMs? And were you involved in any of these discussions? Because I know OpenAI was, you know, one of the early adopters. So did they reach out to you? Was there kind of like a structure output console in open source that people were talking about or was it just a random?Samuel [00:02:26]: No, very much not. So I originally. Didn't implement JSON schema inside Pydantic and then Sebastian, Sebastian Ramirez, FastAPI came along and like the first I ever heard of him was over a weekend. I got like 50 emails from him or 50 like emails as he was committing to Pydantic, adding JSON schema long pre version one. So the reason it was added was for OpenAPI, which is obviously closely akin to JSON schema. And then, yeah, I don't know why it was JSON that got picked up and used by OpenAI. It was obviously very convenient for us. That's because it meant that not only can you do the validation, but because Pydantic will generate you the JSON schema, it will it kind of can be one source of source of truth for structured outputs and tools.Swyx [00:03:09]: Before we dive in further on the on the AI side of things, something I'm mildly curious about, obviously, there's Zod in JavaScript land. Every now and then there is a new sort of in vogue validation library that that takes over for quite a few years and then maybe like some something else comes along. Is Pydantic? Is it done like the core Pydantic?Samuel [00:03:30]: I've just come off a call where we were redesigning some of the internal bits. There will be a v3 at some point, which will not break people's code half as much as v2 as in v2 was the was the massive rewrite into Rust, but also fixing all the stuff that was broken back from like version zero point something that we didn't fix in v1 because it was a side project. We have plans to move some of the basically store the data in Rust types after validation. Not completely. So we're still working to design the Pythonic version of it, in order for it to be able to convert into Python types. So then if you were doing like validation and then serialization, you would never have to go via a Python type we reckon that can give us somewhere between three and five times another three to five times speed up. That's probably the biggest thing. Also, like changing how easy it is to basically extend Pydantic and define how particular types, like for example, NumPy arrays are validated and serialized. But there's also stuff going on. And for example, Jitter, the JSON library in Rust that does the JSON parsing, has SIMD implementation at the moment only for AMD64. So we can add that. We need to go and add SIMD for other instruction sets. So there's a bunch more we can do on performance. I don't think we're going to go and revolutionize Pydantic, but it's going to continue to get faster, continue, hopefully, to allow people to do more advanced things. We might add a binary format like CBOR for serialization for when you'll just want to put the data into a database and probably load it again from Pydantic. So there are some things that will come along, but for the most part, it should just get faster and cleaner.Alessio [00:05:04]: From a focus perspective, I guess, as a founder too, how did you think about the AI interest rising? And then how do you kind of prioritize, okay, this is worth going into more, and we'll talk about Pydantic AI and all of that. What was maybe your early experience with LLAMP, and when did you figure out, okay, this is something we should take seriously and focus more resources on it?Samuel [00:05:28]: I'll answer that, but I'll answer what I think is a kind of parallel question, which is Pydantic's weird, because Pydantic existed, obviously, before I was starting a company. I was working on it in my spare time, and then beginning of 22, I started working on the rewrite in Rust. And I worked on it full-time for a year and a half, and then once we started the company, people came and joined. And it was a weird project, because that would never go away. You can't get signed off inside a startup. Like, we're going to go off and three engineers are going to work full-on for a year in Python and Rust, writing like 30,000 lines of Rust just to release open-source-free Python library. The result of that has been excellent for us as a company, right? As in, it's made us remain entirely relevant. And it's like, Pydantic is not just used in the SDKs of all of the AI libraries, but I can't say which one, but one of the big foundational model companies, when they upgraded from Pydantic v1 to v2, their number one internal model... The metric of performance is time to first token. That went down by 20%. So you think about all of the actual AI going on inside, and yet at least 20% of the CPU, or at least the latency inside requests was actually Pydantic, which shows like how widely it's used. So we've benefited from doing that work, although it didn't, it would have never have made financial sense in most companies. In answer to your question about like, how do we prioritize AI, I mean, the honest truth is we've spent a lot of the last year and a half building. Good general purpose observability inside LogFire and making Pydantic good for general purpose use cases. And the AI has kind of come to us. Like we just, not that we want to get away from it, but like the appetite, uh, both in Pydantic and in LogFire to go and build with AI is enormous because it kind of makes sense, right? Like if you're starting a new greenfield project in Python today, what's the chance that you're using GenAI 80%, let's say, globally, obviously it's like a hundred percent in California, but even worldwide, it's probably 80%. Yeah. And so everyone needs that stuff. And there's so much yet to be figured out so much like space to do things better in the ecosystem in a way that like to go and implement a database that's better than Postgres is a like Sisyphean task. Whereas building, uh, tools that are better for GenAI than some of the stuff that's about now is not very difficult. Putting the actual models themselves to one side.Alessio [00:07:40]: And then at the same time, then you released Pydantic AI recently, which is, uh, um, you know, agent framework and early on, I would say everybody like, you know, Langchain and like, uh, Pydantic kind of like a first class support, a lot of these frameworks, we're trying to use you to be better. What was the decision behind we should do our own framework? Were there any design decisions that you disagree with any workloads that you think people didn't support? Well,Samuel [00:08:05]: it wasn't so much like design and workflow, although I think there were some, some things we've done differently. Yeah. I think looking in general at the ecosystem of agent frameworks, the engineering quality is far below that of the rest of the Python ecosystem. There's a bunch of stuff that we have learned how to do over the last 20 years of building Python libraries and writing Python code that seems to be abandoned by people when they build agent frameworks. Now I can kind of respect that, particularly in the very first agent frameworks, like Langchain, where they were literally figuring out how to go and do this stuff. It's completely understandable that you would like basically skip some stuff.Samuel [00:08:42]: I'm shocked by the like quality of some of the agent frameworks that have come out recently from like well-respected names, which it just seems to be opportunism and I have little time for that, but like the early ones, like I think they were just figuring out how to do stuff and just as lots of people have learned from Pydantic, we were able to learn a bit from them. I think from like the gap we saw and the thing we were frustrated by was the production readiness. And that means things like type checking, even if type checking makes it hard. Like Pydantic AI, I will put my hand up now and say it has a lot of generics and you need to, it's probably easier to use it if you've written a bit of Rust and you really understand generics, but like, and that is, we're not claiming that that makes it the easiest thing to use in all cases, we think it makes it good for production applications in big systems where type checking is a no-brainer in Python. But there are also a bunch of stuff we've learned from maintaining Pydantic over the years that we've gone and done. So every single example in Pydantic AI's documentation is run on Python. As part of tests and every single print output within an example is checked during tests. So it will always be up to date. And then a bunch of things that, like I say, are standard best practice within the rest of the Python ecosystem, but I'm not followed surprisingly by some AI libraries like coverage, linting, type checking, et cetera, et cetera, where I think these are no-brainers, but like weirdly they're not followed by some of the other libraries.Alessio [00:10:04]: And can you just give an overview of the framework itself? I think there's kind of like the. LLM calling frameworks, there are the multi-agent frameworks, there's the workflow frameworks, like what does Pydantic AI do?Samuel [00:10:17]: I glaze over a bit when I hear all of the different sorts of frameworks, but I like, and I will tell you when I built Pydantic, when I built Logfire and when I built Pydantic AI, my methodology is not to go and like research and review all of the other things. I kind of work out what I want and I go and build it and then feedback comes and we adjust. So the fundamental building block of Pydantic AI is agents. The exact definition of agents and how you want to define them. is obviously ambiguous and our things are probably sort of agent-lit, not that we would want to go and rename them to agent-lit, but like the point is you probably build them together to build something and most people will call an agent. So an agent in our case has, you know, things like a prompt, like system prompt and some tools and a structured return type if you want it, that covers the vast majority of cases. There are situations where you want to go further and the most complex workflows where you want graphs and I resisted graphs for quite a while. I was sort of of the opinion you didn't need them and you could use standard like Python flow control to do all of that stuff. I had a few arguments with people, but I basically came around to, yeah, I can totally see why graphs are useful. But then we have the problem that by default, they're not type safe because if you have a like add edge method where you give the names of two different edges, there's no type checking, right? Even if you go and do some, I'm not, not all the graph libraries are AI specific. So there's a, there's a graph library called, but it allows, it does like a basic runtime type checking. Ironically using Pydantic to try and make up for the fact that like fundamentally that graphs are not typed type safe. Well, I like Pydantic, but it did, that's not a real solution to have to go and run the code to see if it's safe. There's a reason that starting type checking is so powerful. And so we kind of, from a lot of iteration eventually came up with a system of using normally data classes to define nodes where you return the next node you want to call and where we're able to go and introspect the return type of a node to basically build the graph. And so the graph is. Yeah. Inherently type safe. And once we got that right, I, I wasn't, I'm incredibly excited about graphs. I think there's like masses of use cases for them, both in gen AI and other development, but also software's all going to have interact with gen AI, right? It's going to be like web. There's no longer be like a web department in a company is that there's just like all the developers are building for web building with databases. The same is going to be true for gen AI.Alessio [00:12:33]: Yeah. I see on your docs, you call an agent, a container that contains a system prompt function. Tools, structure, result, dependency type model, and then model settings. Are the graphs in your mind, different agents? Are they different prompts for the same agent? What are like the structures in your mind?Samuel [00:12:52]: So we were compelled enough by graphs once we got them right, that we actually merged the PR this morning. That means our agent implementation without changing its API at all is now actually a graph under the hood as it is built using our graph library. So graphs are basically a lower level tool that allow you to build these complex workflows. Our agents are technically one of the many graphs you could go and build. And we just happened to build that one for you because it's a very common, commonplace one. But obviously there are cases where you need more complex workflows where the current agent assumptions don't work. And that's where you can then go and use graphs to build more complex things.Swyx [00:13:29]: You said you were cynical about graphs. What changed your mind specifically?Samuel [00:13:33]: I guess people kept giving me examples of things that they wanted to use graphs for. And my like, yeah, but you could do that in standard flow control in Python became a like less and less compelling argument to me because I've maintained those systems that end up with like spaghetti code. And I could see the appeal of this like structured way of defining the workflow of my code. And it's really neat that like just from your code, just from your type hints, you can get out a mermaid diagram that defines exactly what can go and happen.Swyx [00:14:00]: Right. Yeah. You do have very neat implementation of sort of inferring the graph from type hints, I guess. Yeah. Is what I would call it. Yeah. I think the question always is I have gone back and forth. I used to work at Temporal where we would actually spend a lot of time complaining about graph based workflow solutions like AWS step functions. And we would actually say that we were better because you could use normal control flow that you already knew and worked with. Yours, I guess, is like a little bit of a nice compromise. Like it looks like normal Pythonic code. But you just have to keep in mind what the type hints actually mean. And that's what we do with the quote unquote magic that the graph construction does.Samuel [00:14:42]: Yeah, exactly. And if you look at the internal logic of actually running a graph, it's incredibly simple. It's basically call a node, get a node back, call that node, get a node back, call that node. If you get an end, you're done. We will add in soon support for, well, basically storage so that you can store the state between each node that's run. And then the idea is you can then distribute the graph and run it across computers. And also, I mean, the other weird, the other bit that's really valuable is across time. Because it's all very well if you look at like lots of the graph examples that like Claude will give you. If it gives you an example, it gives you this lovely enormous mermaid chart of like the workflow, for example, managing returns if you're an e-commerce company. But what you realize is some of those lines are literally one function calls another function. And some of those lines are wait six days for the customer to print their like piece of paper and put it in the post. And if you're writing like your demo. Project or your like proof of concept, that's fine because you can just say, and now we call this function. But when you're building when you're in real in real life, that doesn't work. And now how do we manage that concept to basically be able to start somewhere else in the in our code? Well, this graph implementation makes it incredibly easy because you just pass the node that is the start point for carrying on the graph and it continues to run. So it's things like that where I was like, yeah, I can just imagine how things I've done in the past would be fundamentally easier to understand if we had done them with graphs.Swyx [00:16:07]: You say imagine, but like right now, this pedantic AI actually resume, you know, six days later, like you said, or is this just like a theoretical thing we can go someday?Samuel [00:16:16]: I think it's basically Q&A. So there's an AI that's asking the user a question and effectively you then call the CLI again to continue the conversation. And it basically instantiates the node and calls the graph with that node again. Now, we don't have the logic yet for effectively storing state in the database between individual nodes that we're going to add soon. But like the rest of it is basically there.Swyx [00:16:37]: It does make me think that not only are you competing with Langchain now and obviously Instructor, and now you're going into sort of the more like orchestrated things like Airflow, Prefect, Daxter, those guys.Samuel [00:16:52]: Yeah, I mean, we're good friends with the Prefect guys and Temporal have the same investors as us. And I'm sure that my investor Bogomol would not be too happy if I was like, oh, yeah, by the way, as well as trying to take on Datadog. We're also going off and trying to take on Temporal and everyone else doing that. Obviously, we're not doing all of the infrastructure of deploying that right yet, at least. We're, you know, we're just building a Python library. And like what's crazy about our graph implementation is, sure, there's a bit of magic in like introspecting the return type, you know, extracting things from unions, stuff like that. But like the actual calls, as I say, is literally call a function and get back a thing and call that. It's like incredibly simple and therefore easy to maintain. The question is, how useful is it? Well, I don't know yet. I think we have to go and find out. We have a whole. We've had a slew of people joining our Slack over the last few days and saying, tell me how good Pydantic AI is. How good is Pydantic AI versus Langchain? And I refuse to answer. That's your job to go and find that out. Not mine. We built a thing. I'm compelled by it, but I'm obviously biased. The ecosystem will work out what the useful tools are.Swyx [00:17:52]: Bogomol was my board member when I was at Temporal. And I think I think just generally also having been a workflow engine investor and participant in this space, it's a big space. Like everyone needs different functions. I think the one thing that I would say like yours, you know, as a library, you don't have that much control of it over the infrastructure. I do like the idea that each new agents or whatever or unit of work, whatever you call that should spin up in this sort of isolated boundaries. Whereas yours, I think around everything runs in the same process. But you ideally want to sort of spin out its own little container of things.Samuel [00:18:30]: I agree with you a hundred percent. And we will. It would work now. Right. As in theory, you're just like as long as you can serialize the calls to the next node, you just have to all of the different containers basically have to have the same the same code. I mean, I'm super excited about Cloudflare workers running Python and being able to install dependencies. And if Cloudflare could only give me my invitation to the private beta of that, we would be exploring that right now because I'm super excited about that as a like compute level for some of this stuff where exactly what you're saying, basically. You can run everything as an individual. Like worker function and distribute it. And it's resilient to failure, et cetera, et cetera.Swyx [00:19:08]: And it spins up like a thousand instances simultaneously. You know, you want it to be sort of truly serverless at once. Actually, I know we have some Cloudflare friends who are listening, so hopefully they'll get in front of the line. Especially.Samuel [00:19:19]: I was in Cloudflare's office last week shouting at them about other things that frustrate me. I have a love-hate relationship with Cloudflare. Their tech is awesome. But because I use it the whole time, I then get frustrated. So, yeah, I'm sure I will. I will. I will get there soon.Swyx [00:19:32]: There's a side tangent on Cloudflare. Is Python supported at full? I actually wasn't fully aware of what the status of that thing is.Samuel [00:19:39]: Yeah. So Pyodide, which is Python running inside the browser in scripting, is supported now by Cloudflare. They basically, they're having some struggles working out how to manage, ironically, dependencies that have binaries, in particular, Pydantic. Because these workers where you can have thousands of them on a given metal machine, you don't want to have a difference. You basically want to be able to have a share. Shared memory for all the different Pydantic installations, effectively. That's the thing they work out. They're working out. But Hood, who's my friend, who is the primary maintainer of Pyodide, works for Cloudflare. And that's basically what he's doing, is working out how to get Python running on Cloudflare's network.Swyx [00:20:19]: I mean, the nice thing is that your binary is really written in Rust, right? Yeah. Which also compiles the WebAssembly. Yeah. So maybe there's a way that you'd build... You have just a different build of Pydantic and that ships with whatever your distro for Cloudflare workers is.Samuel [00:20:36]: Yes, that's exactly what... So Pyodide has builds for Pydantic Core and for things like NumPy and basically all of the popular binary libraries. Yeah. It's just basic. And you're doing exactly that, right? You're using Rust to compile the WebAssembly and then you're calling that shared library from Python. And it's unbelievably complicated, but it works. Okay.Swyx [00:20:57]: Staying on graphs a little bit more, and then I wanted to go to some of the other features that you have in Pydantic AI. I see in your docs, there are sort of four levels of agents. There's single agents, there's agent delegation, programmatic agent handoff. That seems to be what OpenAI swarms would be like. And then the last one, graph-based control flow. Would you say that those are sort of the mental hierarchy of how these things go?Samuel [00:21:21]: Yeah, roughly. Okay.Swyx [00:21:22]: You had some expression around OpenAI swarms. Well.Samuel [00:21:25]: And indeed, OpenAI have got in touch with me and basically, maybe I'm not supposed to say this, but basically said that Pydantic AI looks like what swarms would become if it was production ready. So, yeah. I mean, like, yeah, which makes sense. Awesome. Yeah. I mean, in fact, it was specifically saying, how can we give people the same feeling that they were getting from swarms that led us to go and implement graphs? Because my, like, just call the next agent with Python code was not a satisfactory answer to people. So it was like, okay, we've got to go and have a better answer for that. It's not like, let us to get to graphs. Yeah.Swyx [00:21:56]: I mean, it's a minimal viable graph in some sense. What are the shapes of graphs that people should know? So the way that I would phrase this is I think Anthropic did a very good public service and also kind of surprisingly influential blog post, I would say, when they wrote Building Effective Agents. We actually have the authors coming to speak at my conference in New York, which I think you're giving a workshop at. Yeah.Samuel [00:22:24]: I'm trying to work it out. But yes, I think so.Swyx [00:22:26]: Tell me if you're not. yeah, I mean, like, that was the first, I think, authoritative view of, like, what kinds of graphs exist in agents and let's give each of them a name so that everyone is on the same page. So I'm just kind of curious if you have community names or top five patterns of graphs.Samuel [00:22:44]: I don't have top five patterns of graphs. I would love to see what people are building with them. But like, it's been it's only been a couple of weeks. And of course, there's a point is that. Because they're relatively unopinionated about what you can go and do with them. They don't suit them. Like, you can go and do lots of lots of things with them, but they don't have the structure to go and have like specific names as much as perhaps like some other systems do. I think what our agents are, which have a name and I can't remember what it is, but this basically system of like, decide what tool to call, go back to the center, decide what tool to call, go back to the center and then exit. One form of graph, which, as I say, like our agents are effectively one implementation of a graph, which is why under the hood they are now using graphs. And it'll be interesting to see over the next few years whether we end up with these like predefined graph names or graph structures or whether it's just like, yep, I built a graph or whether graphs just turn out not to match people's mental image of what they want and die away. We'll see.Swyx [00:23:38]: I think there is always appeal. Every developer eventually gets graph religion and goes, oh, yeah, everything's a graph. And then they probably over rotate and go go too far into graphs. And then they have to learn a whole bunch of DSLs. And then they're like, actually, I didn't need that. I need this. And they scale back a little bit.Samuel [00:23:55]: I'm at the beginning of that process. I'm currently a graph maximalist, although I haven't actually put any into production yet. But yeah.Swyx [00:24:02]: This has a lot of philosophical connections with other work coming out of UC Berkeley on compounding AI systems. I don't know if you know of or care. This is the Gartner world of things where they need some kind of industry terminology to sell it to enterprises. I don't know if you know about any of that.Samuel [00:24:24]: I haven't. I probably should. I should probably do it because I should probably get better at selling to enterprises. But no, no, I don't. Not right now.Swyx [00:24:29]: This is really the argument is that instead of putting everything in one model, you have more control and more maybe observability to if you break everything out into composing little models and changing them together. And obviously, then you need an orchestration framework to do that. Yeah.Samuel [00:24:47]: And it makes complete sense. And one of the things we've seen with agents is they work well when they work well. But when they. Even if you have the observability through log five that you can see what was going on, if you don't have a nice hook point to say, hang on, this is all gone wrong. You have a relatively blunt instrument of basically erroring when you exceed some kind of limit. But like what you need to be able to do is effectively iterate through these runs so that you can have your own control flow where you're like, OK, we've gone too far. And that's where one of the neat things about our graph implementation is you can basically call next in a loop rather than just running the full graph. And therefore, you have this opportunity to to break out of it. But yeah, basically, it's the same point, which is like if you have two bigger unit of work to some extent, whether or not it involves gen AI. But obviously, it's particularly problematic in gen AI. You only find out afterwards when you've spent quite a lot of time and or money when it's gone off and done done the wrong thing.Swyx [00:25:39]: Oh, drop on this. We're not going to resolve this here, but I'll drop this and then we can move on to the next thing. This is the common way that we we developers talk about this. And then the machine learning researchers look at us. And laugh and say, that's cute. And then they just train a bigger model and they wipe us out in the next training run. So I think there's a certain amount of we are fighting the bitter lesson here. We're fighting AGI. And, you know, when AGI arrives, this will all go away. Obviously, on Latent Space, we don't really discuss that because I think AGI is kind of this hand wavy concept that isn't super relevant. But I think we have to respect that. For example, you could do a chain of thoughts with graphs and you could manually orchestrate a nice little graph that does like. Reflect, think about if you need more, more inference time, compute, you know, that's the hot term now. And then think again and, you know, scale that up. Or you could train Strawberry and DeepSeq R1. Right.Samuel [00:26:32]: I saw someone saying recently, oh, they were really optimistic about agents because models are getting faster exponentially. And I like took a certain amount of self-control not to describe that it wasn't exponential. But my main point was. If models are getting faster as quickly as you say they are, then we don't need agents and we don't really need any of these abstraction layers. We can just give our model and, you know, access to the Internet, cross our fingers and hope for the best. Agents, agent frameworks, graphs, all of this stuff is basically making up for the fact that right now the models are not that clever. In the same way that if you're running a customer service business and you have loads of people sitting answering telephones, the less well trained they are, the less that you trust them, the more that you need to give them a script to go through. Whereas, you know, so if you're running a bank and you have lots of customer service people who you don't trust that much, then you tell them exactly what to say. If you're doing high net worth banking, you just employ people who you think are going to be charming to other rich people and set them off to go and have coffee with people. Right. And the same is true of models. The more intelligent they are, the less we need to tell them, like structure what they go and do and constrain the routes in which they take.Swyx [00:27:42]: Yeah. Yeah. Agree with that. So I'm happy to move on. So the other parts of Pydantic AI that are worth commenting on, and this is like my last rant, I promise. So obviously, every framework needs to do its sort of model adapter layer, which is, oh, you can easily swap from OpenAI to Cloud to Grok. You also have, which I didn't know about, Google GLA, which I didn't really know about until I saw this in your docs, which is generative language API. I assume that's AI Studio? Yes.Samuel [00:28:13]: Google don't have good names for it. So Vertex is very clear. That seems to be the API that like some of the things use, although it returns 503 about 20% of the time. So... Vertex? No. Vertex, fine. But the... Oh, oh. GLA. Yeah. Yeah.Swyx [00:28:28]: I agree with that.Samuel [00:28:29]: So we have, again, another example of like, well, I think we go the extra mile in terms of engineering is we run on every commit, at least commit to main, we run tests against the live models. Not lots of tests, but like a handful of them. Oh, okay. And we had a point last week where, yeah, GLA is a little bit better. GLA1 was failing every single run. One of their tests would fail. And we, I think we might even have commented out that one at the moment. So like all of the models fail more often than you might expect, but like that one seems to be particularly likely to fail. But Vertex is the same API, but much more reliable.Swyx [00:29:01]: My rant here is that, you know, versions of this appear in Langchain and every single framework has to have its own little thing, a version of that. I would put to you, and then, you know, this is, this can be agree to disagree. This is not needed in Pydantic AI. I would much rather you adopt a layer like Lite LLM or what's the other one in JavaScript port key. And that's their job. They focus on that one thing and they, they normalize APIs for you. All new models are automatically added and you don't have to duplicate this inside of your framework. So for example, if I wanted to use deep seek, I'm out of luck because Pydantic AI doesn't have deep seek yet.Samuel [00:29:38]: Yeah, it does.Swyx [00:29:39]: Oh, it does. Okay. I'm sorry. But you know what I mean? Should this live in your code or should it live in a layer that's kind of your API gateway that's a defined piece of infrastructure that people have?Samuel [00:29:49]: And I think if a company who are well known, who are respected by everyone had come along and done this at the right time, maybe we should have done it a year and a half ago and said, we're going to be the universal AI layer. That would have been a credible thing to do. I've heard varying reports of Lite LLM is the truth. And it didn't seem to have exactly the type safety that we needed. Also, as I understand it, and again, I haven't looked into it in great detail. Part of their business model is proxying the request through their, through their own system to do the generalization. That would be an enormous put off to an awful lot of people. Honestly, the truth is I don't think it is that much work unifying the model. I get where you're coming from. I kind of see your point. I think the truth is that everyone is centralizing around open AIs. Open AI's API is the one to do. So DeepSeq support that. Grok with OK support that. Ollama also does it. I mean, if there is that library right now, it's more or less the open AI SDK. And it's very high quality. It's well type checked. It uses Pydantic. So I'm biased. But I mean, I think it's pretty well respected anyway.Swyx [00:30:57]: There's different ways to do this. Because also, it's not just about normalizing the APIs. You have to do secret management and all that stuff.Samuel [00:31:05]: Yeah. And there's also. There's Vertex and Bedrock, which to one extent or another, effectively, they host multiple models, but they don't unify the API. But they do unify the auth, as I understand it. Although we're halfway through doing Bedrock. So I don't know about it that well. But they're kind of weird hybrids because they support multiple models. But like I say, the auth is centralized.Swyx [00:31:28]: Yeah, I'm surprised they don't unify the API. That seems like something that I would do. You know, we can discuss all this all day. There's a lot of APIs. I agree.Samuel [00:31:36]: It would be nice if there was a universal one that we didn't have to go and build.Alessio [00:31:39]: And I guess the other side of, you know, routing model and picking models like evals. How do you actually figure out which one you should be using? I know you have one. First of all, you have very good support for mocking in unit tests, which is something that a lot of other frameworks don't do. So, you know, my favorite Ruby library is VCR because it just, you know, it just lets me store the HTTP requests and replay them. That part I'll kind of skip. I think you are busy like this test model. We're like just through Python. You try and figure out what the model might respond without actually calling the model. And then you have the function model where people can kind of customize outputs. Any other fun stories maybe from there? Or is it just what you see is what you get, so to speak?Samuel [00:32:18]: On those two, I think what you see is what you get. On the evals, I think watch this space. I think it's something that like, again, I was somewhat cynical about for some time. Still have my cynicism about some of the well, it's unfortunate that so many different things are called evals. It would be nice if we could agree. What they are and what they're not. But look, I think it's a really important space. I think it's something that we're going to be working on soon, both in Pydantic AI and in LogFire to try and support better because it's like it's an unsolved problem.Alessio [00:32:45]: Yeah, you do say in your doc that anyone who claims to know for sure exactly how your eval should be defined can safely be ignored.Samuel [00:32:52]: We'll delete that sentence when we tell people how to do their evals.Alessio [00:32:56]: Exactly. I was like, we need we need a snapshot of this today. And so let's talk about eval. So there's kind of like the vibe. Yeah. So you have evals, which is what you do when you're building. Right. Because you cannot really like test it that many times to get statistical significance. And then there's the production eval. So you also have LogFire, which is kind of like your observability product, which I tried before. It's very nice. What are some of the learnings you've had from building an observability tool for LEMPs? And yeah, as people think about evals, even like what are the right things to measure? What are like the right number of samples that you need to actually start making decisions?Samuel [00:33:33]: I'm not the best person to answer that is the truth. So I'm not going to come in here and tell you that I think I know the answer on the exact number. I mean, we can do some back of the envelope statistics calculations to work out that like having 30 probably gets you most of the statistical value of having 200 for, you know, by definition, 15% of the work. But the exact like how many examples do you need? For example, that's a much harder question to answer because it's, you know, it's deep within the how models operate in terms of LogFire. One of the reasons we built LogFire the way we have and we allow you to write SQL directly against your data and we're trying to build the like powerful fundamentals of observability is precisely because we know we don't know the answers. And so allowing people to go and innovate on how they're going to consume that stuff and how they're going to process it is we think that's valuable. Because even if we come along and offer you an evals framework on top of LogFire, it won't be right in all regards. And we want people to be able to go and innovate and being able to write their own SQL connected to the API. And effectively query the data like it's a database with SQL allows people to innovate on that stuff. And that's what allows us to do it as well. I mean, we do a bunch of like testing what's possible by basically writing SQL directly against LogFire as any user could. I think the other the other really interesting bit that's going on in observability is OpenTelemetry is centralizing around semantic attributes for GenAI. So it's a relatively new project. A lot of it's still being added at the moment. But basically the idea that like. They unify how both SDKs and or agent frameworks send observability data to to any OpenTelemetry endpoint. And so, again, we can go and having that unification allows us to go and like basically compare different libraries, compare different models much better. That stuff's in a very like early stage of development. One of the things we're going to be working on pretty soon is basically, I suspect, GenAI will be the first agent framework that implements those semantic attributes properly. Because, again, we control and we can say this is important for observability, whereas most of the other agent frameworks are not maintained by people who are trying to do observability. With the exception of Langchain, where they have the observability platform, but they chose not to go down the OpenTelemetry route. So they're like plowing their own furrow. And, you know, they're a lot they're even further away from standardization.Alessio [00:35:51]: Can you maybe just give a quick overview of how OTEL ties into the AI workflows? There's kind of like the question of is, you know, a trace. And a span like a LLM call. Is it the agent? It's kind of like the broader thing you're tracking. How should people think about it?Samuel [00:36:06]: Yeah, so they have a PR that I think may have now been merged from someone at IBM talking about remote agents and trying to support this concept of remote agents within GenAI. I'm not particularly compelled by that because I don't think that like that's actually by any means the common use case. But like, I suppose it's fine for it to be there. The majority of the stuff in OTEL is basically defining how you would instrument. A given call to an LLM. So basically the actual LLM call, what data you would send to your telemetry provider, how you would structure that. Apart from this slightly odd stuff on remote agents, most of the like agent level consideration is not yet implemented in is not yet decided effectively. And so there's a bit of ambiguity. Obviously, what's good about OTEL is you can in the end send whatever attributes you like. But yeah, there's quite a lot of churn in that space and exactly how we store the data. I think that one of the most interesting things, though, is that if you think about observability. Traditionally, it was sure everyone would say our observability data is very important. We must keep it safe. But actually, companies work very hard to basically not have anything that sensitive in their observability data. So if you're a doctor in a hospital and you search for a drug for an STI, the sequel might be sent to the observability provider. But none of the parameters would. It wouldn't have the patient number or their name or the drug. With GenAI, that distinction doesn't exist because it's all just messed up in the text. If you have that same patient asking an LLM how to. What drug they should take or how to stop smoking. You can't extract the PII and not send it to the observability platform. So the sensitivity of the data that's going to end up in observability platforms is going to be like basically different order of magnitude to what's in what you would normally send to Datadog. Of course, you can make a mistake and send someone's password or their card number to Datadog. But that would be seen as a as a like mistake. Whereas in GenAI, a lot of data is going to be sent. And I think that's why companies like Langsmith and are trying hard to offer observability. On prem, because there's a bunch of companies who are happy for Datadog to be cloud hosted, but want self-hosted self-hosting for this observability stuff with GenAI.Alessio [00:38:09]: And are you doing any of that today? Because I know in each of the spans you have like the number of tokens, you have the context, you're just storing everything. And then you're going to offer kind of like a self-hosting for the platform, basically. Yeah. Yeah.Samuel [00:38:23]: So we have scrubbing roughly equivalent to what the other observability platforms have. So if we, you know, if we see password as the key, we won't send the value. But like, like I said, that doesn't really work in GenAI. So we're accepting we're going to have to store a lot of data and then we'll offer self-hosting for those people who can afford it and who need it.Alessio [00:38:42]: And then this is, I think, the first time that most of the workloads performance is depending on a third party. You know, like if you're looking at Datadog data, usually it's your app that is driving the latency and like the memory usage and all of that. Here you're going to have spans that maybe take a long time to perform because the GLA API is not working or because OpenAI is kind of like overwhelmed. Do you do anything there since like the provider is almost like the same across customers? You know, like, are you trying to surface these things for people and say, hey, this was like a very slow span, but actually all customers using OpenAI right now are seeing the same thing. So maybe don't worry about it or.Samuel [00:39:20]: Not yet. We do a few things that people don't generally do in OTA. So we send. We send information at the beginning. At the beginning of a trace as well as sorry, at the beginning of a span, as well as when it finishes. By default, OTA only sends you data when the span finishes. So if you think about a request which might take like 20 seconds, even if some of the intermediate spans finished earlier, you can't basically place them on the page until you get the top level span. And so if you're using standard OTA, you can't show anything until those requests are finished. When those requests are taking a few hundred milliseconds, it doesn't really matter. But when you're doing Gen AI calls or when you're like running a batch job that might take 30 minutes. That like latency of not being able to see the span is like crippling to understanding your application. And so we've we do a bunch of slightly complex stuff to basically send data about a span as it starts, which is closely related. Yeah.Alessio [00:40:09]: Any thoughts on all the other people trying to build on top of OpenTelemetry in different languages, too? There's like the OpenLEmetry project, which doesn't really roll off the tongue. But how do you see the future of these kind of tools? Is everybody going to have to build? Why does everybody want to build? They want to build their own open source observability thing to then sell?Samuel [00:40:29]: I mean, we are not going off and trying to instrument the likes of the OpenAI SDK with the new semantic attributes, because at some point that's going to happen and it's going to live inside OTEL and we might help with it. But we're a tiny team. We don't have time to go and do all of that work. So OpenLEmetry, like interesting project. But I suspect eventually most of those semantic like that instrumentation of the big of the SDKs will live, like I say, inside the main OpenTelemetry report. I suppose. What happens to the agent frameworks? What data you basically need at the framework level to get the context is kind of unclear. I don't think we know the answer yet. But I mean, I was on the, I guess this is kind of semi-public, because I was on the call with the OpenTelemetry call last week talking about GenAI. And there was someone from Arize talking about the challenges they have trying to get OpenTelemetry data out of Langchain, where it's not like natively implemented. And obviously they're having quite a tough time. And I was realizing, hadn't really realized this before, but how lucky we are to primarily be talking about our own agent framework, where we have the control rather than trying to go and instrument other people's.Swyx [00:41:36]: Sorry, I actually didn't know about this semantic conventions thing. It looks like, yeah, it's merged into main OTel. What should people know about this? I had never heard of it before.Samuel [00:41:45]: Yeah, I think it looks like a great start. I think there's some unknowns around how you send the messages that go back and forth, which is kind of the most important part. It's the most important thing of all. And that is moved out of attributes and into OTel events. OTel events in turn are moving from being on a span to being their own top-level API where you send data. So there's a bunch of churn still going on. I'm impressed by how fast the OTel community is moving on this project. I guess they, like everyone else, get that this is important, and it's something that people are crying out to get instrumentation off. So I'm kind of pleasantly surprised at how fast they're moving, but it makes sense.Swyx [00:42:25]: I'm just kind of browsing through the specification. I can already see that this basically bakes in whatever the previous paradigm was. So now they have genai.usage.prompt tokens and genai.usage.completion tokens. And obviously now we have reasoning tokens as well. And then only one form of sampling, which is top-p. You're basically baking in or sort of reifying things that you think are important today, but it's not a super foolproof way of doing this for the future. Yeah.Samuel [00:42:54]: I mean, that's what's neat about OTel is you can always go and send another attribute and that's fine. It's just there are a bunch that are agreed on. But I would say, you know, to come back to your previous point about whether or not we should be relying on one centralized abstraction layer, this stuff is moving so fast that if you start relying on someone else's standard, you risk basically falling behind because you're relying on someone else to keep things up to date.Swyx [00:43:14]: Or you fall behind because you've got other things going on.Samuel [00:43:17]: Yeah, yeah. That's fair. That's fair.Swyx [00:43:19]: Any other observations just about building LogFire, actually? Let's just talk about this. So you announced LogFire. I was kind of only familiar with LogFire because of your Series A announcement. I actually thought you were making a separate company. I remember some amount of confusion with you when that came out. So to be clear, it's Pydantic LogFire and the company is one company that has kind of two products, an open source thing and an observability thing, correct? Yeah. I was just kind of curious, like any learnings building LogFire? So classic question is, do you use ClickHouse? Is this like the standard persistence layer? Any learnings doing that?Samuel [00:43:54]: We don't use ClickHouse. We started building our database with ClickHouse, moved off ClickHouse onto Timescale, which is a Postgres extension to do analytical databases. Wow. And then moved off Timescale onto DataFusion. And we're basically now building, it's DataFusion, but it's kind of our own database. Bogomil is not entirely happy that we went through three databases before we chose one. I'll say that. But like, we've got to the right one in the end. I think we could have realized that Timescale wasn't right. I think ClickHouse. They both taught us a lot and we're in a great place now. But like, yeah, it's been a real journey on the database in particular.Swyx [00:44:28]: Okay. So, you know, as a database nerd, I have to like double click on this, right? So ClickHouse is supposed to be the ideal backend for anything like this. And then moving from ClickHouse to Timescale is another counterintuitive move that I didn't expect because, you know, Timescale is like an extension on top of Postgres. Not super meant for like high volume logging. But like, yeah, tell us those decisions.Samuel [00:44:50]: So at the time, ClickHouse did not have good support for JSON. I was speaking to someone yesterday and said ClickHouse doesn't have good support for JSON and got roundly stepped on because apparently it does now. So they've obviously gone and built their proper JSON support. But like back when we were trying to use it, I guess a year ago or a bit more than a year ago, everything happened to be a map and maps are a pain to try and do like looking up JSON type data. And obviously all these attributes, everything you're talking about there in terms of the GenAI stuff. You can choose to make them top level columns if you want. But the simplest thing is just to put them all into a big JSON pile. And that was a problem with ClickHouse. Also, ClickHouse had some really ugly edge cases like by default, or at least until I complained about it a lot, ClickHouse thought that two nanoseconds was longer than one second because they compared intervals just by the number, not the unit. And I complained about that a lot. And then they caused it to raise an error and just say you have to have the same unit. Then I complained a bit more. And I think as I understand it now, they have some. They convert between units. But like stuff like that, when all you're looking at is when a lot of what you're doing is comparing the duration of spans was really painful. Also things like you can't subtract two date times to get an interval. You have to use the date sub function. But like the fundamental thing is because we want our end users to write SQL, the like quality of the SQL, how easy it is to write, matters way more to us than if you're building like a platform on top where your developers are going to write the SQL. And once it's written and it's working, you don't mind too much. So I think that's like one of the fundamental differences. The other problem that I have with the ClickHouse and Impact Timescale is that like the ultimate architecture, the like snowflake architecture of binary data in object store queried with some kind of cache from nearby. They both have it, but it's closed sourced and you only get it if you go and use their hosted versions. And so even if we had got through all the problems with Timescale or ClickHouse, we would end up like, you know, they would want to be taking their 80% margin. And then we would be wanting to take that would basically leave us less space for margin. Whereas data fusion. Properly open source, all of that same tooling is open source. And for us as a team of people with a lot of Rust expertise, data fusion, which is implemented in Rust, we can literally dive into it and go and change it. So, for example, I found that there were some slowdowns in data fusion's string comparison kernel for doing like string contains. And it's just Rust code. And I could go and rewrite the string comparison kernel to be faster. Or, for example, data fusion, when we started using it, didn't have JSON support. Obviously, as I've said, it's something we can do. It's something we needed. I was able to go and implement that in a weekend using our JSON parser that we built for Pydantic Core. So it's the fact that like data fusion is like for us the perfect mixture of a toolbox to build a database with, not a database. And we can go and implement stuff on top of it in a way that like if you were trying to do that in Postgres or in ClickHouse. I mean, ClickHouse would be easier because it's C++, relatively modern C++. But like as a team of people who are not C++ experts, that's much scarier than data fusion for us.Swyx [00:47:47]: Yeah, that's a beautiful rant.Alessio [00:47:49]: That's funny. Most people don't think they have agency on these projects. They're kind of like, oh, I should use this or I should use that. They're not really like, what should I pick so that I contribute the most back to it? You know, so but I think you obviously have an open source first mindset. So that makes a lot of sense.Samuel [00:48:05]: I think if we were probably better as a startup, a better startup and faster moving and just like headlong determined to get in front of customers as fast as possible, we should have just started with ClickHouse. I hope that long term we're in a better place for having worked with data fusion. We like we're quite engaged now with the data fusion community. Andrew Lam, who maintains data fusion, is an advisor to us. We're in a really good place now. But yeah, it's definitely slowed us down relative to just like building on ClickHouse and moving as fast as we can.Swyx [00:48:34]: OK, we're about to zoom out and do Pydantic run and all the other stuff. But, you know, my last question on LogFire is really, you know, at some point you run out sort of community goodwill just because like, oh, I use Pydantic. I love Pydantic. I'm going to use LogFire. OK, then you start entering the territory of the Datadogs, the Sentrys and the honeycombs. Yeah. So where are you going to really spike here? What differentiator here?Samuel [00:48:59]: I wasn't writing code in 2001, but I'm assuming that there were people talking about like web observability and then web observability stopped being a thing, not because the web stopped being a thing, but because all observability had to do web. If you were talking to people in 2010 or 2012, they would have talked about cloud observability. Now that's not a term because all observability is cloud first. The same is going to happen to gen AI. And so whether or not you're trying to compete with Datadog or with Arise and Langsmith, you've got to do first class. You've got to do general purpose observability with first class support for AI. And as far as I know, we're the only people really trying to do that. I mean, I think Datadog is starting in that direction. And to be honest, I think Datadog is a much like scarier company to compete with than the AI specific observability platforms. Because in my opinion, and I've also heard this from lots of customers, AI specific observability where you don't see everything else going on in your app is not actually that useful. Our hope is that we can build the first general purpose observability platform with first class support for AI. And that we have this open source heritage of putting developer experience first that other companies haven't done. For all I'm a fan of Datadog and what they've done. If you search Datadog logging Python. And you just try as a like a non-observability expert to get something up and running with Datadog and Python. It's not trivial, right? That's something Sentry have done amazingly well. But like there's enormous space in most of observability to do DX better.Alessio [00:50:27]: Since you mentioned Sentry, I'm curious how you thought about licensing and all of that. Obviously, your MIT license, you don't have any rolling license like Sentry has where you can only use an open source, like the one year old version of it. Was that a hard decision?Samuel [00:50:41]: So to be clear, LogFire is co-sourced. So Pydantic and Pydantic AI are MIT licensed and like properly open source. And then LogFire for now is completely closed source. And in fact, the struggles that Sentry have had with licensing and the like weird pushback the community gives when they take something that's closed source and make it source available just meant that we just avoided that whole subject matter. I think the other way to look at it is like in terms of either headcount or revenue or dollars in the bank. The amount of open source we do as a company is we've got to be open source. We're up there with the most prolific open source companies, like I say, per head. And so we didn't feel like we were morally obligated to make LogFire open source. We have Pydantic. Pydantic is a foundational library in Python. That and now Pydantic AI are our contribution to open source. And then LogFire is like openly for profit, right? As in we're not claiming otherwise. We're not sort of trying to walk a line if it's open source. But really, we want to make it hard to deploy. So you probably want to pay us. We're trying to be straight. That it's to pay for. We could change that at some point in the future, but it's not an immediate plan.Alessio [00:51:48]: All right. So the first one I saw this new I don't know if it's like a product you're building the Pydantic that run, which is a Python browser sandbox. What was the inspiration behind that? We talk a lot about code interpreter for lamps. I'm an investor in a company called E2B, which is a code sandbox as a service for remote execution. Yeah. What's the Pydantic that run story?Samuel [00:52:09]: So Pydantic that run is again completely open source. I have no interest in making it into a product. We just needed a sandbox to be able to demo LogFire in particular, but also Pydantic AI. So it doesn't have it yet, but I'm going to add basically a proxy to OpenAI and the other models so that you can run Pydantic AI in the browser. See how it works. Tweak the prompt, et cetera, et cetera. And we'll have some kind of limit per day of what you can spend on it or like what the spend is. The other thing we wanted to b

Confessions of a Higher Ed CMO — with Jaime Hunt
Ep. 68: Stand Out and Measure Up: Why Performance TV is a Secret Weapon

Confessions of a Higher Ed CMO — with Jaime Hunt

Play Episode Listen Later Dec 20, 2024 48:52


In this episode of the Higher Ed CMO Podcast, host Jaime Hunt welcomes Jennifer Lonchar, founder of Ambio, to explore the transformative potential of Performance TV for higher education marketing. Jennifer breaks down how this technology provides the reach of traditional TV with the precision of digital advertising, delivering measurable ROI and enabling schools to craft highly targeted campaigns. From personalized storytelling to integrating AI for advanced analytics, this episode is packed with actionable insights for enrollment marketers, advancement professionals, and beyond.Key TakeawaysPerformance TV bridges the gap between traditional TV's wide reach and digital advertising's precise targeting and ROI measurement.Household ID technology ensures ethical and accurate targeting without relying on personal data.Schools can track and optimize campaigns based on how audiences respond, ensuring data-driven decisions.Empathy-driven messaging—tailored commercials for each stage of the student journey—creates a transformative connection with prospective students.Performance TV can complement traditional channels, enhancing email engagement and boosting overall campaign effectiveness.What is Performance TV? Jennifer Lonchar introduces Performance TV as a game-changer in higher ed marketing. Unlike traditional streaming ads, Performance TV combines the broad reach of television with the precision and measurability of digital advertising. Using Household ID technology, this tool allows marketers to reach specific audiences—students and parents—while respecting privacy by avoiding personal identifiable information (PII).Performance TV makes it possible to measure ROI with clarity. Schools can track behaviors, such as how many users visit the admissions page or complete an application after seeing a commercial, providing actionable insights for optimizing campaigns.Empathy in Marketing: The Power of Storytelling One of the most compelling opportunities with Performance TV is its potential for empathy-driven messaging. Jaime and Jennifer discuss how understanding your audience's emotions and challenges can guide campaign creation. For instance, commercials could address financial aid complexities or highlight the lifelong friendships and connections fostered in college.By crafting ads tailored to specific stages in the enrollment funnel—student search, application, yield, and even summer melt—schools can resonate with their audiences on a deeper level. Jennifer emphasizes the importance of storytelling, citing examples of commercials that are memorable because they evoke emotions and build trust.Campaign Integration and Success Stories Performance TV is not a standalone solution but a powerful component of an integrated marketing strategy. Jennifer advises using it alongside email, social media, and other channels. For example, schools have seen higher engagement rates in email campaigns when paired with Performance TV exposure.Jennifer shares several success stories, such as the University of Iowa, which generated over 15,000 inquiries, applications, and event registrations during a three-month campaign. Another example is Northeastern Illinois University, which used bilingual commercials to engage diverse audiences and met their enrollment goals for the first time in years.The Future of Performance TV in Higher Ed As the landscape evolves, Jennifer predicts even more advanced integrations, such as interactive TV ads and greater use of AI for optimization. These innovations will allow schools to engage audiences more effectively and refine strategies in real-time. However, the adoption of such tools in higher ed will require marketers to move faster and embrace change.A New Era for Higher Ed Marketing Performance TV offers higher education institutions an unskippable, 30-second stage to tell their story, build trust, and inspire action. Ready to stand out in a crowded digital space? Embrace this innovative tool to revolutionize your campaigns.Guest Name: Jennifer Lonchar, co-founder of AmbioEduGuest Social: https://www.linkedin.com/in/jenniferlonchar/Guest Bio: Jennifer Lonchar brings almost two decades of expertise in higher education, having worked in various roles focused on strategic enrollment and marketing. For over 13 years she worked for Carnegie, and was instrumental in bringing digital marketing to higher education. Her deep understanding of the challenges and opportunities within the sector has made her a sought-after leader in developing innovative solutions for enrollment and student engagement. Driven by a passion for enhancing the student experience and optimizing recruitment strategies, Jennifer co-founded AmbioEdu. This venture reflects her commitment to transforming higher education marketing through advanced technologies, including Performance TV and integrated digital solutions. AmbioEdu, under her leadership, harnesses cutting-edge tools to help universities connect with prospective students more effectively and efficiently, setting new standards in the field. - - - -Connect With Our Host:Jaime Hunthttps://www.linkedin.com/in/jaimehunt/https://twitter.com/JaimeHuntIMCAbout The Enrollify Podcast Network:Confessions of a Higher Ed CMO is a part of the Enrollify Podcast Network. If you like this podcast, chances are you'll like other Enrollify shows too! Some of our favorites include Talking Tactics and Higher Ed Pulse. Enrollify is made possible by Element451 — the next-generation AI student engagement platform helping institutions create meaningful and personalized interactions with students. Learn more at element451.com.Attend the 2025 Engage Summit! The Engage Summit is the premier conference for forward-thinking leaders and practitioners dedicated to exploring the transformative power of AI in education. Explore the strategies and tools to step into the next generation of student engagement, supercharged by AI. You'll leave ready to deliver the most personalized digital engagement experience every step of the way.Register now to secure your spot in Charlotte, NC, on June 24-25, 2025! Early bird registration ends February 1st -- https://engage.element451.com/register

We Are, Marketing Happy - A Healthcare Marketing Podcast
URGENT New Google Click-To-Call Terms (And HIPAA Concern)

We Are, Marketing Happy - A Healthcare Marketing Podcast

Play Episode Listen Later Dec 20, 2024 8:00


In this special episode of We Are, Marketing Happy, Jenny dives into a critical update for healthcare marketers. Google is requiring advertisers to opt into call recording for its Click-to-Call feature, creating potential HIPAA compliance risks. Jenny explains the changes, why they're a concern, and what steps you need to take to protect your organization.   Key Points: • Google's new terms for Click-to-Call could result in PHI or PII being recorded, violating HIPAA. • The rollout is inconsistent, so accounts must be monitored closely. • You can contact Google support to opt out of call recording.   Action Items: • Check if the terms were accepted for your account. • Share this episode with your team or agency to ensure awareness.   More Information •Search Engine Land Article •Search Engine Journal Article   Connect with Jenny: • Email: jenny@hedyandhopp.com • LinkedIn: https://www.linkedin.com/in/jennybristow/   If you enjoyed this episode we'd love to hear your feedback! Please consider leaving us a review on your preferred listening platform and sharing it with others.

Les Cast Codeurs Podcast
LCC 319 - le ramasse-miettes-charognes

Les Cast Codeurs Podcast

Play Episode Listen Later Dec 16, 2024 70:05


Dans cet épisde en audio et en vidéo (youtube.com/lescastcodeurs), Guillaume et Emmanuel discutent des 15 ans de Go, d'une nouvelle approche de garbage collecting, de LLMs dans les applications Java, dobservabilité, d'une attaque de chaine d'approvisionnement via javac et d'autres choses. Enregistré le 13 décembre 2024 Téléchargement de l'épisode LesCastCodeurs-Episode-319.mp3 News Langages Go fête son 15ème anniversaire ! https://go.dev/blog/15years discute les 15 ans la corrections de gotchas dans les for loops (notamment les variables étaient loop scoped) le fait que la compile echoue si on attend une version de go superieure seulement depuis go 1.21 en parallele de la gestion de la chaine d'outil (c'est en 2023 seulement!) opt-in telemetrie aussi recent Construire OpenJDK à partir des sources sur macOS https://www.morling.dev/blog/building-openjdk-from-source-on-macos/ de maniere surprenante ce n'est pas tres compliqué Papier sur l'aproche Mark-scavenge pour un ramasse miette https://inside.java/2024/11/22/mark-scavenge-gc/ papier de recherche utiliser l'accessibilité pour preuve de vie n'est pas idéal: un objet peut etre atteignable mais ne sera jamais accedé par le programme les regions les plus pauvres en objets vivant voient leurs objets bouger dans uen autre region et la regio libéré, c'est le comportement classique des GC deux methodes: mark evaguate qui le fait en deux temps et la liveness peut evoluer ; et scavenge qui bouge l'objet vivant des sa decouverte ont fait tourner via ZGC des experience pour voir les objects consideres vivants et bougés inutilement. resultats montrent un gros taux d'objets bougés de maniere inutile proposent un algo different ils marquent les objets vivants mais ne les bougent pas avant le prochain GC pour leur donner une change de devenir unreachable elimine beaucoup de deplacement inutiles vu que les objets deviennent non accessible en un cycle de GC jusquà 91% de reduction ! Particulierement notable dans les machines chargées en CPU. Les tokens d'accès court ou longs https://grayduck.mn/2023/04/17/refresh-vs-long-lived-access-tokens/ pourquoi des long access tokens (gnre refresh token) sont utilises pour des short lived dans oauth 2.0 refresh token simplifient la revocation: vu que seul le auth serveur a a verifier la révocation et les clients vérifient l'expiration et la validité de la signature refresh token ne sont envoyés que entre endpoints alors que les access tokens se baladent pas mal: les frontières de confiance ne sont pas traversées refresh token comme utilise infréquement, et donc peut etre protegee dans une enclave les changements de grants sont plus simple tout en restant distribuable histoire des access refresh token et access token permet de mieux tracer les abus / attaques les inconvenients: c'est plus compliqué en flow, the auth serveur est un SPOF amis mitigeable Java Advent est de retour https://www.javaadvent.com/calendar backstage Java integrite par defaut (et ses consequences sur l'ecosysteme) timefold (sovler) Les extensions JUNit 5 OpenTelemetry via Java Agent vs Micrometer analyse statique de code CQRS et les fonctionalités modernes de Java java simple (sans compilatrion, sans objet fullstack dev with quarkus as backend José Paumard introduit et explique les Gatherers dans Java 24 dans cette vidéo https://inside.java/2024/11/26/jepcafe23/ Librairies Micronaut 4.7, avec l'intégration de LangChain4j https://micronaut.io/2024/11/14/micronaut-framework-4-7-0-released/ Combiner le framework de test Spock et Cucumber https://www.sfeir.dev/back/spock-framework-revolutionnez-vos-tests-unitaires-avec-la-puissance-du-bdd-et-de-cucumber/ les experts peuvent écrire leurs tests au format Gherkin (de Cucumber) et les développeurs peuvent implémenter les assertions correspondantes avec l'intégration dans Spock, pour des tests très lisibles Spring 6.2 https://spring.io/blog/2024/11/14/spring-framework-6-2-0-available-now beans @Fallback améliorations sur SpELet sur le support de tests support de l'echape des property placeholders une initioalisation des beans en tache de fond nouvelle et pleins d'autres choses encore Comment créer une application Java LLM tournant 100% en Java avec Jlama https://quarkus.io/blog/quarkus-jlama/ blog de Mario Fusco, Mr API et Java et Drools utilise jlama + quarkus + langchain Explique les avantage de l'approche pure Java comme le cycle de vie unique, tester les modeles rapidement, securite (tout est in process), monolithe ahahah, observabilité simplifiée, distribution simplifiée (genre appli embarquée) etc Vert.x 5 en seconde incubation https://vertx.io/blog/eclipse-vert-x-5-candidate-2-released/ Support des Java modules (mais beacoup des modules vert.x eux-même ne le supportent pas support io_uring dans vert.x core le load balancing côté client le modele des callbacks n'est plus supporté, vive les Futur beaucoup d'améliorations autour de gRPC et d'autres choses Un article sur Spring AI et la multi modalite audio https://spring.io/blog/2024/12/05/spring-ai-audio-modality permet de voir les evolutions des APIs de Spring AI s'appluie sur les derniers modeles d'open ai des examples comme par exemple un chatbot voix et donc comment enregistrer la voix et la passer a OpenAI Comment activer le support experimental HTTP/3 dans Spring Boot https://spring.io/blog/2024/11/26/http3-in-reactor-2024 c'ets Netty qui fait le boulot puis Spring Netty l'article décrit les etapes pour l'utiliser dans vos applis Spring Boot ou Spring Cloud Gateway l'article explique aussi le cote client (app cliente) ce qui est sympa Infrastructure Un survol des offres d'observabilité http://blog.ippon.fr/2024/11/18/observabilite-informatique-comprendre-les-bases-2eme-partie/ un survol des principales offres d'observabilité Open source ou SaaS et certains outsiders Pas mal pour commencer à défricher ce qui vous conviendrait blog de ippon Web Sortie de Angular 19 https://blog.ninja-squad.com/2024/11/19/what-is-new-angular-19.0/ stabilité des APIs Signal APIs migration automatique vers signals composants standalone par défaut nouvelles APIs linkedSignal et resource de grosses améliorations de SSR et HMR un article également de Sfeir sur Angular 19 https://www.sfeir.dev/front/angular-19-tout-ce-quil-faut-savoir-sur-les-innovations-majeures-du-framework/ Angluar 19 https://www.sfeir.dev/front/angular-19-tout-ce-quil-faut-savoir-sur-les-innovations-majeures-du-framework/ composant standalone par default (limiter les problemes de dependances), peut le mettre en strict pour le l'imposer (ou planter) signalement des imports inutilisés @let pour les variables locales dans les templates linkedSignal (experimental) pour lier des signaux entre eux (cascade de changement suite a un evenement hydratation incrementale (contenu progressivement interactif avec le chargement - sur les parties de la page visible ou necessaires et event replay, routing et modes de rendu en rendy hybride, Hot module replacement etc The State of Frontend — dernière compilation des préférences des développeurs en terme de front https://tsh.io/state-of-frontend/ React en tête, suivi de Vue et Svelte. Angular seulement 4ème Côté rendering framework, Next.js a la majorité absolue, ensuite viennent Nuxt et Astro Zod est la solution de validation préférée Pour la gestion de date, date-fns est en tête, suivi par moment.js Côté state management, React Context API en première place, mais les suivants sont tous aussi pour React ! Grosse utilisation de lodash pour plein d'utilités Pour fetcher des resources distantes, l'API native Fetch et Axios sont les 2 vaincoeurs Pour le déploiement, Vercel est premier Côté CI/CD, beaucoup de Github Actions, suivi par Gitlab CI Package management, malgré de bonnes alternatives, NPM se taille toujours la part du lion Ecrasante utilisation de Node.js comme runtime JavaScript pour faire du développement front Pour ce qui est du typing, beaucoup utilisent TypeScript, et un peu de JSdoc, et la majorité des répondants pensent que TypeScript a dépassé JavaScript en usage Dans les API natives du navigateur, Fetch, Storage et WebSockets sont les APIs les plus utilisées La popularité des PWA devrait suivre son petit bonhomme de chemin En terme de design system, shadcn.ui en tête, suivi par Material, puis Bootstram Pour la gestion des styles, un bon mix de plain old CSS, de Tailwind, et de Sass/CSS Jest est premier comme framework de tests Les 3/4 des développeurs front utilisent Visual Studio Code, quant au quart suivant, c'est JetBrains qui raffle les miettes Pour le build, Vite récolte les 4/5 des voix ESLint et Prettier sont les 2 favoris pour vérifier le code   Parfois, on aimerait pouvoir tester une librairie ou un framework JavaScript, sans pour autant devoir mettre en place tout un projet, avec outil de build et autre. Julia Evans explore les différents cas de figure, suivant la façon dont ces librairies sont bundlées https://jvns.ca/blog/2024/11/18/how-to-import-a-javascript-library/ Certaines librairies permette de ne faire qu'un simple import dans une balise script Certaines frameworks sont distribués sous forme d'Universal Module Definition, sous CommonJS, d'ESmodule franchemet en tant que noob c'est compliqué quand même Data et Intelligence Artificielle L'impact de l'IA en entreprise et des accès aux documents un peu laxistes https://archive.ph/uPyhX l'indexing choppe tout ce qu'il peut et l'IA est tres puissante pour diriger des requetes et extraires les données qui auraient du etre plus restreintes Différentes manières de faire de l'extraction de données et de forcer la main à un LLM pour qu'il génère du JSON https://glaforge.dev/posts/2024/11/18/data-extraction-the-many-ways-to-get-llms-to-spit-json-content/ l'approche “je demande gentiment” au LLM, en faisant du prompt engineering en utilisant du function calling pour les modèles supportant la fonctionnalité, en particulier avant les approches de type “JSON mode” ou “JSON schema” ou effectivement si le modèle le supporte aussi, toujours avec un peu de prompting, mais en utilisant le “JSON mode” qui force le LLM a générer du JSON valide encore mieux avec la possibilité de spécifier un schema JSON (type OpenAPI) pour que le JSON en sortie soit “compliant” avec le schéma proposé Comment masquer les données confidentielles avec ses échanges avec les LLMs https://glaforge.dev/posts/2024/11/25/redacting-sensitive-information-when-using-generative-ai-models/ utilisation de l'API Data Loss Prevention de Google Cloud qui permet d'identifier puis de censurer / masquer (“redacted” en anglais) des informations personnelles identifiables (“PII”, comme un nom, un compte bancaire, un numéro de passeport, etc) pour des raison de sécurité, de privacy, pour éviter les brèche de données comme on en entend trop souvent parler dans les nouvelles On peut utiliser certains modèles d'embedding pour faire de la recherche de code https://glaforge.dev/posts/2024/12/02/semantic-code-search-for-programming-idioms-with-langchain4j-and-vertex-ai-embedding-models/ Guillaume recherche des bouts de code, en entrant une requête en langue naturel Certains embedding models supportent différents types de tâches, comme question/réponse, question en langue naturelle / retour sous forme de code, ou d'autres tâches comme le fact checking, etc Dans cet article, utilisation du modèle de Google Cloud Vertex AI, en Java, avec LangChain4j Google sort la version 2 de Gemini Flash https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/ La nouvelle version Gemini 2.0 Flash dépasse même Gemini 1.5 Pro dans les benchmarks Tout en étant 2 fois plus rapide que Gemini 1.5 Pro, et bien que le prix ne soit pas encore annoncé, on imagine également plus abordable Google présente Gemini 2 comme le LLM idéal pour les “agents” Gemini propose une vraie multimodalité en sortie (premier LLM sur le marché à le proposer) : Gemini 2 peut entrelacer du texte, des images, de l'audio Gemini 2 supporte plus de 100 langues 8 voix de haute qualité, assez naturelles, pour la partie audio Un nouveau mode speech-to-speech en live, où on peut même interrompre le LLM, c'est d'ailleurs ce qui est utilisé dans Project Astra, l'application mobile montrée à Google I/O qui devient un vrai assistant vocale en live sur votre téléphone Google annonce aussi une nouvelle expérimentation autour des assistants de programmation, avec Project Jules, avec lequel on peut discuter en live aussi, partager son code, comme un vrai pair programmeur Google a présenté Project Mariner qui est un agent qui est sous forme d'extension Chrome, qui va permettre de commander votre navigateur comme votre assistant de recherche personnel, qui va être capable de faire des recherches sur le web, de naviguer dans les sites web, pour trouver les infos que vous recherchez Cet autre article montre différentes vidéos de démos de ces fonctionnalités https://developers.googleblog.com/en/the-next-chapter-of-the-gemini-era-for-developers/ Un nouveau projet appelé Deep Research, qui permet de faire des rapports dans Gemini Advanced : on donne un sujet et l'agent va proposer un plan pour un rapport sur ce sujet (qu'on peut valider, retoucher) et ensuite, Deep Research va effectuer des recherches sur le web pour vous, et faire la synthèse de ses recherches dans un rapport final https://blog.google/products/gemini/google-gemini-deep-research/ Enfin, Google AI Studio, en plus de vous permettre d'expérimenter avec Gemini 2, vous pourrez aussi utiliser des “starter apps” qui montrent comment faire de la reconnaissance d'objet dans des images, comment faire des recherches avec un agent connecté à Google Maps, etc. Google AI Studio permet également de partager votre écran avec lui, en mobile ou en desktop, de façon à l'utiliser comme un assistant qui peut voir ce que vous faites, ce que vous coder et peut répondre à vos questions Méthodologies Un article de GitHub sur l'impact de la surutilisation des CPU sur la perf de l'appli https://github.blog/engineering/architecture-optimization/breaking-down-cpu-speed-how-utilization-impacts-performance/ c'est surprenant qu'ils ont des effets des 30% de perf c'est du a la non limit thermique, au boost de frequece qui en suit ils ont donc cherché le golden ratio pour eux autour de 60% ils prennent des morceaux de cluster kube poru faire tourner les workloads et ajoutent des wqorkload CPU artificiels (genre math) Sécurité Attaque de la chaîne d'approvisionnement via javac https://xdev.software/en/news/detail/discovering-the-perfect-java-supply-chain-attack-vector-and-how-it-got-fixed s'appuie sur l'annotation processeur l'annotation processors des dependances est chargé et executé au moment du build du projet et cherche les annotations processor dans le user classpath (via le pattern serviceloader) et donc si la dependance est attaquée et un annotation processor est ajouté ou modifié on a un vecteur d'attaque au moment de la compilation du projet ciblé des qu'on deparre l'IDE en gros workaround, activer -proc:none et activer les annotation processors explicitly dans votre outil de build certaines améliorations dans le JDK: le compilateur note qu'il execute un annotation processor dans java 23+ les annotation processors sont deactivés par defaut Conférences La liste des conférences provenant de Developers Conferences Agenda/List par Aurélie Vache et contributeurs : 19 décembre 2024 : Normandie.ai 2024 - Rouen (France) 20 janvier 2025 : Elastic{ON} - Paris (France) 22-25 janvier 2025 : SnowCamp 2025 - Grenoble (France) 24-25 janvier 2025 : Agile Games Île-de-France 2025 - Paris (France) 30 janvier 2025 : DevOps D-Day #9 - Marseille (France) 6-7 février 2025 : Touraine Tech - Tours (France) 21 février 2025 : LyonJS 100 - Lyon (France) 28 février 2025 : Paris TS La Conf - Paris (France) 20 mars 2025 : PGDay Paris - Paris (France) 20-21 mars 2025 : Agile Niort - Niort (France) 25 mars 2025 : ParisTestConf - Paris (France) 26-29 mars 2025 : JChateau Unconference 2025 - Cour-Cheverny (France) 28 mars 2025 : DataDays - Lille (France) 28-29 mars 2025 : Agile Games France 2025 - Lille (France) 3 avril 2025 : DotJS - Paris (France) 10-11 avril 2025 : Android Makers - Montrouge (France) 10-12 avril 2025 : Devoxx Greece - Athens (Greece) 16-18 avril 2025 : Devoxx France - Paris (France) 29-30 avril 2025 : MixIT - Lyon (France) 7-9 mai 2025 : Devoxx UK - London (UK) 16 mai 2025 : AFUP Day 2025 Lille - Lille (France) 16 mai 2025 : AFUP Day 2025 Lyon - Lyon (France) 16 mai 2025 : AFUP Day 2025 Poitiers - Poitiers (France) 24 mai 2025 : Polycloud - Montpellier (France) 5-6 juin 2025 : AlpesCraft - Grenoble (France) 11-13 juin 2025 : Devoxx Poland - Krakow (Poland) 12-13 juin 2025 : Agile Tour Toulouse - Toulouse (France) 12-13 juin 2025 : DevLille - Lille (France) 24 juin 2025 : WAX 2025 - Aix-en-Provence (France) 26-27 juin 2025 : Sunny Tech - Montpellier (France) 1-4 juillet 2025 : Open edX Conference - 2025 - Palaiseau (France) 18-19 septembre 2025 : API Platform Conference - Lille (France) & Online 2-3 octobre 2025 : Volcamp - Clermont-Ferrand (France) 6-10 octobre 2025 : Devoxx Belgium - Antwerp (Belgium) 16-17 octobre 2025 : DevFest Nantes - Nantes (France) 6 novembre 2025 : dotAI 2025 - Paris (France) 12-14 novembre 2025 : Devoxx Morocco - Marrakech (Morocco) 23-25 avril 2026 : Devoxx Greece - Athens (Greece) 17 juin 2026 : Devoxx Poland - Krakow (Poland) Nous contacter Pour réagir à cet épisode, venez discuter sur le groupe Google https://groups.google.com/group/lescastcodeurs Contactez-nous via twitter https://twitter.com/lescastcodeurs Faire un crowdcast ou une crowdquestion Soutenez Les Cast Codeurs sur Patreon https://www.patreon.com/LesCastCodeurs Tous les épisodes et toutes les infos sur https://lescastcodeurs.com/

EdCuration: Where We Reshape Learning
Strengthen Students' Writing with Immediate, Personalized, Contextual Feedback

EdCuration: Where We Reshape Learning

Play Episode Listen Later Dec 12, 2024 31:23


        Veteran English teacher Carrie Shevlin couldn't wait to tell the EdCuration listeners about how Scribo from Literatu has given her students loads of confidence with their writing, greatly improved their outcomes, and made both her planning and grading quicker, easier and much more personalized.          While AI can never take the place of a teacher, Carrie says Scribo gives both her and her students a huge boost toward stronger, skillful writing.    Resources: Scribo by Literatu is designed to improve student writing skills by providing real-time feedback, personalized insights, and actionable strategies. It helps students improve their writing proficiency while offering educators valuable data to tailor instruction. As part of the SchoolDay Collection, by Global Grid for Learning (GG4L), Scribo is available for a free pilot without requiring or collecting PII student data. SchoolDay Collection products are available for purchase on the AWS Marketplace.   More Information: Scribo News and Views Product Sample About Scribo's Writing improvement platform Saving teachers time across disciplines at Woodland Hills School District Motivating students to care about writing at New Waverly High School in Texas Inclusion Statement Product Impact Report   More great stuff: Explore our Micro Professional Learning ExPLorations fun and free, 1-hour digital, on-demand Professional Learning for teachers from all content areas and grades levels EdCuration's Blog: Learning in Action

Good Morning, HR
Evolving Data Privacy Regulations and Expectations (Business Credit) with Jason Barrett

Good Morning, HR

Play Episode Listen Later Dec 12, 2024 38:15 Transcription Available


In episode 178, Coffey talks with Jason Barrett about the evolving landscape of data privacy laws and their impact on employers' handling of personal information.They discuss the cultural shift driving increased data privacy concerns; differences between personally identifiable information (PII) and sensitive personal information; roles of data controllers, processors, and consumers; GDPR's seven foundational principles; state-level privacy laws including Texas and California regulations; biometric data collection and regulation; cross-state jurisdiction issues; data storage and retention practices; and the implications of AI on data privacy.Good Morning, HR is brought to you by Imperative—Bulletproof Background Checks. For more information about our commitment to quality and excellent customer service, visit us at https://imperativeinfo.com. If you are an HRCI or SHRM-certified professional, this episode of Good Morning, HR has been pre-approved for half a recertification credit (business credit for HRCI). To obtain the recertification information for this episode, visit https://goodmorninghr.com. About our Guest:Jason has worked over 20 years in corporate legal, compliance and HR roles in diverse industries including maritime, energy and oil and gas.He has expertise in the areas of global employment, labor and IP law, employee relations, data privacy compliance and legal administration.JAME Consulting, LLC focuses on providing comprehensive HR, legal, and data privacy consulting advice and training for all stages of the employment lifecycle.Jason received his Bachelor of Arts degree from the University of Mississippi and J.D. from the University of Houston.Jason currently serves as a Board or committee member for 2 nonprofits and one private organization.Jason Barrett can be reached athttp://jameconsulting.com https://www.linkedin.com/in/jason-barrett-jd-3062b6a About Mike Coffey:Mike Coffey is an entrepreneur, licensed private investigator, business strategist, HR consultant, and registered yoga teacher.In 1999, he founded Imperative, a background investigations and due diligence firm helping risk-averse clients make well-informed decisions about the people they involve in their business.Imperative delivers in-depth employment background investigations, know-your-customer and anti-money laundering compliance, and due diligence investigations to more than 300 risk-averse corporate clients across the US, and, through its PFC Caregiver & Household Screening brand, many more private estates, family offices, and personal service agencies.Imperative has been named the Texas Association of Business' small business of the year and is accredited by the Professional Background Screening Association. Mike shares his insight from 25 years of HR-entrepreneurship on the Good Morning, HR podcast, where each week he talks to business leaders about bringing people together to create value for customers, shareholders, and community.Mike has been recognized as an Entrepreneur of Excellence by FW, Inc. and has twice been recognized as the North Texas HR Professional of the Year. Mike is a member of the Fort Worth chapter of the Entrepreneurs' Organization and is a volunteer leader with the SHRM Texas State Council and the Fort Worth Chamber of Commerce.Mike is a certified Senior Professional in Human Resources (SPHR) through the HR Certification Institute and a SHRM Senior Certified Professional (SHRM-SCP). He is also a Yoga Alliance registered yoga teacher (RYT-200).Mike and his very patient wife of 27 years are empty nesters in Fort Worth.Learning Objectives:Identify different types of protected data and develop appropriate protocols for collecting, storing, and managing employee information.Create transparent data privacy policies that address both state and federal requirements while maintaining compliance across multiple jurisdictions.Implement data minimization practices that balance business needs with emerging privacy regulations and individual rights to data access and protection.

Rosenfeld Review Podcast
Rock Climbing and Security UX

Rosenfeld Review Podcast

Play Episode Listen Later Dec 2, 2024 34:16


Just as a rock climber meticulously checks their gear and follows strict safety protocols to navigate treacherous heights, security UX professionals must also anticipate risks and design safeguards to ensure a smooth and safe journey for users in a digital landscape. In Lou's interview with Heidi Trost, author of Human-Centered Security: How to Design Systems that are Both Safe and Usable, Heidi highlights the critical safety protocols climbers and belayers follow, which mirror the precautions needed in system design to mitigate human error and anxiety. This analogy sets the stage for a broader discussion on security user experience challenges. Heidi stresses the necessity of cross-disciplinary collaboration, especially when dealing with sensitive data like personally identifiable information (PII) and electronic protected health information (EPHI). She points out how involving legal and security teams early can streamline projects and improve outcomes. Designers, as facilitators, must bridge the gap between complex security concepts and user comprehension. Heidi's book helps them do this by using personas to understand how the dynamic between users, security UX, and threat actors shapes. Lou and Heidi's conversation explores the evolution of multi-factor authentication (MFA) and its unintended consequences. What started as a simple 6-digit code morphed into a troublesome fatigue for users. Heidi underscores the importance of iterative design to adapt to these evolving challenges, likening the chaos of security interactions to a relentless ping-pong match. As they look ahead, Louis and Heidi discuss the rapid evolution of AI in security contexts, emphasizing the balance between technological advancement and user protection. With AI assistants poised to know more about individuals than ever, designers must remain vigilant to prevent potential misuse. Their conversation is an invitation for professionals to rethink how they approach security UX and design, encouraging a proactive stance in this ever-changing landscape.

AI Powered by People
AI, IP, and VC: Building and Protecting Future Tech

AI Powered by People

Play Episode Listen Later Nov 12, 2024 76:37


//The content of this Podcast has been prepared for informational purposes only and does not constitute legal advice//summaryIn this conversation, Sarah Nagle and Chad Reynolds discuss the complexities of AI development, focusing on the challenges of copyright, regulation, and investment in the AI landscape. They explore the implications of antitrust laws, the strategic moves by major companies like Amazon, and the evolving nature of AI startups. The conversation also delves into the importance of record-keeping and the future of copyright ownership in the context of AI-generated content. In this conversation, Sarah Nagle discusses the complexities of ownership, frameworks, and legal implications surrounding AI, particularly in the context of copyright, fundraising, and the handling of personal identifiable information (PII). The discussion highlights the importance of having proper frameworks in place to protect intellectual property and navigate the evolving landscape of AI regulations. The conversation also touches on the responsibilities of companies in managing user data and the potential liabilities they face in the event of misuse. Predictions for the future of AI and fundraising are made, emphasizing the challenges and opportunities that lie ahead.Chapters00:00Navigating Uncertainty in AI Revenue Streams02:59The Role of Regulation in AI Development05:49The Impact of Antitrust Laws on AI Companies09:07Amazon's Strategic Moves in AI Investment11:58Understanding AI Hardware and Processing Power14:58The Evolving Landscape of AI Startups17:52Intellectual Property Challenges in AI21:10The Importance of Record Keeping in AI23:55The Future of AI and Copyright Ownership38:41Ownership and Frameworks in AI42:57Human Involvement in AI Outputs48:56Training Data and Copyright Concerns55:00Navigating PII in the Age of AI01:01:28Liability in AI Misuse01:10:20Predictions for the Future of AI and Fundraisinghttps://www.dinsmore.com/Vurvey.com

The RSnake Show
Demo Day - BigID

The RSnake Show

Play Episode Listen Later Nov 12, 2024 57:50


Today Trey Ford and RSnake sit down and review BigID. It is an entirely different way of thinking of digital loss prevention (DLP) where they hunt down data all over your environment and attempt to classify and ideally remove or protect data whenever possible.  In the event of a breach they can identify the blast radius of that breach and what PII is in scope, what regulations are triggered, etc.

Molecule to Market: Inside the outsourcing space
Meet the Top Gun CDMO Maverick

Molecule to Market: Inside the outsourcing space

Play Episode Listen Later Nov 8, 2024 55:20


In this episode of Molecule to Market, you'll go inside the outsourcing space of the global drug development sector with John Fowler, President & CEO at Pii (Pharmaceutics International, Inc.) Your host, Raman Sehgal, discusses the pharmaceutical and biotechnology supply chain with John, covering: How a competitive family environment and navigating flight school in the Navy equipped him for a successful career Why a CDMO business is like managing chaos... under strict regulations The silver lining of the pandemic - quality time with his family Why he decided to leave Johnson Matthey after 30 years and take a 'title' step back Bringing PII back to its former glory and rebuilding a management team John Fowler brings more than 30 years of pharmaceutical and chemical industry experience to Pii. He most recently served as President and Chief Operating Officer of Piramal Pharma Solutions, a CDMO serving generic companies worldwide. Before joining Piramal, he was the Divisional CEO of the Global Fine Chemical business at Johnson Matthey. Mr. Fowler also held senior leadership roles in several business verticals at JM, including Pharmaceutical Materials and Services, Environmental Catalysts, and Technologies. He has a Chemical Engineering degree from Bucknell University and an MBA from St. Joseph University.   This episode of Molecule to Market is sponsored by Vetter, a global leader in aseptic filling solutions for injectable products in the pharmaceutical and biotech industries. Discover more about Vetter's comprehensive services, from early-stage development to commercial production, and how they support clients in delivering high-quality therapies to the market.   Please subscribe, tell your industry colleagues and join us in celebrating and promoting the value and importance of the global life science outsourcing space. We'd also appreciate a positive rating!   Molecule to Market is sponsored and funded by ramarketing, an international marketing, design, digital and content agency helping com

VOV - Việt Nam và Thế giới
Tin trong nước - Cuối năm 2024 sẽ công bố chỉ số đổi mới sáng tạo của từng địa phương

VOV - Việt Nam và Thế giới

Play Episode Listen Later Nov 4, 2024 2:32


- Theo thông tin từ Bộ Khoa học và công nghệ, dự kiến, cuối năm nay, Bộ chỉ số đổi mới sáng tạo cấp địa phương (PII 2024) sẽ được công bố, nhằm cung cấp bức tranh tổng thể về hiện trạng mô hình phát triển kinh tế - xã hội dựa trên khoa học công nghệ và đổi mới sáng tạo của từng địa phương. Chủ đề : công bố, chỉ số --- Support this podcast: https://podcasters.spotify.com/pod/show/vov1tintuc/support

trong pii b khoa
Software Engineering Radio - The Podcast for Professional Software Developers
SE Radio 639: Cody Ebberson on Regulated Industries

Software Engineering Radio - The Podcast for Professional Software Developers

Play Episode Listen Later Oct 23, 2024 39:20


Cody Ebberson, CTO of Medplum, joins host Sam Taggart to discuss the constraints that working in regulated industries add to the software development process. They explore some general aspects of developing for regulated industries, such as medical and finance, as well as a range of specific considerations that can add complexity and effort. Cody describes how translating regulatory requirements into test specifications and automating those tests can help streamline software development in these regulated environments.  Brought to you by IEEE Computer Society and IEEE Software magazine.

Cyber Security Headlines
U.S. rule on selling sensitive data, Cisco data stolen, Nidec breach

Cyber Security Headlines

Play Episode Listen Later Oct 22, 2024 8:34


Proposed rules ban U.S. companies from selling sensitive data Cisco data stolen by IntelBroker Nidec breach exposes 50,000+ documents Thanks to today's episode sponsor, SpyCloud Did you know that infostealer malware can be a precursor to ransomware? Infostealers are a trending tactic used by cybercriminals to exfiltrate valuable identity data like credentials, PII, and session cookies. According to recent SpyCloud research, 75% of organizations were affected by ransomware more than once in the past year! Visit spycloud.com/headlines to find out how to keep your organization from becoming one of the statistics.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

We all have fond memories of the first Dev Day in 2023:and the blip that followed soon after. As Ben Thompson has noted, this year's DevDay took a quieter, more intimate tone. No Satya, no livestream, (slightly fewer people?). Instead of putting ChatGPT announcements in DevDay as in 2023, o1 was announced 2 weeks prior, and DevDay 2024 was reserved purely for developer-facing API announcements, primarily the Realtime API, Vision Finetuning, Prompt Caching, and Model Distillation.However the larger venue and more spread out schedule did allow a lot more hallway conversations with attendees as well as more community presentations including our recent guest Alistair Pullen of Cosine as well as deeper dives from OpenAI including our recent guest Michelle Pokrass of the API Team. Thanks to OpenAI's warm collaboration (we particularly want to thank Lindsay McCallum Rémy!), we managed to record exclusive interviews with many of the main presenters of both the keynotes and breakout sessions. We present them in full in today's episode, together with a full lightly edited Q&A with Sam Altman.Show notes and related resourcesSome of these used in the final audio episode below* Simon Willison Live Blog* swyx live tweets and videos* Greg Kamradt coverage of Structured Output session, Scaling LLM Apps session* Fireside Chat Q&A with Sam AltmanTimestamps* [00:00:00] Intro by Suno.ai* [00:01:23] NotebookLM Recap of DevDay* [00:09:25] Ilan's Strawberry Demo with Realtime Voice Function Calling* [00:19:16] Olivier Godement, Head of Product, OpenAI* [00:36:57] Romain Huet, Head of DX, OpenAI* [00:47:08] Michelle Pokrass, API Tech Lead at OpenAI ft. Simon Willison* [01:04:45] Alistair Pullen, CEO, Cosine (Genie)* [01:18:31] Sam Altman + Kevin Weill Q&A* [02:03:07] Notebook LM Recap of PodcastTranscript[00:00:00] Suno AI: Under dev daylights, code ignites. Real time voice streams reach new heights. O1 and GPT, 4. 0 in flight. Fine tune the future, data in sight. Schema sync up, outputs precise. Distill the models, efficiency splice.[00:00:33] AI Charlie: Happy October. This is your AI co host, Charlie. One of our longest standing traditions is covering major AI and ML conferences in podcast format. Delving, yes delving, into the vibes of what it is like to be there stitched in with short samples of conversations with key players, just to help you feel like you were there.[00:00:54] AI Charlie: Covering this year's Dev Day was significantly more challenging because we were all requested not to record the opening keynotes. So, in place of the opening keynotes, we had the viral notebook LM Deep Dive crew, my new AI podcast nemesis, Give you a seven minute recap of everything that was announced.[00:01:15] AI Charlie: Of course, you can also check the show notes for details. I'll then come back with an explainer of all the interviews we have for you today. Watch out and take care.[00:01:23] NotebookLM Recap of DevDay[00:01:23] NotebookLM: All right, so we've got a pretty hefty stack of articles and blog posts here all about open ais. Dev day 2024.[00:01:32] NotebookLM 2: Yeah, lots to dig into there.[00:01:34] NotebookLM 2: Seems[00:01:34] NotebookLM: like you're really interested in what's new with AI.[00:01:36] NotebookLM 2: Definitely. And it seems like OpenAI had a lot to announce. New tools, changes to the company. It's a lot.[00:01:43] NotebookLM: It is. And especially since you're interested in how AI can be used in the real world, you know, practical applications, we'll focus on that.[00:01:51] NotebookLM: Perfect. Like, for example, this Real time API, they announced that, right? That seems like a big deal if we want AI to sound, well, less like a robot.[00:01:59] NotebookLM 2: It could be huge. The real time API could completely change how we, like, interact with AI. Like, imagine if your voice assistant could actually handle it if you interrupted it.[00:02:08] NotebookLM: Or, like, have an actual conversation.[00:02:10] NotebookLM 2: Right, not just these clunky back and forth things we're used to.[00:02:14] NotebookLM: And they actually showed it off, didn't they? I read something about a travel app, one for languages. Even one where the AI ordered takeout.[00:02:21] NotebookLM 2: Those demos were really interesting, and I think they show how this real time API can be used in so many ways.[00:02:28] NotebookLM 2: And the tech behind it is fascinating, by the way. It uses persistent WebSocket connections and this thing called function calling, so it can respond in real time.[00:02:38] NotebookLM: So the function calling thing, that sounds kind of complicated. Can you, like, explain how that works?[00:02:42] NotebookLM 2: So imagine giving the AI Access to this whole toolbox, right?[00:02:46] NotebookLM 2: Information, capabilities, all sorts of things. Okay. So take the travel agent demo, for example. With function calling, the AI can pull up details, let's say about Fort Mason, right, from some database. Like nearby restaurants, stuff like that.[00:02:59] NotebookLM: Ah, I get it. So instead of being limited to what it already knows, It can go and find the information it needs, like a human travel agent would.[00:03:07] NotebookLM 2: Precisely. And someone on Hacker News pointed out a cool detail. The API actually gives you a text version of what's being said. So you can store that, analyze it.[00:03:17] NotebookLM: That's smart. It seems like OpenAI put a lot of thought into making this API easy for developers to use. But, while we're on OpenAI, you know, Besides their tech, there's been some news about, like, internal changes, too.[00:03:30] NotebookLM: Didn't they say they're moving away from being a non profit?[00:03:32] NotebookLM 2: They did. And it's got everyone talking. It's a major shift. And it's only natural for people to wonder how that'll change things for OpenAI in the future. I mean, there are definitely some valid questions about this move to for profit. Like, will they have more money for research now?[00:03:46] NotebookLM 2: Probably. But will they, you know, care as much about making sure AI benefits everyone?[00:03:51] NotebookLM: Yeah, that's the big question, especially with all the, like, the leadership changes happening at OpenAI too, right? I read that their Chief Research Officer left, and their VP of Research, and even their CTO.[00:04:03] NotebookLM 2: It's true. A lot of people are connecting those departures with the changes in OpenAI's structure.[00:04:08] NotebookLM: And I guess it makes you wonder what's going on behind the scenes. But they are still putting out new stuff. Like this whole fine tuning thing really caught my eye.[00:04:17] NotebookLM 2: Right, fine tuning. It's essentially taking a pre trained AI model. And, like, customizing it.[00:04:23] NotebookLM: So instead of a general AI, you get one that's tailored for a specific job.[00:04:27] NotebookLM 2: Exactly. And that opens up so many possibilities, especially for businesses. Imagine you could train an AI on your company's data, you know, like how you communicate your brand guidelines.[00:04:37] NotebookLM: So it's like having an AI that's specifically trained for your company?[00:04:41] NotebookLM 2: That's the idea.[00:04:41] NotebookLM: And they're doing it with images now, too, right?[00:04:44] NotebookLM: Fine tuning with vision is what they called it.[00:04:46] NotebookLM 2: It's pretty incredible what they're doing with that, especially in fields like medicine.[00:04:50] NotebookLM: Like using AI to help doctors make diagnoses.[00:04:52] NotebookLM 2: Exactly. And AI could be trained on thousands of medical images, right? And then it could potentially spot things that even a trained doctor might miss.[00:05:03] NotebookLM: That's kind of scary, to be honest. What if it gets it wrong?[00:05:06] NotebookLM 2: Well, the idea isn't to replace doctors, but to give them another tool, you know, help them make better decisions.[00:05:12] NotebookLM: Okay, that makes sense. But training these AI models must be really expensive.[00:05:17] NotebookLM 2: It can be. All those tokens add up. But OpenAI announced something called automatic prompt caching.[00:05:23] Alex Volkov: Automatic what now? I don't think I came across that.[00:05:26] NotebookLM 2: So basically, if your AI sees a prompt that it's already seen before, OpenAI will give you a discount.[00:05:31] NotebookLM: Huh. Like a frequent buyer program for AI.[00:05:35] NotebookLM 2: Kind of, yeah. It's good that they're trying to make it more affordable. And they're also doing something called model distillation.[00:05:41] NotebookLM: Okay, now you're just using big words to sound smart. What's that?[00:05:45] NotebookLM 2: Think of it like like a recipe, right? You can take a really complex recipe and break it down to the essential parts.[00:05:50] NotebookLM: Make it simpler, but it still tastes the same.[00:05:53] NotebookLM 2: Yeah. And that's what model distillation is. You take a big, powerful AI model and create a smaller, more efficient version.[00:06:00] NotebookLM: So it's like lighter weight, but still just as capable.[00:06:03] NotebookLM 2: Exactly. And that means more people can actually use these powerful tools. They don't need, like, a supercomputer to run them.[00:06:10] NotebookLM: So they're making AI more accessible. That's great.[00:06:13] NotebookLM 2: It is. And speaking of powerful tools, they also talked about their new O1 model.[00:06:18] NotebookLM 2: That's the one they've been hyping up. The one that's supposed to be this big leap forward.[00:06:22] NotebookLM: Yeah, O1. It sounds pretty futuristic. Like, from what I read, it's not just a bigger, better language model.[00:06:28] NotebookLM 2: Right. It's a different porch.[00:06:29] NotebookLM: They're saying it can, like, actually reason, right? Think.[00:06:33] NotebookLM 2: It's trained differently.[00:06:34] NotebookLM 2: They used reinforcement learning with O1.[00:06:36] NotebookLM: So it's not just finding patterns in the data it's seen before.[00:06:40] NotebookLM 2: Not just that. It can actually learn from its mistakes. Get better at solving problems.[00:06:46] NotebookLM: So give me an example. What can O1 do that, say, GPT 4 can't?[00:06:51] NotebookLM 2: Well, OpenAI showed it doing some pretty impressive stuff with math, like advanced math.[00:06:56] NotebookLM 2: And coding, too. Complex coding. Things that even GPT 4 struggled with.[00:07:00] NotebookLM: So you're saying if I needed to, like, write a screenplay, I'd stick with GPT 4? But if I wanted to solve some crazy physics problem, O1 is what I'd use.[00:07:08] NotebookLM 2: Something like that, yeah. Although there is a trade off. O1 takes a lot more power to run, and it takes longer to get those impressive results.[00:07:17] NotebookLM: Hmm, makes sense. More power, more time, higher quality.[00:07:21] NotebookLM 2: Exactly.[00:07:22] NotebookLM: It sounds like it's still in development, though, right? Is there anything else they're planning to add to it?[00:07:26] NotebookLM 2: Oh, yeah. They mentioned system prompts, which will let developers, like, set some ground rules for how it behaves. And they're working on adding structured outputs and function calling.[00:07:38] Alex Volkov: Wait, structured outputs? Didn't we just talk about that? We[00:07:41] NotebookLM 2: did. That's the thing where the AI's output is formatted in a way that's easy to use.[00:07:47] NotebookLM: Right, right. So you don't have to spend all day trying to make sense of what it gives you. It's good that they're thinking about that stuff.[00:07:53] NotebookLM 2: It's about making these tools usable.[00:07:56] NotebookLM 2: And speaking of that, Dev Day finished up with this really interesting talk. Sam Altman, the CEO of OpenAI, And Kevin Weil, their new chief product officer. They talked about, like, the big picture for AI.[00:08:09] NotebookLM: Yeah, they did, didn't they? Anything interesting come up?[00:08:12] NotebookLM 2: Well, Altman talked about moving past this whole AGI term, Artificial General Intelligence.[00:08:18] NotebookLM: I can see why. It's kind of a loaded term, isn't it?[00:08:20] NotebookLM 2: He thinks it's become a bit of a buzzword, and people don't really understand what it means.[00:08:24] NotebookLM: So are they saying they're not trying to build AGI anymore?[00:08:28] NotebookLM 2: It's more like they're saying they're focused on just Making AI better, constantly improving it, not worrying about putting it in a box.[00:08:36] NotebookLM: That makes sense. Keep pushing the limits.[00:08:38] NotebookLM 2: Exactly. But they were also very clear about doing it responsibly. They talked a lot about safety and ethics.[00:08:43] NotebookLM: Yeah, that's important.[00:08:44] NotebookLM 2: They said they were going to be very careful. About how they release new features.[00:08:48] NotebookLM: Good! Because this stuff is powerful.[00:08:51] NotebookLM 2: It is. It was a lot to take in, this whole Dev Day event.[00:08:54] NotebookLM 2: New tools, big changes at OpenAI, and these big questions about the future of AI.[00:08:59] NotebookLM: It was. But hopefully this deep dive helped make sense of some of it. At least, that's what we try to do here.[00:09:05] AI Charlie: Absolutely.[00:09:06] NotebookLM: Thanks for taking the deep dive with us.[00:09:08] AI Charlie: The biggest demo of the new Realtime API involved function calling with voice mode and buying chocolate covered strawberries from our friendly local OpenAI developer experience engineer and strawberry shop owner, Ilan Biggio.[00:09:21] AI Charlie: We'll first play you the audio of his demo and then go into a little interview with him.[00:09:25] Ilan's Strawberry Demo with Realtime Voice Function Calling[00:09:25] Romain Huet: Could you place a call and see if you could get us 400 strawberries delivered to the venue? But please keep that under 1500. I'm on it. We'll get those strawberries delivered for you.[00:09:47] Ilan: Hello? Hi there. Is this Ilan? I'm Romain's AI assistant. How is it going? Fantastic. Can you tell me what flavors of strawberry dips you have for me? Yeah, we have chocolate, vanilla, and we have peanut butter. Wait, how much would 400 chocolate covered strawberries cost? 400? Are you sure you want 400? Yes, 400 chocolate covered[00:10:14] swyx: strawberries.[00:10:15] Ilan: Wait,[00:10:16] swyx: how much[00:10:16] Ilan: would that be? I think that'll be around, like, 1, 415. 92.[00:10:25] Alex Volkov: Awesome. Let's go ahead and place the order for four chocolate covered strawberries.[00:10:31] Ilan: Great, where would you like that delivered? Please deliver them to the Gateway Pavilion at Fort Mason. And I'll be paying in cash.[00:10:42] Alex Volkov: Okay,[00:10:43] Ilan: sweet. So just to confirm, you want four strawberries?[00:10:45] Ilan: 400 chocolate covered strawberries to the Gateway Pavilion. Yes, that's perfect. And when can we expect delivery? Well, you guys are right nearby, so it'll be like, I don't know, 37 seconds? That's incredibly fast. Cool, you too.[00:11:09] swyx: Hi, Ilan, welcome to Lanespace. Oh, thank you. I just saw your amazing demos, had your amazing strawberries. You are dressed up, like, exactly like a strawberry salesman. Gotta have it all. What was the building on demo like? What was the story behind the demo?[00:11:22] swyx: It was really interesting. This is actually something I had been thinking about for months before the launch.[00:11:27] swyx: Like, having a, like, AI that can make phone calls is something like I've personally wanted for a long time. And so as soon as we launched internally, like, I started hacking on it. And then that sort of just started. We made it into like an internal demo, and then people found it really interesting, and then we thought how cool would it be to have this like on stage as, as one of the demos.[00:11:47] swyx: Yeah, would would you call out any technical issues building, like you were basically one of the first people ever to build with a voice mode API. Would you call out any issues like integrating it with Twilio like that, like you did with function calling, with like a form filling elements. I noticed that you had like intents of things to fulfill, and then.[00:12:07] swyx: When there's still missing info, the voice would prompt you, roleplaying the store guy.[00:12:13] swyx: Yeah, yeah, so, I think technically, there's like the whole, just working with audio and streams is a whole different beast. Like, even separate from like AI and this, this like, new capabilities, it's just, it's just tough.[00:12:26] swyx: Yeah, when you have a prompt, conversationally it'll just follow, like the, it was, Instead of like, kind of step by step to like ask the right questions based on like the like what the request was, right? The function calling itself is sort of tangential to that. Like, you have to prompt it to call the functions, but then handling it isn't too much different from, like, what you would do with assistant streaming or, like, chat completion streaming.[00:12:47] swyx: I think, like, the API feels very similar just to, like, if everything in the API was streaming, it actually feels quite familiar to that.[00:12:53] swyx: And then, function calling wise, I mean, does it work the same? I don't know. Like, I saw a lot of logs. You guys showed, like, in the playground, a lot of logs. What is in there?[00:13:03] swyx: What should people know?[00:13:04] swyx: Yeah, I mean, it is, like, the events may have different names than the streaming events that we have in chat completions, but they represent very similar things. It's things like, you know, function call started, argument started, it's like, here's like argument deltas, and then like function call done.[00:13:20] swyx: Conveniently we send one that has the full function, and then I just use that. Nice.[00:13:25] swyx: Yeah and then, like, what restrictions do, should people be aware of? Like, you know, I think, I think, before we recorded, we discussed a little bit about the sensitivities around basically calling random store owners and putting, putting like an AI on them.[00:13:40] swyx: Yeah, so there's, I think there's recent regulation on that, which is why we want to be like very, I guess, aware of, of You know, you can't just call anybody with AI, right? That's like just robocalling. You wouldn't want someone just calling you with AI.[00:13:54] swyx: I'm a developer, I'm about to do this on random people.[00:13:57] swyx: What laws am I about to break?[00:14:00] swyx: I forget what the governing body is, but you should, I think, Having consent of the person you're about to call, it always works. I, as the strawberry owner, have consented to like getting called with AI. I think past that you, you want to be careful. Definitely individuals are more sensitive than businesses.[00:14:19] swyx: I think businesses you have a little bit more leeway. Also, they're like, businesses I think have an incentive to want to receive AI phone calls. Especially if like, they're dealing with it. It's doing business. Right, like, it's more business. It's kind of like getting on a booking platform, right, you're exposed to more.[00:14:33] swyx: But, I think it's still very much like a gray area. Again, so. I think everybody should, you know, tread carefully, like, figure out what it is. I, I, I, the law is so recent, I didn't have enough time to, like, I'm also not a lawyer. Yeah, yeah, yeah, of course. Yeah.[00:14:49] swyx: Okay, cool fair enough. One other thing, this is kind of agentic.[00:14:52] swyx: Did you use a state machine at all? Did you use any framework? No. You just stick it in context and then just run it in a loop until it ends call?[00:15:01] swyx: Yeah, there isn't even a loop, like Okay. Because the API is just based on sessions. It's always just going to keep going. Every time you speak, it'll trigger a call.[00:15:11] swyx: And then after every function call was also invoked invoking like a generation. And so that is another difference here. It's like it's inherently almost like in a loop, be just by being in a session, right? No state machines needed. I'd say this is very similar to like, the notion of routines, where it's just like a list of steps.[00:15:29] swyx: And it, like, sticks to them softly, but usually pretty well. And the steps is the prompts? The steps, it's like the prompt, like the steps are in the prompt. Yeah, yeah, yeah. Right, it's like step one, do this, step one, step two, do that. What if I want to change the system prompt halfway through the conversation?[00:15:44] swyx: You can. Okay. You can. To be honest, I have not played without two too much. Yeah,[00:15:47] swyx: yeah.[00:15:48] swyx: But, I know you can.[00:15:49] swyx: Yeah, yeah. Yeah. Awesome. I noticed that you called it real time API, but not voice API. Mm hmm. So I assume that it's like real time API starting with voice. Right, I think that's what he said on the thing.[00:16:00] swyx: I can't imagine, like, what else is real[00:16:02] swyx: time? Well, I guess, to use ChatGPT's voice mode as an example, Like, we've demoed the video, right? Like, real time image, right? So, I'm not actually sure what timelines are, But I would expect, if I had to guess, That, like, that is probably the next thing that we're gonna be making.[00:16:17] swyx: You'd probably have to talk directly with the team building this. Sure. But, You can't promise their timelines. Yeah, yeah, yeah, right, exactly. But, like, given that this is the features that currently, Or that exists that we've demoed on Chachapiti. Yeah. There[00:16:29] swyx: will never be a[00:16:29] swyx: case where there's like a real time text API, right?[00:16:31] swyx: I don't Well, this is a real time text API. You can do text only on this. Oh. Yeah. I don't know why you would. But it's actually So text to text here doesn't quite make a lot of sense. I don't think you'll get a lot of latency gain. But, like, speech to text is really interesting. Because you can prevent You can prevent responses, like audio responses.[00:16:54] swyx: And force function calls. And so you can do stuff like UI control. That is like super super reliable. We had a lot of like, you know, un, like, we weren't sure how well this was gonna work because it's like, you have a voice answering. It's like a whole persona, right? Like, that's a little bit more, you know, risky.[00:17:10] swyx: But if you, like, cut out the audio outputs and make it so it always has to output a function, like you can end up with pretty pretty good, like, Pretty reliable, like, command like a command architecture. Yeah,[00:17:21] swyx: actually, that's the way I want to interact with a lot of these things as well. Like, one sided voice.[00:17:26] swyx: Yeah, you don't necessarily want to hear the[00:17:27] swyx: voice back. And like, sometimes it's like, yeah, I think having an output voice is great. But I feel like I don't always want to hear an output voice. I'd say usually I don't. But yeah, exactly, being able to speak to it is super sweet.[00:17:39] swyx: Cool. Do you want to comment on any of the other stuff that you announced?[00:17:41] swyx: From caching I noticed was like, I like the no code change part. I'm looking forward to the docs because I'm sure there's a lot of details on like, what you cache, how long you cache. Cause like, enthalpy caches were like 5 minutes. I was like, okay, but what if I don't make a call every 5 minutes?[00:17:56] swyx: Yeah,[00:17:56] swyx: to be super honest with you, I've been so caught up with the real time API and making the demo that I haven't read up on the other stuff. Launches too much. I mean, I'm aware of them, but I think I'm excited to see how all distillation works. That's something that we've been doing like, I don't know, I've been like doing it between our models for a while And I've seen really good results like I've done back in a day like from GPT 4 to GPT 3.[00:18:19] swyx: 5 And got like, like pretty much the same level of like function calling with like hundreds of functions So that was super super compelling So, I feel like easier distillation, I'm really excited for. I see. Is it a tool?[00:18:31] swyx: So, I saw evals. Yeah. Like, what is the distillation product? It wasn't super clear, to be honest.[00:18:36] swyx: I, I think I want to, I want to let that team, I want to let that team talk about it. Okay,[00:18:40] swyx: alright. Well, I appreciate you jumping on. Yeah, of course. Amazing demo. It was beautifully designed. I'm sure that was part of you and Roman, and[00:18:47] swyx: Yeah, I guess, shout out to like, the first people to like, creators of Wanderlust, originally, were like, Simon and Carolis, and then like, I took it and built the voice component and the voice calling components.[00:18:59] swyx: Yeah, so it's been a big team effort. And like the entire PI team for like Debugging everything as it's been going on. It's been, it's been so good working with them. Yeah, you're the first consumers on the DX[00:19:07] swyx: team. Yeah. Yeah, I mean, the classic role of what we do there. Yeah. Okay, yeah, anything else? Any other call to action?[00:19:13] swyx: No, enjoy Dev Day. Thank you. Yeah. That's it.[00:19:16] Olivier Godement, Head of Product, OpenAI[00:19:16] AI Charlie: The latent space crew then talked to Olivier Godmont, head of product for the OpenAI platform, who led the entire Dev Day keynote and introduced all the major new features and updates that we talked about today.[00:19:28] swyx: Okay, so we are here with Olivier Godmont. That's right.[00:19:32] swyx: I don't pronounce French. That's fine. It was perfect. And it was amazing to see your keynote today. What was the back story of, of preparing something like this? Preparing, like, Dev Day? It[00:19:43] Olivier Godement: essentially came from a couple of places. Number one, excellent reception from last year's Dev Day.[00:19:48] Olivier Godement: Developers, startup founders, researchers want to spend more time with OpenAI, and we want to spend more time with them as well. And so for us, like, it was a no brainer, frankly, to do it again, like, you know, like a nice conference. The second thing is going global. We've done a few events like in Paris and like a few other like, you know, non European, non American countries.[00:20:05] Olivier Godement: And so this year we're doing SF, Singapore, and London. To frankly just meet more developers.[00:20:10] swyx: Yeah, I'm very excited for the Singapore one.[00:20:12] Olivier Godement: Ah,[00:20:12] swyx: yeah. Will you be[00:20:13] Olivier Godement: there?[00:20:14] swyx: I don't know. I don't know if I got an invite. No. I can't just talk to you. Yeah, like, and then there was some speculation around October 1st.[00:20:22] Olivier Godement: Yeah. Is it because[00:20:23] swyx: 01, October 1st? It[00:20:25] Olivier Godement: has nothing to do. I discovered the tweet yesterday where like, people are so creative. No one, there was no connection to October 1st. But in hindsight, that would have been a pretty good meme by Tiana. Okay.[00:20:37] swyx: Yeah, and you know, I think like, OpenAI's outreach to developers is something that I felt the whole in 2022, when like, you know, like, people were trying to build a chat GPT, and like, there was no function calling, all that stuff that you talked about in the past.[00:20:51] swyx: And that's why I started my own conference as like like, here's our little developer conference thing. And, but to see this OpenAI Dev Day now, and like to see so many developer oriented products coming to OpenAI, I think it's really encouraging.[00:21:02] Olivier Godement: Yeah, totally. It's that's what I said, essentially, like, developers are basically the people who make the best connection between the technology and, you know, the future, essentially.[00:21:14] Olivier Godement: Like, you know, essentially see a capability, see a low level, like, technology, and are like, hey, I see how that application or that use case that can be enabled. And so, in the direction of enabling, like, AGI, like, all of humanity, it's a no brainer for us, like, frankly, to partner with Devs.[00:21:31] Alessio: And most importantly, you almost never had waitlists, which, compared to like other releases, people usually, usually have.[00:21:38] Alessio: What is the, you know, you had from caching, you had real time voice API, we, you know, Shawn did a long Twitter thread, so people know the releases. Yeah. What is the thing that was like sneakily the hardest to actually get ready for, for that day, or like, what was the kind of like, you know, last 24 hours, anything that you didn't know was gonna work?[00:21:56] Olivier Godement: Yeah. The old Fairly, like, I would say, involved, like, features to ship. So the team has been working for a month, all of them. The one which I would say is the newest for OpenAI is the real time API. For a couple of reasons. I mean, one, you know, it's a new modality. Second, like, it's the first time that we have an actual, like, WebSocket based API.[00:22:16] Olivier Godement: And so, I would say that's the one that required, like, the most work over the month. To get right from a developer perspective and to also make sure that our existing safety mitigation that worked well with like real time audio in and audio out.[00:22:30] swyx: Yeah, what design choices or what was like the sort of design choices that you want to highlight?[00:22:35] swyx: Like, you know, like I think for me, like, WebSockets, you just receive a bunch of events. It's two way. I obviously don't have a ton of experience. I think a lot of developers are going to have to embrace this real time programming. Like, what are you designing for, or like, what advice would you have for developers exploring this?[00:22:51] Olivier Godement: The core design hypothesis was essentially, how do we enable, like, human level latency? We did a bunch of tests, like, on average, like, human beings, like, you know, takes, like, something like 300 milliseconds to converse with each other. And so that was the design principle, essentially. Like, working backward from that, and, you know, making the technology work.[00:23:11] Olivier Godement: And so we evaluated a few options, and WebSockets was the one that we landed on. So that was, like, one design choice. A few other, like, big design choices that we had to make prompt caching. Prompt caching, the design, like, target was automated from the get go. Like, zero code change from the developer.[00:23:27] Olivier Godement: That way you don't have to learn, like, what is a prompt prefix, and, you know, how long does a cache work, like, we just do it as much as we can, essentially. So that was a big design choice as well. And then finally, on distillation, like, and evaluation. The big design choice was something I learned at Skype, like in my previous job, like a philosophy around, like, a pit of success.[00:23:47] Olivier Godement: Like, what is essentially the, the, the minimum number of steps for the majority of developers to do the right thing? Because when you do evals on fat tuning, there are many, many ways, like, to mess it up, frankly, like, you know, and have, like, a crappy model, like, evals that tell, like, a wrong story. And so our whole design was, okay, we actually care about, like, helping people who don't have, like, that much experience, like, evaluating a model, like, get, like, in a few minutes, like, to a good spot.[00:24:11] Olivier Godement: And so how do we essentially enable that bit of success, like, in the product flow?[00:24:15] swyx: Yeah, yeah, I'm a little bit scared to fine tune especially for vision, because I don't know what I don't know for stuff like vision, right? Like, for text, I can evaluate pretty easily. For vision let's say I'm like trying to, one of your examples was grab.[00:24:33] swyx: Which, very close to home, I'm from Singapore. I think your example was like, they identified stop signs better. Why is that hard? Why do I have to fine tune that? If I fine tune that, do I lose other things? You know, like, there's a lot of unknowns with Vision that I think developers have to figure out.[00:24:50] swyx: For[00:24:50] Olivier Godement: sure. Vision is going to open up, like, a new, I would say, evaluation space. Because you're right, like, it's harder, like, you know, to tell correct from incorrect, essentially, with images. What I can say is we've been alpha testing, like, the Vision fine tuning, like, for several weeks at that point. We are seeing, like, even higher performance uplift compared to text fine tuning.[00:25:10] Olivier Godement: So that's, there is something here, like, we've been pretty impressed, like, in a good way, frankly. But, you know, how well it works. But for sure, like, you know, I expect the developers who are moving from one modality to, like, text and images will have, like, more, you know Testing, evaluation, like, you know, to set in place, like, to make sure it works well.[00:25:25] Alessio: The model distillation and evals is definitely, like, the most interesting. Moving away from just being a model provider to being a platform provider. How should people think about being the source of truth? Like, do you want OpenAI to be, like, the system of record of all the prompting? Because people sometimes store it in, like, different data sources.[00:25:41] Alessio: And then, is that going to be the same as the models evolve? So you don't have to worry about, you know, refactoring the data, like, things like that, or like future model structures.[00:25:51] Olivier Godement: The vision is if you want to be a source of truth, you have to earn it, right? Like, we're not going to force people, like, to pass us data.[00:25:57] Olivier Godement: There is no value prop, like, you know, for us to store the data. The vision here is at the moment, like, most developers, like, use like a one size fits all model, like be off the shelf, like GP40 essentially. The vision we have is fast forward a couple of years. I think, like, most developers will essentially, like, have a.[00:26:15] Olivier Godement: An automated, continuous, fine tuned model. The more, like, you use the model, the more data you pass to the model provider, like, the model is automatically, like, fine tuned, evaluated against some eval sets, and essentially, like, you don't have to every month, when there is a new snapshot, like, you know, to go online and, you know, try a few new things.[00:26:34] Olivier Godement: That's a direction. We are pretty far away from it. But I think, like, that evaluation and decision product are essentially a first good step in that direction. It's like, hey, it's you. I set it by that direction, and you give us the evaluation data. We can actually log your completion data and start to do some automation on your behalf.[00:26:52] Alessio: And then you can do evals for free if you share data with OpenAI. How should people think about when it's worth it, when it's not? Sometimes people get overly protective of their data when it's actually not that useful. But how should developers think about when it's right to do it, when not, or[00:27:07] Olivier Godement: if you have any thoughts on it?[00:27:08] Olivier Godement: The default policy is still the same, like, you know, we don't train on, like, any API data unless you opt in. What we've seen from feedback is evaluation can be expensive. Like, if you run, like, O1 evals on, like, thousands of samples Like, your build will get increased, like, you know, pretty pretty significantly.[00:27:22] Olivier Godement: That's problem statement number one. Problem statement number two is, essentially, I want to get to a world where whenever OpenAI ships a new model snapshot, we have full confidence that there is no regression for the task that developers care about. And for that to be the case, essentially, we need to get evals.[00:27:39] Olivier Godement: And so that, essentially, is a sort of a two bugs one stone. It's like, we subsidize, basically, the evals. And we also use the evals when we ship new models to make sure that we keep going in the right direction. So, in my sense, it's a win win, but again, completely opt in. I expect that many developers will not want to share their data, and that's perfectly fine to me.[00:27:56] swyx: Yeah, I think free evals though, very, very good incentive. I mean, it's a fair trade. You get data, we get free evals. Exactly,[00:28:04] Olivier Godement: and we sanitize PII, everything. We have no interest in the actual sensitive data. We just want to have good evaluation on the real use cases.[00:28:13] swyx: Like, I always want to eval the eval. I don't know if that ever came up.[00:28:17] swyx: Like, sometimes the evals themselves are wrong, and there's no way for me to tell you.[00:28:22] Olivier Godement: Everyone who is starting with LLM, teaching with LLM, is like, Yeah, evaluation, easy, you know, I've done testing, like, all my life. And then you start to actually be able to eval, understand, like, all the corner cases, And you realize, wow, there's like a whole field in itself.[00:28:35] Olivier Godement: So, yeah, good evaluation is hard and so, yeah. Yeah, yeah.[00:28:38] swyx: But I think there's a, you know, I just talked to Brain Trust which I think is one of your partners. Mm-Hmm. . They also emphasize code based evals versus your sort of low code. What I see is like, I don't know, maybe there's some more that you didn't demo.[00:28:53] swyx: YC is kind of like a low code experience, right, for evals. Would you ever support like a more code based, like, would I run code on OpenAI's eval platform?[00:29:02] Olivier Godement: For sure. I mean, we meet developers where they are, you know. At the moment, the demand was more for like, you know, easy to get started, like eval. But, you know, if we need to expose like an evaluation API, for instance, for people like, you know, to pass, like, you know, their existing test data we'll do it.[00:29:15] Olivier Godement: So yeah, there is no, you know, philosophical, I would say, like, you know, misalignment on that. Yeah,[00:29:19] swyx: yeah, yeah. What I think this is becoming, by the way, and I don't, like it's basically, like, you're becoming AWS. Like, the AI cloud. And I don't know if, like, that's a conscious strategy, or it's, like, It doesn't even have to be a conscious strategy.[00:29:33] swyx: Like, you're going to offer storage. You're going to offer compute. You're going to offer networking. I don't know what networking looks like. Networking is maybe, like, Caching or like it's a CDN. It's a prompt CDN.[00:29:45] Alex Volkov: Yeah,[00:29:45] swyx: but it's the AI versions of everything, right? Do you like do you see the analogies or?[00:29:52] Olivier Godement: Whatever Whatever I took to developers. I feel like Good models are just half of the story to build a good app There's a third model you need to do Evaluation is the perfect example. Like, you know, you can have the best model in the world If you're in the dark, like, you know, it's really hard to gain the confidence and so Our philosophy is[00:30:11] Olivier Godement: The whole like software development stack is being basically reinvented, you know, with LLMs. There is no freaking way that open AI can build everything. Like there is just too much to build, frankly. And so my philosophy is, essentially, we'll focus on like the tools which are like the closest to the model itself.[00:30:28] Olivier Godement: So that's why you see us like, you know, investing quite a bit in like fine tuning, distillation, our evaluation, because we think that it actually makes sense to have like in one spot, Like, you know, all of that. Like, there is some sort of virtual circle, essentially, that you can set in place. But stuff like, you know, LLMOps, like tools which are, like, further away from the model, I don't know if you want to do, like, you know, super elaborate, like, prompt management, or, you know, like, tooling, like, I'm not sure, like, you know, OpenAI has, like, such a big edge, frankly, like, you know, to build this sort of tools.[00:30:56] Olivier Godement: So that's how we view it at the moment. But again, frankly, the philosophy is super simple. The strategy is super simple. It's meeting developers where they want us to be. And so, you know that's frankly, like, you know, day in, day out, like, you know, what I try to do.[00:31:08] Alessio: Cool. Thank you so much for the time.[00:31:10] Alessio: I'm sure you,[00:31:10] swyx: Yeah, I have more questions on, a couple questions on voice, and then also, like, your call to action, like, what you want feedback on, right? So, I think we should spend a bit more time on voice, because I feel like that's, like, the big splash thing. I talked well Well, I mean, I mean, just what is the future of real time for OpenAI?[00:31:28] swyx: Yeah. Because I think obviously video is next. You already have it in the, the ChatGPT desktop app. Do we just have a permanent, like, you know, like, are developers just going to be, like, sending sockets back and forth with OpenAI? Like how do we program for that? Like, what what is the future?[00:31:44] Olivier Godement: Yeah, that makes sense. I think with multimodality, like, real time is quickly becoming, like, you know, essentially the right experience, like, to build an application. Yeah. So my expectation is that we'll see like a non trivial, like a volume of applications like moving to a real time API. Like if you zoom out, like, audio is really simple, like, audio until basically now.[00:32:05] Olivier Godement: Audio on the web, in apps, was basically very much like a second class citizen. Like, you basically did like an audio chatbot for users who did not have a choice. You know, they were like struggling to read, or I don't know, they were like not super educated with technology. And so, frankly, it was like the crappy option, you know, compared to text.[00:32:25] Olivier Godement: But when you talk to people in the real world, the vast majority of people, like, prefer to talk and listen instead of typing and writing.[00:32:34] swyx: We speak before we write.[00:32:35] Olivier Godement: Exactly. I don't know. I mean, I'm sure it's the case for you in Singapore. For me, my friends in Europe, the number of, like, WhatsApp, like, voice notes they receive every day, I mean, just people, it makes sense, frankly, like, you know.[00:32:45] Olivier Godement: Chinese. Chinese, yeah.[00:32:46] swyx: Yeah,[00:32:47] Olivier Godement: all voice. You know, it's easier. There is more emotions. I mean, you know, you get the point across, like, pretty well. And so my personal ambition for, like, the real time API and, like, audio in general is to make, like, audio and, like, multimodality, like, truly a first class experience.[00:33:01] Olivier Godement: Like, you know, if you're, like, you know, the amazing, like, super bold, like, start up out of YC, you want to build, like, the next, like, billion, like, you know, user application to make it, like, truly your first and make it feel, like, you know, an actual good, like, you know, product experience. So that's essentially the ambition, and I think, like, yeah, it could be pretty big.[00:33:17] swyx: Yeah. I think one, one people, one issue that people have with the voice so far as, as released in advanced voice mode is the refusals.[00:33:24] Alex Volkov: Yeah.[00:33:24] swyx: You guys had a very inspiring model spec. I think Joanne worked on that. Where you said, like, yeah, we don't want to overly refuse all the time. In fact, like, even if, like, not safe for work, like, in some occasions, it's okay.[00:33:38] swyx: How, is there an API that we can say, not safe for work, okay?[00:33:41] Olivier Godement: I think we'll get there. I think we'll get there. The mobile spec, like, nailed it, like, you know. It nailed it! It's so good! Yeah, we are not in the business of, like, policing, you know, if you can say, like, vulgar words or whatever. You know, there are some use cases, like, you know, I'm writing, like, a Hollywood, like, script I want to say, like, will go on, and it's perfectly fine, you know?[00:33:59] Olivier Godement: And so I think the direction where we'll go here is that basically There will always be like, you know, a set of behavior that we will, you know, just like forbid, frankly, because they're illegal against our terms of services. But then there will be like, you know, some more like risky, like themes, which are completely legal, like, you know, vulgar words or, you know, not safe for work stuff.[00:34:17] Olivier Godement: Where basically we'll expose like a controllable, like safety, like knobs in the API to basically allow you to say, hey, that theme okay, that theme not okay. How sensitive do you want the threshold to be on safety refusals? I think that's the Dijkstra. So a[00:34:31] swyx: safety API.[00:34:32] Olivier Godement: Yeah, in a way, yeah.[00:34:33] swyx: Yeah, we've never had that.[00:34:34] Olivier Godement: Yeah. '[00:34:35] swyx: cause right now is you, it is whatever you decide. And then it's, that's it. That, that, that would be the main reason I don't use opening a voice is because of[00:34:42] Olivier Godement: it's over police. Over refuse over refusals. Yeah. Yeah, yeah. No, we gotta fix that. Yeah. Like singing,[00:34:47] Alessio: we're trying to do voice. I'm a singer.[00:34:49] swyx: And you, you locked off singing.[00:34:51] swyx: Yeah,[00:34:51] Alessio: yeah, yeah.[00:34:52] swyx: But I, I understand music gets you in trouble. Okay. Yeah. So then, and then just generally, like, what do you want to hear from developers? Right? We have, we have all developers watching you know, what feedback do you want? Any, anything specific as well, like from, especially from today anything that you are unsure about, that you are like, Our feedback could really help you decide.[00:35:09] swyx: For sure.[00:35:10] Olivier Godement: I think, essentially, it's becoming pretty clear after today that, you know, I would say the open end direction has become pretty clear, like, you know, after today. Investment in reasoning, investment in multimodality, Investment as well, like in, I would say, tool use, like function calling. To me, the biggest question I have is, you know, Where should we put the cursor next?[00:35:30] Olivier Godement: I think we need all three of them, frankly, like, you know, so we'll keep pushing.[00:35:33] swyx: Hire 10, 000 people, or actually, no need, build a bunch of bots.[00:35:37] Olivier Godement: Exactly, and so let's take O1 smart enough, like, for your problems? Like, you know, let's set aside for a second the existing models, like, for the apps that you would love to build, is O1 basically it in reasoning, or do we still have, like, you know, a step to do?[00:35:50] Olivier Godement: Preview is not enough, I[00:35:52] swyx: need the full one.[00:35:53] Olivier Godement: Yeah, so that's exactly that sort of feedback. Essentially what they would love to do is for developers I mean, there's a thing that Sam has been saying like over and over again, like, you know, it's easier said than done, but I think it's directionally correct. As a developer, as a founder, you basically want to build an app which is a bit too difficult for the model today, right?[00:36:12] Olivier Godement: Like, what you think is right, it's like, sort of working, sometimes not working. And that way, you know, that basically gives us like a goalpost, and be like, okay, that's what you need to enable with the next model release, like in a few months. And so I would say that Usually, like, that's the sort of feedback which is like the most useful that I can, like, directly, like, you know, incorporate.[00:36:33] swyx: Awesome. I think that's our time. Thank you so much, guys. Yeah, thank you so much.[00:36:38] AI Charlie: Thank you. We were particularly impressed that Olivier addressed the not safe for work moderation policy question head on, as that had only previously been picked up on in Reddit forums. This is an encouraging sign that we will return to in the closing candor with Sam Altman at the end of this episode.[00:36:57] Romain Huet, Head of DX, OpenAI[00:36:57] AI Charlie: Next, a chat with Roman Hewitt, friend of the pod, AI Engineer World's fair closing keynote speaker, and head of developer experience at OpenAI on his incredible live demos And advice to AI engineers on all the new modalities.[00:37:12] Alessio: Alright, we're live from OpenAI Dev Day. We're with Juan, who just did two great demos on, on stage.[00:37:17] Alessio: And he's been a friend of Latentspace, so thanks for taking some of the time.[00:37:20] Romain Huet: Of course, yeah, thank you for being here and spending the time with us today.[00:37:23] swyx: Yeah, I appreciate appreciate you guys putting this on. I, I know it's like extra work, but it really shows the developers that you're, Care and about reaching out.[00:37:31] Romain Huet: Yeah, of course, I think when you go back to the OpenAI mission, I think for us it's super important that we have the developers involved in everything we do. Making sure that you know, they have all of the tools they need to build successful apps. And we really believe that the developers are always going to invent the ideas, the prototypes, the fun factors of AI that we can't build ourselves.[00:37:49] Romain Huet: So it's really cool to have everyone here.[00:37:51] swyx: We had Michelle from you guys on. Yes, great episode. She very seriously said API is the path to AGI. Correct. And people in our YouTube comments were like, API is not AGI. I'm like, no, she's very serious. API is the path to AGI. Like, you're not going to build everything like the developers are, right?[00:38:08] swyx: Of[00:38:08] Romain Huet: course, yeah, that's the whole value of having a platform and an ecosystem of amazing builders who can, like, in turn, create all of these apps. I'm sure we talked about this before, but there's now more than 3 million developers building on OpenAI, so it's pretty exciting to see all of that energy into creating new things.[00:38:26] Alessio: I was going to say, you built two apps on stage today, an international space station tracker and then a drone. The hardest thing must have been opening Xcode and setting that up. Now, like, the models are so good that they can do everything else. Yes. You had two modes of interaction. You had kind of like a GPT app to get the plan with one, and then you had a cursor to do apply some of the changes.[00:38:47] Alessio: Correct. How should people think about the best way to consume the coding models, especially both for You know, brand new projects and then existing projects that you're trying to modify.[00:38:56] Romain Huet: Yeah. I mean, one of the things that's really cool about O1 Preview and O1 Mini being available in the API is that you can use it in your favorite tools like cursor like I did, right?[00:39:06] Romain Huet: And that's also what like Devin from Cognition can use in their own software engineering agents. In the case of Xcode, like, it's not quite deeply integrated in Xcode, so that's why I had like chat GPT side by side. But it's cool, right, because I could instruct O1 Preview to be, like, my coding partner and brainstorming partner for this app, but also consolidate all of the, the files and architect the app the way I wanted.[00:39:28] Romain Huet: So, all I had to do was just, like, port the code over to Xcode and zero shot the app build. I don't think I conveyed, by the way, how big a deal that is, but, like, you can now create an iPhone app from scratch, describing a lot of intricate details that you want, and your vision comes to life in, like, a minute.[00:39:47] Romain Huet: It's pretty outstanding.[00:39:48] swyx: I have to admit, I was a bit skeptical because if I open up SQL, I don't know anything about iOS programming. You know which file to paste it in. You probably set it up a little bit. So I'm like, I have to go home and test it. And I need the ChatGPT desktop app so that it can tell me where to click.[00:40:04] Romain Huet: Yeah, I mean like, Xcode and iOS development has become easier over the years since they introduced Swift and SwiftUI. I think back in the days of Objective C, or like, you know, the storyboard, it was a bit harder to get in for someone new. But now with Swift and SwiftUI, their dev tools are really exceptional.[00:40:23] Romain Huet: But now when you combine that with O1, as your brainstorming and coding partner, it's like your architect, effectively. That's the best way, I think, to describe O1. People ask me, like, can GPT 4 do some of that? And it certainly can. But I think it will just start spitting out code, right? And I think what's great about O1, is that it can, like, make up a plan.[00:40:42] Romain Huet: In this case, for instance, the iOS app had to fetch data from an API, it had to look at the docs, it had to look at, like, how do I parse this JSON, where do I store this thing, and kind of wire things up together. So that's where it really shines. Is mini or preview the better model that people should be using?[00:40:58] Romain Huet: Like, how? I think people should try both. We're obviously very excited about the upcoming O1 that we shared the evals for. But we noticed that O1 Mini is very, very good at everything math, coding, everything STEM. If you need for your kind of brainstorming or your kind of science part, you need some broader knowledge than reaching for O1 previews better.[00:41:20] Romain Huet: But yeah, I used O1 Mini for my second demo. And it worked perfectly. All I needed was very much like something rooted in code, architecting and wiring up like a front end, a backend, some UDP packets, some web sockets, something very specific. And it did that perfectly.[00:41:35] swyx: And then maybe just talking about voice and Wanderlust, the app that keeps on giving, what's the backstory behind like preparing for all of that?[00:41:44] Romain Huet: You know, it's funny because when last year for Dev Day, we were trying to think about what could be a great demo app to show like an assistive experience. I've always thought travel is a kind of a great use case because you have, like, pictures, you have locations, you have the need for translations, potentially.[00:42:01] Romain Huet: There's like so many use cases that are bounded to travel that I thought last year, let's use a travel app. And that's how Wanderlust came to be. But of course, a year ago, all we had was a text based assistant. And now we thought, well, if there's a voice modality, what if we just bring this app back as a wink.[00:42:19] Romain Huet: And what if we were interacting better with voice? And so with this new demo, what I showed was the ability to like, So, we wanted to have a complete conversation in real time with the app, but also the thing we wanted to highlight was the ability to call tools and functions, right? So, like in this case, we placed a phone call using the Twilio API, interfacing with our AI agents, but developers are so smart that they'll come up with so many great ideas that we could not think of ourselves, right?[00:42:48] Romain Huet: But what if you could have like a, you know, a 911 dispatcher? What if you could have like a customer service? Like center, that is much smarter than what we've been used to today. There's gonna be so many use cases for real time, it's awesome.[00:43:00] swyx: Yeah, and sometimes actually you, you, like this should kill phone trees.[00:43:04] swyx: Like there should not be like dial one[00:43:07] Romain Huet: of course para[00:43:08] swyx: espanol, you know? Yeah, exactly. Or whatever. I dunno.[00:43:12] Romain Huet: I mean, even you starting speaking Spanish would just do the thing, you know you don't even have to ask. So yeah, I'm excited for this future where we don't have to interact with those legacy systems.[00:43:22] swyx: Yeah. Yeah. Is there anything, so you are doing function calling in a streaming environment. So basically it's, it's web sockets. It's UDP, I think. It's basically not guaranteed to be exactly once delivery. Like, is there any coding challenges that you encountered when building this?[00:43:39] Romain Huet: Yeah, it's a bit more delicate to get into it.[00:43:41] Romain Huet: We also think that for now, what we, what we shipped is a, is a beta of this API. I think there's much more to build onto it. It does have the function calling and the tools. But we think that for instance, if you want to have something very robust, On your client side, maybe you want to have web RTC as a client, right?[00:43:58] Romain Huet: And, and as opposed to like directly working with the sockets at scale. So that's why we have partners like Life Kit and Agora if you want to, if you want to use them. And I'm sure we'll have many mores in the, in many more in the future. But yeah, we keep on iterating on that, and I'm sure the feedback of developers in the weeks to come is going to be super critical for us to get it right.[00:44:16] swyx: Yeah, I think LiveKit has been fairly public that they are used in, in the Chachapiti app. Like, is it, it's just all open source, and we just use it directly with OpenAI, or do we use LiveKit Cloud or something?[00:44:28] Romain Huet: So right now we, we released the API, we released some sample code also, and referenced clients for people to get started with our API.[00:44:35] Romain Huet: And we also partnered with LifeKit and Agora, so they also have their own, like ways to help you get started that plugs natively with the real time API. So depending on the use case, people can, can can decide what to use. If you're working on something that's completely client or if you're working on something on the server side, for the voice interaction, you may have different needs, so we want to support all of those.[00:44:55] Alessio: I know you gotta run. Is there anything that you want the AI engineering community to give feedback on specifically, like even down to like, you know, a specific API end point or like, what, what's like the thing that you want? Yeah. I[00:45:08] Romain Huet: mean, you know, if we take a step back, I think dev Day this year is all different from last year and, and in, in a few different ways.[00:45:15] Romain Huet: But one way is that we wanted to keep it intimate, even more intimate than last year. We wanted to make sure that the community is. Thank you very much for joining us on the Spotlight. That's why we have community talks and everything. And the takeaway here is like learning from the very best developers and AI engineers.[00:45:31] Romain Huet: And so, you know we want to learn from them. Most of what we shipped this morning, including things like prompt caching the ability to generate prompts quickly in the playground, or even things like vision fine tuning. These are all things that developers have been asking of us. And so, the takeaway I would, I would leave them with is to say like, Hey, the roadmap that we're working on is heavily influenced by them and their work.[00:45:53] Romain Huet: And so we love feedback From high feature requests, as you say, down to, like, very intricate details of an API endpoint, we love feedback, so yes that's, that's how we, that's how we build this API.[00:46:05] swyx: Yeah, I think the, the model distillation thing as well, it might be, like, the, the most boring, but, like, actually used a lot.[00:46:12] Romain Huet: True, yeah. And I think maybe the most unexpected, right, because I think if I, if I read Twitter correctly the past few days, a lot of people were expecting us. To shape the real time API for speech to speech. I don't think developers were expecting us to have more tools for distillation, and we really think that's gonna be a big deal, right?[00:46:30] Romain Huet: If you're building apps that have you know, you, you want high, like like low latency, low cost, but high performance, high quality on the use case distillation is gonna be amazing.[00:46:40] swyx: Yeah. I sat in the distillation session just now and they showed how they distilled from four oh to four mini and it was like only like a 2% hit in the performance and 50 next.[00:46:49] swyx: Yeah,[00:46:50] Romain Huet: I was there as well for the superhuman kind of use case inspired for an Ebola client. Yeah, this was really good. Cool man! so much for having me. Thanks again for being here today. It's always[00:47:00] AI Charlie: great to have you. As you might have picked up at the end of that chat, there were many sessions throughout the day focused on specific new capabilities.[00:47:08] Michelle Pokrass, Head of API at OpenAI ft. Simon Willison[00:47:08] AI Charlie: Like the new model distillation features combining EVOLs and fine tuning. For our next session, we are delighted to bring back two former guests of the pod, which is something listeners have been greatly enjoying in our second year of doing the Latent Space podcast. Michelle Pokras of the API team joined us recently to talk about structured outputs, and today gave an updated long form session at Dev Day, describing the implementation details of the new structured output mode.[00:47:39] AI Charlie: We also got her updated thoughts on the VoiceMode API we discussed in her episode, now that it is finally announced. She is joined by friend of the pod and super blogger, Simon Willison, who also came back as guest co host in our Dev Day. 2023 episode.[00:47:56] Alessio: Great, we're back live at Dev Day returning guest Michelle and then returning guest co host Fork.[00:48:03] Alessio: Fork, yeah, I don't know. I've lost count. I think it's been a few. Simon Willison is back. Yeah, we just wrapped, we just wrapped everything up. Congrats on, on getting everything everything live. Simon did a great, like, blog, so if you haven't caught up, I[00:48:17] Simon Willison: wrote my, I implemented it. Now, I'm starting my live blog while waiting for the first talk to start, using like GPT 4, I wrote me the Javascript, and I got that live just in time and then, yeah, I was live blogging the whole day.[00:48:28] swyx: Are you a cursor enjoyer?[00:48:29] Simon Willison: I haven't really gotten into cursor yet to be honest. I just haven't spent enough time for it to click, I think. I'm more a copy and paste things out of Cloud and chat GPT. Yeah. It's interesting.[00:48:39] swyx: Yeah. I've converted to cursor and 01 is so easy to just toggle on and off.[00:48:45] Alessio: What's your workflow?[00:48:46] Alessio: VS[00:48:48] Michelle Pokrass: Code co pilot, so Yep, same here. Team co pilot. Co pilot is actually the reason I joined OpenAI. It was, you know, before ChatGPT, this is the thing that really got me. So I'm still into it, but I keep meaning to try out Cursor, and I think now that things have calmed down, I'm gonna give it a real go.[00:49:03] swyx: Yeah, it's a big thing to change your tool of choice.[00:49:06] swyx: Yes,[00:49:06] Michelle Pokrass: yeah, I'm pretty dialed, so.[00:49:09] swyx: I mean, you know, if you want, you can just fork VS Code and make your own. That's the thing to dumb thing, right? We joked about doing a hackathon where the only thing you do is fork VS Code and bet me the best fork win.[00:49:20] Michelle Pokrass: Nice.[00:49:22] swyx: That's actually a really good idea. Yeah, what's up?[00:49:26] swyx: I mean, congrats on launching everything today. I know, like, we touched on it a little bit, but, like, everyone was kind of guessing that Voice API was coming, and, like, we talked about it in our episode. How do you feel going into the launch? Like, any design decisions that you want to highlight?[00:49:41] Michelle Pokrass: Yeah, super jazzed about it. The team has been working on it for a while. It's, like, a very different API for us. It's the first WebSocket API, so a lot of different design decisions to be made. It's, like, what kind of events do you send? When do you send an event? What are the event names? What do you send, like, on connection versus on future messages?[00:49:57] Michelle Pokrass: So there have been a lot of interesting decisions there. The team has also hacked together really cool projects as we've been testing it. One that I really liked is we had an internal hack a thon for the API team. And some folks built like a little hack that you could use to, like VIM with voice mode, so like, control vim, and you would tell them on like, nice, write a file and it would, you know, know all the vim commands and, and pipe those in.[00:50:18] Michelle Pokrass: So yeah, a lot of cool stuff we've been hacking on and really excited to see what people build with it.[00:50:23] Simon Willison: I've gotta call out a demo from today. I think it was Katja had a 3D visualization of the solar system, like WebGL solar system, you could talk to. That is one of the coolest conference demos I've ever seen.[00:50:33] Simon Willison: That was so convincing. I really want the code. I really want the code for that to get put out there. I'll talk[00:50:39] Michelle Pokrass: to the team. I think we can[00:50:40] Simon Willison: probably

Cyber Security Headlines
T-mobile data breach fines, Iranian hackers charged, Deepfake scam hits U.S. senate

Cyber Security Headlines

Play Episode Listen Later Oct 1, 2024 8:26


T-Mobile data breaches cost company $31.5 million Iranian hackers charged for targeting 2024 U.S. election Deepfake scam hits U.S. senate Huge thanks to our sponsor, SpyCloud Did you know that infostealer malware can be a precursor to ransomware? Infostealers are a trending tactic used by cybercriminals to exfiltrate valuable identity data like credentials, PII, and session cookies. According to recent SpyCloud research, 75% of organizations were affected by ransomware more than once in the past year! Visit spycloud.com/headlines to find out how to keep your organization from becoming one of the statistics. Get the story behind the headlines at CISOSeries.com

Hacking Humans
The devil IS in the details.

Hacking Humans

Play Episode Listen Later Sep 26, 2024 47:05


It's all in the details, folks. Pay attention to those and you can avoid unnecessary stress. Dave Bittner, Maria Varmazis, and Joe Carrigan swap stories on email password-stealing attacks, Google ads scams, and fake banks this week. The team shares follow up from listener Steven from the UK about the hazards of shoulder surfing when they received their new debit card with all PII on the same side of the card. A friend of the show JJ shared a story and a warning about fake checks. Never accept a check from a stranger. Dave's story covers Action Fraud, the UK's national fraud and cyber reporting center, warning iPhone users of a new Apple ID phishing campaign. Maria talks about new research that uncovers a new scam that takes advantage of public wishlists on ecommerce websites, which in this case is Walmart, but is similar to those found on Amazon and other sites. Joe's story is about a firm in Singapore with an email from a supplier requesting that a pending payment be sent to a new bank account based in East Timor.  Our Catch of the Day is from Reddit on the /scambait subreddit "THE Dolly Parton is going to let ME in her VIP club." Links to the stories: iPhone Users Warned As New Email Password-Stealing Attacks Reported Walmart customers scammed via fake shopping lists, threatened with arrest Police recover over USD 40 million from international email scam THE Dolly Parton is going to let ME in her VIP club. You can hear more from the T-Minus space daily show here. Have a Catch of the Day you'd like to share? Email it to us at hackinghumans@n2k.com.

ServiceNow Podcasts
Succeeding in Health IT During Uncertain Times

ServiceNow Podcasts

Play Episode Listen Later Sep 20, 2024 20:17


Health IT – is it about healthcare or security or is it IT? Is it about Clinicians, Nurses or Patients? What about the technologies and the core systems? Who's minding the store on PII and PHI? AI and GenAI are here, can we start on that journey? YES! None of these are less important than the other and compete for an organization's attention. But how many, how much, how complex are further questions that arise. How many different systems? How much time is an individual spending when health is at stake? How complex is the navigation through all of this causing an undesirable experience and inefficient process, not to mention increased costs. Speakers:Jonathan Alboum, Federal CTO, ServiceNowArunkumar (Arun) Iyer, Principal Executive Architect and Field CTO, ServiceNow   Link to YouTube recording: https://youtu.be/pAt0twuhnPkSee omnystudio.com/listener for privacy information.

EdCuration: Where We Reshape Learning
The George & Mary Turek Hall Of Science breaks boundaries in STEM instruction

EdCuration: Where We Reshape Learning

Play Episode Listen Later Sep 19, 2024 29:17


Amy Ely, Director of STEM at Detroit Catholic Central High School, announces the opening of the George & Mary Turek Hall of Science - state-of-the-art 57,000 sq ft STEM Center. Learn about the cutting-edge facilities including a large-scale observatory, robotics labs, and an advanced greenhouse for sustainability projects. She captivating descriptions include full-motion flight simulators and immersive theaters that can stream astronomical data.  Amy emphasizes the importance of future readiness and project-based learning. highlighting the importance of collaboration and adaptation in education. Her vision is to prepare students not only for current STEM careers but also for those that will emerge in the future, embodying her belief that education should be both forward-thinking and deeply connected to real-world applications.   RESOURCES:  Learn about School Passport®,  a data exchange platform that exchanges any data with any EdTech product, eliminates the need to share student PII and is easy to implement for schools and vendors. School Passport empowers schools govern the exchange of student, staff, and parent PII data with SaaS vendors, approve and publish apps to a district or school branded on-demand AppStore, and delegate to school staff and teachers the ability to activate apps within their groups.   Learn more about Detroit Catholic Central High School See the George and Mary Turek Hall of Science Take a video tour here  Hear more on The STEM podcast Learn Why Fewer Young Men are Choosing to Pursue College Degrees (From PBS)   More great stuff: Explore our Micro Professional Learning ExPLorations fun and free, 1-hour digital, on-demand Professional Learning for teachers from all content areas and grades levels EdCuration's Blog: Learning in Action  

Lend Academy Podcast
Rodger Desai of Prove on the key to success in ID verification and authentication

Lend Academy Podcast

Play Episode Listen Later Sep 19, 2024 30:49


There is no faster-moving space in finance right now than the anti-fraud space. With all the advances in Generative AI, the fraudsters are more difficult to catch than ever. But there is one place where fintechs and banks can focus that has real promise: authenticating the mobile phone. My next guest on the Fintech One-on-One podcast is Rodger Desai, the CEO and Founder of Prove. They have developed sophisticated tools to verify identity through a mobile phone that can help banks and fintechs across a variety of touchpoints. We explain why the phone may hold the key (literally) to the future of ID verification and authentication.In this podcast you will learn:What led to the founding of Prove.Why they decided to focus on financial services.The different types of services Prove offers today.ID verification best practices and why you need to add authentication.Rodger's thoughts on the ready availability of PII due to data breaches.What data they are returning back to their clients around authentication.How they work with transactions that happen on a desktop rather than a phone.How we should be thinking about sophisticated fraud rings today.How to combat Gen-AI deep fakes.Why key management is the answer to how you authenticate.The role of visual cues when it comes to verification.What the popular attack vectors are for criminals.Who is winning in the arms race between the criminals and the good guys.Connect with Fintech One-on-One: Tweet me @PeterRenton Connect with me on LinkedIn Find previous Fintech One-on-One episodes

TestGuild Performance Testing and Site Reliability Podcast
Mastering OpenTelemetry and Observability with Steve Flanders

TestGuild Performance Testing and Site Reliability Podcast

Play Episode Listen Later Sep 12, 2024 34:14


Today, we've got an exciting episode lined up for you. Steve Flanders, the senior director of engineering at Splunk and one of the founding members of OpenTelemetry, is joining us. In this episode, we'll explore the power of AI in observability, the intricacies of instrumentation and tracing signals, and the essential role of context and correlation in problem isolation and root cause analysis. Steve will also share insights from his upcoming book, Mastering Open Telemetry and Observability, which aims to make this complex topic accessible to everyone from beginners to experts. Listen in to discover valuable knowledge, from discussing real-world applications like the Astronomy Shop demo app to addressing concerns around personal identification information (PII) and the balancing act of instrumentation. Whether you're a DevOps professional, a developer, or simply interested in the future of observability, you won't want to miss this conversation.

The Tech Blog Writer Podcast
3015: Real-World AI Solutions: Appian's Path to Enterprise Innovation

The Tech Blog Writer Podcast

Play Episode Listen Later Sep 5, 2024 25:19


In this episode of Tech Talks Daily, Adam Glaser from Appian shares how generative AI is transforming enterprise technology and redefining how businesses operate. As the global appetite for AI grows, Appian's low-code platform stands out by making AI more accessible, allowing enterprises to build and deploy AI-powered applications without requiring extensive data science resources. Adam dives deep into how generative AI serves as a force multiplier across the board—from developers building and testing applications faster to end users interacting directly with AI-driven chat interfaces. A key focus of the conversation is Appian's patented data fabric, a virtualized data layer that addresses fragmented enterprise data. This architecture provides the foundation for AI to operate efficiently, pulling together disparate data sources into a unified system. Adam explains how this approach enables businesses to unlock the full potential of AI, helping enterprises tackle complex tasks such as document extraction, PII detection, and real-time data analysis. Throughout the episode, Adam presents several real-world examples where Appian's AI-enhanced solutions have delivered measurable results. From automating the accounts payable process for a U.S. fire protection company to improving student advising through AI chatbots at a large university, these stories reveal how businesses are achieving significant productivity gains and cost savings. In particular, the episode highlights how AI has revolutionized document processing, customer service, and data management, reducing errors and improving accuracy across industries. Adam also addresses the barriers to AI adoption, including common concerns around data privacy, job displacement, and unrealistic expectations. He offers practical advice for business leaders looking to integrate AI effectively, urging them to focus on tangible business outcomes and view AI as a tool to augment human capabilities, not replace them.

Tech ONTAP Podcast
Tech ONTAP Podcast Episode 396 - BlueXP Classification

Tech ONTAP Podcast

Play Episode Listen Later Sep 4, 2024 31:13


Sifting through millions of files to filter out PII, compliance violations, or other sensitive data is a huge burden for many organizations. BlueXP Classification from NetApp provides a way to simplify those tasks and make regulatory compliance more of a molehill than a mountain. Ben Kesten joins us to discuss how BlueXP Classification is changing the way you manage your data.

Telecom Reseller
Pitchit's LQaaS (Lead Qualification as a Service platform), Podcast

Telecom Reseller

Play Episode Listen Later Sep 3, 2024


“We can now provide lead qualification as a service using AI and help reduce operating costs that traditionally would have been involved in that entire sales funnel,” says Paul Kleen, founder and CEO of Pitchit. Pitchit's LQaaS takes on the heavy lifting of those very first few steps in the sales process, which can be very hard on people. The AI assisted process takes the brunt of that challenge, leaving a company's sales force to take on qualified leads and a lead flow that is matched up, better, the way each individual might want to connect. “Voice is not always the best way to get ahold of these people.” Pitchit's LQaaS platform, available today, automates the first 24 hours of manual labor required to qualify inbound leads. To date, Pitchit has helped telecom and insurance sales teams qualify 531,000+ leads, save 4,400+ labor hours, book 4,000+ meetings, and close $280 million in customer revenue. Customers using Pitchit experienced a 250% increase in their lead qualification rate, on average. Pitchit's industry-first AI LQaaS solution can sync leads from 7,000+ channels — social media, CRM, email, SMS, and more — to quickly quote pricing, book meetings, handle objections, and capture personally identifiable information (PII) before handing qualified leads off to a human sales rep. By focusing on purchase-ready leads, brands with a large volume of B2C sales across multiple consumer touch points can dramatically increase sales qualified lead (SQL) conversion rates, lower the time and cost associated with every sale, and maximize revenue per sales representative. In addition, by reducing sales friction, brands can offer their customers a faster and more personalized sales experience while reducing costs. In this podcast, Paul gives us a guided tour of LQaaS. While there are clearly some intriguing consumer-facing applications, we also catch a glimpse at some B2B opportunities. As the portfolios of MSPs and MSSPs expand, could this be a platform that finds leads, lining up undiscovered sales processes by matching product, service, need and more? Visit www.pitchit.ai

Better Call Daddy
388. Building An Empire From the Basement: John Svedese Surge Media

Better Call Daddy

Play Episode Listen Later Aug 26, 2024 44:12


Today's guest, John Svedese, found a comfortable place to be himself, and it happens to be his basement in Brooklyn, New York. John is a podcaster, a family man, and someone who took care of his grandparents. He even had his dad intro Reena on his podcast. In this episode, John shares his journey from growing up in a family of serial entrepreneurs to starting his own ventures and building an empire from his basement. John talks about his early involvement in his father's background screening company, his entrepreneurial spirit, and how he transitioned from working in corporate settings to cutting cold cuts at a local supermarket to make ends meet. He also delves into the challenges and rewards of balancing family life with his passion for podcasting. John's story is one of resilience, creativity, and the importance of family legacy. He discusses the evolution of his podcasting journey, the various shows he started, and how he eventually consolidated his efforts under the "Basement Surge" brand. John also shares touching anecdotes about his grandparents and the impact they had on his life. This episode is a heartfelt exploration of the American dream, the value of hard work, and the joy of building something meaningful from the ground up. Tune in to hear John's inspiring story and the wisdom he's gained along the way. Key Takeaways - The importance of family legacy and entrepreneurial spirit. - Balancing work and family life requires resilience and adaptability. - Building something meaningful often starts from humble beginnings. - Podcasting can be a powerful tool for personal growth and connection. Connect with John Svedese - Podcast: Basement Surge - Twitter: Basement Surge - Instagram: Basement Surge Connect with Reena- Website: bettercalldaddy.com - LinkedIn: Reena Friedman Watts - Twitter: @reenareena - Instagram: @reenafriedmanwatts - Instagram Podcast: @bettercalldaddypodcast We love hearing your feedback. Leave us a review, share your thoughts, and spread the word about this enriching episode. Share it with someone who appreciates the resilience and the importance of self-worth. - (00:00) Each week I interview a guest, share the stories with my dad - (05:27) PII works for major corporations to do background screening on prospective employees - (07:18) Do you think most corporations don't look back 15-20 years - (10:56) Your grandparents are both immigrants. What role have your grandparents played in your life - (14:45) You talk about what you want your legacy to be and parenting your kids - (15:44) How do you involve your kids in your work or in your passions - (17:10) Talk to me a little bit about this empire that you're building called basement surge - (23:26) Right, yeah. Exactly. And deal with me, you know, and stay with me. Not because he has to, but because he wants - (23:42) When you found out you were expecting a daughter, how did you feel - (27:59) Podcasting has helped me become more outgoing, you know - (28:49) Jamie Bell got Paul Cubby Bryan on his show through Instagram - (33:26) Is there anything that you would like to share about basement surge - (36:11) My grandfather passed away from Parkinson's, and I took care of him - (37:54) John's podcast reaffirms what podcasting is about

Navigating the Customer Experience
234: Transforming Customer Experience: Joel Passen on AI, Innovation, and Leadership with Joel Passen

Navigating the Customer Experience

Play Episode Listen Later Jul 9, 2024 20:58


Joel Passen is a proven technology entrepreneur and leader with 20+ years of success creating value and driving measurable results at the intersections of sales, business development, product strategy, operations and customer experience.   Before co-founding Sturdy, a next-gen customer experience solution that helps companies improve customer satisfaction. Joel co-founded Newton Software, a B2B SaaS company headquartered in San Francisco that was acquired by Paycor (2016). Prior to that, Joel co-founded Gravity Technologies, Inc., a company that owned and operated businesses in the talent acquisition industry, including the first recruitment processing outsourcing company focused on technical talent.    Questions · So, could you share with our listeners, how it is that you got from where you were to where you are today? · Could you share maybe 1 to 3 top I would say areas that you believe as an organization, especially in your industry that you believe it's important for you to focus on or it drives your success in the space of ensuring that you're delivering a great or a fantastic customer experience. · If you want to maybe debunk maybe one of the biggest customer service myths that you see organizations still using as a driver in their business, when really and truly that shouldn't be anything that they should be guided by, what would you say that is based on your observation and experience? · Now, Joel, can you also share with us what's the one online tool, resource, website or application that you absolutely cannot live without in your business? · Can you also share with our listeners, maybe one or two books that you have read, it could be a book that you read recently, or even one that you've read a very long time ago, but it still has had a great and profound impact on you. And that can be either professional or even a personal impact. · Now, Joel, can you also share with our listeners, what's the one thing that's going on in your life right now that you are really excited about? Either something you're working on to develop yourself or your people. · Could you tell us a little bit about your organization Sturdy, exactly what it does. And if they wanted to connect with you, or Sturdy, how they could find you online? · Now, Joel, before we wrap our episodes up, we always like to give our guests a chance to share with our listeners, do you have a quote or a saying that during times of adversity or challenge, you'll tend to revert to this quote if for any reason you get derail or you get off track or you get knock down and the quote kind of helps to just motivate you to get back up and to push forward and to complete whatever would have knocked you down. Do you have one those?   Highlights Joel's Journey Me: So, we like to give our guests an opportunity to share in their own words, a little bit about your journey. So, could you share with our listeners, how it is that you got from where you were to where you are today?   Joel shared that he appreciates this question. So, he's always a commercial leader, meaning he's always had responsibility not only for net new growth, but also for responsibility for customers. And where he is today, in terms of having started a company that focuses on using AI to ultimately improve the customer experience by deeper listening to customers, he thinks for him, the journey was more of like, and this may sound a little off putting to some, but he was kind of like a frustrated engineer to a certain extent. He was a frustrated commercial person in many ways. So, he thinks the desire to sort of solve a problem that he had as an operator, a commercial operator for two decades, has led him to where he is today, which is trying to solve that problem at scale.   Areas Organizations Needs to Focus On to Ensure the Delivery of a Fantastic Customer Experience Me: So, could you share maybe 1 to 3 top I would say areas that you believe as an organization, especially in your industry that you believe it's important for you to focus on or it drives your success in the space of ensuring that you're delivering a great or a fantastic customer experience.    Joel shared that he likes everything in 3, so absolutely. But the first thing is, in many ways he thinks that removing the abstraction layers between our customers, and the level of leadership within organizations that can actually do something about the issues that we're hearing is really important. And what he means by that is, he thinks for the last 10 or 15 years, we've created not only technology driven abstraction layers like chatbots, and automated knowledge base stuff and it's all sort of addressing what he would think of as a symptom, but not necessarily is the root cause of perhaps customer frustrations in terms of customer experience.  So, an example would be like if we have to continue to build these really acute knowledge bases and what we're addressing is constant customer confusion, for example, wouldn't it make more sense to take the data that we're deriving about what's confusing the customer, take it back to our product teams or services organizations, and address the root cause. So, number one is, he thinks we've got too many abstraction layers between us and our customers.  Like number two is, the people that are responsible for customer experience, and ultimately, you're holding the bag and oftentimes, perhaps even the number for the board can only solve about 30% of the problems that we face around customer experience or challenges we face about customer experience within the organization. So, for example, going back to like a product or service, which ultimately, we deliver to an end user. If he's the customer experience person, and he's holding the bag for that number, whether it's some sort of arbitrary number like not arbitrary, but he guesses arbitrary, like NPS or CSAT, like something that we're being measured on, but even more importantly, a revenue number. He has to rely on his other teammates to ultimately contribute to the success of that experience. And that's a big challenge.  And he thinks the third one is kind of an expansion on that topic, which is, he thinks that we've lost sight as organizations at large that everybody's job is customer experience, providing the best customer experiences we can, whether it's the billing team that designs processes around auto renewing, and billing customers, to the product teams that are delivering product to the services teams that are delivering services, to the sales teams that are an account management teams that are being truthful and ethical and honest about what we actually can provide in terms of value.   Debunking Customer Service Myths Me: Now, in listening to you, I also wanted to know if you could share with us, you've been in the whole customer experience space, and you're noted as a top customer experience voice on LinkedIn. If you want to maybe debunk maybe one of the biggest customer service myths that you see organizations still using as a driver in their business, when really and truly that shouldn't be anything that they should be guided by, what would you say that is based on your observation and experience?    Joel stated that it's going make me unpopular. He's going to lose the popularity contest on this answer, and he's going to treat this less like a cable news show, so, he's not going to swear. But if you do follow him on LinkedIn, it's not just to be incendiary. But he really thinks health scores of customers are one of the most overused, underpowered abstraction layers that people use, he thinks they're garbage. And he's speaking from experience, not because he has something against them, or the companies that help us build them or anything like that. It's because too many times, he's been in executive leadership team meetings, and he takes the board slides in, and he's got all these green lights on maybe with their top revenue accounts, for example. And he goes back to team meetings two weeks later to go back through the account reviews, and so many times he's gone from green to gone with no leading indicators along the way that has ever given him any kind of indication that they were in trouble. So, that's his answer. And some people are in love with health scores, and they spent a lot of time building them, and quite frankly, he thinks they're full of telemetry data and opinions that just don't yield anything that he can use as an executive leader.   App, Website or Tool that Joel Absolutely Can't Live Without in His Business When asked about online resource that he cannot live without in his business, Joel shared GPT 4. As an early-stage entrepreneur, he feels like he spends a lot of time sifting through data, and crafting messaging. But to make a finer point, like, for example, he has a client, they're called Hawke Media, they're one of the largest independent ad agencies in the United States. And his outbound sales team, let's say, wants to find more companies that are similar to these folks. And there are hundreds of these types of agencies in the United States, not of the same magnitude of Hawke Media, but similar. And when you go into LinkedIn Sales Navigator, and all these other Apollo or ZoomInfo. With all the metadata tags that are in the systems, it's really hard to get a filtered list, for example.  So, last night, he was watching a hockey game, and he typed a simple query into GPT 4 and said, “Give me 300 companies that look and feel like this particular company that are based in the United States and then put their name in column A of a CSV and send it to me,” and it did it and it's super accurate. And it's that kind of stuff that it's just mind blowing. Some of the things you can do with it. So, GPT 4, probably for him, can't live without it.    Me: Amazing. It's fascinating when I heard the prompt that you just gave it now. It's very important that what you put in is what you get out almost like the quality of the question you asked determines the quality of the response that you get back. So, the fact that you are so specific in terms of what you were looking for, the teams, the column that you wanted it in, the fact that you wanted it to be for a specific geographic location, and not just to the world, and the fact that it gave you back that information, so specifically, the prompt that you put into it is so critical to the output that it gives you.   Joel stated that that's a good point, not to scratch his back at all. But he would tell people start with being ultra specific, if you haven't leaned into GPT.  First of all, it's worth the 20 bucks a month for GPT 4. It's really powerful and it can produce graphs and charts and statistical things you can enter, for example, he was working on some like a VC deck the other day for a friend of his, not for his company. But he was having a trouble with like, processing the information from his total addressable market into like a graphical form. Like he's not a designer, he's a spreadsheet person.  And so, he took his spreadsheet, uploaded it to his version of GPT 4 in Azure, so it's privatized without any kind of PII in it, he paid 20 bucks a month for it. And it spit out a graphical representation that with a little bit of tweaking and the coloring could be used in a deck, it's really great. So, there's just some really crazy things you can do with it. But start with being more specific and dial it back, rather than starting with broad requests and commands, because then you're going to get frustrated.   Books that Have Had the Biggest Impact on Joel When asked about books that have had a great impact, Joel stated that he's going to reveal that his dad is an Industrial Psychologist, PhD in Industrial Psychology. And he's written a bunch of papers on emotional intelligence, EQ and EI. And so, he's kind of a big fan of Dan Pink, or Daniel Pink. They are business books, so they might be a little cliche for some people. But in 2005, he wrote a book called A Whole New Mind: Why Right-Brainers Will Rule the Future, Right Brain. It's kind of like the why right brain thinkers are the future. And he thinks there's a lot of interesting things in there that are pretty applicable today. Despite his little claim of using GPT, which might be more of a left brained example, he's probably a stronger right brained person with communication skills, specifically, rather than somebody that's going to put their head in a spreadsheet all day. And he thinks there's some really telling things and some really interesting things that people could use, both earlier in their career and even later in their career with the Daniel Pink book.  And then he thought one of the other books that he read recently, when he thought about this was “The Billion Dollar Loser : The Epic Rise and Spectacular Fall of Adam Neumann and WeWork”, written by Reeves Wiedeman, it's the Adam Neumann story about work, he doesn't know, if Yanique have read that. It's actually kind of a fiery read, like, it's a cool book. But he thinks for him, the story was really interesting, if you're interested in business, but it also is a really stark reminder that humility is really important in business, being honest, and being humble is a marathon play, but it's also the right play, it'll get you to the finish line. So, he thought that was a good reminder for him and anyone else that is in business and thinking about their own sort of virtue.   What Joel is Really Excited About Now! When asked about something that he's really excited about, Joel shared that he's really excited about for me, he works at a company that builds applied AI products and essentially helping businesses listen to their customers more carefully, and actively for that matter.  And he thinks what he's most excited about is sort of the evolution of AI in an ethical way. So, one of the things if you read any of the business reports from the industry analysts and in the industry consulting firms, the Beans, you read any of the stuff from the Harvard Business Review, fairly objective sources, people are really scared about using AI in their businesses specifically, let alone personally, but he's thinking about it in a business light.  And he thinks what's starting to happen is that the hype is dying down a little bit and we're starting to see a more practical light in the application of AI where it's not so risky, and that it is truly not going to replace everybody's jobs, it's going to help us sift through larger and larger datasets together, so we can get to the root of very specific problems and solve for the root, going back to him at the top of the show when he talked about abstraction layer. So, for him, he's really excited about that, more and more companies are adopting AI and doing so in an ethical, responsible, privacy conscious way. And the wild, wild west is starting to fade into the background as quickly as it grew, and he thinks that's exciting to him, and he knows that sounds kind of nerdy, but that for him is progress.    About Joel's Organization Sturdy and How Can Listeners Find Joel Online Joel shared that Sturdy is very easy to find, he'll start there, but it's sturdy.ai. And he thinks for the relevancy of your listeners and our listeners today is, their our thesis was that ultimately, the customer experiences is really the responsibility of every team. Again, he mentioned operations services, product sales, marketing, leadership, everybody's responsible for customer satisfaction and customer experience. And ultimately, that leads to everybody being in charge of revenue and retention. And what they've realized over the years as being operators of businesses is that 75% of customers expect businesses to use some sort of technology to better their experiences, like the time is now yet we have all these silos in our business.  So, what their business does, what Sturdy does is plugs into all of your data hubs where you're collecting communications from your customers, those day-to-day interactions, those really rich meaningful, those emails, the tickets, the calls transcripts, even unstructured tests like text like surveys, they take all of that information in, privatize it. And they run language models on it to help find topics and trends to help improve the customer experience. And then make agents to deliver this data to the teams and the people and the systems that need it the most. So, that's basically what they do. And they find that the outcomes are their customers are improving their NPS and CSAT scores almost immediately, because they're more actively listening to their customers. But they're also seeing pretty strong lifts in customer retention and revenue as well. So, both expansion and retention revenue.  And he's easy to find, personally, Joel Passen as mentioned, he talks a lot about customer experience and satisfaction on LinkedIn. And you can find him on LinkedIn, he's pretty active there. And otherwise, he just joel@sturdy.ai.   Quote or Saying that During Times of Adversity Joel Uses When asked about a quote that he tends to revert to, Joel shared that he's going to disappoint people, because this isn't like super profound. But he tells his kids this too and his dad used to tell him this like, “Every day is like the first day of school, you can reinvent yourself.” The ability for humans to reinvent themselves is surreal, it's crazy. And if he's having a bad day, or he's down or he's stuck, he literally thinks about the next day and say, tomorrow's my day, I can be whoever I want, I can solve whatever I want. He shared that he sounds like that guy on Saturday Night Live, Stuart Smalley. But reinventing, every day is an opportunity to reinvent yourself. And he thinks that's what keeps him going and he truly believes that even if you had a bad day or a bad interaction with your family or your team or what have you, tomorrow's a new day.    Me: Great. Thank you so much for sharing, Joel.   Thank you so much for hopping on our podcast and sharing all of these great insights as it relates to customer experience and debunking the myths in customer experience that a lot of organizations are still driven by. Also talking about your organization Sturdy and just where you see artificial intelligence going and how the adaptation is being taken more in a practical role now as all the fanfare and excitement is dying down and people are really seeing how they can integrate it in a very practical and useful way not just to eliminate human interaction but to add to the value that they are using the technology to enhance the quality of the experience that the customer is having. So, thank you so much.   Please connect with us on Twitter @navigatingcx and also join our Private Facebook Community – Navigating the Customer Experience and listen to our FB Lives weekly with a new guest   Links •     A Whole New Mind: Why Right-Brainers Will Rule the Future by Daniel Pink   The ABC's of a Fantastic Customer Experience   Grab the Freebie on Our Website – TOP 10 Online Business Resources for Small Business Owners  Do you want to pivot your online customer experience and build loyalty - get a copy of “The ABC's of a Fantastic Customer Experience.” The ABC's of a Fantastic Customer Experience provides 26 easy to follow steps and techniques that helps your business to achieve success and build brand loyalty. This Guide to Limitless, Happy and Loyal Customers will help you to strengthen your service delivery, enhance your knowledge and appreciation of the customer experience and provide tips and practical strategies that you can start implementing immediately! This book will develop your customer service skills and sharpen your attention to detail when serving others. Master your customer experience and develop those knock your socks off techniques that will lead to lifetime customers. Your customers will only want to work with your business and it will be your brand differentiator. It will lead to recruiters to seek you out by providing practical examples on how to deliver a winning customer service experience!

EdCuration: Where We Reshape Learning
Work-based Learning Through GG4L's Certified School Passport Administrator Digital Internship

EdCuration: Where We Reshape Learning

Play Episode Listen Later Jun 24, 2024 25:51


We're joned today by Monica Schultz, Senior Manager for Navigator Services with Denver Public Schools, and Nazli José, a student at Montebello Career and Technical School. They enthusiastically share the many benefits of the School Passport Administrator digital internship offered through GG4L. The program provides both High School and college credit for students while giving them real-world tech skills and providing the full flexibility of a digital certification.   Resources: Inquire about the Certified School Passport Administrator (CSPA)digital internship program HERE. The CSPA program offers high school students a 60-hour internship with a mix of academic and hands-on experiences, with certification and college credit included.  Access program overview & syllabus HERE   GG4L's School Passport® is a data exchange platform that exchanges any data with any EdTech product, eliminates the need to share student PII and is easy to implement for schools and vendors. It empowers schools to centrally: Govern the exchange of student, staff, and parent PII data with SaaS vendors Approve and publish apps to a district or school branded on-demand AppStore Delegate to school staff and teachers the ability to activate apps within their managed groups on-demand School Passport® is used by over 25,000 schools in the US and abroad.      Micro Professional Learning ExPLorations - fun and free, 1-hour digital Professional Learning for teachers from all content areas and grades levels EdCuration's Blog: Learning in Action

Software Engineering Daily
Implementing KYC and User Verification with Alex Grinman

Software Engineering Daily

Play Episode Listen Later May 9, 2024 57:57


Almost every application or system involves some sort of user onboarding. Increasingly, companies must implement know-your-customer and know-your-business compliance, or KYC and KYB, as part of that process. In addition, they often handle personal identifiable information, or PII. Footprint is a developer platform that was co-founded by Alex Grinman for handling identity, security, fraud, and The post Implementing KYC and User Verification with Alex Grinman appeared first on Software Engineering Daily.