Podcasts about Computer science

Study of the foundations and applications of computation

  • 6,315 podcasts
  • 15,440 episodes
  • 45m average duration
  • 3 new episodes daily
  • Latest: Nov 5, 2025
Popularity of Computer science podcasts, 2017–2024 (chart)

Latest podcast episodes about Computer science

The Data Chief
When Navan Chose to Build, Not Buy: The AI Decision That Changed Everything

Nov 5, 2025 · 48:16


In this episode of The Data Chief, Ilan Twig, co-founder and CTO of Navan, shares why large language models will revolutionize our relationship with technology—just like the mouse did for the keyboard. From pushing AI to its limits to launching Navan Cognition, built for zero critical hallucination, Ilan reveals what it really takes to lead through change and build AI that people can trust. He also dives into a critical question every company must face: will you build AI from scratch, or build with AI partners? And if you're curious about the next frontier, Ilan paints a bold vision of agent-to-agent communication—where AI services talk to each other and your admin work disappears into the background. A must-listen for anyone building the future of AI-powered user experiences.

Key Moments:
- Agent-to-Agent Communication (A2A) (17:00): Ilan envisions a future where dedicated AI services communicate with each other in natural language, without the need for an API. This "mother of all bots" would manage administrative tasks by talking to other bots, simplifying complex tasks for the user.
- AI as a "Human" Experience (27:16): Ilan was surprised by the release of ChatGPT in 2022 because it was the first time a technology felt human. This led him to spend four months building and testing the technology's boundaries, including its ability to lie or be "jailbroken" with creative prompts.
- Identifying the Core Business (31:43): Ilan advises companies to decide whether they want to become an "AI company" or simply use AI. He explains that building a core AI platform requires a huge commitment.
- A Case Study in Building (35:32): The conversation turns to how Navan Cognition was built, since no existing solution at the time prevented critical hallucinations in AI models. The system includes a supervisor agent that catches and corrects undesirable responses, creating a "zero critical hallucination" experience for its users.

Key Quotes:
- "LLMs would do to the mouse what the mouse did to the keyboard when it comes to how humans interact with computers." - Ilan Twig
- "My role is to always apply the best technology in order to drive, to create the best product and best experience. That's my role. And it is not technology for the sake of technology. It is technology for the sake of creating value for the users." - Ilan Twig
- "We ended up using ThoughtSpot. We also applied the generative AI capabilities that you guys have built into your product. That's build versus buy. That's the benefit of buy." - Ilan Twig

Mentions:
- Navan Introduces World's Smartest T&E Personal Assistant
- Navan Cognition
- AI jailbreak method tricks LLMs into poisoning their own context
- Surely You're Joking, Mr. Feynman! (Adventures of a Curious Character) - Richard P. Feynman

Guest Bio:
Ilan Twig is the co-founder and Chief Technology Officer (CTO) of Navan, the leading modern travel and expense management platform globally. As CTO, Ilan drives Navan's product development and engineering efforts, leveraging cutting-edge technologies — including AI — to enhance user experience and operational efficiency. This is Ilan's second successful venture with Navan CEO Ariel Cohen, following their previous company, StreamOnce, a business multimedia integration platform acquired by Jive Software. With nearly two decades of engineering experience, Ilan has a proven track record of leading innovative research and development teams.
He previously held key roles at Hewlett-Packard and Rockmelt, where he managed large-scale engineering initiatives. Ilan holds a Bachelor of Science in Computer Science from the Academic College of Tel-Aviv, Yaffo. As a forward-thinking technologist, Ilan is passionate about integrating AI-driven solutions to redefine the future of corporate travel and expense management. Hear more from Cindi Howson here. Sponsored by ThoughtSpot.

Chicago's Morning Answer with Dan Proft & Amy Jacobson

0:30 - Trump: You Must Vote for Cuomo | 13:30 - JB's potty mouth | 35:59 - Mark Levin at RJC on Tucker, et al | 57:53 - Ben Shapiro on Tucker Carlson, Nick Fuentes...most important thing going on in America | 01:13:38 - In Depth History w/ Frank From Arlington Heights | 01:16:12 - Sports & Politics | 01:33:29 - Mark Glennon, founder of Wirepoints, breaks down the recently passed Clean and Reliable Grid Affordability Act and the absurdities of Illinois' green energy policy. For more from Mark: substack.com/@markglennon | 01:48:02 - Tom Williams, Associate Professor of Computer Science at the Colorado School of Mines and Human-Robot Interaction researcher, on how close we are to humanoid robots in the home and the opportunities if the current wave succeeds. For more on Tom's work with robotics visit mirrorlab.mines.edu | 02:06:45 - Why Dan Proft is Single

See omnystudio.com/listener for privacy information.

All Home Care Matters
Kian Saneii Founder & CEO of Independa, Inc.

Nov 3, 2025 · 31:45


All Home Care Matters and our host, Lance A. Slatton, were honored to welcome Kian Saneii as a guest on the show.

About Kian Saneii, Founder & CEO of Independa, Inc.:
Kian Saneii is a serial entrepreneur and computer scientist, best known as the founder and CEO of Independa, an award-winning health tech company delivering remote care solutions through computers, tablets, mobile phones, and even TVs. His work helps people stay healthier at home longer, more safely, and more comfortably, while improving efficiency and effectiveness across senior care, homecare, and healthcare systems. Previously, Saneii held leadership roles at Websense, IPNet Solutions, and IMA, driving innovation in wireless, supply chain, and CRM technologies. He holds undergraduate and graduate degrees in Computer Science from NYU and Rutgers, respectively, and lives in Los Angeles, CA. Outside of work, he enjoys spending time with family, playing soccer, tennis, and cycling, and dabbling with the piano and drums.

About Independa, Inc.:
Independa, Inc., founded in 2009, is a leader in remote engagement, education, and care solutions. Independa turns the everyday TV into a health and wellness hub, offering 24/7 access to telehealth services, games, wellness content, social engagement including video chat, in-home lab tests, and much more—improving access to health across the US. Independa customers and partners enjoy top-line growth, bottom-line efficiencies, and brand elevation, while improving the lives and maintaining the health of those they serve. Independa provides solutions "From the Hospital to the Home, and everything in between."

Healing with Confidence
Stephanie Seneff: Glyphosate, Deuterium & the Gut Microbiome #37

Nov 1, 2025 · 88:02


Glyphosate may be silently fueling sulfur deficiency, autism, gut issues, and chronic disease. Dr. Seneff explains why.

No Such Thing: K12 Education in the Digital Age
Greedy Algorithms, Public Goods: Rethinking AI Regulation and Education

Oct 31, 2025 · 58:52


Dr. Julia Stoyanovich is Institute Associate Professor of Computer Science and Engineering, Associate Professor of Data Science, Director of the Center for Responsible AI, and a member of the Visualization and Data Analytics Research Center at New York University. She is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE) and a Senior Member of the Association for Computing Machinery (ACM). Julia's goal is to make "Responsible AI" synonymous with "AI". She works toward this goal by engaging in academic research, education, and technology policy, and by speaking about the benefits and harms of AI to practitioners and members of the public. Julia's research interests include AI ethics and legal compliance, and data management and AI systems. Julia is engaged in technology policy and regulation in the US and internationally, having served on the New York City Automated Decision Systems Task Force, by mayoral appointment, among other roles. She received her M.S. and Ph.D. degrees in Computer Science from Columbia University, and a B.S. in Computer Science and in Mathematics & Statistics from the University of Massachusetts at Amherst.

Links:
https://engineering.nyu.edu/faculty/julia-stoyanovich
https://airesponsibly.net/nyaiexchange_2025/

Hosted on Acast. See acast.com/privacy for more information.

Start Up Podcast PH
Start Up #291: Fish-i - Visual Census Technology for Monitoring Marine Biodiversity

Oct 31, 2025 · 57:30


Justine Doctolero is Project Development Officer at Fish-i. Fish-i is a patented hardware-software fish visual census technology developed by the University of the Philippines' Department of Computer Science and Marine Science Institute. It uses a stereo camera setup mounted on a rig to capture underwater footage from sample sites. The data collected is then analyzed by the AI-powered Video Analyzer Software, which identifies fish species, counts individuals, and estimates fish size, biomass, and population density. This system offers a precise, automated method for monitoring marine biodiversity, which is vital for ecosystem management and conservation. This episode was recorded live during the 2025 Regional Science and Technology Week in Western Visayas, organized by DOST Region VI and held at Robinsons Roxas, Capiz.

In this episode | 01:17 Ano ang Fish-i? | 07:00 What problem is being solved? | 14:30 What solution is being provided? | 29:34 What are stories behind the startup? | 44:52 What is the vision? | 54:03 How can listeners find more information?

FISH-I | Website: https://fishi.ph | Facebook: https://facebook.com/fishiph
DOST REGION VI | Website: https://region6.dost.gov.ph | Facebook: https://www.facebook.com/DOSTRegionVI

CHECK OUT OUR PARTNERS:
Ask Lex PH Academy: https://asklexph.com (5% discount on e-learning courses! Code: ALPHAXSUP)
Argum AI: http://argum.ai
PIXEL by Eplayment: https://pixel.eplayment.co/auth/sign-up?r=PIXELXSUP1 (Sign up using Code: PIXELXSUP1)
School of Profits: https://schoolofprofits.academy
Founders Launchpad: https://founderslaunchpad.vc
Hier Business Solutions: https://hierpayroll.com
Agile Data Solutions (Hustle PH): https://agiledatasolutions.tech
Smile Checks: https://getsmilechecks.com
CloudCFO: https://cloudcfo.ph (Free financial assessment, process onboarding, and 6-month QuickBooks subscription! Mention: Start Up Podcast PH)
Cloverly: https://cloverly.tech
BuddyBetes: https://buddybetes.com
HKB Digital Services: https://contakt-ph.com (10% discount on RFID Business Cards! Code: CONTAKTXSUP)
Hyperstacks: https://hyperstacksinc.com
OneCFO: https://onecfoph.co (10% discount on CFO services! Code: ONECFOXSUP)
UNAWA: https://unawa.asia
SkoolTek: https://skooltek.co
Better Support: https://bettersupport.io (Referral fee for anyone who can bring in new BPO clients!)
Britana: https://britanaerp.com
Wunderbrand: https://wunderbrand.com
EastPoint Business Outsourcing Services: https://facebook.com/eastpointoutsourcing
DVCode Technologies Inc: https://dvcode.tech
NutriCoach: https://nutricoach.com
Uplift Code Camp: https://upliftcodecamp.com (5% discount on bootcamps and courses! Code: UPLIFTSTARTUPPH)

START UP PODCAST PH
YouTube | Spotify | Apple Podcasts | Facebook
Patreon: https://patreon.com/StartUpPodcastPH
PIXEL: https://pixel.eplayment.co/dl/startuppodcastph
Website: https://phstartup.online
Edited by the team at: https://tasharivera.com

Chat With Traders
310 · Dr. Efrat Levy - Fingerprinting the Big Players: Inside the Hidden Order Flow

Oct 30, 2025 · 96:12


Dr. Efrat Levy is a cybersecurity expert with a PhD in Computer Science and AI from Ben-Gurion University—one of Israel's top tech institutions. After years studying how to detect hackers through side-channel signals, she applied the same logic to trading. In this conversation, Dr. Levy explains how she uses machine learning to map non-repainting key levels that reveal the hidden order flow—the subtle timing and volume clues that expose the real intent of big players. She shows how correlated markets often move together at these key levels and how that insight helps her trade with precision and low drawdown. We discuss filtering noise, managing psychology, and bridging cybersecurity thinking with market analysis—exploring how data and discipline can uncover the quieter forces shaping market behavior.

Links + Resources:
● Dr. Levy's website: https://ctpacademy.com/
● Dr. Levy's email: efrat.levy@egindicators.com
● Dr. Levy on YouTube: https://www.youtube.com/@EGIndicators
● Dr. Levy's linktree: https://linktr.ee/efrat.levy

Sponsor of Chat With Traders Podcast:
● Trade The Pool: http://www.tradethepool.com

Time Stamps (exact times will vary depending on current ads):
● 00:00 – Intro: From AI & cyber to trading
● 01:20 – PhD background, domains where AI applies
● 04:30 – First market exposure: anomaly detection for derivatives
● 07:45 – Side-channel analysis: uncovering hidden intent
● 12:10 – "Hidden order flow" and fingerprinting big players
● 18:30 – Single-tick levels vs. zones; why zones mislead
● 22:00 – The correlation filter: 3 of 4 assets hitting together
● 26:00 – Entries & stops: lowest drawdown mindset
● 30:30 – Managing trades by other markets' levels
● 33:45 – Timeframe-agnostic approach; redefining "correlation"
● 51:00 – Instruments: indices, gold/silver/copper/platinum
● 56:00 – Psychological tripwires: streaks, missed A+ setups, ego risk
● 1:04:00 – How to reach Dr. Levy
● 1:05:00 – Catch up with Tessa

Trading Disclaimer: Trading in the financial markets involves a risk of loss. Podcast episodes and other content produced by Chat With Traders are for informational or educational purposes only and do not constitute trading or investment recommendations or advice.

Learn more about your ad choices. Visit megaphone.fm/adchoices

The Touch MBA Admissions Podcast
#232 IESE MBA Admissions Insights with Patrik Wallen

Oct 30, 2025 · 93:36


What does IESE look for in MBA applicants? How do they evaluate your academic aptitude, post-MBA career goals, leadership potential, and fit? In this episode, IESE MBA Admissions Director Patrik Wallen shares candid insights into what makes candidates stand out — and what makes IESE's two-year MBA unique.

Program Highlights - What Makes the IESE MBA Unique?
Introduction (0:00)
What Makes the IESE MBA Unique? (3:30)
IESE's Reputation with Employers (11:30)
IESE MBA's Two-Year Course Structure (14:27)
Living, Studying & Working in Barcelona, Spain (18:37)
The Importance of Speaking the Local Language: Spanish (25:10)

IESE MBA Admissions & Scholarships - How to Improve Your Chances?
What IESE MBA Looks for When Building a Class (28:00)
Patrik's Thoughts on Post-MBA Goals (37:00)
GMAT/GRE Scores & GPA (41:45)
Resumes (48:00)
Written Essays & Video Essays (52:45)
Letters of Recommendation (55:35)
Interviews & IESE MBA's Assessment Day (57:45)
How IESE MBA Admissions Views Scholarships & How Applicants Can Win Funding (1:03:35)

Career Opportunities at IESE - What to Know & How to Prepare
Has AI Affected Recruiting for Consulting? (1:10:55)
IESE MBA's Grading System (1:17:30)
What Applicants Need to Know about Landing Jobs in Spain & Europe (1:20:05)
Structured Recruiting & Unstructured Recruiting: What Applicants Can Expect from IESE's Career Services (1:27:45)

About Our Guest
Patrik Wallen is the MBA Admissions Director at IESE Business School. Previously, he was Director of IESE's Career Development Center. Before joining IESE, Patrik worked as a general manager in hospitality, founded a fish importing business, and worked as a software consultant. Patrik earned his Master of Science in Computer Science from KTH Royal Institute of Technology and his MBA from IESE in 2007.

Show Notes
IESE MBA
Get feedback on your profile from IESE MBA's Admissions Team before you apply
IESE MBA Scholarships and Post-Graduation Payment Aid (PPA) for MBAs

MBA Application Resources
Get free school selection help at Touch MBA
Get pre-assessed by top international MBA programs
Get the Admissions Edge Course: Proven Techniques for Admission to Top Business Schools
Our favorite MBA application tools (after advising 4,000 applicants)

Voice of Islam
Drive Time Show Podcast 30-10-2025: Children and Social Media and AI is a source of good or bad?

Oct 30, 2025 · 103:16


Join our hosts for Thursday's show, where we will be discussing 'Children and Social Media' and 'AI is a source of good or bad?'.

Children and Social Media
Denmark plans to ban social media for under-15s, with the Prime Minister warning that phones are "stealing childhood". Join us as we discuss the dangers of social media for children, from mental health impacts to rising cases of online abuse.

AI is a source of good or bad?
This episode explores the growing influence of Artificial Intelligence — from chatbots and disease prediction to its impact on jobs, privacy, and ethics. We look at how AI is solving real problems like climate change and healthcare, while also asking: is it ultimately harmful or helpful?

Guests:
Daisy Greenwell - Co-founder & director of Smartphone Free Childhood
Professor Su - Awarded a PhD in Computational Neuroscience from the University of Kent in 2009.
Professor Tim Norman - Professor of Computer Science and Head of the Agents, Interaction and Complexity Group at the University of Southampton.

Producers: Fezia Haq and Amtul Shakoor

The Guy Gordon Show
A Look at Kettering University's College of Engineering and Computer Science

Oct 29, 2025 · 9:31


October 29, 2025 ~ Dr. Scott Grasman, who leads Kettering University's College of Engineering and Computer Science, joins Chris, Lloyd, and Jamie to discuss Kettering's co-op experiences, students working on industry equipment, and more! Hosted by Simplecast, an AdsWizz company. See https://pcm.adswizz.com for information about our collection and use of personal data for advertising.

CERIAS Security Seminar Podcast
Marcus Botacin, Malware Detection under Concept Drift: Science and Engineering

Oct 29, 2025 · 52:13


The largest current challenge in ML-based malware detection is maintaining high detection rates while samples evolve, causing classifiers to drift. What is the best way to solve this problem? In this talk, Dr. Botacin presents two views on the problem: the scientific and the engineering. In the first part of the talk, Dr. Botacin discusses how to make ML-based drift detectors explainable. The talk discusses how one can split the classifier knowledge into two parts: (1) the knowledge about the frontier between Malware (M) and Goodware (G), and (2) the knowledge about the concept of the M and G classes, to understand whether the concept or the classification frontier changed. The second part of the talk discusses how the experimental conditions in which drift-handling approaches are developed often mismatch real deployment settings, causing the solutions to fail to achieve the desired results. Dr. Botacin points out ideal assumptions that do not hold in reality, such as (1) the amount of drifted data a system can handle, and (2) the immediate availability of oracle data for drift detection, when in practice a scenario of label delays is much more frequent. The talk demonstrates a solution for these problems via a 5K+ experiment, which illustrates (1) how to explain every drift point in a malware detection pipeline and (2) how an explainable drift detector also enables online retraining to achieve higher detection rates with fewer retraining points than traditional approaches.

About the speaker: Dr. Botacin has been a Computer Science Assistant Professor at Texas A&M University (TAMU, USA) since 2022. He holds a Ph.D. in Computer Science (UFPR, Brazil) and Master's degrees in Computer Science and Computer Engineering (UNICAMP, Brazil). A malware analyst since 2012, he specializes in AV engines and sandbox development. Dr. Botacin has published research papers at major academic conferences and journals, and has presented his work at major industry and hacking conferences such as HackInTheBox and Hou.Sec.Con.

Page: https://marcusbotacin.github.io/
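For listeners new to the drift problem the abstract describes, the sketch below is a generic, minimal illustration (Python with scikit-learn, on synthetic data) of one common way to flag drift: watch how far a deployed classifier's scores sit from the malware/goodware frontier and raise a flag when that margin collapses. The window size and threshold are arbitrary assumptions, and this is not Dr. Botacin's method.

```python
# Minimal illustration of score-based drift monitoring for a malware/goodware
# classifier. Generic sketch only: it flags drift when the mean decision margin
# on a recent window of samples falls well below the margin seen at training time.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "training era" feature vectors: goodware (label 0) vs malware (label 1).
X_train = np.vstack([rng.normal(0, 1, (500, 8)), rng.normal(2, 1, (500, 8))])
y_train = np.array([0] * 500 + [1] * 500)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Baseline confidence margin: distance of P(malware) from the 0.5 frontier.
baseline_margin = np.abs(clf.predict_proba(X_train)[:, 1] - 0.5).mean()

def drifted(window, threshold=0.5):
    """Flag drift when the average margin in a window drops below a fraction
    of the training-time margin (the threshold is an arbitrary assumption)."""
    margin = np.abs(clf.predict_proba(window)[:, 1] - 0.5).mean()
    return margin < threshold * baseline_margin

# "Deployment": incoming samples slowly evolve away from the training distribution.
fresh = rng.normal(2, 1, (200, 8))       # still looks like training-era malware
evolved = rng.normal(1, 1.5, (200, 8))   # evolved samples sitting near the frontier
print("fresh window drifted?  ", drifted(fresh))
print("evolved window drifted?", drifted(evolved))
```

In the talk's terms, a collapsing margin points at the classification frontier; a fuller detector would also track shifts in the class concepts themselves, for example via per-class feature statistics.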

On The Edge Podcasts
Introducing the BVU Computer Science Fellowship

Oct 28, 2025 · 3:43


A feature on the new BVU Computer Science Fellowship.

Energy Sector Heroes ~ Careers in Oil & Gas, Sustainability & Renewable Energy
Vered Shwartz on AI, Job Applications, and the Future of Work | Energy Sector Heroes

Oct 28, 2025 · 41:19


Many of you are already using AI tools in your studies, careers, or job searches — but how do you make sure you're using them wisely?

In this episode of Energy Sector Heroes, I speak with Vered Shwartz, Assistant Professor of Computer Science at the University of British Columbia and a specialist in natural language processing. We explore how AI is reshaping recruitment, interviews, and professional development — and what skills humans still need to bring to the table. Here are three actionable takeaways you can apply straight away:

FreightWaves LIVE: An Events Podcast
F3 | Fireside Chat: Building a Startup After Exiting Your Last

Oct 28, 2025 · 24:57


Prasad Gollapalli is chief executive officer, founder, and chairman of the board of Qued, a pioneering cloud-based AI workflow automation platform that is transforming load appointment scheduling. It seamlessly automates the entire process, securing optimal appointment times for all types of loads, including multi-stop loads. By eliminating the chaos of spreadsheet management, overflowing inboxes, and cumbersome portal logins, Qued empowers 3PLs, brokers, and carriers with a streamlined workflow while delivering significant workforce optimization benefits for shippers. Through Qued, brokers build trust by guaranteeing on-time deliveries, enhancing communication, and fostering transparency, thereby strengthening industry relationships and enhancing efficiency and reliability. Qued's innovative approach has earned recognition from industry leaders such as McLeod Software, a premier freight management and transportation software provider, which has certified Qued as an integration partner.

With nearly three decades of industry experience, Prasad is a seasoned entrepreneur renowned for his success in start-up ventures and leadership roles within trucking and shipping software companies. Prior to Qued, Prasad founded and led Trucker Tools, the industry's premier digital freight management platform with the most popular Trucker Tools driver app. As the CEO of Trucker Tools, he led the team in implementing innovative solutions including capacity management, predictive freight matching, automated booking, and real-time GPS-based visibility. Prasad sold Trucker Tools to ASG, a portfolio company of Alpine Investors, in June 2021. In December 2024, DAT (a Roper Industries company) acquired Trucker Tools, making ASG extremely happy with the outcome. Prior to his tenure at Trucker Tools, Prasad held various leadership positions focused on the design and implementation of advanced transportation technology solutions. As Director of Product Management at Getloaded.com, later acquired by DAT, he led business and product strategy efforts. Prasad's expertise extends to the shipping industry, where he served as a product manager for the Liberian International Ship & Corporate Registry (LISCR, LLC).

Prasad holds an MBA with a focus on strategy and entrepreneurship from the University of Maryland's Robert H. Smith School of Business, as well as a master's degree in Computer Science from the University of Alabama in Huntsville. He earned his bachelor's degree in computer science and engineering from the University of Madras, India. Prasad resides in Ashburn, Virginia, with his wife and daughter.

Learn more about your ad choices. Visit megaphone.fm/adchoices

The Sunday Show
Ryan Calo Wants to Change the Relationship Between Law and Technology

Oct 26, 2025 · 36:06


Ryan Calo is a professor at the University of Washington School of Law with a joint appointment at the Information School and an adjunct appointment at the Paul G. Allen School of Computer Science and Engineering. He is a founding co-director of the UW Tech Policy Lab and a co-founder of the UW Center for an Informed Public. In his new book, Law and Technology: A Methodical Approach, published by Oxford University Press, Calo argues that if the purpose of technology is to expand human capabilities and affordances in the name of innovation, the purpose of law is to establish the expectations, incentives, and boundaries that guide that expansion toward human flourishing. The book "calls for a proactive legal scholarship that inventories societal values and configures technology accordingly."

The Art & Science of Learning
123. Urgency of Learning How to Learn in the Age of AI (Trini Balart)

Oct 24, 2025 · 34:58


There are significant challenges in education that have been ignored for too long, and AI is forcing us to confront them urgently; otherwise, AI will think for us, rather than with us. The need to learn how to learn has become increasingly important, but it has rarely been fully integrated into the education system. My guest in this episode is a doctoral student researching how to teach critical thinking with the aid of AI. She is sounding the alarm on the importance of teaching this skill alongside AI; otherwise, she argues, AI will not only think for us, it will not allow us to think at all.

Trini Balart is a Ph.D. candidate in the Multidisciplinary Engineering Department at Texas A&M University, originally from Chile. She has a background in Industrial Engineering and Computer Science, with a major in Engineering, Design, and Innovation from the Pontifical Catholic University of Chile. Her research focuses on engineering education and the impact of generative artificial intelligence on how we teach, learn, and think. She is especially interested in how these tools are shaping the development of critical thinking in engineering students and prompting us to rethink the true purpose of education and what we understand by learning itself. Passionate about human-centred development, innovation, and progress, Trini is committed to building a future where AI empowers, rather than replaces, our uniquely human capabilities. She envisions a future where these tools may even help us reach deeper levels of knowledge and societal development.

LinkedIn: https://www.linkedin.com/in/trinidad-balart-386213223/

Stanford Psychology Podcast
160 - Jennifer Hu: From Human Minds to Artificial Minds

Oct 24, 2025 · 35:21


Su chats with Dr. Jennifer Hu. Jenn is an Assistant Professor of Cognitive Science and Computer Science at Johns Hopkins University, directing the Group for Language and Intelligence. Her research examines the computational principles that underlie human language, and how language and cognition might be achieved by artificial models. In her work to answer these questions, she combines cognitive science and machine learning, with the dual goals of understanding the human mind and safely advancing artificial intelligence. We are discussing Jenn's paper titled "Signatures of human-like processing in Transformer forward passes."

Jenn's paper: https://arxiv.org/abs/2504.14107
Jenn's lab website: https://www.glintlab.org/
Jenn's personal website: https://jennhu.github.io/
Su's Twitter: https://x.com/sudkrc
Podcast Twitter: @StanfordPsyPod
Podcast Substack: https://stanfordpsypod.substack.com/

Let us know what you thought of this episode, or of the podcast! :) stanfordpsychpodcast@gmail.com

Machine Learning Podcast - Jay Shah
Beyond Accuracy: Evaluating the learned representations of Generative AI models | Aida Nematzadeh

Oct 23, 2025 · 53:17


Dr. Aida Nematzadeh is a Senior Staff Research Scientist at Google DeepMind, where her research focuses on multimodal AI models. She works on developing evaluation methods and analyzing models' learning abilities to detect failure modes and guide improvements. Before joining DeepMind, she was a postdoctoral researcher at UC Berkeley and completed her PhD and Master's in Computer Science at the University of Toronto. During her graduate studies she studied how children learn semantic information through computational (cognitive) modeling.

Time stamps of the conversation:
00:00 Highlights
01:20 Introduction
02:08 Entry point in AI
03:04 Background in Cognitive Science & Computer Science
04:55 Research at Google DeepMind
05:47 Importance of language-vision in AI
10:36 Impact of architecture vs. data on performance
13:06 Transformer architecture
14:30 Evaluating AI models
19:02 Can LLMs understand numerical concepts
24:40 Theory-of-mind in AI
27:58 Do LLMs learn theory of mind?
29:25 LLMs as judge
35:56 Publish vs. perish culture in AI research
40:00 Working at Google DeepMind
42:50 Doing a Ph.D. vs. not in AI (at least in 2025)
48:20 Looking back on research career

More about Aida: http://www.aidanematzadeh.me/

About the Host:
Jay is a Machine Learning Engineer at PathAI working on improving AI for medical diagnosis and prognosis.
LinkedIn: shahjay22
Twitter: jaygshah22
Homepage: https://jaygshah.github.io/ for any queries.

Stay tuned for upcoming webinars!

**Disclaimer: The information in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any institution or its affiliates of such video content.**

Campus & Karriere - Deutschlandfunk
"Philosophy & Computer Science": Masterstudiengang an der Uni Bayreuth

Campus & Karriere - Deutschlandfunk

Play Episode Listen Later Oct 23, 2025 4:43


By Leon Ferencak, www.deutschlandfunk.de, Campus & Karriere

ManifoldOne
AIs Win Math Olympiad Gold: Prof. Lin Yang (UCLA) – #97

Oct 23, 2025 · 50:43


Lin Yang is a professor of computer science at UCLA. Recently, he and his collaborator built an AI pipeline using commercial models such as Gemini, ChatGPT, and Grok that performed at the gold medal level on International Mathematics Olympiad problems. Steve and Lin discuss this research, which relies on "verifier-refiner" LLM instances and large token budgets to reliably solve difficult problems. They discuss how these methods can be used to advance AI for scientific research, legal analysis, and complex document processing.

https://github.com/lyang36/IMO25/blob/main/IMO25.pdf
https://x.com/hsu_steve/status/1948189075707469942

Chapter markers:
(00:00) - AIs Win Math Olympiad Gold: Prof. Lin Yang (UCLA) – #97
(00:57) - Prof. Lin Yang, UCLA
(04:27) - Journey from Physics to Computer Science: 2 PhDs
(11:15) - Transition to AI from Theoretical CS
(13:16) - AI Pipeline Math Olympiad: Gold Medal!
(28:23) - Probability Amplification
(29:00) - Applications in Industry and Legal Analysis
(29:58) - Challenges in Model Reasoning and Verification
(33:23) - Future of AI in Scientific Research and AGI Speculations

Steve Hsu is Professor of Theoretical Physics and of Computational Mathematics, Science, and Engineering at Michigan State University. Previously, he was Senior Vice President for Research and Innovation at MSU and Director of the Institute of Theoretical Science at the University of Oregon. Hsu is a startup founder (SuperFocus.ai, SafeWeb, Genomic Prediction, Othram) and advisor to venture capital and other investment firms. He was educated at Caltech and Berkeley, was a Harvard Junior Fellow, and has held faculty positions at Yale, the University of Oregon, and MSU.

Please send any questions or suggestions to manifold1podcast@gmail.com or Steve on X @hsu_steve.
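For context on the "verifier-refiner" idea the episode centers on, here is a schematic sketch of a generate-verify-refine loop in Python. The three model calls are hypothetical placeholders, not real vendor APIs and not the exact pipeline from the linked paper; the iteration cap stands in for the large token budgets discussed in the episode.

```python
# Schematic verifier-refiner loop: a solver model drafts a proof, a verifier
# model critiques it, and the draft is refined until the verifier accepts it
# or the iteration budget runs out. The three functions are hypothetical
# placeholders for LLM calls, not actual vendor APIs.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Verdict:
    accepted: bool
    critique: str

def generate(problem: str) -> str:
    raise NotImplementedError("call a solver LLM here")

def verify(problem: str, proof: str) -> Verdict:
    raise NotImplementedError("call a verifier LLM here")

def refine(problem: str, proof: str, critique: str) -> str:
    raise NotImplementedError("call a refiner LLM with the critique here")

def solve(problem: str, max_rounds: int = 30) -> Optional[str]:
    proof = generate(problem)
    for _ in range(max_rounds):
        verdict = verify(problem, proof)
        if verdict.accepted:
            return proof          # verifier found no remaining gaps
        proof = refine(problem, proof, verdict.critique)
    return None                   # budget exhausted without an accepted proof
```

The design point worth noticing is that acceptance is decided by a separate verifier pass rather than by the solver's own confidence, which is what lets repeated sampling and refinement amplify the probability of a fully correct proof.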

Pedagogy A-Go-Go
Nothing is Perfect or Complete with Dr. Marshal Miller

Oct 22, 2025 · 68:06 · Transcription Available


Send us a text

Hello! This month, Gina and Kelly sit down with Assistant Professor of Computer Science and Information Science, Dr. Marshal Miller. In this episode, "Nothing is Perfect or Complete," Marshal shares why he believes good teaching always requires maintenance and why it's so important to help instill a love for the material when we know it's so easy for students to quit. Please be sure to subscribe to, rate, and review the podcast and follow us on Facebook and Instagram @pedagogyagogo. https://linktr.ee/pedagogyagogo

CERIAS Security Seminar Podcast
Rajiv Khanna, The Shape of Trust: Structure, Stability, and the Science of Unlearning

Oct 22, 2025 · 55:42


Trust in modern AI systems hinges on understanding how they learn—and, increasingly, how they can forget. This talk develops a geometric view of trustworthiness that unifies structure-aware optimization, stability analysis, and the emerging science of unlearning. I will begin by revisiting the role of sharpness and flatness in shaping both generalization and sample sensitivity, showing how the geometry of the loss landscape governs what models remember. Building on these insights, I will present recent results on Sharpness-Aware Machine Unlearning, a framework that characterizes when and how learning algorithms can provably erase the influence of specific data points while preserving accuracy on the rest. The discussion connects theoretical guarantees with empirical findings on the role of data distribution and loss geometry in machine unlearning—ultimately suggesting that the shape of the optimization landscape is the shape of trust itself.

About the speaker: Rajiv Khanna is an Assistant Professor in the Department of Computer Science. His research interests span various subfields of machine learning, including optimization, theory, and interpretability. Previously, he held positions as a Visiting Faculty Researcher at Google, a postdoctoral scholar at the Foundations of Data Analytics Institute at the University of California, Berkeley, and a Research Fellow in the Foundations of Data Science program at the Simons Institute, also at UC Berkeley. He graduated with his PhD from UT Austin.
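As background for the "sharpness" language in the abstract, the sketch below shows a generic SAM-style update on a toy least-squares problem: the weights are first perturbed toward the locally worst-case direction, and the gradient step is then taken from that perturbed point. This is a standard sharpness-aware minimization step written in NumPy, included only to illustrate the flatness notion the talk builds on; it is not the speaker's unlearning algorithm, and the learning rate and rho values are arbitrary assumptions.

```python
# Generic illustration of a sharpness-aware (SAM-style) update on a toy loss:
# ascend to the locally worst-case neighbor, then descend using the gradient
# computed there. Not the speaker's algorithm; shown only to illustrate the
# flatness/sharpness idea referenced in the abstract.
import numpy as np

def loss_and_grad(w, X, y):
    residual = X @ w - y
    return 0.5 * np.mean(residual**2), X.T @ residual / len(y)

def sam_step(w, X, y, lr=0.1, rho=0.05):
    _, g = loss_and_grad(w, X, y)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # step toward the worst-case neighbor
    _, g_sharp = loss_and_grad(w + eps, X, y)     # gradient at the perturbed point
    return w - lr * g_sharp

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=200)

w = np.zeros(5)
for _ in range(100):
    w = sam_step(w, X, y)
print("recovered weights close to truth:", np.allclose(w, w_true, atol=0.1))
```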

AI and the Future of Work
358: Inside Mastercard's AI Adoption Journey: CTO George Maddaloni on Building Trust, Detecting Fraud, and the Future of Payments

Oct 20, 2025 · 38:40


George Maddaloni is the EVP and CTO for Operations at Mastercard, where he leads the performance and modernization of technology platforms serving more than 35,000 employees worldwide. He has previously held senior IT leadership roles at AIG, UBS, AT&T, GM, and Merrill Lynch, and currently serves on the board of SustainableIT.org. George earned his BS in Mechanical Engineering and Computer Science from Johns Hopkins University and an MBA from Fordham University.

In this conversation, we discuss:
- How Mastercard's CTO thinks about the balance between innovation, trust, and regulation in one of the world's most complex financial networks.
- The strategy behind modernizing Mastercard's internal technology platforms to empower 35,000 global employees.
- Why a decade of AI experience changed how Mastercard approaches fraud, data, and customer confidence.
- The cultural shift that turned curiosity about AI into measurable progress across a global workforce.
- How a 50-year-old payments company keeps competing with startups by rethinking infrastructure from the ground up.
- George Maddaloni's vision of the next era of payments and how technology might make transactions faster, safer, and nearly invisible.

Resources:
- Subscribe to the AI & The Future of Work Newsletter
- Connect with George on LinkedIn
- AI fun fact article
- On How To Create an Energy-Based Work System that Empowers Employees

Other resources mentioned in this conversation:
- On decentralized AI in Banks and the Future of Finance with Paolo Ardoino, Tether CEO

Business Leadership Series
Episode 1438: One Million by One Million with Sramana Mitra

Oct 19, 2025 · 19:05


Derek Champagne talks with Sramana Mitra. Sramana is the founder and CEO of One Million by One Million (1Mby1M), the world's first and only global virtual incubator/accelerator. Its goal is to help a million entrepreneurs globally reach a million dollars in annual revenue, build a trillion dollars in global GDP, and create 10 million jobs.

Since its founding in 2010, 1Mby1M has become a powerful platform for the democratization of entrepreneurship acceleration. Sramana also developed 1Mby1M's Incubator-in-a-Box methodology for corporate incubation, which enterprises use to manage internal and external innovation endeavors. In 2015, LinkedIn named Sramana one of their Top 10 Influencers, alongside Bill Gates and Richard Branson.

Sramana has been an entrepreneur and a strategy consultant in Silicon Valley since 1994. Her fields of experience span from hardcore technology disciplines like Artificial Intelligence, Cloud Computing, and Semiconductors to sophisticated consumer marketing industries including e-commerce, fashion, and education. As an entrepreneur CEO, Sramana founded three companies: Dais (off-shore software services), Intarka (sales lead generation and qualification software using Artificial Intelligence algorithms; VC: NEA), and Uuma (online personalized store for selling clothes using Expert Systems software; VC: Redwood). Two of these were acquired, while the third received an acquisition offer from Ralph Lauren which the company did not accept.

As a strategy consultant, Sramana has consulted with over 80 companies, including public companies such as SAP, Cadence Design Systems, Webex, KLA-Tencor, Best Buy, MercadoLibre, and Tessera, among others. Her work has also included numerous startups and VCs. Sramana has a Master's degree in EECS from MIT and a Bachelor's degree in Computer Science and Economics from Smith College. From 2000 to 2004, Sramana chaired the MIT Club of Northern California's entrepreneurship program in Silicon Valley.

Learn more at www.1Mby1M.com

Business Leadership Series intro and outro music provided by Just Off Turner: https://music.apple.com/za/album/the-long-walk-back/268386576

Shaun Attwood's True Crime Podcast
Will AI Cause Human Extinction? - AI Safety Expert: Dr. Roman Yampolskiy | AU 492

Oct 19, 2025 · 42:06


Dr. Roman Yampolskiy explains:
⬛ How AI could release a deadly virus
⬛ Why these 5 jobs might be the only ones left
⬛ How superintelligence will dominate humans
⬛ Why 'superintelligence' could trigger a global collapse by 2027
⬛ How AI could be worse than nuclear weapons
⬛ Why we're almost certainly living in a simulation

Follow Dr. Roman:
X - https://x.com/romanyam
Google Scholar - https://bit.ly/4gaGE72

You can purchase Dr. Roman's book, 'Considerations on the AI Endgame: Ethics, Risks and Computational Frameworks', here: https://amzn.to/4g4Jpa5

AI could end humanity, and we're completely unprepared. Dr. Roman Yampolskiy reveals how AI will take 99% of jobs, why Sam Altman is ignoring safety, and how we're heading toward global collapse…or even World War III. Dr. Roman Yampolskiy is a leading voice in AI safety and a Professor of Computer Science and Engineering. He coined the term "AI safety" in 2010 and has published over 100 papers on the dangers of AI. He is also the author of books such as 'Considerations on the AI Endgame: Ethics, Risks and Computational Frameworks'.

#ai #technology #tech #news #usa #world #china

DailyCyber The Truth About Cyber Security with Brandon Krieger
Reverse Engineering, AI, and the Economics of Malware with Danny Quist | DailyCyber 277 ~ Watch Now ~

Oct 18, 2025 · 65:22


AI, Reverse Engineering & the Economics of Malware | Danny Quist | DailyCyber 277 ~ Watch Now ~

In this episode of DailyCyber, I sit down with Danny Quist, Chief Technology Officer at PolySwarm, to unpack the intersection of AI, reverse engineering, and cybersecurity economics. Danny brings nearly two decades of experience leading research and analysis teams at Redacted, Bechtel, MIT Lincoln Laboratory, and Los Alamos National Laboratory. As a Ph.D. in Computer Science and educator at New Mexico Tech, he bridges deep technical expertise with practical guidance for cybersecurity professionals.

Breaking Into Cybersecurity
Building the Cybersecurity Workforce: Eric Stride's Perspective

Oct 17, 2025 · 26:32


Breaking into Cybersecurity with Eric Stride: From Air Force to Private Sector

In this episode of Breaking into Cybersecurity, host Christoph interviews Eric Stride from Huntress Security. Eric shares his journey from being a Communications Computer Systems Officer in the Air Force to becoming the Chief Security Officer at Huntress. He discusses his extensive experience in cybersecurity, including roles at the NSA and in the private sector. Eric emphasizes the importance of continual learning, certifications, and deliberate career growth. He also touches on the implications of AI in cybersecurity and provides insights into developing and recruiting the next generation of cybersecurity talent.

00:00 Introduction to the Episode
00:49 Eric Stride's Journey into Cybersecurity
01:11 Military Experience and Transition to Cybersecurity
06:08 Continuous Learning and Staying Updated
09:41 Certifications and Career Growth
11:49 Leadership and Management Principles
15:23 AI in Cybersecurity
22:02 Recruiting and Developing Cybersecurity Talent (https://www.huntress.com/company/careers)
26:22 Conclusion and Final Thoughts

https://www.linkedin.com/in/ericstride/

Eric Stride is the Chief Security Officer at Huntress, where he oversees the company's 24/7 Global Security Operations Center, Detection Engineering, Adversary Tactics, IT Operations, and Internal Security. A 20+ year cybersecurity leader, Eric has held senior roles spanning the U.S. Air Force, NSA, and private sector. During his 12 years on active duty, Eric helped architect the Air Force's first cyber combat mission team, co-authored its first offensive cyber operations manual, and rose to Deputy Chief for Cyber Operations at NSA Georgia. He continues to serve as a Colonel in the Air Force Reserve, where he established its first cyber range squadron. In the private sector, Eric co-founded Atlas Cybersecurity, advised defense and enterprise clients as an independent consultant, led Deloitte's Advanced Cyber Training portfolio, and led the generation of $135M+ in new cyber business. He holds an M.S. in Information Technology Management, a B.S. in Computer Science, and multiple cybersecurity certifications (CISSP, GCIH, CEH).

Develop Your Cybersecurity Career Path: How to Break into Cybersecurity at Any Level: https://amzn.to/3443AUI
Hack the Cybersecurity Interview: A complete interview preparation guide for jumpstarting your cybersecurity career: https://www.amazon.com/dp/1801816638/

STEM Everyday
STEM Everyday #307 | CNC in STEM | feat. Chad Miller

Oct 17, 2025 · 28:36


Chad Miller teaches Technology, Engineering, & Design as well as Computer Science in North Carolina. But he wasn't always a teacher. His experience as a computer-aided draftsman helped him learn the business of architectural lighting, and in the process problem solving, iteration, and other skills that kids in CTE courses need. As an educator, he has always been passionate about STEM and providing real-world examples in his classroom. He also enjoys helping with clubs and mentoring groups, even partnering with NC State for many years to bring their students in to mentor his students. He also has his students competing in the Technology Student Association (TSA), and has even had students win top awards for their STEM & design skills.

Connect with Chad:
Instagram: @techtomentor
LinkedIn: linkedin.com/in/chad-miller-917791214/
TSA website: tsaweb.org
Code.org AI teaching course: code.org/en-US/artificial-intelligence

Chris Woods is the host of the STEM Everyday Podcast... Connect with him:
Website: dailystem.com
Twitter/X: @dailystem
Instagram: @dailystem
YouTube: @dailystem
Get Chris's book Daily STEM on Amazon

Support the show

@BEERISAC: CPS/ICS Security Podcast Playlist
Episode 338 Deep Dive: Eric Stride | Securing the Aviation Industry in the Modern Age

Oct 17, 2025 · 38:12


Podcast: KBKAST
Episode: Episode 338 Deep Dive: Eric Stride | Securing the Aviation Industry in the Modern Age
Pub date: 2025-10-15

In this episode, we sit down with Eric Stride, Chief Security Officer at Huntress, to discuss the escalating cybersecurity challenges facing the aviation industry. Eric highlights the alarming 600% year-over-year surge in cyberattacks targeting the sector, emphasising how attackers are exploiting the interconnected and fragile aviation supply chain—most notably seen in recent incidents like the ransomware strike on Collins Aerospace. He explores the growing risk posed by the convergence of IT and OT systems, the shift in regulation tying cybersecurity readiness directly to airworthiness, and the increasing adoption of robust frameworks to mitigate operational disruptions and data breaches. Eric also highlights the critical need for holistic supply chain security, the importance of regulatory enforcement, and a cultural shift in the industry toward prioritising safety and cyber resilience to restore public trust in air travel.

Eric Stride is the Chief Security Officer at Huntress, where he oversees the company's 24/7 Global Security Operations Center, Detection Engineering, Adversary Tactics, IT Operations, and Internal Security. A 20+ year cybersecurity leader, Eric has held senior roles spanning the U.S. Air Force, NSA, and private sector. During his 12 years on active duty, Eric helped architect the Air Force's first cyber combat mission team, co-authored its first offensive cyber operations manual, and rose to Deputy Chief for Cyber Operations at NSA Georgia. He continues to serve as a Colonel in the Air Force Reserve, where he established its first cyber range squadron. In the private sector, Eric co-founded Atlas Cybersecurity, advised defense and enterprise clients as an independent consultant, and led Deloitte's Advanced Cyber Training portfolio, generating $135M+ in new business. He holds an M.S. in Information Technology Management, a B.S. in Computer Science, and multiple cybersecurity certifications (CISSP, GCIH, CEH).

The podcast and artwork embedded on this page are from KBI.Media, which is the property of its owner and not affiliated with or endorsed by Listen Notes, Inc.

Eat Blog Talk | Megan Porta
758: How To Rebuild Your Blog After Hacks, Hijacks, and Heartbreak With Laura Ashley Johnson

Oct 16, 2025 · 48:34


Laura Ashley Johnson teaches us what it takes to push through massive setbacks and rebuild a thriving food blog with grit, faith, and community.

Dinner in 321 is a place for food and nutrition inspiration for cooks of all levels, busy families, and food lovers! It's Laura Ashley's mission to make cooking fun, share delicious and nutritious recipes (comfort food, crockpot meals, casseroles, southern cooking, and nostalgic recipes), and spread joy! She's a Kentucky girl turned Texan, married, with two kids (one a graduate of A&M and now a chemical engineer, the other graduating in May with a degree in Computer Science from Texas Tech), a dog named Butter, and a cat named Newman. She LOVES cooking, trying new restaurants, traveling to NYC, camping in their Airstream, everything Fall and Christmas, and simply spending time at home with her family.

Laura Ashley Johnson faced every blogger's nightmare twice: her website was hijacked for illegal activity and her Facebook audience of nearly half a million was stolen by hackers. Instead of quitting, she rebuilt, monetized, and grew stronger than ever. In this conversation, you'll hear what kept her going, how she rebuilt from scratch, and why connection and faith are the foundations of her success.

Key points discussed include:
- Vet everything: Protect yourself by carefully verifying every opportunity and brand approach.
- Ask for help: Trusted peers and mentors can open doors and provide the right contacts when you need them most.
- Lean on community: Strong relationships inside the blogging world can turn obstacles into growth opportunities.
- Protect your platforms: Use strong passwords, verification tools, and be wary of fake collaborations.
- Handle trolls wisely: Don't feed negativity; focus on loyal supporters instead.
- Turn off notifications: Boundaries around social media help you protect mental health and joy in your work.
- Hire with discernment: Build your team from trusted recommendations, not random portals.

Connect with Laura Ashley Johnson: Website | Instagram

Subscribe to Megan's Substack - Discover more about her first non-cookbook book!

CryptoNews Podcast
#483: Nassim Eddequiouaq, CEO of Bastion, on The 10/10 Crypto Crash, The Future of Stablecoins, and Enterprise Stablecoin Adoption

Oct 16, 2025 · 36:24


Nassim Eddequiouaq is co-founder and CEO of Bastion, a pioneer in regulated stablecoin infrastructure and an NYDFS-certified provider. Bastion is the stablecoin issuance platform for financial institutions and enterprises. Prior to founding Bastion, Nass was the Chief Information Security Officer at a16z Crypto, and held senior management roles across Security and Infrastructure at Facebook, Anchorage, Docker, and Apple. He received an M.S. in Computer Science from Ecole d'Ingénieurs en Informatique.

In this conversation, we discuss:
- What happened on the 10/10 crypto crash?
- Winners and losers after the crypto crash
- Bridging traditional finance and digital assets through enterprise-ready solutions
- The diverse use cases of stablecoins
- Why stablecoins (especially USD-pegged) are poised for mass enterprise adoption
- The growing interest in branded stablecoins
- Bastion's NYDFS trust charter
- GENIUS Act and STABLE Act
- Why regulatory clarity is critical
- Privacy for stablecoin users

Bastion
X: @BastionPlatform
Website: bastion.com
LinkedIn: Bastion

Nassim Eddequiouaq
X: @nassyweazy
LinkedIn: Nassim Eddequiouaq

This episode is brought to you by PrimeXBT. PrimeXBT offers a robust trading system for both beginners and professional traders that demand highly reliable market data and performance. Traders of all experience levels can easily design and customize layouts and widgets to best fit their trading style. PrimeXBT is always offering innovative products and professional trading conditions to all customers. PrimeXBT is running an exclusive promotion for listeners of the podcast: after making your first deposit, 50% of that first deposit will be credited to your account as a bonus that can be used as additional collateral to open positions. Code: CRYPTONEWS50. This promotion is available for a month after activation. Click the link below: PrimeXBT x CRYPTONEWS50

Interpreting India
Unbundling AI Openness: Beyond the Binary

Oct 16, 2025 · 48:02


The episode challenges the familiar "open versus closed" framing of AI systems. Sharma argues that openness is not inherently good or bad—it is an instrumental choice that should align with specific policy goals. She introduces a seven-part taxonomy of AI—compute, data, source code, model weights, system prompts, operational records and controls, and labor—to show how each component interacts differently with innovation, safety, and governance. Her central idea, differential openness, suggests that each component can exist along a spectrum rather than being entirely open or closed. For instance, a company might keep its training data private while making its system prompts partially accessible, allowing transparency without compromising competitive or national interests. Using the example of companion bots, Sharma highlights how tailored openness across components can enhance safety and oversight while protecting user privacy. She urges policymakers to adopt this nuanced approach, applying varying levels of openness based on context—whether in public services, healthcare, or defense. The episode concludes by emphasizing that understanding these layers is vital for shaping balanced AI governance that safeguards public interest while supporting innovation.

How can regulators determine optimal openness levels for different components of AI systems? Can greater transparency coexist with innovation and competitive advantage? What governance structures can ensure that openness strengthens democratic accountability without undermining safety or national security?

Episode Contributors
Chinmayi Sharma is an associate professor of law at Fordham Law School in New York. She is a nonresident fellow at the Stoss Center, the Center for Democracy and Technology, and the Atlantic Council. She serves on Microsoft's Responsible AI Committee and the program committees for the ACM Symposium on Computer Science and Law and the ACM Conference on Fairness, Accountability, and Transparency.
Shruti Mittal is a research analyst at Carnegie India. Her current research interests include artificial intelligence, semiconductors, compute, and data governance. She is also interested in studying the potential socio-economic value that open development and diffusion of technologies can create in the Global South.

Suggested Readings
Unbundling AI Openness by Parth Nobel, Alan Z. Rozenshtein, and Chinmayi Sharma.
Tragedy of the Digital Commons by Chinmayi Sharma.
India's AI Strategy: Balancing Risk and Opportunity by Amlan Mohanty and Shatakratu Sahu.

Every two weeks, Interpreting India brings you diverse voices from India and around the world to explore the critical questions shaping the nation's future. We delve into how technology, the economy, and foreign policy intertwine to influence India's relationship with the global stage. Hosted by Carnegie scholars, Interpreting India, a Carnegie India production, provides insightful perspectives and cutting-edge analysis, tackling the defining questions that chart India's course through the next decade. Stay tuned for thought-provoking discussions, expert insights, and a deeper understanding of India's place in the world. Don't forget to subscribe, share, and leave a review to join the conversation and be part of Interpreting India's journey.

Sri Sathya Sai Podcast (Official)
Tribute to 94th Birthday of Dr APJ Abdul Kalam | APJMJ Sheik Dawood on Dr Kalam | Oct 15 2025

Sri Sathya Sai Podcast (Official)

Play Episode Listen Later Oct 16, 2025 10:31


As a tribute to Dr APJ Abdul Kalam, the People's President and Missile Man of India, on his 94th Birth Anniversary, the Sri Sathya Sai Media Centre presents a special conversation with Mr APJMJ Sheik Dawood, the grandnephew of Dr Kalam.

Mr Sheik Dawood holds an M.Tech in Computer Science from SASTRA University and is the Co-Founder of the APJ Abdul Kalam International Foundation. A consultant in software development, artificial intelligence, and digital transformation, he brings both technical expertise and personal insight to this conversation.

In this interaction, he fondly recalls the sterling virtues and values of Dr Kalam that he personally witnessed, and shares moving anecdotes that highlight the deep and endearing relationship the former President of India shared with Bhagawan Sri Sathya Sai Baba.

Book Club with Michael Smerconish
Nate Soares: "If Anyone Builds It, Everyone Dies"

Book Club with Michael Smerconish

Play Episode Listen Later Oct 15, 2025 23:03


Michael talks with Nate Soares, co-author of "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All", about the alarming risks of advanced artificial intelligence. Soares, president of the Machine Intelligence Research Institute, explains why AIs are not designed but grown, how that leads to unpredictable behavior, and why even their creators can't control them. They discuss chilling examples—from rogue chatbots to lab “escape” attempts—and why simply “unplugging” an AI may not be possible. Soares argues that humanity must act now, treating AI risk as seriously as pandemics or nuclear war. Original air date 15 October 2025. The book was published on 16 September 2025. Hosted by Simplecast, an AdsWizz company. See https://pcm.adswizz.com for information about our collection and use of personal data for advertising.

Highlights from The Hard Shoulder
How can you tell between an AI and a real image?

Highlights from The Hard Shoulder

Play Episode Listen Later Oct 15, 2025 6:31


You may have seen a recent trend circulating online of people sending their parents AI-generated images which make it look like there is an unknown person in their house, to get their reaction. While this is a seemingly harmless joke, it's one of many instances of AI-generated deepfakes circulating online.

So, how can you tell an AI deepfake from a real image?

Joining Jonathan to discuss is Professor Barry O'Sullivan of the School of Computer Science & IT at University College Cork.

The Lawfare Podcast
Lawfare Daily: How Technologists Can Help Regulators with Erie Meyer and Laura Edelson

The Lawfare Podcast

Play Episode Listen Later Oct 14, 2025 44:26


Erie Meyer, Senior Fellow at Georgetown Law's Institute for Technology Law & Policy and Senior Fellow at the Vanderbilt Policy Accelerator, and Laura Edelson, Assistant Professor of Computer Science at Northeastern University, who are coauthors of the recent toolkit, “Working with Technologists: Recommendations for State Enforcers and Regulators,” join Lawfare's Justin Sherman to discuss how state enforcers and regulators can hire and better work with technologists, what technologists are and are not best-suited to help with, and what roles technologists can play across the different phases of enforcer and regulator casework. They also discuss how to best attract technologists to enforcement and regulation jobs; tips for technologists seeking to better communicate with those lawyers, compliance experts, and others in government with less technology background; and how this all fits into the future of AI, technology, and state and broader regulation.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Support this show: http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

Hacking Humans
Abstraction layer (noun) [Word Notes]

Hacking Humans

Play Episode Listen Later Oct 14, 2025 5:36


Please enjoy this encore of Word Notes. A process of hiding the complexity of a system by providing an interface that eases its manipulation. CyberWire Glossary link: https://thecyberwire.com/glossary/abstraction-layer Audio reference link: “What Is Abstraction in Computer Science,” by Codexpanse, YouTube, 29 October 2018.
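To make the glossary definition concrete, here is a minimal sketch in Python (a hypothetical illustration, not from the episode): a small `Storage` interface acts as the abstraction layer, hiding how data is actually kept from the code that uses it.

```python
# Minimal sketch of an abstraction layer (hypothetical example):
# callers use one small interface and never touch the underlying complexity.
from abc import ABC, abstractmethod


class Storage(ABC):
    """The abstraction layer: a simple interface that hides how data is stored."""

    @abstractmethod
    def put(self, key: str, value: str) -> None: ...

    @abstractmethod
    def get(self, key: str) -> str | None: ...


class InMemoryStorage(Storage):
    """One concrete implementation hidden behind the interface."""

    def __init__(self) -> None:
        self._data: dict[str, str] = {}

    def put(self, key: str, value: str) -> None:
        self._data[key] = value

    def get(self, key: str) -> str | None:
        return self._data.get(key)


def remember(store: Storage) -> None:
    # Caller code depends only on the abstraction, not on any storage detail.
    store.put("term", "abstraction layer")
    print(store.get("term"))


if __name__ == "__main__":
    # Swapping in a disk- or network-backed Storage would require no caller changes.
    remember(InMemoryStorage())
```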

Word Notes
Abstraction layer (noun)

Word Notes

Play Episode Listen Later Oct 14, 2025 5:36


Please enjoy this encore of Word Notes. A process of hiding the complexity of a system by providing an interface that eases its manipulation. CyberWire Glossary link: https://thecyberwire.com/glossary/abstraction-layer Audio reference link: “What Is Abstraction in Computer Science,” by Codexpanse, YouTube, 29 October 2018. Learn more about your ad choices. Visit megaphone.fm/adchoices

Deep Papers
Georgia Tech's Santosh Vempala Explains Why Language Models Hallucinate, His Research With OpenAI

Deep Papers

Play Episode Listen Later Oct 14, 2025 31:24


Santosh Vempala, Frederick Storey II Chair of Computing and Distinguished Professor in the School of Computer Science at Georgia Tech, explains his paper, co-authored with OpenAI's Adam Tauman Kalai, Ofir Nachum, and Edwin Zhang.

Read the paper: Sign up for future AI research paper readings and author office hours. See LLM hallucination examples here for context.

Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.

My Favorite Mistake
Can AI Be Humble? Maya Ackerman on What Machines Teach Us About Creativity

My Favorite Mistake

Play Episode Listen Later Oct 13, 2025 43:16


My guest for Episode #327 of the My Favorite Mistake podcast is Dr. Maya Ackerman, AI pioneer, researcher, and CEO of WaveAI. She's also an associate professor of Computer Science and Engineering at Santa Clara University and the author of the new book Creative Machines: AI, Art, and Us.

EPISODE PAGE WITH VIDEO, TRANSCRIPT, AND MORE

In this episode, Maya shares her favorite mistake — one that changed how she builds technology and thinks about creativity. Early in her journey as an entrepreneur, her team at WaveAI created an ambitious product called “ALYSIA,” designed to assist with every step of music creation. But in trying to help too much, they accidentally took freedom away from users. That experience inspired her concept of “humble AI” — systems that step back, listen, and support human creativity rather than take over.

Maya describes how that lesson led to their breakthrough success with Lyric Studio, an AI songwriting tool that empowers millions of artists by helping them create while staying true to their own voices. She also shares insights from her research on human-centered design, the philosophy behind generative models, and why we should build AI that's more collaborative than competitive. Together, we discuss why mistakes — whether made by people or machines — can spark innovation, and how being more forgiving toward imperfection can help both leaders and creators thrive.

“If AI is meant to be human-centric, it must be humble. Its job is to elevate people, not replace them.” — Maya Ackerman

“Who decided machines have to be perfect? It's a ridiculous expectation — and a limiting one.” — Maya Ackerman

Questions and Topics:
- What was your favorite mistake — and what did you learn from it?
- What went wrong with your second product, “ALYSIA,” and how did that shape your later success?
- How did you discover the concept of “humble creative machines”?
- What makes Lyric Studio different from general AI tools like ChatGPT?
- How do you design AI that supports — rather than replaces — human creativity?
- What's the real difference between AI and a traditional algorithm?
- How do you think about ethical concerns, like AI imitating living artists?
- What do you mean by human-centered AI — and how can we build it?
- Why do AI systems “hallucinate,” and can those mistakes actually be useful?
- How can embracing mistakes — human or machine — lead to more creativity and innovation?
- What are your thoughts on AI's future — should we be hopeful or concerned?

WGTD's The Morning Show with Greg Berg
10/13/25 Carthage and NASA

WGTD's The Morning Show with Greg Berg

Play Episode Listen Later Oct 13, 2025 47:48


I speak with Dr. Kevin Crosby, Professor of Physics, Astronomy and Computer Science and director of the Space Sciences program at Carthage College, about the NASA-underwritten research in which he and a number of Carthage students are engaged. Joining him are four Carthage students: seniors Teagan Steineke and Semaje Farmer, junior Juliana Alvarez, and sophomore Owen Bonnett. Professor Crosby is also Director of the NASA Wisconsin Space Grant Consortium and is working as a senior scientist at NASA. He is also the Donald Hedberg Distinguished Professor of Entrepreneurship at Carthage.

Artificiality
John Pasmore: Inclusive AI

Artificiality

Play Episode Listen Later Oct 11, 2025 34:31


In this conversation, we explore the challenges of building more inclusive AI systems with John Pasmore, founder and CEO of Latimer AI and advisor to the Artificiality Institute. Latimer represents a fundamentally different approach to large language models—one built from the ground up to address the systematic gaps in how AI systems represent Black and Brown cultures, histories, and perspectives that have been largely absent from mainstream training data.

John brings a practical founder's perspective to questions that often remain abstract in AI discourse. With over 400 educational institutions now using Latimer, he's witnessing firsthand how students, faculty, and administrators are navigating the integration of AI into learning—from universities licensing 40+ different LLMs to schools still grappling with whether AI represents a cheating risk or a pedagogical opportunity.

Key themes we explore:
- The Data Gap: Why mainstream LLMs reflect a narrow "Western culture bias" and what's missing when AI claims to "know everything"—from 15 million unscanned pages in Howard University's library to oral traditions across thousands of indigenous tribes.
- Critical Thinking vs. Convenience: How universities are struggling to preserve deep learning and intellectual rigor when AI makes it trivially easy to get instant answers, and whether requiring students to bring their prompts to class represents a viable path forward.
- The GPS Analogy: John's insight that AI's effect on cognitive skills mirrors what happened with navigation—we've gained efficiency but lost the embodied knowledge that comes from building mental maps through direct experience.
- Multiple Models, Multiple Perspectives: Why the future likely involves domain-specific and culturally-situated LLMs rather than a single "universal" system, and how this parallels the reality that different cultures tell different stories about the same events.
- Excavating Hidden Knowledge: Latimer's ambitious project to digitize and make accessible vast archives of cultural material—from church records to small museum collections—that never made it onto the internet and therefore don't exist in mainstream AI systems.
- An eBay for Data: John's vision for creating a marketplace where content owners can license their data to AI companies, establishing both proper compensation and a mechanism for filling the systematic gaps in training corpora.

The conversation shows that AI bias goes beyond removing offensive outputs. We need to rethink which data sources we treat as authoritative and whose perspectives shape these influential systems. When AI presents itself as an oracle that has "read everything on the internet," it claims omniscience while excluding vast amounts of human knowledge and experience.

The discussion raises questions about expertise and process in an era of instant answers—in debugging code, navigating cities, or writing essays. John notes that we may be "working against evolution" by preserving slower, more effortful learning when our brains naturally seek efficiency. But what do we lose when we eliminate the struggle that builds deeper understanding?

About John Pasmore: John Pasmore is founder and CEO of Latimer AI, a large language model built to provide accurate historical information and bias-free interaction for Black and Brown audiences and anyone who values precision in their data.
Previously a partner at TRS Capital and Movita Organics, John serves on the Board of Directors of Outward Bound USA and holds degrees in Business Administration from SUNY and Computer Science from Columbia University. He is also an advisor to the Artificiality Institute.

Podcast UFO
702. Joshua Bertrand

Podcast UFO

Play Episode Listen Later Oct 10, 2025 81:20 Transcription Available


Is the famous “Tic Tac” a home-grown technology? In this deep-dive, Martin Willis sits down with mathematician and technologist Joshua Bertrand to explore the cutting edge—and century-long history—of America's lighter-than-air programs, vacuum-based aerogels, and the black-budget pathways that may intersect with the Nimitz Incident. Bertrand (B.Math, University of Waterloo; Computer Science honors; former EA/industry engineer) has spent nearly a decade cross-referencing open sources, defense programs, and material-science breakthroughs.

SHOW NOTES

Support the Show & Stay Connected!

HLTH Matters
AI @ HLTH : Untangling the Provider Network Knot

HLTH Matters

Play Episode Listen Later Oct 9, 2025 16:29


In this episode, host Sandy Vance sits with Derek Lo, CEO and Founder at Medallion, to explore how technology is reshaping one of the most overlooked but critical parts of healthcare: managing provider networks. Derek shares the story behind Medallion, why he set out to tackle the complexities of credentialing, licensing, and provider management, and how his team is using automation and AI to make life easier for both providers and organizations.

Medallion builds software that simplifies the complexity of running a provider network. From credentialing to licensing, the platform helps organizations get providers seeing patients faster while offering greater efficiency, visibility, and control.

In this episode, they talk about:
- How Medallion helps accurately manage provider networks with AI automation
- The complicated and complex process of credentialing—and why it sparked Derek's journey
- Where Medallion comes into play in recruiting and hiring providers
- How this benefits organizations beyond the administrative process
- The future of AI management software in healthcare
- How Medallion is helping drive transformation across the industry

A Little About Derek:
Derek Lo is the CEO and Founder of Medallion, the leading platform for provider network management. Since launching in 2020, he's grown the company to over 300 customers, built a 150+ person team, and raised $140 million from top investors like Sequoia Capital and Optum Ventures. A second-time founder, Derek previously built and sold Py to Hired.com in 2019. He's a Yale graduate in Computer Science and Statistics, a two-time Forbes 30 Under 30 honoree, and is driving Medallion's mission to simplify healthcare operations with AI-powered automation.

Michigan Minds
Semiconductor manufacturing on the rise in the United States

Michigan Minds

Play Episode Listen Later Oct 9, 2025 19:11


Valeria Bertacco, the Mary Lou Dorf Collegiate Professor of Computer Science and Engineering, joins the Michigan Minds podcast to talk about semiconductors – how ubiquitous they are in our lives, why manufacturing moved overseas, and what it will take to produce them in the U.S.

Bertacco's research explores hardware solutions for next generation computing and security. She is also the vice provost for engaged learning at the University of Michigan, supporting international partnerships and initiatives. Hosted on Acast. See acast.com/privacy for more information.

ITSPmagazine | Technology. Cybersecurity. Society
AI Creativity Expert Reveals Why Machines Need More Freedom - Creative Machines: AI, Art & Us Book Interview | A Conversation with Author Maya Ackerman | Redefining Society And Technology Podcast With Marco Ciappelli

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Oct 8, 2025 43:24


⸻
Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com
______

Title: AI Creativity Expert Reveals Why Machines Need More Freedom - Creative Machines: AI, Art & Us Book Interview | A Conversation with Author Maya Ackerman | Redefining Society And Technology Podcast With Marco Ciappelli
______

Guest: Maya Ackerman, PhD.
Generative AI Pioneer | Author | Keynote Speaker
On LinkedIn: https://www.linkedin.com/in/mackerma/
Website: http://www.maya-ackerman.com

Dr. Maya Ackerman is a pioneer in the generative AI industry, associate professor of Computer Science and Engineering at Santa Clara University, and co-founder/CEO of Wave AI, one of the earliest generative AI startups. Ackerman has been researching generative AI models for text, music, and art since 2014 and is an early advocate for human-centered generative AI, bringing awareness to the power of AI to profoundly elevate human creativity. Under her leadership as co-founder and CEO, WaveAI has emerged as a leader in musical AI, benefiting millions of artists and creators with its products LyricStudio and MelodyStudio.

Dr. Ackerman's expertise and innovative vision have earned her numerous accolades, including being named a "Woman of Influence" by the Silicon Valley Business Journal. She is a regular feature in prestigious media outlets and has spoken on notable stages around the world, such as the United Nations, IBM Research, and Stanford University. Her insights into the convergence of AI and creativity are shaping the future of both technology and music. A University of Waterloo PhD and Caltech postdoc, her unique blend of scholarly rigor and entrepreneurial acumen makes her a sought-after voice in discussions about the practical and ethical implications of AI in our rapidly evolving digital world.

Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society

Just Get Started Podcast
#468 Desiree-Jessica Pely, PhD - Founder of Alfa by Loyee.ai

Just Get Started Podcast

Play Episode Listen Later Oct 7, 2025 54:58


Episode 468 features Desiree-Jessica Pely, PhD, Co-Founder and CEO of Alfa by Loyee.ai (Top 50 GTM Startup).

Find Jessica Online:
Website: https://www.loyee.ai/
LinkedIn: https://www.linkedin.com/in/pely/

About Jessica:
Desiree-Jessica Pely, PhD, is pioneering a finance-led approach to B2B sales, go-to-market strategy, and revenue growth. As Co-Founder and CEO of Alfa by Loyee.ai (Top 50 GTM Startup), she leads the development of an AI-driven platform that transforms complex market signals into precise, actionable insights for sales, marketing, and finance teams.

Jessica brings together a PhD in Financial Economics, a background in Computer Science, and hands-on entrepreneurial execution. She has collaborated with Nobel Laureate Richard Thaler, exploring the intersection of behavioral economics and decision-making, and began her career in quantitative finance and predictive modeling.

Her passion for redefining GTM strategy grew from a common challenge: sales teams drowning in data yet struggling to identify the accounts that matter most. Loyee.ai was created to solve this, deploying AI research agents that identify high-value accounts, map markets, and adapt continuously to real-time changes, enabling companies to penetrate markets with precision and scale revenue smarter.

Recognized as the "Queen of Leads," Jessica has been named among the Top 100 People in SaaS and awarded Salesperson of the Year. Beyond her company, she is an active mentor, investor, and coach, championing the next generation of innovators in SaaS, FinTech, and AI.

The Cognitive Crucible
#234 Robert Thibadeau on a Million Identities and Computational Cognitive Neuroscience

The Cognitive Crucible

Play Episode Listen Later Oct 7, 2025 71:15


The Cognitive Crucible is a forum that presents different perspectives and emerging thought leadership related to the information environment. The opinions expressed by guests are their own, and do not necessarily reflect the views of or endorsement by the Information Professionals Association. During this episode, Bob Thibadeau returns to the Cognitive Crucible and discusses the fundamentals of computational cognitive neuroscience and privacy. He asserts that everyone should manage a million identities on an embodied chip, share these identities selectively, and change them frequently.

Recording Date: 29 Sep 2025

Resources:
- Cognitive Crucible Podcast Episodes Mentioned: #5 Robert Thibadeau on Lies
- The Internet Court of Truth
- Robotaxies: Blackmail Comes of Age and the Need for Identity MegaChips (YouTube)
- Fiat Lies are Genocide on the Human Race (YouTube)
- Fiat Lies are Genocide on the Human Race (Medium)
- Flashy Crypto Chipped: A Storage OEM View (YouTube)
- Robert Thibadeau's Medium Site
- Frequency-hopping spread spectrum (Wikipedia)
- Heider and Simmel (1944) animation (YouTube)
- Link to full show notes and resources

Guest Bio: Professor Bob Thibadeau has been affiliated with the Carnegie Mellon University School of Computer Science since 1979. His expertise is in Cognitive Science, AI, and Machine Learning. Prof Thibadeau is one of the founding Directors of the Robotics Institute, and he is the author of the book “How to Get Your Lies Back: The Internet Court of Lies.” Watch his recent Liecourt.com or truthcourt.net trials at https://www.truthcourt.net/sponsor/thibadeau. “Fiat Lies are Genocide on the Human Race” is a brief summary of the book available on Medium.com; it is tried for its truthfulness off his TruthCourt.net sponsor page, or directly at https://www.youtube.com/watch?v=qp-Q_Vqm7Eo. His "million identities to protect your privacy," also on Medium.com, is tried for its truthfulness at https://www.youtube.com/watch?v=tyxTdFlmZY8.

About: The Information Professionals Association (IPA) is a non-profit organization dedicated to exploring the role of information activities, such as influence and cognitive security, within the national security sector and helping to bridge the divide between operations and research. Its goal is to increase interdisciplinary collaboration between scholars and practitioners and policymakers with an interest in this domain. For more information, please contact us at communications@information-professionals.org. Or, connect directly with The Cognitive Crucible podcast host, John Bicknell, on LinkedIn.

Disclosure: As an Amazon Associate, 1) IPA earns from qualifying purchases, 2) IPA gets commissions for purchases made through links in this post.

The Heart of Healthcare with Halle Tecco
Can We Make Cancer Nonlethal? | Reed Jobs & Matt Bettonville of Yosemite

The Heart of Healthcare with Halle Tecco

Play Episode Listen Later Sep 29, 2025 30:40


Cancer drugs cost more than ever, yet survival benefits are often modest—and in some cases, patients can't even access the care that already exists. After losing his father, Steve Jobs, to pancreatic cancer, Reed Jobs committed himself to making this the last generation that loses parents to the disease.

Reed now leads Yosemite, a venture fund spun out of Emerson Collective in 2023, alongside investor Matt Bettonville. Yosemite pairs life sciences and digital health investments with a grantmaking model to accelerate cancer research and ensure breakthroughs actually reach patients.

We cover:

Deep State Radio
Siliconsciousness: The AI and Energy Scenario Exercise: Part 2

Deep State Radio

Play Episode Listen Later Sep 19, 2025 46:47


In four years' time, how might a theoretical Dem administration grapple with the expanding energy consumption and demand for AI? This is the question the second half of TRG Media and MIT Technology Review's AI and Energy Scenario Exercises seeks to explore. Leading experts come together to role play as key actors in government, private industry, and more to simulate how public policy might take shape in the coming years. This episode contains the second and final phase of the game and a brief wrap-up from the editor in chief of MIT Technology Review, Mat Honan, and game designer Ed McGrady.

The Players:

US Federal
- POTUS - Merici Vinton, Former Senior Advisor to IRS Commissioner Danny Werfel
- Security (DoD, DHS, DOS) - Mark Dalton, Senior director of technology and innovation at R Street
- Energy (DOE, EPA, Interior) - Wayne Brough, Former President of the Innovation Defense Foundation and senior fellow on R Street's Technology and Innovation team
- Red State Leadership - Soren Dayton, Director of Governance at the Niskanen Center

Power generation industry
- Fossil - David Sandalow, Inaugural Fellow at the Center on Global Energy Policy (CGEP) at Columbia University
- Solar - Enock Ebban, host of "Sustainability Transformations Podcast"
- Nuclear - Ashley Finan, Jay and Jill Bernstein Global Fellow at the Center on Global Energy Policy at Columbia University

Investors in AI
- Domestic - Josiah Neeley, R Street Institute's Energy team advisor
- International - Josh Felser, Co-Founder and Managing Partner at Climatic
- International (Middle East, EU, Russia, China, etc.) - Shaolei Ren, Associate Professor of Electrical and Computer Engineering at the University of California
- International (Middle East, EU, Russia, China, etc.) - Rachel Ziemba, Adjunct Senior Fellow at the Center for a New American Security (CNAS)

Blue State Leadership
- POTUS - Adam Zurofsky, former Director of State Policy and Agency Management for the State of New York
- Ari Peskoe - Director of the Electricity Law Initiative at the Harvard Law School Environmental and Energy Law Program
- Beth Garza - senior fellow with R Street's Energy & Environmental Policy Team

Public interest
- Environmental - Brent Eubanks, founder of Eubanks Engineering Research
- Domestic political - Meiyi Li, Ph.D. candidate at The University of Texas at Austin
- Media - Jen Sidorova, policy analyst at Reason Foundation

AI and other Digital Industries
- AI - Valerie Taylor, division director of Mathematics and Computer Science at Argonne National Laboratory
- Blockchain - Erica Schoder, Executive Director and co-founder of the R Street Institute
- Elliot David, Head of Climate Strategy at Sustainable Bitcoin Protocol
- Other digital systems (chips, data center operations, online gaming, streaming, etc.) - Ken Briggs, Faculty Assistant at Harvard University

This material is distributed by TRG Advisory Services, LLC on behalf of the Embassy of the United Arab Emirates in the U.S. Additional information is available at the Department of Justice, Washington, DC. Learn more about your ad choices. Visit megaphone.fm/adchoices