Podcasts about research lab

An establishment endowed for doing research.

  • 180 PODCASTS
  • 239 EPISODES
  • 38m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • May 26, 2025 LATEST

POPULARITY

Popularity trend chart, 2017–2024.


Best podcasts about research lab

Latest podcast episodes about research lab

PurePerformance
The Research Behind the AI and Observability Innovation with Otmar Ertl and Martin Flechl

PurePerformance

Play Episode Listen Later May 26, 2025 50:59


Scientific research is the foundation of many innovative solutions in any field. Did you know that Dynatrace runs its own Research Lab on the campus of the Johannes Kepler University (JKU) in Linz, Austria - just 2 kilometers away from our global engineering headquarters? What started in 2020 has grown to 20 full-time researchers and many more students who do research on topics such as GenAI, Agentic AI, Log Analytics, Processing of Large Data Sets, Sampling Strategies, Cloud Native Security, and Memory and Storage Optimizations. Tune in and hear from Otmar and Martin how they are researching the N+2 generation of Observability and AI, how they are contributing to open source projects such as OpenTelemetry, and what their predictions are for when AI finally takes control of us humans!
To learn more about their work, check out these links:
Martin's LinkedIn: https://www.linkedin.com/in/mflechl/
Otmar's LinkedIn: https://www.linkedin.com/in/otmar-ertl/
Dynatrace Research Lab: https://careers.dynatrace.com/locations/linz/#__researchLab

Earth + Humans
Validating Geographic Research

Earth + Humans

Play Episode Listen Later Apr 30, 2025 27:25


Can we take lessons from one location and expect similar results in another? How does replication strengthen geographic research? Today's guest, Dr. Peter Kedron, an expert in validating geographic research, shares how he thinks about how learning about one location can translate to another. From the Spatial Pattern Analysis and Research Lab. This episode is produced, edited, and distributed by Lizzy Schattle. Music by Arnav Srivastav.

ATARC Federal IT Newscast
Doing Tech Better in Government with Alexis Bonnell, CIO, Air Force Research Lab (AFRL)

ATARC Federal IT Newscast

Play Episode Listen Later Apr 24, 2025 63:05


In this episode, Alexis Bonnell, now former Chief Information Officer at AFRL, shares insights on how to drive tech adoption in government to stay ahead in today's rapidly evolving digital landscape. She discusses the concept of the "OODA loop" and its importance in taking immediate action to bring digital solutions into practice. While technology advances, the challenge remains in getting people to adopt new processes. Alexis also dives into her passion for human-machine teaming and augmented reality, exploring how these technologies are shaping the future of government operations. Tune in to hear how we can do tech better in government!

The Daily Scoop Podcast
How DOGE got into the National Science Foundation; Air Force Research Lab CIO joins OpenAI

The Daily Scoop Podcast

Play Episode Listen Later Apr 23, 2025 3:55


Members of the Department of Government Efficiency have made their way into the National Science Foundation, as grants throughout the agency are being terminated. Three DOGE affiliates are currently listed as working in the Office of the Director at NSF, according to multiple sources within the agency: Luke Farritor, a former SpaceX intern and AI engineer who has shown up at other agencies DOGE has entered; Rachel Riley, a former McKinsey consultant who has also appeared at the Department of Health and Human Services; and Zachary Terrell. As part of the arrangement, Farritor has a “Budget, Finance, and Administration” clearance, which a source said allows him to view and modify the agency's funding opportunity system. Farritor and Terrell are listed in an agency directory as consultants. On April 18, NSF published a statement that it was terminating grants and awards that don't align with the administration's priorities, including those related to diversity, equity and inclusion (DEI) and misinformation and disinformation. Alexis Bonnell has stepped down from her positions at the Air Force Research Laboratory and transitioned to a new job at OpenAI, the company responsible for the development of ChatGPT. In 2023, Bonnell was tapped to serve as AFRL's first-ever chief information officer and director of the laboratory's Digital Capabilities Directorate, where she led the lab's information technology strategy and overall modernization efforts. According to a Tuesday post on LinkedIn, Bonnell is now working at OpenAI as a partnership manager, a position she took on in March. The Daily Scoop Podcast is available every Monday-Friday afternoon. If you want to hear more of the latest from Washington, subscribe to The Daily Scoop Podcast  on Apple Podcasts, Soundcloud, Spotify and YouTube.

The Misophonia Podcast
#216 - "Between Two Ears" Ep. 2 - Visiting Dr. Sukhbinder Kumar's Research Lab

The Misophonia Podcast

Play Episode Listen Later Apr 3, 2025 7:27


In the second episode of the series "Between Two Ears", I share my recent trip to Iowa City to participate in a groundbreaking misophonia research study led by Dr. Sukhbinder Kumar — a continuation of his earlier work on mirror neurons and the motor basis of misophonia. This new study explores the social context of trigger reactions and involved time in an MRI chamber while exposed to common misophonic triggers. I talk about what it was like to undergo the study, the misophonic challenges(!), and why I believe it was worth it — not just for science, but for personal growth and understanding. I also reflect on meeting Dr. Kumar in person, our conversation about the deeper roots of misophonia, and why this research made me hopeful for the future. I hope to have Dr. Kumar on a regular episode of the podcast in the future! If you're in the area or able to travel, I encourage you to consider participating in studies like this. They matter. Photos and more on social. Here's a link to the lab: https://interoception.lab.uiowa.edu/misophonia-research
Web: https://misophoniapodcast.com
Order "Sounds like Misophonia" by Dr. Jane Gregory and I
Support the podcast at https://misophonia.shop
Email: hello@misophoniapodcast.com — send me any feedback! Also, if you want some beautiful podcast stickers, shoot over your address.
YouTube channel (with caption transcriptions)
Social:
Instagram - @misophoniapodcast
Facebook - misophoniapodcast
Twitter/X - @misophoniashow
SoQuiet - Misophonia Advocacy: https://soquiet.org
Support the show

Jacksonville's Morning News Interviews
3/31 - Spotlight: Local Election

Jacksonville's Morning News Interviews

Play Episode Listen Later Mar 31, 2025 4:55


Dr. Michael Binder from the UNF Public Opinion & Research Lab joins JMN to look at the FL District 6 House seat special election this week. Republican State Senator Randy Fine and Democrat Josh Weil face off, with Weil leading in fundraising and early polling.

Dare to Disrupt
Bringing Research Lab Innovations to Your Toilet: The Story of spotLESS Materials

Dare to Disrupt

Play Episode Listen Later Mar 18, 2025 36:52


Birgitt Boschitsch is the co-founder and CEO of spotLESS Materials, an advanced materials company and Penn State spinoff commercializing highly repellent anti-fouling coatings.  In this episode, Birgitt shares how her passion for helping others through research led to her co-founding spotLESS Materials. She discusses her formative experiences, academic challenges, and the pivotal moments that shaped her career path, including her time at Princeton and her decision to pursue graduate studies at Penn State.  She discusses the original inspiration behind the idea for a toilet coating that repels sticky waste, as well as the challenges of transitioning from lab work to startup life. She shares the importance of community support and the impact of media exposure on sales.

Earth + Humans
Remote Sensing and Landscape Ecology

Earth + Humans

Play Episode Listen Later Feb 24, 2025 20:26


How connected is our landscape for the species living there? How do we figure out where to place protected areas? Today's guest, Dr. Amy Frazier, an expert in remote sensing, GIS, and landscape ecology, helps us answer these questions about ecosystems and environmental change. From the Spatial Pattern Analysis and Research Lab. This episode is produced, edited, and distributed by Lizzy Schattle. Music by Arnav Srivastav.

Machine Learning Street Talk
Francois Chollet - ARC reflections - NeurIPS 2024

Machine Learning Street Talk

Play Episode Listen Later Jan 9, 2025 86:46


François Chollet discusses the outcomes of the ARC-AGI (Abstraction and Reasoning Corpus) Prize competition in 2024, where accuracy rose from 33% to 55.5% on a private evaluation set.

SPONSOR MESSAGES: *** CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. https://centml.ai/pricing/ Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier, focused on o-series style reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events? They are hosting an event in Zurich on January 9th with the ARChitects; join if you can. Go to https://tufalabs.ai/ ***

Read about the recent result on o3 with ARC here (Chollet knew about it at the time of the interview but wasn't allowed to say): https://arcprize.org/blog/oai-o3-pub-breakthrough

TOC:
1. Introduction and Opening
[00:00:00] 1.1 Deep Learning vs. Symbolic Reasoning: François's Long-Standing Hybrid View
[00:00:48] 1.2 "Why Do They Call You a Symbolist?" - Addressing Misconceptions
[00:01:31] 1.3 Defining Reasoning
3. ARC Competition 2024 Results and Evolution
[00:07:26] 3.1 ARC Prize 2024: Reflecting on the Narrative Shift Toward System 2
[00:10:29] 3.2 Comparing Private Leaderboard vs. Public Leaderboard Solutions
[00:13:17] 3.3 Two Winning Approaches: Deep Learning–Guided Program Synthesis and Test-Time Training
4. Transduction vs. Induction in ARC
[00:16:04] 4.1 Test-Time Training, Overfitting Concerns, and Developer-Aware Generalization
[00:19:35] 4.2 Gradient Descent Adaptation vs. Discrete Program Search
5. ARC-2 Development and Future Directions
[00:23:51] 5.1 Ensemble Methods, Benchmark Flaws, and the Need for ARC-2
[00:25:35] 5.2 Human-Level Performance Metrics and Private Test Sets
[00:29:44] 5.3 Task Diversity, Redundancy Issues, and Expanded Evaluation Methodology
6. Program Synthesis Approaches
[00:30:18] 6.1 Induction vs. Transduction
[00:32:11] 6.2 Challenges of Writing Algorithms for Perceptual vs. Algorithmic Tasks
[00:34:23] 6.3 Combining Induction and Transduction
[00:37:05] 6.4 Multi-View Insight and Overfitting Regulation
7. Latent Space and Graph-Based Synthesis
[00:38:17] 7.1 Clément Bonnet's Latent Program Search Approach
[00:40:10] 7.2 Decoding to Symbolic Form and Local Discrete Search
[00:41:15] 7.3 Graph of Operators vs. Token-by-Token Code Generation
[00:45:50] 7.4 Iterative Program Graph Modifications and Reusable Functions
8. Compute Efficiency and Lifelong Learning
[00:48:05] 8.1 Symbolic Process for Architecture Generation
[00:50:33] 8.2 Logarithmic Relationship of Compute and Accuracy
[00:52:20] 8.3 Learning New Building Blocks for Future Tasks
9. AI Reasoning and Future Development
[00:53:15] 9.1 Consciousness as a Self-Consistency Mechanism in Iterative Reasoning
[00:56:30] 9.2 Reconciling Symbolic and Connectionist Views
[01:00:13] 9.3 System 2 Reasoning - Awareness and Consistency
[01:03:05] 9.4 Novel Problem Solving, Abstraction, and Reusability
10. Program Synthesis and Research Lab
[01:05:53] 10.1 François Leaving Google to Focus on Program Synthesis
[01:09:55] 10.2 Democratizing Programming and Natural Language Instruction
11. Frontier Models and O1 Architecture
[01:14:38] 11.1 Search-Based Chain of Thought vs. Standard Forward Pass
[01:16:55] 11.2 o1's Natural Language Program Generation and Test-Time Compute Scaling
[01:19:35] 11.3 Logarithmic Gains with Deeper Search
12. ARC Evaluation and Human Intelligence
[01:22:55] 12.1 LLMs as Guessing Machines and Agent Reliability Issues
[01:25:02] 12.2 ARC-2 Human Testing and Correlation with g-Factor
[01:26:16] 12.3 Closing Remarks and Future Directions

SHOWNOTES PDF: https://www.dropbox.com/scl/fi/ujaai0ewpdnsosc5mc30k/CholletNeurips.pdf?rlkey=s68dp432vefpj2z0dp5wmzqz6&st=hazphyx5&dl=0

DesignSafe Radio
Designing the World's Largest Wind-Wave Research Lab

DesignSafe Radio

Play Episode Listen Later Dec 4, 2024 14:42


Plans are afoot to build the world's largest wind-wave research lab, capable of generating 200 MPH hurricane winds and 5-meter-high waves. This NSF-funded facility will enable full-scale investigations into structural and coastal resilience — and a secure future in the face of destructive natural hazards. On today's show, Florida International University wind engineer Arindam Chowdhury joins us to describe this facility, the National Full-Scale Testing Infrastructure for Community Hardening in Extreme Wind, Surge, and Wave Events — or NICHE, for short.
About NICHE: The NICHE lab will have a 20-fan array capable of generating 200 MPH winds (a Cat 6 hurricane), as well as transient winds like tornadoes and downbursts. NICHE's enormous wind field will enable testing of full-scale two-story structures. It will have a 500-meter-long wave flume and be capable of generating five-meter-high waves. Significantly, the NICHE team is incorporating facility protocols for researchers to deliver expedient, real-world impact.

The Brain Candy Podcast
865: Real World Reunion, Wet Dog Shake, & Monkeys on the Run

The Brain Candy Podcast

Play Episode Listen Later Nov 18, 2024 64:22


Sarah returned to her Real World house and wasn't prepared for how emotional it would make her, and now she's considering why the experience brought out so many feelings. We talk about "wet dog shake," what causes it, and why some mammals have that response to certain stimuli, while others, like cats and mice, don't. We learn about dozens of monkeys who escaped from a research lab, and debate whether we're on team monkeys or team science. Either way, those monkeys are clever and free. We discuss a man who uses a medical voice board to communicate and how he modified the device to make it more personal and reflective of his identity. We consider which accents we like in the world and which grate on our nerves, and why so many people hate the sound of their own voice. Sarah explains how chocolate exposed racism within the movie industry, and how things have improved since the 70s. Plus, we discuss the Sweet Bobby documentary where a woman is catfished by someone for a decade, and we debate how someone could allow that to happen, why it could happen to anyone, and why there wasn't much punishment for the person who tricked her.
Listen to more podcasts like this: https://wavepodcastnetwork.com
Join our Candy Club, shop our merch, sign up for our free newsletter, & more by visiting The Brain Candy Podcast website: https://www.thebraincandypodcast.com
Connect with us on social media:
BCP Instagram: https://www.instagram.com/braincandypodcast
Susie's Instagram: https://www.instagram.com/susiemeister
Sarah's Instagram: https://www.instagram.com/imsarahrice
BCP on X: https://www.x.com/braincandypod
Sponsors:
Get an exclusive 20% off your first order at https://thrivecausemetics.com/BRAINCANDY
This episode is sponsored by BetterHelp. Visit https://www.betterhelp.com/braincandy today to get 10% off your first month.
Get ten dollars off any order and enjoy free shipping when you subscribe. Go to https://nutrafol.com and enter the promo code BRAINCANDYGIFT
For 50% off your first order, head to https://www.smalls.com/BRAINCANDY and use code BRAINCANDY
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

The Big Rhetorical Podcast
169: Trent Wintermeier

The Big Rhetorical Podcast

Play Episode Listen Later Nov 18, 2024 43:58


Keywords: Digital Rhetorics, Sound, Methods, Community Literacy, Digital Humanities. Trent Wintermeier is a PhD student in the Department of Rhetoric and Writing at the University of Texas at Austin. His research interests broadly include sound, digital rhetorics and digital humanities methods, and community literacy. Currently, he's an Assistant Director for the Digital Writing and Research Lab, and he's a Presentations Coordinator for UT Austin's University Writing Center. Besides his research on the hum phenomenon, which has been published by Sounding Out!, he's working on projects concerning the sound of data center cooling equipment and building DIY radio receivers with found objects. Visit thebigrhetoricalpodcast.weebly.com and follow @thebigrhet.

Neurocareers: How to be successful in STEM?
Non-Invasive Deep Brain Stimulation: International Neurotech Career with Emma Acerbo, PhD

Neurocareers: How to be successful in STEM?

Play Episode Listen Later Nov 16, 2024 68:22


Have you heard of temporal interference (TI) electrical stimulation, a revolutionary concept of non-invasive deep brain stimulation (DBS)? Curious about what it takes to develop cutting-edge neuromodulation techniques while pursuing a scientific career across two continents? Welcome to the Women in Neurotech series on the Neurocareers: Doing the Impossible! podcast!

KMJ's Afternoon Drive
How do you catch an escaped monkey from a research lab?

KMJ's Afternoon Drive

Play Episode Listen Later Nov 12, 2024 7:23


25 monkeys returned to Beaufort County research facility; 18 still on the loose.
Please Subscribe + Rate & Review KMJ's Afternoon Drive with Philip Teresi & E. Curtis Johnson wherever you listen!
KMJ's Afternoon Drive with Philip Teresi & E. Curtis Johnson is available on the KMJNOW app, Apple Podcasts, Spotify, Amazon Music, or wherever else you listen.
Philip Teresi & E. Curtis Johnson - KMJ's Afternoon Drive, weekdays 2-6 PM Pacific on News/Talk 580 & 105.9 KMJ
DriveKMJ.com | Podcast | Facebook | X | Instagram
Everything KMJ: kmjnow.com | Streaming | Podcasts | Facebook | X | Instagram
See omnystudio.com/listener for privacy information.

Philip Teresi Podcasts
How do you catch an escaped monkey from a research lab?

Philip Teresi Podcasts

Play Episode Listen Later Nov 12, 2024 7:23


25 monkeys returned to Beaufort County research facility; 18 still on the loose.
Please Subscribe + Rate & Review KMJ's Afternoon Drive with Philip Teresi & E. Curtis Johnson wherever you listen!
KMJ's Afternoon Drive with Philip Teresi & E. Curtis Johnson is available on the KMJNOW app, Apple Podcasts, Spotify, Amazon Music, or wherever else you listen.
Philip Teresi & E. Curtis Johnson - KMJ's Afternoon Drive, weekdays 2-6 PM Pacific on News/Talk 580 & 105.9 KMJ
DriveKMJ.com | Podcast | Facebook | X | Instagram
Everything KMJ: kmjnow.com | Streaming | Podcasts | Facebook | X | Instagram
See omnystudio.com/listener for privacy information.

ANTIC The Atari 8-bit Podcast
ANTIC Interview 442 - Bob Stein, Atari Research

ANTIC The Atari 8-bit Podcast

Play Episode Listen Later Nov 1, 2024 56:48


Bob Stein, Atari's Encyclopedia Project

Bob Stein worked at Atari Research for 18 months beginning in 1981. He was hired by Alan Kay. He worked almost exclusively on an encyclopedia project, a potential collaboration between Atari and Encyclopaedia Britannica that never went anywhere.

I learned about Bob after he uploaded an item called The Atari Drawings to Internet Archive. It's a collection of nine colorful pencil drawings, drawn in 1982 by Disney animator Glen Keane. The drawings depict futuristic scenarios where people use a computerized encyclopedia to get information: for instance, "An earthquake wakes a couple in the middle of the night. The Intelligent Encyclopedia, connected to an online service, informs them of the severity of the earthquake and makes safety tips readily available." and "A mother and her children looking into a tidepool in Laguna ask the Intelligent Encyclopedia about the plants and animals that they see."

Bob described the collection of art in his introduction to the document:

"In 1982 executives from Warner, Inc., Atari's parent company, were scheduled to visit the Research Lab where the Encyclopedia Project was located. Brenda Laurel and I came up with these scenarios to give the execs a sense of what we were working toward. The drawings were made by Disney animator, Glen Keane. When you look at these, remember they were made 16 years before Google and 12 years before Yahoo, even 8 years before the earliest web-based search engines. That said, one of the most interesting things about these scenarios as seen today is that, with the exception of the image of the architect and the teacher, none of them indicated any inkling that the most important element of the web to come was that it would bring people into contact with each other. What we see here is almost entirely people accessing content from a central server, no sense that we would be communicating with each other or uploading our own contributions to the collective culture. My own explanation for this lapse focuses on the print-era mentality that saw readers purely as consumers of content."

Bob saved and scanned a large number of materials from his time at Atari and uploaded them to Internet Archive. In addition to the scans of Keane's Atari Drawings, the documents include memos about the encyclopedia project and a transcript of a 1982 seminar for Atari Research featuring Charles Van Doren. Check the show notes for those links.

After Atari, Bob was co-founder of The Criterion Collection, which restores and distributes important classic films, and co-founder of The Voyager Company, the first commercial multimedia CD-ROM publisher. In 2004, he co-founded The Institute for the Future of the Book, a think tank "investigating the evolution of discourse as it shifts from printed pages to networked screens."

This interview took place December 16, 2023.

Video version of this interview at YouTube
The Atari Drawings
ANTIC Interview 420 - Brenda Laurel, Atari Research
Whither The Encyclopedia Project - Atari Encyclopedia Project memos
Back to the Future - In honor of Encyclopedia Britannica giving up its print edition (Wayback machine)
Stein Kay Atari Memos Pt 1
Stein Kay Atari Memos Pt 2
Exchange With Steve Weyer And J. David Bolter 1983
Hadley Letter 1980-12-01
Atari...Ifugao Question Journal, Michael Naimark
CVD Atari Seminar 20 December 1982
Encyclopedia And The Intellectual Tools Of The Future . . . November 1981
Bob Stein Archives at Stanford
The Digital Antiquarian - Bob Stein and Voyager
Charles Van Doren in Wikipedia
Bob Stein wants to change how people think about the book (2010)

Jacksonville's Morning News Interviews
10/21 - Dr. Michael Binder, UNF Public Opinion Research Lab

Jacksonville's Morning News Interviews

Play Episode Listen Later Oct 21, 2024 7:34


YOU DECIDE 2024 coverage continues, as Dr. Binder shares results from the latest UNF election poll. Latest polling shows Trump with a significant lead over Harris, and what some might consider surprising positions regarding proposed amendments. The latest poll includes considerations from "Leaners" and "Blurters," as Dr. Binder explains.

The UIUC Talkshow
#46 - This Professor Made Nuclear Physics Viral on YouTube: David Ruzic on Explosives, Love, and Crocs

The UIUC Talkshow

Play Episode Listen Later Oct 8, 2024 72:23


David Ruzic is a Nuclear Engineering Professor at the University of Illinois at Urbana-Champaign and is widely recognized as The Illinois Energy Prof.
EPISODE LINKS:
David Ruzic's YouTube Channel: @illinoisenergyprof6878
David Ruzic's UIUC Website: https://ece.illinois.edu/about/directory/affiliates/druzic
David Ruzic's Research Lab: https://cpmi.illinois.edu/about-cpmi/
David Ruzic's Research Topics: https://ipi.illinois.edu/research/
OUTLINE:
0:00 - Introduction
1:10 - Father
3:52 - Childhood
6:17 - Why Professor?
9:47 - Secrets to Thriving as a Professor
15:06 - How to Master The Bureaucracy
18:21 - How to Make People Care About Your Work
21:02 - Teaching with Explosives

Lab Rats to Unicorns
From Research Lab to CEO: Michael Torres on Innovation in Cancer Care_e.058

Lab Rats to Unicorns

Play Episode Listen Later Sep 26, 2024 56:52


Join us on Lab Rats to Unicorns as we dive into the career of Michael Torres, CEO of Crossbridge Bio. With a Ph.D. in cancer biology and a background in equity research, Michael has been at the forefront of developing cutting-edge oncology treatments. In this episode, he discusses the evolution of antibody-drug conjugates (ADCs) and the process of building biotech companies in the Texas innovation ecosystem, as well as his personal journey from researcher to CEO.

Jacksonville's Morning News Interviews
9/11 - Dr. Michael Binder, UNF Public Opinion Research Lab

Jacksonville's Morning News Interviews

Play Episode Listen Later Sep 11, 2024 7:53


YOU DECIDE 2024 coverage continues, as WOKV political analyst Dr. Michael Binder reviews the Harris/Trump debate, key moments for each candidate, and what impacts this event may have on undecided voters.

Seismic Soundoff
230: Celebrating Sven - A Legacy of Innovation and Mentorship in Geophysics

Seismic Soundoff

Play Episode Listen Later Jul 18, 2024 31:50


"Sven showed us that the goal of a presentation is to transfer knowledge and insight, not to show people how smart you are." In this heartfelt episode, we honor the legacy of Sven Treitel, a beloved figure in geophysics and at SEG. Kurt Marfurt and Sam Gray join host Andrew Geary to reflect on Sven's profound impact on their work and the field. In this episode, we talk about: > How a 25 cents coffee subsidy proved an invaluable investment for Amoco > The power and usefulness of the "chicken test" > How the gaming and AI industry of today relates to the oil and gas industry > The groundbreaking contributions of Sven and Enders Robinson, particularly in digital signal processing > Sven's approach to making complex concepts accessible and understandable > Sven's dedication to professional societies and his mentorship beyond Amoco > How Sven's international background shaped his perspectives and interactions > The humor and humility that made Sven a beloved mentor and colleague Listeners will gain a deep appreciation for Sven's lasting contributions to geophysics and his ability to bridge the gap between research and practical application. This episode is a tribute to a geophysical giant whose influence will be felt for generations. GUEST BIOS Kurt J. Marfurt is the recipient of SEG's highest honor, the Maurice Ewing Medal, awarded to a person deserving of special recognition for making major contributions to the advancement of the science and profession of exploration geophysics. Marfurt is a remarkably productive geophysicist, author, and educator with a distinguished career in academia and the oil and gas industry. After completing his Ph.D. in applied geophysics at Columbia University in 1978 and teaching there, he joined the Amoco Research Center in Tulsa, Oklahoma, as a research geophysicist. During his tenure at Amoco, Marfurt made significant contributions to several processes and patents, particularly the development of seismic attributes. 
In 1999, Marfurt joined the faculty at the University of Houston, where he served as director of the Allied Geophysical Laboratories. He continued researching seismic imaging, interpretation, and data simulation, notably generating well-used synthetic data sets for the Marmousi model. In 2007, Marfurt joined the faculty of the University of Oklahoma, where he served as the Shultz Professor of Geophysics and is now professor emeritus. He has been involved with SEG as a short course instructor, associate editor of GEOPHYSICS, editor-in-chief of Interpretation, director at large on the SEG Board of Directors, and coauthor of more than 800 papers and abstracts. Samuel Gray received a PhD in Mathematics in 1978, and he joined the oil and gas industry in 1982 at Amoco's Research Lab in Tulsa, Oklahoma, where he worked on seismic imaging, amplitude analysis, and velocity estimation problems. He moved to Amoco Canada in 1994, where the near surface humbled him. He joined Veritas (now CGGVeritas) in 1999. Gray has published and presented widely and has won awards for Best Paper in Geophysics and The Leading Edge, Best Presentation at SEG and CSEG meetings, and Honorable Mention for Best Paper in Geophysics. He has also served several times as an Associate Editor of Geophysics. In 2010, he received the SEG's Reginald Fessenden Award for his work on both the theoretical and practical sides of imaging. He won the SEG Maurice Ewing Medal in 2017. Sam retired as Senior Researcher, Subsurface Imaging, CGG (now Viridien). LINKS * Visit https://seg.org/podcasts/episode-230-celebrating-sven-a-legacy-of-innovation-and-mentorship-in-geophysics/ for links to Sven's Memorial in TLE, his video interview, the complete interview transcript, and more. SHOW CREDITS Andrew Geary at TreasureMint hosted, edited and produced this episode. The SEG podcast team comprises Jennifer Cobb, Kathy Gamble, and Ally McGinnis.

The Dream World
EP72: Lucid Dream Research Lab in Bern, Switzerland

The Dream World

Play Episode Listen Later May 17, 2024 25:35 Transcription Available


Emma Peters is a PhD student at Universität Bern, Switzerland. She is working on the development of a reliable and effective lucid dreaming induction strategy using different types of bodily stimulation together with traditional lucid dreaming strategies. Before working at the University of Bern, Emma spent 3 years at the Sleep and Memory lab in Nijmegen, the Netherlands, working on lucid dream induction and longitudinal sleep research. Lucid Lab Bern is actively researching lucid dreaming. They are always looking for people interested in dreaming and/or frequent and skilled lucid dreamers in and around Bern. If you are interested in helping out or in having a sneak peek into the lab and research, feel free to email emma.peters@unibe.ch
Lucid Lab Bern Instagram
About the lab team
Catch Amina LIVE on the radio on the Dream Journal podcast with host Dr. Katherine Bell from Experiential Dreamwork, talking about some of her favorite dream-related topics. You can tune in to the live conversation on Saturday, May 25 at 10 AM Pacific Time (1 PM EST). It will be broadcast in the Santa Cruz area at 90.7 FM or can be heard streaming live at KSQD.org
Support the Show.
Follow The Dream World Podcast
Visit Our Website
Instagram @TheDreamWorldPodcast
Tik Tok @aminasdreamworld
Spotify
Facebook
Lucid Dreaming Online Course

Rotten Mango
#357: Yale Student Found Inside The Wall of Multi Million Dollar Research Lab on Campus

Rotten Mango

Play Episode Listen Later May 12, 2024 110:31


There's a boring red brick building in the middle of Yale University's campus. If you walk past it you might think it's an administrative office. It looks - boring.  But if you look closely - there are 75 CCTV cameras pointed directly at the building. Covering every square inch of the exterior. Why would they need so much security for a university building? It is the Animal Research Laboratory. It houses over 4000 research mice and has tens of millions of dollars of research experiments being conducted inside the brick walls.  It is one of the most secure buildings on campus.  Till a brilliant Yale graduate student disappears inside.  She walks in.  Never walks out.  She is nowhere to be found in the building - or at least that's what the authorities initially thought.  But have they checked the walls? Full Source Notes: rottenmangopodcast.com To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices

The Assistant Principal Podcast
A Human Approach to Schooling with Dr. Faiza Jamil

The Assistant Principal Podcast

Play Episode Listen Later Apr 2, 2024 46:25


A Human Approach to Schooling with Dr. Faiza Jamil
Power Quote: We lose our ability to see the whole.
Description: You can think of today's show as a bit of a thought experiment. What changes if we make relationships, at every level of the school, THE priority? It is a thought experiment, but we also have lots of experience that tells us how that experiment could turn out.
Guest Bio: Faiza M. Jamil is a former K-12 teacher and currently serves as Associate Professor of Education and Human Development in the area of Learning Sciences at Clemson University. Jamil is the Founding Director of the Contexts of Learning and Development Lab, where she conducts interdisciplinary research with a focus on educational equity across learning contexts. She is the author of Public Education in Turbulent Times: Innovative Strategies for Leadership and Learning.
Warmup questions:
- We always like to start with a celebration. What are you celebrating today?
- Is there a story that will help listeners understand why you are doing what you do?
Questions/Topics/Prompts:
- Talk about the importance of being colleagues and investing time together
- School purpose => individual purpose
  - How do we prioritize the relationship piece
  - Task vs. person
  - CIRCLE model
- Mindset shift
- Mental health
Closing questions:
- What part of your own leadership are you still trying to get better at?
- If listeners could take just one thing away from today's podcast, what would it be?
- Before we go, is there anything else that you'd like to share with our listeners?
- Where can people learn more about you and your work?
Close:
- We covered a lot of ground today. I encourage you to do three things:
  - Monitor your own inner dialog. When you say you don't have time to listen, to build relationships, what are you actually doing instead?
  - Begin asking people, teachers and students, "what would make school better for you?"
  - Use your expertise to ask powerful questions instead of giving powerful answers.
- Leadership is a journey, and thank you for choosing to walk some of this magical path with me.
- You can find links to all sorts of stuff in the show notes, including my website https://www.frederickbuskey.com/
- I love hearing from you, so consider emailing me at frederick@frederickbuskey.com or connecting with me on LinkedIn.
- Please remember to subscribe, rate, and review the podcast.
- Have a great rest of the week, be present for others and, more importantly, take time to reflect and recover so you can continue to live and lead better.
- Cheers!
Guest links:
My Book: https://link.springer.com/book/10.1007/978-3-031-43237-8
Research Lab: https://www.clemson.edu/education/programs/labs/clad.html
Twitter: https://twitter.com/eduprofjamil
LinkedIn: https://www.linkedin.com/in/faiza-m-jamil/
Frederick's Links:
Email: frederick@frederickbuskey.com
Website: https://www.frederickbuskey.com/
LinkedIn: http://www.linkedin.com/in/strategicleadershipconsulting
Daily Email subscribe: https://adept-experimenter-3588.ck.page/fdf37cbf3a

Jacksonville's Morning News Interviews
4/2 - Dr. Michael Binder, UNF Public Opinion Research Lab

Jacksonville's Morning News Interviews

Play Episode Listen Later Apr 2, 2024 6:21


SPOTLIGHT - Dr. Binder looks at controversial ballot issues, as voting for abortion access terms and for recreational marijuana use will be on this year's election ballot.

Growing Impact
Youth climate leadership

Growing Impact

Play Episode Listen Later Apr 1, 2024 25:44


The global push to involve youth in climate action is gaining momentum, harnessing their innovative spirit, deep investment in the future, and strong collective voice to combat climate change. Getting young people involved ensures that climate policies are forward-thinking and geared towards sustainable development, while their global solidarity and use of digital platforms amplify the call for urgent action. At the forefront of this movement, Penn State's Global Youth Storytelling and Research Lab aims to become a pivotal transnational research hub, empowering young leaders to shape the future of climate and environmental justice.

The Daily Scoop Podcast
Inside the work of the Air Force Research Lab's Digital Capabilities Directorate

The Daily Scoop Podcast

Play Episode Listen Later Mar 21, 2024 39:06


The Air Force Research Lab last March established its Digital Capabilities Directorate to speed up its modernization pursuits and enable its scientists and engineers to explore and more efficiently collaborate via a growing suite of emerging technologies. The directorate fuses together former AFRL elements, like its Research Collaboration and Computing Directorate, as well as the lab's former Business Process Reengineering Division and others. It also has roots that trace back to a temporary digital "war room" that was stood up to help drive innovation. Alexis Bonnell is the Air Force Research Lab's CIO and director of the Digital Capabilities Directorate, appointed to those roles last July. She joins the podcast to discuss her journey to the role, her priorities nearly a year in, and how things are shifting as AI becomes an increasingly important part of AFRL's enterprise IT.

The Nosleep Radio
I Worked at a Top Secret Government Research Lab

The Nosleep Radio

Play Episode Listen Later Mar 13, 2024 38:35


This Creepypasta scary story is from the creepypasta website, written by Bryan A Young, make sure to check out the original story and support the author: "I Worked at a Top Secret Government Research Lab, I Need to Share My Journals" https://www.creepypasta.com/i-worked-at-a-top-secret-government-research-lab-i-need-to-share-my-journals/ Learn more about your ad choices. Visit megaphone.fm/adchoices

The Dark Somnium
I Worked at a Top Secret Government Research Lab

The Dark Somnium

Play Episode Listen Later Mar 13, 2024 38:35


This Creepypasta scary story is from the creepypasta website, written by Bryan A Young, make sure to check out the original story and support the author: "I Worked at a Top Secret Government Research Lab, I Need to Share My Journals" https://www.creepypasta.com/i-worked-at-a-top-secret-government-research-lab-i-need-to-share-my-journals/ Learn more about your ad choices. Visit megaphone.fm/adchoices

ZakBabyTV
I Worked At A Top Secret Government Research Lab. I Need To Share My Journals!

ZakBabyTV

Play Episode Listen Later Mar 8, 2024 49:28


The Coaching Psychology Pod
01: Realities of running or working for a coaching business

The Coaching Psychology Pod

Play Episode Listen Later Mar 1, 2024 78:58


In this episode Dr Natalie Lancer, with Professor Jonathan Passmore, Xenia Angevin and Kaveh Mir, discusses the realities of running your own coaching practice or working for a large, digital coaching platform. We cover the fundamental questions to help you consider how to find your clients, decide on a niche and philosophy, and tap into different coaching markets. We explore:
- What counts more: coach expertise or experience?
- How can coaches be tactical and strategic when navigating the gig economy of coaching?
- How do you develop your own unique coaching identity in a business context?
- What do you want your day-to-day coaching life to look like?
- How has coaching evolved to where we are in the current coaching marketplace?
- What can a coach earn, as a novice or an expert, working for a large digital platform?
- How do you choose whether you want to work for a digital provider, and which one?
- What are the selection criteria for coaches that digital platforms use?
- What are the benefits and constraints when working with a digital coaching provider?
- What are the different roles a coaching psychologist can adopt as part of their portfolio?
- How can coaching become more inclusive as a profession?
- Why is coaching psychology a good second career?
The digital coaching landscape is evolving and has arguably transformed coaching from a 'cottage industry' to a global, scalable enterprise. We query whether coaching education needs to be updated and how coaching standards can be maintained and measured to reflect this new context. Our guests today are:
Professor Jonathan Passmore is an award-winning and internationally renowned Chartered Occupational Psychologist and the Inaugural Chair of the BPS Division of Coaching Psychologists. He has published widely, with 40 books, 150 book chapters and 100+ scientific papers.
His forthcoming books in 2024 include: 'Becoming a Team Coach: The Essential ICF Guide' (Springer), 'The Digital & AI Coaches Handbook' (Routledge), 'The Health & Wellbeing Coaches Handbook' (Routledge) and the second edition of 'Becoming a Coach: The Essential ICF Guide' (Springer), with three new titles plus a host of research projects in progress for the future. He is listed in the Thinkers 50 Marshall Goldsmith Top 8 Global Coaches and Global Gurus Top 30 Thought Leaders. He is currently Professor of Coaching and Behavioural Change at Henley Business School, Senior Vice President at EZRA (the coaching arm of LHH), and previously worked for PricewaterhouseCoopers, IBM Business Consulting and OPM. His current research interests include AI, digital coaching and well-being.
Kaveh Mir is currently an ICF Global Director at the Institute of Thought Leadership and a Master Certified Coach who works with executives on critical psychological processes using positive behaviour change and evidence-based coaching psychology. He is licensed in a portfolio of psychometric assessment tools and is a BPS-qualified assessor on User Test Occupational Ability and Personality. Kaveh has a degree in Computer Science, a Master's degree in Human-Computer Interaction, a Master's degree in Applied Positive Psychology and Coaching Psychology, and an Executive MBA. Kaveh has coached senior executives from international organisations such as Deloitte, Amazon, and Google. He has held various senior executive roles and was the founder of a technology start-up firm. He wrote 'Wars at Work: An Action Guide for Resolving Workplace Battles', which seeks to identify causes of workplace conflict and offer solutions to effectively resolve these issues.
Xenia Angevin, MBA, is a Coaching Psychologist, promoting a dialogue within the Helping and People professions, and across the scientific domains. Xenia's specific expertise is in differential psychology and atypical neurodevelopment.
She is a Principal Coaching Psychologist and Head of the Research Lab at Shimmer, directing a coaching practice portfolio for adults with ADHD, autism and other neurodevelopmental presentations. Xenia is a Steering Group Committee member of the Neurodiversity-Affirming Research & Practice SIG at the Association for Contextual Behavioural Science. Xenia is a Fellow member of the Chartered Institute of Arbitrators (2008) and has worked in complex socio-political environments for BBC News. Xenia served as Head of the Research and Government Liaison (Diplomacy) Unit at The Royal Household of Queen Elizabeth II. In the past 20 years, she has focused on the professional application of non-directive approaches including coaching, mentoring, mediation, supervision, facilitation, organisational development, and policy work in support of these.
Your host, Dr Natalie Lancer, is a Chartered Coaching Psychologist and British Psychological Society (BPS) Registered Supervisor. She is the Chair of the BPS's Division of Coaching Psychology and an accredited member of the Association for Coaching. She is the host of this podcast series and invites you to email any comments to docp-tcppod@bps.org.uk
https://www.bps.org.uk/member-networks/division-coaching-psychology
© British Psychological Society 2024

Bright Side
Here's What We'll Do in Space by 2124

Bright Side

Play Episode Listen Later Feb 27, 2024 12:40


Space travel in 2124 is like something straight out of science fiction, but it's becoming more real by the day! Picture sleek, high-speed spacecraft shuttling passengers to distant planets within hours, thanks to advancements in propulsion technology. With bases on the Moon and Mars, humans are establishing permanent colonies, pushing the boundaries of exploration like never before. Meanwhile, space tourism has become commonplace, with everyday folks embarking on vacations to orbiting hotels or even venturing to the far reaches of our solar system. And let's not forget about the incredible discoveries waiting to be made as we delve deeper into the mysteries of the universe.
Credit:
New Shepard booster: LunchboxLarry - https://flic.kr/p/WP3Bnh, CC BY 2.0 https://creativecommons.org/licenses/..., https://commons.wikimedia.org/wiki/Fi... CC BY 4.0 https://creativecommons.org/licenses/...
SpaceX Falcon Heavy: AllThingsSpace, https://sketchfab.com/3d-models/space...
SpaceX Falcon: SunnyChen754, https://sketchfab.com/3d-models/space...
Research Lab: xenkor, https://sketchfab.com/3d-models/resea...
Colony Rover: Aleksey Basinskiy, https://sketchfab.com/3d-models/colon...
Hale "Whitlock" Edwards: andreas9343, https://sketchfab.com/3d-models/hale-...
gold Stone: skghost, https://sketchfab.com/3d-models/gold-...
Platinum Ore: UQ School of Earth and Environmental Sciences PRO, https://sketchfab.com/3d-models/plati...
Wormhole: Miguelangelo Rosario, https://sketchfab.com/3d-models/wormh...
Riviera: BILAL AHMAD, https://sketchfab.com/3d-models/rivie...
NASA Headquarters / NASA/Aubrey Gemignani
NASA
Animation is created by Bright Side. #brightside
----------------------------------------------------------------------------------------
Music by Epidemic Sound https://www.epidemicsound.com
Check our Bright Side podcast on Spotify and leave a positive review! https://open.spotify.com/show/0hUkPxD...
Subscribe to Bright Side: https://goo.gl/rQTJZz
----------------------------------------------------------------------------------------
Our Social Media:
Facebook: / brightside
Instagram: / brightside.official
TikTok: https://www.tiktok.com/@brightside.of...
Stock materials (photos, footages and other): https://www.depositphotos.com https://www.shutterstock.com https://www.eastnews.ru
----------------------------------------------------------------------------------------
For more videos and articles visit: http://www.brightside.me
Learn more about your ad choices. Visit megaphone.fm/adchoices

What I Want to Know with Kevin P. Chavous
142. How should we talk to our kids about social media? with Director of the WCW Youth, Media, and Well-Being Research Lab Linda Charmaraman

What I Want to Know with Kevin P. Chavous

Play Episode Listen Later Feb 21, 2024 33:17


According to a survey by the Pew Research Center, 95% of teenagers reported using social media, and more than a third of them use it "almost constantly." High social media usage among children has caused concern in parents, with many wondering if social media is safe for their kids. What are the negative and positive impacts of social media? Should parents place a limit on their kids' social media usage? Linda Charmaraman joins Kevin in this episode to discuss how we can talk to our kids about social media.
Meet Linda
Linda Charmaraman is the director and founder of the Youth Media and Well-Being Research Lab at Wellesley College. Her research and action interests include social technology and adolescent health, digital citizenship, innovative research methods, and how social identities affect wellbeing.
Youth Media and Well-Being Research Lab at Wellesley College website: https://www.wcwonline.org/Youth-Media-Wellbeing-Research-Lab/youth-media-wellbeing-research-lab
Learn more about Promising Practices 2024: https://promisingpractices24-spdc.vfairs.com/?leadsource=organic_social&utm_product=stride&lead_source_detail=podcast&utm_campaign=comms_wiwtk
This is, What I Want to Know.

The NeoLiberal Round
Caribbean Thought and Theology: Lecture Exploring Jamaican Afro-Caribbean Beliefs Part 1

The NeoLiberal Round

Play Episode Listen Later Feb 17, 2024 77:55


This lecture was delivered by Renaldo McKenzie at Jamaica Theological Seminary for the course Caribbean Theology on July 13, 2023. He provided an overview of Jamaican Afro-Caribbean beliefs and introduced a study exploring changing attitudes towards Afro-Caribbean beliefs. The research is also available in The NeoLiberal Journals at theneoliberal.com and in Renaldo McKenzie's research lab at ResearchGate: https://www.researchgate.net/publication/372364213_Exploring_Changing_Attitudes_towards_Afro-Caribbean_Beliefs_in_JamaicaCaribbean_A_Study_of_Socio-Political_Religious_and_Cultural_Influences
Rev. Renaldo McKenzie is the author of Neoliberalism, Globalization, Income Inequality, Poverty and Resistance, and the upcoming book Neoliberal Globalization Reconsidered, Neo-Capitalism and the Death of Nations. Renaldo is an Adjunct Professor at Jamaica Theological Seminary. Support us at https://anchor.fm/theneoliberal/support.
theneoliberal.com
renaldocmckenzie.com
--- Send in a voice message: https://podcasters.spotify.com/pod/show/theneoliberal/message
Support this podcast: https://podcasters.spotify.com/pod/show/theneoliberal/support

The Dream World
EP60: Lucid Dream Research Lab

The Dream World

Play Episode Listen Later Feb 17, 2024 33:19 Transcription Available


Karen Konkoly is a 6th-year graduate student in a cognitive neuroscience lab working on studies about lucid dreaming, dream engineering, and memory. Karen has made groundbreaking discoveries within the lab, including being able to communicate with lucid dreamers while they are asleep inside the dream! Lucid dreamers have been able to send signals to researchers to confirm they are currently inside the dream, and they report receiving the researchers' signals in real time, incorporated into the dream in interesting ways.
Mentioned in the Episode
Karen's research publications
A Field Guide to Lucid Dreaming: Mastering the Art of Oneironautics
Support the show
Follow The Dream World Podcast
Visit Our Website
Instagram @TheDreamWorldPodcast
Tik Tok @aminasdreamworld
Spotify
Facebook
Clubhouse

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Cloud Intelligence at the speed of 5000 tok/s - with Ce Zhang and Vipul Ved Prakash of Together AI

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Feb 8, 2024 63:11


Our first ever demo day aimed for 15-20 people and ended up ballooning to >200 and covered in the news. We are now running the 2024 edition in SF on Feb 23: Latent Space Final Frontiers, a startup and research competition in "The Autonomous Workforce", "Beyond Transformers & GPUs", and "Embodied AI". RSVP here! You can find all LS online/IRL events on our new calendar. Super Early Bird tickets have just gone on sale for AI Engineer World's Fair, June 25-27!
Today we have the honor of hosting two of Together AI's co-founders: Ce Zhang (CTO) and Vipul Ved Prakash (CEO). This is a rare opportunity to recap the history of the company since our last check-in with Tri Dao (Chief Scientist), some of their big releases, and do a deep dive into the state of the AI inference market. Together has emerged as one of the most consequential new startups in the new AI summer, last announcing a ~$100m Series A raise in November (at a ~$360-565m valuation). But there are at least three Togethers: Together the Research Lab, Together the Fine-Tuning & Inference platform, and Together the custom models service. As we clarify on the pod, the overarching philosophy of Together is the ability to improve on all these fronts simultaneously by being "full stack", from the lowest-level kernel and systems programming to the highest-level mathematical abstractions driving new model architectures and inference algorithms.
Bringing Research and Industry Together
In just one year, Together has been behind some of the most exciting research in AI:
* RedPajama, a fully open source dataset for model pre-training which mirrored the Llama 1 recipe, followed by RedPajama2, a 30T-token dataset of filtered and de-duplicated tokens.
* RedPajama-INCITE-3B and 7B, which were SOTA in a few benchmarks at the time of release.
* FlashAttention-2, developed by Together's Chief Scientist Tri Dao.
We covered FA-2 in a previous episode with him.
* Mamba-3B, the most promising transformer-alternative model, which they released in collaboration with Cartesia.
* StripedHyena, a SOTA graft of Hyena state space models and transformer models together.
* Medusa, an alternative to speculative decoding that lets you use multiple decoding heads instead of a draft model.
* MonarchMixer, which was one of the most popular orals at NeurIPS 2023. It's an approach to transformers that replaces many of its core parts with Monarch matrices for better computational efficiency.
And I'm sure we missed something! As Vipul reveals, almost 50% of Together staff are researchers, and two of their co-founders (Chris Ré and Percy Liang) are professors at Stanford, so we can expect a lot more here.
Bringing "Disaggregated" GPUs Together
On their cloud, they offer inference as a service, fine-tuning, pre-training, etc., but unlike other providers they think of themselves as a disaggregated cloud. Today, they have ~8,000 A100 and H100 GPUs on their platform (an exclusive revealed on the pod!) totaling over 20 exaflops of compute, but instead of just buying more, putting them in a cluster, and exposing a `us-east-1` option for customers, they are taking heterogeneous compute sources and adding a unified layer on top for developers to consume. Building on Ce's research, Together's GPU Clusters are taking on comparable AWS and GCP offerings in both cost and speed:
Take the Hessian AI center in Germany or the DoE's INCITE; they have GPUs that they want to share with researchers, but they lack the cloud layer over them. Similarly, there's starting to be more and more differentiation among types of GPUs: H100s, A100s, MI300s, etc.
Each of them has different availability and performance based on task, and the end user shouldn't have to be a hardware expert to run inference on a model, so Together abstracts a lot of that away.
A big theme of the Together inference stack, a "bag of 50 tricks" that we discuss on the pod, is also "hardware-aware" algorithms like FlashAttention and Mamba, which further emphasize the benefits of co-developing everything together:
Special Focus: Transformer Alternatives
As we mentioned above, they are also funding a lot of research in Transformer alternatives. To reiterate a few points on why they matter:
* Longer context is not the motivation for sub-quadratic architectures: Transformers don't inherently have hard limits on context size, they just get extremely expensive as context grows. Sub-quadratic alternatives easily enable very long context, but that's not how you should compare them. Even at the same context size, inference and training are much cheaper on sub-quadratic architectures like Hyena.
* Emergence of hybrid architectures: a lot of early conversations have been about the "post-Transformers" era, but it might be more like "half-Transformers". Hybrid architectures could split layers between transformer-based and state-space ones. One of the challenges is that a lot of hardware kernels are optimized for transformer operations, so you'd lose a lot by moving away completely.
* Higher speed = higher GPU throughput: if we could reach the same benchmark performance on sub-quadratic architectures, it'd solve a lot of the GPU crunch. Today we peak at ~170 tok/s on inference in some open models; if we could reach 5,000 tok/s on the same card, you'd be able to serve roughly 30x more customers on the same hardware. As a cloud provider, you're obviously incentivized to get there.
We had a lot of fun chatting with the Together guys and we covered a lot of ground, so enjoy the conversation!
Note: This is the first episode of a "cloud providers mini-series".
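The throughput claim in that last bullet is simple back-of-the-envelope arithmetic, and it checks out. Here is a minimal sketch (the 170 and 5,000 tok/s figures are the ones quoted above; the helper function is ours, not Together's):

```python
# Back-of-the-envelope check: how many concurrent streams one GPU can serve
# at a fixed per-user token rate, given the card's aggregate throughput.
def concurrent_streams(card_tok_per_s: float, per_user_tok_per_s: float) -> int:
    """Number of users a card can serve if each consumes per_user_tok_per_s."""
    return int(card_tok_per_s // per_user_tok_per_s)

today_peak = 170   # ~peak tok/s quoted for some open models today
target = 5000      # the 5,000 tok/s target discussed on the pod

# At today's peak, the whole card is saturated by a single 170 tok/s stream;
# at the target rate, the same card could multiplex ~29 such streams.
print(concurrent_streams(today_peak, today_peak))  # 1
print(concurrent_streams(target, today_peak))      # 29, i.e. the ~30x claim
```

The same ratio (5000 / 170 ≈ 29.4) is where the "30x more customers" figure comes from.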
We have Erik from Modal and Ben from Replicate coming up next!
Video Podcast
Join us in watching the video version of this pod on our snazzy YouTube!
Show Notes
* Together AI
* RedPajama Dataset v1 Announcement
* RedPajama Models v1 Announcement
* Together Embeddings
* StripedHyena-7B
* Mamba-3B-SlimPJ
* Vipul's X thread on Anyscale
* Vipul's Razor
* SemiAnalysis' "Inference Race to the Bottom" post
* Chris Ré
* Mike Conover's episode
* Slim Pajama by Cerebras
* Dolma by AI2
* Jina AI
* Tengyu's Voyage AI
Timestamps
* [00:00:00] Introductions
* [00:00:43] Origin and current state of Together.ai
* [00:02:15] Transition from Apple to Together and the vision for open AI
* [00:04:54] How Chris Ré introduced Ce and Vipul
* [00:08:43] How RedPajama came to be
* [00:13:34] Model training and Transformer alternatives
* [00:15:37] DSIR and the importance of data in LLMs
* [00:21:19] Inference vs Fine-tuning vs Pre-training usage on Together
* [00:23:20] Together's GPU stash
* [00:27:02] Why standardization of inference metrics is important
* [00:29:26] Building moats in AI inference
* [00:31:49] Federated vs disaggregated cloud computing
* [00:34:57] Opportunities for improvement in the inference stack
* [00:36:13] Anyscale benchmarking drama
* [00:41:27] Not just an inference platform
* [00:43:50] Together Embeddings and the future of embedding models
* [00:45:53] State space models and hybrid architectures
* [00:53:52] The need for 5,000 tokens/s speed in AI inference
* [01:00:23] What's the most interesting unsolved question in AI?
Transcript
Alessio [00:00:00]: Hey, everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.
Swyx [00:00:14]: Hey, and today we're together with Together. Welcome to the studio, guys.
Ce / Vipul [00:00:20]: Thank you.
Swyx [00:00:21]: I don't know how you typically give self intros, but does anyone want to go first?
How do we get our audience acquainted, especially to who's speaking, because it's unusual for us to do a four-person pod. Yeah.
Ce [00:00:33]: Hi, everyone. I'm Ce. I'm one of the co-founders of Together and the CTO, working with the team on technical things.
Vipul [00:00:40]: I'm Vipul Ved Prakash, co-founder and CEO of Together.
Swyx [00:00:43]: I always consider you guys as one of the sort of all-in-one companies. I always want to say labs, but I feel like you're not a lab. What is the sort of origin of Together, and then what is it today? I feel like it used to be Together.xyz, and then now you're Together.ai.
Vipul [00:01:00]: I think fundamentally, Together is about open and independent AI systems. We think this is one of the most consequential technologies of our time, and when we started the company in June 2022, our focus was to build a platform for open source, independent, user-owned AI systems. One way to think about it is big labs, frontier model labs, have built their own developer platforms for their models. We think of Together as a platform for everything else, whether these are open models, whether these are models being built by companies that are owned by them. Our sort of XYZ roots, we have a fairly deep decentralization and open ethos that kind of reflects in all our platform and strategy and business. And we also, the way we structure our cloud is by combining data centers around the world instead of, you know, we are today not located in hyperscalers, we have built a footprint of AI supercomputers in this sort of very disaggregated, decentralized manner.
Alessio [00:02:15]: I know before Together, you were at Apple, so you go from like the most walled garden, private, we don't say anything company, to we want everything to be open and everybody to know somebody.
What maybe did you learn from like the Apple way of being super closed and polished, and maybe what are you taking now to Together to make it open, but also a very nice developer experience?
Vipul [00:02:37]: Yeah, I would say, you know, one sort of my, you know, background has been in open source for a long time. One of the first things I created was a collaborative spam filter, you know, this was back in the day. It's called Vipul's Razor. And it became quite popular. And the first company I founded, called Cloudmark, was built around, you know, taking open source and building both an open side of it and a commercial product around it. I think Apple is sort of very focused on providing this amazing experience to its customers with, you know, most of the technology sort of hidden behind the product. And certainly the focus on fluidity and applying complex technology to make everyday things simple is something that Apple does really well. And, you know, that's been a sort of big part of how we think about our developer platforms. I think it informs it. The other thing is that during my years at Apple, we, you know, worked a lot on deep learning. And one of the things that was sort of very viscerally accessible to me was how well these systems worked. We, you know, we built an open domain Q&A system. This was based on Facebook's LSTM paper in 2016. And it was remarkable because we had a parallel system based on sort of information retrieval techniques, which is extremely complicated, didn't work that well. And you know, this thing we wrote in a week was just incredible performance. So I think some of those experiences, at least for me personally, sort of were creating this roadmap of how important and powerful this technology is. And you know, when the scaling laws paper was published, I was very clear, like it was in some ways something very profound. We've never had algorithms that improve in capabilities with scale out. So this is almost a new era of computing.
So that's been, I think, the influence of Apple, my years at Apple, really for me, like crystallized the value of what we are doing together.
Alessio [00:04:54]: And how did you decide to join forces? Because you did a postdoc with Chris Ré at Stanford. You know, we already had Tri Dao from Together and we talked about Hazy. What was like the meeting of the minds of, hey, I come from like the more technical postdoc assistant professor background and we've got yet a more product thing. What got you excited to like build this now?
Ce [00:05:15]: So we have been working on this together, Chris, in the essentially last like 10 years, right? So a machine learning system 10 years ago was like a probabilistic graphical model, right? And then convolutional neural networks and then all the foundation models that we see today. But if you look at this, I think that fundamentally the thing we are actually optimizing is actually not that different. It's always about data movement across essentially all the stacks, right? So when you do distributed like computing, it's about communication across different machines. When you do, for example, flash attention, it's about data movement at a different essentially memory hierarchy, right? So we have been doing this in the last 10 years and seeing the field start to grow, grow, grow. So we kind of feel the current kind of this like wave of technology is actually the perfect time to actually bring all the research essentially into something real. And we are super lucky that we got introduced to Vipul, right? And then we hope to join forces and bring this to the real world.
Swyx [00:06:10]: It's an unusual team of like sort of research and industry. Like you've been like a third or fourth time founder now. Third time founder, yeah. And so like what is your first order of business when you like set up together? Like how do you sort of put something like this together?
Oh my God, I'm going to use this word so much.Vipul [00:06:27]: I feel AI companies are really kind of driven by research. And Chris and I had been talking about how to reduce the cost of building models. We felt that there aren't really big data moats around foundation models. They are built from a subset of the web. What is difficult is the cost of capital to build these. And one of the ways in which you can reduce this cost is by making more efficient systems. With that, it was really about finding the right set of co-founders and team. In fact, when Chris introduced me to Ce, and I think within the first five minutes of talking to Ce, I was like, we are starting this company. And our early focus was thinking about this more sort of disparate set of resources, you know, GPUs around the internet. Can we use those to build? And we really have to compress communication for, you know, when we do gradient averaging, there's just a lot of traffic. And if you can reduce that somehow, you sort of open up the possibility of using cheaper compute, you know, across the network. And Ce's research for a decade has been in that subject. You know, and from there, finding, you know, other folks in the network, I think there is generally a lot of excitement and philosophical alignment around what we are doing, which, you know, we publish papers, we publish open source libraries and code, we build open models. And I think the people in academia in, you know, machine learning and NLP, that's really what they want to do. So I think that's been really a kind of kernel for, you know, composition of the company. And we're lucky to have, you know, at this point, attracted some of the best researchers in the field. So I think that's the most important thing. And, you know, the rest of it is sort of driven by us. A couple of these philosophies around independent systems and decentralization and good developer interfaces, you want to make it accessible.
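Vipul's point about compressing communication for gradient averaging can be sketched in a few lines. This is an illustrative top-k sparsifier in plain Python, not Together's actual scheme; the function names and the 5% ratio are mine:

```python
import heapq

def compress_topk(grad, k_frac=0.05):
    """Keep only the largest-magnitude fraction of gradient entries.
    Returns a list of (index, value) pairs -- what a worker would send
    over the network instead of the full dense gradient."""
    k = max(1, int(len(grad) * k_frac))
    top = heapq.nlargest(k, range(len(grad)), key=lambda i: abs(grad[i]))
    return [(i, grad[i]) for i in top]

def decompress(pairs, n):
    """Rebuild a dense (mostly-zero) gradient from the transmitted pairs."""
    dense = [0.0] * n
    for i, v in pairs:
        dense[i] = v
    return dense

def average_compressed(worker_grads, k_frac=0.05):
    """Average the compressed gradients arriving from several workers."""
    n = len(worker_grads[0])
    total = [0.0] * n
    for g in worker_grads:
        for i, v in compress_topk(g, k_frac):
            total[i] += v
    return [t / len(worker_grads) for t in total]
```

Each worker ships only (index, value) pairs, so a 5% top-k cuts gradient traffic by roughly 20x at the cost of dropping small updates; real systems typically add error feedback to compensate for what was dropped.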
That's, you know, just as important. And the rest follows from there, I think.Alessio [00:08:43]: I want to try and fill in some of the blanks in the history of Together. I think people come on your website today and they say, you raised a hundred million dollars Series A. They're like, wow, these guys are like super legit company. But it feels like Red Pajama just came out a year ago. I remember we had Mike Conover in the studio, who had built Dolly at Databricks. And you announced it literally the morning we were recording. So we're like in the studio on our phones, looking at it. And it's like, wow, this is like the first time now there's like a good curated dataset to do open pre-training. So maybe let's start from there. Like, what was the motivation behind it? Why did you decide to do that? Datasets are one of the things that most people don't want to work on. They just want to do models, not datasets.Ce [00:09:27]: Yeah. So, yeah, first of all, it's not the first, right? So I think it's actually built on a whole bunch of amazing effort the community already have. For example, EleutherAI has The Pile, right? There's a whole bunch of amazing datasets they have, like C4, right, from Google, right? So I think really get inspired by the impact those like datasets have on the community, right? So I think when we did Red Pajama, it was a time that people are really fascinated by Llama, the model, like Llama 1, right? Which feels like decades ago, right? But it's kind of, people are really excited about the quality, right? So that's really like a big shift in people how to think about open model. People start to see hope, right? So, but the one problem of Llama is the data recipe is being described in a pretty detailed way in the paper, but the data is actually not there.
So, and our original thinking is how about we take the recipe and we try to do our best effort reproduction and try to put it out, such that we can learn from our mistakes in the reproduction together, right? So that's essentially the original thinking behind Red Pajama. And we have been pretty happy and excited about what community have been kind of build on it. For example, there's a dataset called SlimPajama, right? Which does deduplication over our data, right?Swyx [00:10:38]: From Cerebras, did they talk to you before?Ce [00:10:39]: Oh, yeah, yeah, yeah, yeah. So, yeah, so we are very good friends so we can discuss about technical perspective. We are pretty excited because I think it's kind of why we do Red Pajama in the first place is that people can actually build not only models, but also datasets essentially over that piece of artifact, right? So that's actually what inspired us to do the first version of Red Pajama dataset.Swyx [00:11:01]: Yeah, and then you released V2 maybe two months ago.Ce [00:11:04]: Yeah.Swyx [00:11:05]: 30 trillion tokens.Ce [00:11:06]: Yeah, 30 trillion tokens. So I think what's exciting about Red Pajama V2 is not only the number of tokens, but we start to kind of learn from Red Pajama V1. So one thing that we learned was that data quality is really the core, right? So you want to take this couple trillion token dataset and try to bring them down maybe to one trillion or two trillion, right? The way that you actually filter them, deduplicate them is not something that kind of pre-decided before you see the application, right? So you kind of want to have a modular framework to think about data quality, right? So like given application, let's automatically or maybe semi-automatically try to come up with a way to filter it down. So that's why in Red Pajama V2, we kind of overlay the dataset with like 40 different pre-computed quality signal, right?
If you want to reproduce your best effort, like C4 filter, it's kind of like 20 lines of code, right? And this open up this opportunity you can actually put different filter together, learn the combination of filter. We are very excited to see what community actually come up with using Red Pajama V2.Swyx [00:12:11]: It was retrospectively so obvious that this is a good idea that I wonder how come more datasets don't do this. You release the dataset with all these toggles that you can turn on and off, right? And you can sort of tune up and down the quality in ways that you believe is important to you. Yeah, I just, it makes so much sense now in retrospect. Because everyone just publishes like their pipeline and then the end result. But what about all the intermediate stages? Yeah.Ce [00:12:35]: Yeah, so I think, so there are multiple things there. I don't think we are the only one like doing that. For example, like Dolma from AI2, right? They have this very flexible format to actually put in those quality signals, right? I think we are actually compatible with them, right? So you can actually load Red Pajama using their tool. That whole thing should work, right? So I think one fundamental thing that changed in the last year, essentially, in the beginning when people think about data, it's always like a byproduct of the model, right? You release the model, you also release the data, right? The dataset is there essentially to show people, ah, if you train on this data, you'll get a good model. But I think what started to change is when people started building more and more of those models, people started to realize like different subsets of the dataset are kind of valuable for different applications, right? The data becomes something to play with, right?
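Ce's "overlay the dataset with precomputed quality signals" design is easy to picture in code. A toy sketch (the signal names, thresholds, and document layout are invented for illustration; RedPajama V2's actual signal names differ):

```python
# Hypothetical documents with precomputed quality signals attached.
docs = [
    {"text": "...", "signals": {"word_count": 850, "mean_word_length": 4.6,
                                "frac_duplicate_lines": 0.02}},
    {"text": "...", "signals": {"word_count": 12, "mean_word_length": 2.1,
                                "frac_duplicate_lines": 0.40}},
]

def make_threshold_filter(signal, lo=None, hi=None):
    """Build a predicate over one precomputed quality signal."""
    def keep(doc):
        v = doc["signals"][signal]
        return (lo is None or v >= lo) and (hi is None or v <= hi)
    return keep

def compose(*filters):
    """A document survives only if it passes every filter."""
    return lambda doc: all(f(doc) for f in filters)

# A "reproduce your best-effort C4 filter in a few lines" composition:
c4_like = compose(
    make_threshold_filter("word_count", lo=50),
    make_threshold_filter("frac_duplicate_lines", hi=0.3),
)
kept = [d for d in docs if c4_like(d)]
```

Because the signals are precomputed, swapping in a different filter combination is just building a different `compose(...)`, which is the modularity being described.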
So I think we are kind of lucky that we happen to release Red Pajama right at that point that we get this opportunity to actually learn from that.Alessio [00:13:34]: And you guys have a custom model training platform on Together, too. You have a bunch of stuff in there for data selection, like the DSIR and things like that. How did you decide to work on that versus, because you first started with like some of the fine tunes on LLAMA. Do you see a lot of interest there? And I know you've been doing a lot of research on state space models and other transformer alternatives. Like, do you also see that as something you'll keep working on this year and push more people towards?Vipul [00:14:02]: Yeah, I mean, we, you know, we think of how to make training more efficient and building models more efficient. Part of that is being able to select the right dataset. This is why you have signals, DSIR. You can start with a small dataset and find similar documents, build models with that. So we think it's an important part of the kind of model build tooling that, you know, sort of widely useful for people building different kinds of models. Similarly, you know, we are running into the limits of how fast you can make transformers. And we want inference at 5,000 tokens per second. I don't think we will get there with transformers and we need to learn longer sequences. Data, again, becomes very, very expensive with transformers. So we work on state space models and all the research that we are doing there. And hopefully other labs will pick up on this and make it a kind of important target for optimization. But we think that, you know, open source is a great place for this. We can provide these recipes for data and for training to our customers who are building, you know, custom models themselves. And, you know, we are quite excited about the sort of progress we are seeing there.Alessio [00:15:18]: Do you have some of these models available for inference on Together?
Can people play around with them directly, you know?Swyx [00:15:25]: Yeah.Vipul [00:15:25]: Yeah, they're available for inference on our serverless platform.Swyx [00:15:29]: I always try to be the person who asks about acronyms in case, you know, people want to understand. Should we explain importance resampling, you know, that kind of stuff?Ce [00:15:37]: Oh, yeah. So DSIR essentially, it's a fundamental idea. So it's one of the paper from Percy, right? So essentially, if you know what you are doing, you can actually use that as a very strong signal about what data to put into the training process, right? So that's essentially the fundamental idea, right? So, and then more concretely, right? So there are actually different versions of DSIR, right? So one version is like if you have a validation set, right? You can actually somehow measure the similarity between the validation set and also your pre-training corpus and essentially subset, like the subset. And often there's actually like a less targeted version of DSIR where you'll say, yeah, maybe Wikipedia is actually a very good corpus. Let's try to find more Wikipedia, right? And you can think about it in two ways, either as a way to come up with different weights for different data slices. Yeah, so as like filter type of step. Yeah, for a data set, or think about that as like data augmentation. So that's how, yeah, that's how we think about DSIR.Swyx [00:16:33]: That makes sense. I will have to read the paper to understand a little bit more. Because when you say things like, we have to know in advance what we were trying to do with the model, then we do importance resampling. That is against the principle of general intelligence, right? Like the point is to train AGI.Ce [00:16:48]: Yeah, so it depends on what do you mean by being general or generic, right? So I think, I mean, you can always take a meta-learning perspective that we know the distribution of tasks that we care about, right?
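A rough, self-contained sketch of the importance-resampling idea behind DSIR (hashed unigram buckets stand in for the paper's hashed n-gram features; all function names here are mine, not the paper's or Together's API):

```python
import hashlib
import math
import random

def hashed_ngram_features(text, buckets=256):
    """Map a document to hashed unigram counts -- a tiny stand-in for
    DSIR's hashed n-gram features."""
    counts = [0] * buckets
    for w in text.lower().split():
        h = int(hashlib.md5(w.encode()).hexdigest(), 16) % buckets
        counts[h] += 1
    return counts

def bucket_distribution(texts, buckets=256):
    """Smoothed bucket distribution over a corpus (add-one smoothing)."""
    total = [1.0] * buckets
    for t in texts:
        for i, c in enumerate(hashed_ngram_features(t, buckets)):
            total[i] += c
    s = sum(total)
    return [x / s for x in total]

def importance_weight(text, p_target, p_raw):
    """w(x) = exp(sum over features of count * log(p_target / p_raw))."""
    feats = hashed_ngram_features(text, len(p_target))
    return math.exp(sum(c * math.log(p_target[i] / p_raw[i])
                        for i, c in enumerate(feats)))

def dsir_select(raw_texts, target_texts, k, seed=0):
    """Resample k raw documents with probability proportional to w(x)."""
    p_t = bucket_distribution(target_texts)
    p_r = bucket_distribution(raw_texts)
    weights = [importance_weight(t, p_t, p_r) for t in raw_texts]
    random.seed(seed)
    return random.choices(raw_texts, weights=weights, k=k)
```

Documents whose hashed-feature profile looks more like the target corpus get higher weights and are more likely to survive the resampling, which is exactly the "find more Wikipedia" use Ce describes.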
So you can always go kind of up in the ladder of how general the whole thing is, right? But also for many of the customers that we are actually talking to, right, they have kind of very targeted application, right? The benefit you can get out of that is you could build a better open model, often smaller, often easier to do inference, if you know what you want, right? So I think the whole trade-off would be, and the x-axis would be how generic the whole thing will be. The y-axis would be not only the top accuracy, but also a whole bunch of the deployment cost, right? The size of the model, right? The robustness of the model. So I think different people will navigate the space in different way. And we want to be the platform, essentially, whatever point that you want, we have a solution for you.Swyx [00:17:43]: One more thing on data before we go deeper on state-space models. Are we running out of data? Can we go in order of magnitude? Can we go five orders of magnitude? How do both of you think about how much data we have and how much we need?Ce [00:17:55]: Yeah, so I think that's a very, very good question. So I don't think we are running out of data on Earth.Swyx [00:18:02]: Right, so think about it globally. Training data, training class data.Ce [00:18:05]: Yeah, yeah, so I think, I mean, some of them are not accessible, right? But I do think there are many organizations in the world have enough data to actually train very, very good models, right? So, I mean, they are not publicly available, right? But there are people who actually have access to those, right? So I think in general, right? So if you think about the data in the open space, right? So I guess that was specifically that you actually mean whether we are running out of data. I do think there need to be some way, right? That people who are training open models get connected with essentially data that's not internet data. 
So I think that channel need to be opened up for the open model to get more data, right? But I'm kind of on the optimistic side that the society will figure out a way that we can train open models that's beyond this internet data.Swyx [00:18:57]: Beyond internet, meaning books?Ce [00:19:00]: I mean, there are a lot of those, right?Swyx [00:19:02]: Books, right?Ce [00:19:02]: Transcripts, right? Videos, audios, right? So there are a whole bunch of data sources that we are not integrating into open data side, right? So, and maybe they shouldn't be open, right? So I think the community need to figure out a way, yeah, like the best balance, yeah? Such that we can have open models, but on the other hand, also have a reasonable collection of data that we can actually use.Swyx [00:19:29]: I think a lot of people think that, there's a theory that Whisper was released so that you could transcribe YouTube and then use that as a source of tokens. Then I talked to other researchers who are like, you know, YouTube has very low quality tokens. You know, do you want your model to talk like a live streamer from YouTube? Because that's what they're going to do. So it's not clear, like what the quality of this data could be.Ce [00:19:53]: Yeah, I guess that depends on your application, right? So I think as a platform, right? So our goal is whatever application that you have, yeah, so we have a platform that you can actually achieve your goal, right? So there are definitely applications that kind of make sense to speak like YouTube, right? So, but there are probably also other application that kind of more on the formal side, right? So I think there are going to be a diverse collection of models, both open and closed, right? So, and we kind of want to be the engine that powers that.Swyx [00:20:21]: There's a lot of people who own data sources who are doing the locally optimal thing and humanity as a whole is losing out. 
So like New York Times is suing OpenAI, you know, Stack Overflow shut down their API, Reddit shut down their API, X, you know, made their own model, right? On Twitter data. We're just going to have all these like tiny little gardens of data that it would be useful in a general model, but everyone's just trying to make their own model. And it seems like globally suboptimal.Vipul [00:20:47]: I think you need to have some kind of a marketplace for figuring out how to get this, you know, data into models and have, I think we'll increasingly see more of that. You know, I think there's a positive aspect to it too. There is an incentive for creators to participate in a system, which is sort of more fair relative to, you know, the capture of value by an AI company that's taking their data. But I agree. I think this is a big open problem that needs to be solved. And I hope there will be, you know, serious efforts around it.Alessio [00:21:19]: Let's talk about the most precious resource on planet earth, GPUs. You have a lot of compute obviously, but you also have a lot of product pieces. You have inference, you have fine tuning, you have pre-training. What's the split in terms of usage? Do you see most people are just running inference on off the shelf models? Do you see maybe some last mile fine tuning?Vipul [00:21:40]: I would say right now, the top five models on our inference stack are probably all fine-tuned versions of open models. And we've seen- Who fine-tuned them?Swyx [00:21:51]: You fine-tuned them?Vipul [00:21:52]: They were fine-tuned by our customers.Swyx [00:21:54]: By your customers.Vipul [00:21:55]: You know, either on our platform or off our platform. And we are generally seeing that, you know, that is the sort of trend where you can get better quality on your task by sort of now easily adapting these models to your data. We also have, I would say, over 20 big model builds happening on the platform, which are customer builds.
We see a lot of training and it's also somewhat surprisingly a more continuous kind of workload. We sort of imagine that this would be more episodic. You train a model and then you do inference. But what we find is, you know, we train a model and then they train the next version and then the next version, which sort of grows in scale. I would say training is still the bigger portion. In some ways inference is superlinear in model quality. And as the models are getting better, there's more and more inference.Swyx [00:22:48]: Oh, because they're more useful. Yeah, they're more useful, yeah. So, okay, so training is bigger. This is actually consistent with what we've heard from Mosaic, that, you know, people think that training is sort of like a one-time deal. You do one big run and then you're done. It's never true. And so I'm interested in, like, putting some numbers and I don't know what you have disclosed or what you want to disclose, but, like, how many GPUs do you have? What is the equivalent amount of compute that you have? Because I understand that your GPU setup is different than what people typically think of, like, a giant data center somewhere, right?Vipul [00:23:20]: I don't think we have shared this number publicly. It's, you know, so this will be the first time, I guess. Like, we have close to 7,000 to 8,000 GPUs today. It's growing monthly.Swyx [00:23:31]: What class of GPU are they?Vipul [00:23:32]: They're mostly A100s and H100s.Swyx [00:23:35]: Okay.Vipul [00:23:36]: And probably more, I think, split towards H100s now. You know, we'll be sort of building this best-of-class hardware. So as there are other versions of these coming out later this year, we plan to have those in the fleet as well.Alessio [00:23:53]: I know when we talked last year, you were also using some of the supercomputers by the Department of Energy. There was kind of like a lot of random GPU compute in the world. Have you seen that kind of getting timed out?
I think maybe a year ago, people were like, oh, yeah, you can use this GPU computer that is going to be end-of-life. Has the bar changed to give access to those resources?Ce [00:24:13]: From our perspective, it's actually getting better. Yeah, so from the community perspective, because many of the institutions in the world, they're actually investing in hardware, right? So for example, we are working with one of the institutes in Germany called Hessian AI, right, which gives us a lot of help on the compute side. So they start to have this very big GPU cluster, and they're actually sharing that with the community, right? And it's not super big, right, but also not a small one, right? So you start to see this, like, different lives that start to pop up, right? And because of the power of the community, they start to actually share that. So we actually find as a researcher today, it's probably easier for them to actually get a GPU than last year.Swyx [00:24:56]: Interesting.Alessio [00:24:56]: And then for you to buy them, what's the state of the market right now? Is it still extremely hard to get any? Do you have Jensen's phone number? Do you have like GM phone number? Do you guys get like the SDR because you're like under 10,000?Vipul [00:25:12]: NVIDIA is obviously motivated to help us, both as an investor and we are their customers. I would say the market is very tight still, and it's likely going to be this way for a while, is my sense that the demand for AI computing is just kind of ramped up very, very quickly, and it will take a while for supply to catch up.Swyx [00:25:37]: So how tight it is, and let's say compared to like a year ago, two years ago, what do you mean when you say tight? The things you want, you can't get?Vipul [00:25:42]: You can't get them immediately. They're sort of, you know, minimally like two to three months out. Any inventory that shows up tends to clear very, very rapidly. 
And, you know, we obviously sort of look at this in a very detailed and analytic way. There are four to five million GPUs that will be sold this year by NVIDIA and others. And if you think about 512 to 1,000 GPU cluster for a company, that's 4,000 to 8,000 companies, right? So it's in some ways a very small number. In other ways, the cost of GPUs will be, you know, 80 to $100 billion, and then you layer servers and data center space and electricity on top of that, and that's, you know, close to $250 billion worth of kind of compute, which when you compare it to the cloud computing of today, you know, AWS last year was $88 billion in revenue. So this is really kind of a build-out happening of AI hyperscalers. It is much more disaggregated, and it's very, very global. So, you know, we think that GPUs are going to be sort of a precious resource for a long time, and using them optimally is very valuable.Swyx [00:27:02]: Yeah.Alessio [00:27:02]: Our friend, Dylan Patel from SemiAnalysis, he wrote a post about the inference market recently and obviously mentioned you guys. In his post, he said, our model indicates that Together is better off using two A100 80-gig systems rather than an H100-based system. The temperature and performance testing also point to Together utilizing speculative decoding. Any thoughts? Is Dylan right? I don't know, what's-Swyx [00:27:26]: What is his model, man? What does he know that they don't know? Yeah, exactly.Alessio [00:27:30]: I wanna know, I guess like from the outside, and sometimes we even do it, we try and speculate on what people are actually doing. So for the first time, now we have a former guest writing about a current guest. So we wanna know what you guys thought and maybe what are some of the misconceptions that people from the outside have on what it takes to run like a GPU cloud today?Vipul [00:27:50]: Yeah, big fan of Dylan's, by the way. I religiously read SemiAnalysis. I think there were some errors in that analysis.
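Vipul's back-of-the-envelope figures can be checked directly; taking the midpoints of his ranges (the midpoints are my choice, the figures are his):

```python
gpus_sold = 4_500_000                # midpoint of "four to five million GPUs this year"
cluster_lo, cluster_hi = 512, 1000   # GPUs in a company-scale cluster

companies_lo = gpus_sold // cluster_hi  # if everyone buys the big cluster
companies_hi = gpus_sold // cluster_lo  # if everyone buys the small one
# roughly 4,500 to 8,800 companies, in line with the "4,000 to 8,000" estimate

gpu_spend = 90e9            # midpoint of the $80-100B GPU cost
total_buildout = 250e9      # after layering servers, data centers, electricity
aws_revenue = 88e9          # the AWS revenue comparison point
multiple = total_buildout / aws_revenue  # build-out vs. one year of AWS
```

The interesting part of the arithmetic is the last line: the claimed AI build-out is nearly three years of AWS revenue, which is what motivates the "AI hyperscalers" framing.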
In particular, we were trying to decode it and one of the things we noticed is that it assumed that input tokens weren't being priced. So I think that may have been an error in the model. I also don't think that there's this assumption that people are running this at a loss. I think it's very expensive. You can't do that for very long. And there are trade-offs in terms of batch sizes you use and the kind of tokens per second performance that are kind of system trade-offs. We've done a lot of work. This is one of the key areas of research for us. So our inference stack is a combination of 50 different sort of tricks and techniques and we think there's a lot of room for optimization here. So whichever hardware provides better performance, whether it's H100 or A100s or L40s, we can sort of measure price performance on particular hardware and we tend to use that for that model or in some cases, certain customers have data streams which can be then optimized for a particular configuration regime. So we do fairly detailed work on how to make this more efficient and so it's hard to, from the outside, looking at memory bandwidth and estimating what's actually happening.Alessio [00:29:26]: How much of these 50 tricks are you keeping to yourself and how many are you gonna open? Because you have Tri now, obviously Flash Attention 2 is open source. He mentioned he'd love to come work at Together because of how much you care about open source. Yeah, how do you weigh that as a CEO and CTO?Vipul [00:29:43]: A lot of it is open, right? Flash Attention, Flash Decoding, et cetera, and we publish something that's very generally universally useful. It's going to produce better open source AI. We tend to publish as open source. I think on the inference stack, there are open source inference stacks which are pretty good and definitely today, it gives us a competitive advantage to have the best one. So we are not sort of rushing out to release everything about it.
It's not overall that additive to open source out there and it is particularly useful as a business for us to provide best price performance. Yeah, we make these decisions. We have discussions. Anything that we keep closed, we generally talk about it quite a bit and decide like this is the piece that is closed for today and it may not be the case six months from now. It may not matter as much.Ce [00:30:40]: Yeah, so I think being open is kind of very important, right? So I think the whole company actually built on this idea that there's going to be ecosystem built on our open models, right? And that's also how we are really lucky to attract this top group of talents to actually join us because of the dream and the mission that we have on our side to really facilitate the open ecosystem, right? So I think in general, it's like I think all the ideas should be open. So that's why we publish papers, right? We actually talk about ideas, right? So I don't think it makes any sense to keep idea like close, right? So there are some software artifact that are kind of really deeply embedded into our kind of own kind of like stack. It kind of only useful when you're trying to build a disaggregated cloud, right? Maybe at some point that we're going to be open as people said, right? But at this moment, right? So we are kind of busy actually building it, right? So that's probably kind of getting to the picture about when that piece is going to be open, right? But I think on the research side, the ideas and for our people to publish things, I think that's really, really important, right? So I think that's how we get talent. That's how I think we as a company going to move the field forward.Swyx [00:31:49]: I noticed that you never used the word federated learning or inference. Is there a distinction that you draw?Ce [00:31:55]: So, I mean, it's definitely not intentional, but I think federated learning is, have been used in so many different ways by so many different people. 
It starts to lose a very precise meaning about what that really means, right? If you go back to the original Google paper of federated learning, I think that's very different from what people are talking about today when they say federated. Yeah, we kind of want to be really precise about it.Swyx [00:32:18]: And so your term is disaggregated.Ce [00:32:19]: Yeah, so as an infrastructure, right? So that's disaggregated.Swyx [00:32:22]: Aren't most clouds disaggregated? Like what's different about it?Ce [00:32:27]: So one way is that most of the cloud are disaggregated, but some of that is actually being exposed to the user, right? If you go to AWS, you do know which region you are in, right? So I think one thing that we are trying to do is you have this disaggregated cloud, not only about location or geographically where they are, but about this reliability and also this diversity of this infrastructure. So, and if we want to build a reliable, high-quality layer over that, the user actually don't know, right? What's actually happening under the cover, right? So I think that's one of the difference of the way that we are thinking about infrastructure.Swyx [00:33:06]: Yeah, a bit closer to Cloudflare than AWS. Yeah. Yeah. We have one question here, which we'll just throw out, it's kind of fun. So going back to this sort of inference stack piece, maybe if you had to pull out like a call for researcher or just like point out interesting areas of work that you're interested in, what pieces of the stack have the most opportunity for improvement?Ce [00:33:27]: Yeah, so I think the way we are thinking about the inference stack is, so there are multiple things that can happen, right? So you can do better algorithms, like speculative decoding, you can change the model architecture, you can go really crazy on the system side, right? And you can also code it on the hardware, right? So it's not really clear innovation on a single dimension will get you there.
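Since speculative decoding is the canonical algorithm-side example here, a minimal greedy sketch of the idea (a toy token-level version with stand-in model functions; real implementations verify all draft tokens in one batched forward pass and handle sampling, not just greedy argmax):

```python
def speculative_decode(draft_next, target_next, prompt, k=4, max_len=20):
    """Greedy speculative decoding sketch: a cheap draft model proposes k
    tokens at a time; the target model keeps the longest agreeing prefix,
    then supplies one corrected token itself. In this greedy variant the
    output always matches what the target model alone would produce."""
    out = list(prompt)
    while len(out) < max_len:
        draft, ctx = [], list(out)
        for _ in range(k):                  # draft model speculates ahead
            t = draft_next(ctx)
            draft.append(t)
            ctx.append(t)
        accepted = 0
        for i, t in enumerate(draft):       # target verifies each position
            if target_next(out + draft[:i]) == t:
                accepted += 1
            else:
                break
        out += draft[:accepted]
        if len(out) < max_len:              # target's own token on mismatch/end
            out.append(target_next(out))
    return out[:max_len]

# Toy model: the "target" just counts upward mod 10.
count_up = lambda ctx: (ctx[-1] + 1) % 10
```

Because the target model validates every accepted token, the output matches running the target alone; the win is that one verification pass can confirm several cheap draft tokens at once, which is where the speedup comes from.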
So the key thesis on our side is, if you only push on one direction, you are going to reach diminishing return really, really quickly. Yeah, there's only that much you can do on the system side, only that much you can do on the algorithm side. I think the only big thing that's going to happen is when you ask all those dimensions to actually compound, right? So to have algorithm, model, and system all come together, so I think that's how we reach the next 10 times improvement on inference, right? So I don't think there's a single dimension that is particularly important, but looking at this space in a joint way, right? Try to co-optimize jointly multiple dimensions, I think that's going to be really important for the community to look at.Vipul [00:34:28]: Yeah, we often see, I see numbers from the team and you have these multiple methods, not all of them compound. So you mix these together, it's still similar results and some combination of them will have this incredible effect that is really, really super interesting. So it's very systems, you know, a kind of broad systems approach to it that's the most effective.Swyx [00:34:51]: I think I finally get the name of the company, like- Bring it together, yeah. Everything needs to be automated together.Alessio [00:34:57]: All right, just quickly, how does all this work change, just like some of the architectures change? I know a mixture of experts like speculative decoding is a little less efficient because of memory bandwidth. How much of it do you invest when it's a maybe model-specific improvement versus more horizontal thing? Also, you're researching different architectures, so how much do you want to spend time optimizing what state of the art today versus what's coming next?Vipul [00:35:24]: We do spend time on what state of the art today as well as what's next. 
You know, the value we get from doing specific optimization, even for, you know, what works well for a particular model on A100s with a particular bus versus H100s, it's a worthwhile investment for us. So we will go down fairly deep into a specific architecture and specific hardware. It does also inform what works better where, and you don't have to take the same approach for, you know, every model and every sort of hardware setup. We can take these different approaches and we do have these multiple systems now. We know that this, you know, system B is better for Mixtral and system C is going to be better for StripedHyena or Mamba.Alessio [00:36:13]: Before we move on from inference, we need to talk about the AnyScale drama. So we're actually having Sumit on the podcast tomorrow, who also talked about, kind of came to your guys' support about how, yeah, how important it's not just like, oh, Together saying this benchmark's not good because they look bad in it. How, I guess like, it's a hard question to ask, but like, why did you decide to just come out and say it? And how maybe does that also reflect the values that you guys have about open source and openness and kind of like being transparent about what's real and maybe hopes for standardizing some of these benchmarks to make it more clear?Ce [00:36:56]: So it's a great service AnyScale is doing for the community, right? I mean, it's very hard to do benchmark. The moment you do benchmark comparing N players, right, N minus one will be unhappy. You have two tables, then maybe N of them will be unhappy, right? So it's a very great thing that they're doing. And in some of the work that we are doing, we actually use LLMPerf, right? So it's a great thing that they're actually doing. So I think one thing about benchmark is, and probably the professor part of me are talking, is a good benchmark should think about how it's going to incentivize the field to actually move forward, right?
So if the benchmark really becomes a kind of standard, how are people going to over-optimize to the benchmark? And when people are doing that, what are we actually trying to incentivize? Will that move the world to a better place, or will that essentially have every single player focus on marketing, or spend time or money on something that actually does not matter on the technical side? It's very hard to strike that balance. So I think the reason we tried to give feedback on the benchmark is that we want to open up a discussion about how the industry should come together and define a common way to compare with each other, like how database people do TPC. Maybe we should have something similar. So we are trying to start some of that conversation. It's not really that we jumped out to say it's not good, because there's no way to have a perfect benchmark; that doesn't exist. We just want to kickstart a conversation: maybe we should come together and do something that the community agrees on and that aligns with the benefit a user is going to get.

Vipul [00:38:42]: I've spoken to the Anyscale team since then, and I think they had really great intentions. Partly, everyone had a reaction to it because it just didn't match the benchmarks that we've all run internally against different services. I think there should be a common industry benchmark run by an independent party, versus one of the vendors.

Swyx [00:39:04]: Is there one that you point to?

Vipul [00:39:06]: I don't think one exists today. I think there should be. We're having some conversations about someone setting one up. And there are lots of interesting aspects of this. Time to first token is a function of where the test was run from.
There is different load on these services at different times of day, and on weekdays versus weekends. So you have to measure that well. And I think if all of that were done very well by an independent source, it would be a very useful service to customers and to the services themselves.

Swyx [00:39:39]: Yeah, I'll point people to artificialanalysis.ai, which is a new one that recently emerged. I don't know if they've done it right; it looks like a side project of a couple of people. But I think it's in all the providers' interest to work with them and ensure that there's an independent third party measuring these things, at least on the baseline. For me, what's more worrying is what Ce was saying: do these benchmarks skew things in ways that customers might not be mindful of? What are these things overemphasizing that we might be missing? I don't really know. It seems like a lot of these services bundle in a version of quantization as well, and that means performance trade-offs, right? You're not comparing apples to apples, the same model itself, even though it's a Llama variant or whatever. So what do people trade off? They trade off latency, they trade off price; obviously, those are the first two. But what else? What factors matter in an inference business?

Ce [00:40:33]: Yeah, so there's also the throughput, beyond the time to first token. And then there are things that users do not often see, for example reliability and capacity. Those also have an impact on user experience at a global scale, maybe not on a single query, but in aggregate you can see it. There's also whether you are emphasizing P50 or P95, a whole bunch of things you can actually play with. And of course, there's also quality.
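The axes Ce just listed (time to first token, throughput, P50/P95) can be sketched as a small measurement harness; `fake_stream` below is a hypothetical stand-in for a provider's streaming API, not any vendor's real interface.

```python
import time

def measure_stream(token_iter):
    """Time-to-first-token (TTFT) and decode throughput for one request.
    `token_iter` is any iterator yielding tokens as they arrive."""
    start = time.perf_counter()
    ttft = None
    count = 0
    for _ in token_iter:
        if ttft is None:
            ttft = time.perf_counter() - start  # latency to the first token
        count += 1
    elapsed = time.perf_counter() - start
    return ttft, count / elapsed  # (TTFT seconds, tokens per second)

def percentile(samples, p):
    """Nearest-rank percentile, for P50/P95-style reporting."""
    ordered = sorted(samples)
    idx = round(p / 100 * (len(ordered) - 1))
    return ordered[idx]

def fake_stream(n_tokens, delay):
    """Hypothetical stand-in for a provider's streaming endpoint."""
    for _ in range(n_tokens):
        time.sleep(delay)
        yield "tok"
```

A real benchmark would run many such requests from several regions at different times of day, as Vipul notes, and report P50/P95 over the collected TTFT and throughput samples.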
So there are different ways to make the whole thing faster: speculative decoding, quantization, or a combination of those. There are so many things to play with, so we probably need a benchmark whose protocol is transparent, to make it very clear what each provider is doing, with a whole bunch of quality checks to make sure we are putting the right group of systems in the same table. Then the user can actually navigate the space. I think that's going to be good for everyone.

Swyx [00:41:27]: Yeah, makes sense. It's a very important field, and hopefully a good third party emerges from this. I just want to touch on one more piece: I'm appreciating from this discussion that fine-tuning is a bigger part of your business than I thought. The other big player in fine-tuning is Mosaic. Well, Mosaic is more training, but there are a bunch of other players in the fine-tuning space. If I were a prospective fine-tuning customer, what do I come to you with? Do I come to you with my custom data and that's it? Do I also have to write the fine-tuning code? What level of engagement do you have with your customers?

Vipul [00:42:01]: It's across the spectrum. Some of our customers are pre-training models from scratch, and many of them will bring their data sets and use our infrastructure and training stack to train their models. There are others who have trained smaller models and want to scale up across infrastructure and across data, so we'll help them do that. With other customers it starts a little more consultative: they have a particular task and idea in mind, and we will help them get from there to the data set and the right model to achieve that task. So it's a spectrum, and our goal is to productize as much of this as possible so that the whole process can be fast and scalable.
I would say there is a lot more understanding around fine-tuning now than even six months ago. There are open-source tools, recipes, literature, podcasts, Discord channels where people are figuring it out. In many ways it's one of the successes of open source: you have small collectives of engineers who are now creating the top models on open-source leaderboards, trying out all sorts of data recipes, creating synthetic data, merging models. That's really fun to see, and the sort of agency that exists now is exciting. We see a lot of it being applied in products and in more commercial models that people are deploying in their applications.

Alessio [00:43:50]: And then, just to wrap up on Together, it's almost becoming a platform as a service, because now you've released Together Embeddings. How did you get 92.5 accuracy on 32K retrieval? And do you think we're getting to the point with embeddings where we've done everything we could, it's as optimized as it's going to get, and we should just focus on models and inference? Or is there still room to improve?

Ce [00:44:17]: Oh, I don't think we have even gotten started on embeddings. There are so many things. Embeddings are really fundamental for many applications, for example RAG; that's how people bring knowledge in. They are also the fundamental piece when you want to build a better model, because they give you an understanding of what actually gets into the model. You can use that to build a better data set, get a better model, then get better embeddings, and you start this loop, right? Without good embeddings, the loop is not closed.
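The embed-retrieve-improve loop Ce describes bottoms out in a similarity search. A minimal sketch with hand-written toy vectors (a real system would get these from an embedding model):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, doc_vecs, k=1):
    """Indices of the k documents most similar to the query."""
    order = sorted(range(len(doc_vecs)),
                   key=lambda i: cosine(query_vec, doc_vecs[i]),
                   reverse=True)
    return order[:k]

# Toy corpus of two documents; doc 0 is constructed to be close to the query.
docs = [[0.9, 0.1, 0.0], [0.0, 0.2, 0.9]]
query = [1.0, 0.0, 0.0]
best = top_k(query, docs, k=1)  # -> [0]
```

Issues Ce raises, like negation, show up exactly here: two sentences that differ only by "not" can land very close in vector space, so their cosine scores barely separate them.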
So on the quality side, there's how to embed more dedicated semantics into those vectors, how to deal with negation, for example, and then how to make the whole thing really, really fast. For the next couple of years we will see a whole bunch of new embeddings, maybe of different sizes and much, much faster than today. It's a very active research area, and people should invest more in it.

Swyx [00:45:14]: I was surprised to see Jina AI, and then there's Tengyu's Voyage. They are coming out as startups purely focused on embeddings.

Ce [00:45:25]: Yeah, it's a very, very important piece of the system. People haven't focused a lot on it before, and they should definitely start to.

Swyx [00:45:36]: Yeah. Why are the Chinese universities so good at embeddings? You know what I mean, right? Like BGE and...

Ce [00:45:44]: I don't know. We just released our first embedding model, so we are still learning how to build one. Ask me again in six months; I'll probably have more insight about how to build a better one.

Swyx [00:45:53]: I just noticed that ada-002 used to be at the top of the MTEB leaderboard, and then it's just been sliding down and down, and all the new models are coming out of China for some reason. I don't know what's going on there. So, we cannot leave this discussion without talking about state space models. But first of all, how much of the company is dedicated to research? It's obviously not production quality yet, but...

Vipul [00:46:17]: I would say it's like 40, 45 percent; I was counting this morning.

Swyx [00:46:22]: That's huge. It's a big investment. Okay, well, it looks like it's paying off, so.
And then, high level, I will confess for the listeners who are similarly skeptical: I did not used to care about long context, because I figured, you know, 30K is enough, 100K is enough, right? I'm not modeling DNA sequences or anything like that; why do I need long context? First of all, I'll throw that open to you. But second, what Mamba did for me was change the perception that it's only about long context, that the only reason you want sub-quadratic architectures is long context. Actually, that's not true; they're also just more efficient to train, period. I'll leave that open to you: what's the motivation people should keep in their heads?

Ce [00:47:09]: There are multiple things, right? One thing is that the moment a model can do long context well, it often means it's cheaper. In principle, a transformer can do long context; it's just very expensive. What these state space models are trying to do is push the size of the state to be as small as possible and decouple the quadratic dependency on sequence length, so you can have a much better execution pattern. One direct consequence is that you can do long context really cheaply, but it also introduces a whole bunch of benefits even when you are not doing long context, and that's probably equally important: because the state gets smaller, you can run a really large batch size and be much faster. Another thing is one of the hypotheses we have with StripedHyena, which has a hybrid architecture: part of it is a state space model and part of it is still a transformer.
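Ce's point about decoupling the quadratic dependency can be seen with a back-of-envelope cost model; the operation counts below are simplified placeholders for illustration, not measured FLOPs.

```python
def attention_cost(seq_len, dim):
    """Pairwise attention touches every (i, j) position pair: O(n^2 * d)."""
    return seq_len * seq_len * dim

def recurrent_cost(seq_len, state_size):
    """A fixed-size state update per token: O(n * s)."""
    return seq_len * state_size

# With equal dim and state size, the gap grows linearly with context
# length: roughly 1,000x at 1K tokens and 32,000x at 32K tokens.
ratio_1k = attention_cost(1_000, 64) // recurrent_cost(1_000, 64)
ratio_32k = attention_cost(32_000, 64) // recurrent_cost(32_000, 64)
```

This is also why the benefits show up even without long context: a small fixed state frees memory for larger batch sizes, as Ce notes.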
Different components probably deal with different things better, so maybe by putting them together, by thinking about how information propagates over the whole horizon of the context, you can get an even better quality model than a transformer. That's why we are investing a lot in those models: not only for long context, which is very important, but also for the whole bunch of other benefits they could bring.

Swyx [00:48:42]: Yeah. How should people treat the distinction between Mamba and StripedHyena? What's the point of releasing these as two separate models? Is one sort of the Together proprietary one and the other the more open research one?

Ce [00:48:53]: Yeah, so they are pretty much at different stages of exploration; they had different hypotheses behind them. For instance, there are different views on state space models: one is Hyena, another is Mamba, and they're actually different architectures. When we built StripedHyena, the curiosity we had was: what is the highest-quality non-transformer model we can ever build? The goal of StripedHyena was to see whether we could match Mistral, and, by fine-tuning well, whether we could outperform it in some way, so it has a very, very strong baseline we are trying to beat. That's why it ended up with the hybrid design. For Mamba, the curiosity was more about how far we can push a pure architecture, so we went systematically from small to large, all the way to 3 billion parameters, where the baseline was essentially the best 3-billion-parameter model. So they are at different stages of exploration; at some point I think they are going to converge. We actually learn different things when building different models.
I think they are just intermediate stages in the exploration at different points.

Alessio [00:50:02]: You mentioned the hybrid architecture. Is that the model grafting you mentioned in the StripedHyena post, where you can have transformer and non-transformer layers together? This is a concept I hadn't heard of before reading about it. I think most people's mental model is transformers OR something else, not transformers AND something else. How do you train a model that is hybrid? Is there any difference in how you construct your data sets? Is there any difference in how you run inference on it? How should people think about starting research in this field?

Ce [00:50:36]: Yeah, so we were also very surprised when we came up with this hybrid architecture. The way to think about it is that you have different layers in the neural network. A state space model at some layers will already give you the benefit; other layers can be transformers, which give you a more global view of the sequence; and the remaining layers don't have to have that, while all the other benefits still kick in. We don't know what the optimal mixture of different architectures is. In principle, we could have Mamba, Hyena, and transformer layers all come together and see what makes sense; we have no idea what's optimal there. What we are excited about is that the community now has a whole bunch of building blocks they can play with like Lego: just put them together and see what happens. We are in the process of learning more about this architecture, and when we know what we are talking about, we will definitely share with the community how to do it in a systematic way.

Swyx [00:51:41]: Cool. What are we still unsure about?
Why don't we just put all the money in the world into training these things now? What is left to figure out before we scale this thing?

Ce [00:51:53]: If you look at how the transformer developed over the last five to ten years, people didn't start from the "Attention Is All You Need" paper and immediately put all the money in. It always starts from a very systematic understanding of the scaling, the data quality, and essentially the limits. I think for a state space model to go from the labs to the real world, you need to go through the same process. Of course, doing it the second time is easier, but there's no way to skip the systematic step of studying scaling laws, studying what data to put in, and studying the impact of different data slices on final model quality.

Swyx [00:52:33]: Do you expect that the data inputs will be different?

Ce [00:52:37]: I don't know, but I wouldn't take for granted that they should be the same. We have no opinion on that, because that's the result of the study, not the assumption; we do not need to assume it.

Swyx [00:52:51]: Okay, scaling laws and data. Anything else architectural that we're not sure about? Because now you have this selection mechanism that you're pretty happy with.

Ce [00:52:59]: Yeah, so first of all, how to mix them. And second, the architecture itself. If you look at the transformer, one very interesting piece is that people also optimized the hardware to make sure things run very fast: very efficient kernels, very efficient hardware. That added another boost for the transformer architecture, and something similar should happen for state space models.
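The systematic scaling study Ce describes usually means fitting a power law, loss(N) = a * N**(-alpha), across model sizes. A minimal log-space least-squares sketch on synthetic data points (the numbers are illustrative, not from any real run):

```python
import math

def fit_power_law(sizes, losses):
    """Fit loss = a * N**(-alpha) by linear regression in log-log space."""
    xs = [math.log(n) for n in sizes]
    ys = [math.log(l) for l in losses]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    alpha = -slope                   # log L = log a - alpha * log N
    a = math.exp(my + alpha * mx)    # intercept back in linear space
    return a, alpha

# Synthetic points generated from loss = 5 * N**(-0.1); the fit should
# recover a ~ 5 and alpha ~ 0.1.
sizes = [1e9, 3e9, 7e9]
losses = [5 * n ** -0.1 for n in sizes]
a, alpha = fit_power_law(sizes, losses)
```

In practice the same fit is repeated per data mixture, which is how the "impact of different data slices" Ce mentions gets quantified.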
Which architecture is easier to run on the hardware? If it runs faster, you can put in more data, and that adds another dimension to the scaling law. So I think we just need to plow through the whole space and be really systematic, from small models to 1 billion, 3 billion, 7 billion, and go all the way up. I wouldn't jump around in the space; I would be patient and systematic. I think we'll get there.

Swyx [00:53:52]: Yeah, well, I'm looking forward to more research from you guys to figure that out. One dimension we didn't talk about: we covered long context and efficiency, but speed is also very important. A good inference provider today gives, let's say, 70 tokens per second, maybe faster than less good inference providers that are more like 30 tokens per second. That's the rough range of the state of the art today, and it's around human speaking speed; human reading speed is about 200 words per minute. So why do we need 5,000 tokens per second is my question back to Vipul. And is this something that is an emphasis for research as well, or is it more an inference-only thing?

Vipul [00:54:29]: There are applications that consume the tokens produced by a model, so they're not necessarily being read or heard by humans. That's a place where we see that level of requirement today, and really nobody can quite satisfy it. There's also the question, as intelligence grows, of how you increase the bandwidth and reduce the latency. If we can do 5,000 tokens a second, the throughput of the same card goes up significantly and it can support more applications. So I think it's important from that perspective. And then it opens up new UX possibilities. Once you can get sort of an immediate answer...

Federal Drive with Tom Temin
Air Force Research Lab creates a new approach to situational awareness in space

Federal Drive with Tom Temin

Play Episode Listen Later Jan 25, 2024 11:44


You may not wake up thinking about cislunar-space situational awareness, but people at the Air Force Research Laboratory do. In fact, AFRL has had two programs for such awareness, two programs it has now brought together. The resulting program is called the Oracle family of systems. To find out more, Federal Drive Host Tom Temin talked with two of the project leaders: Mission Lead Jaime Stearns and Investigator James Frith. Learn more about your ad choices. Visit megaphone.fm/adchoices


The Other Side Of The Firewall
U.S. Nuclear Research Lab Data Breach - The Other Side of the Firewall Season 2 Episode 510

The Other Side Of The Firewall

Play Episode Listen Later Dec 19, 2023 12:38


In this episode, Ryan and Shannon discuss how 45,000 nuclear research personnel were impacted by a data breach. Please listen!

Longplay
Pokémon Red, Blue & Yellow

Longplay

Play Episode Listen Later Oct 9, 2023


We're heading back to the Game Boy for this requested Longplay and the very colourful (at least in their names anyway!) soundtracks to the first games in the Pokémon series. Chapters: (00:00:00) - Welcome to Longplay! (00:01:10) - Pokémon Red, Blue, & Yellow - Opening Movie (Red, Green & Blue Version) (00:01:21) - Pokémon Red, Blue, & Yellow - Opening Movie - Stereo (Red, Green & Blue Version) (00:01:31) - Pokémon Red, Blue, & Yellow - Title Screen (00:03:08) - This is Longplay 1 (00:09:43) - Pokémon Red, Blue, & Yellow - Pallet Town (00:10:53) - Pokémon Red, Blue, & Yellow - Professor Oak (00:11:33) - Pokémon Red, Blue, & Yellow - Hurry Along (00:12:07) - Pokémon Red, Blue, & Yellow - Oak Pokémon Research Lab (00:12:42) - Pokémon Red, Blue, & Yellow - Fanfare: Pokémon Obtained (00:12:44) - Pokémon Red, Blue, & Yellow - A Rival Appears (00:13:19) - Pokémon Red, Blue, & Yellow - Battle! (Trainer Battle) (00:15:00) - Pokémon Red, Blue, & Yellow - Fanfare: Level Up (00:15:02) - Pokémon Red, Blue, & Yellow - Victory! (Trainer Battle) (00:15:28) - Pokémon Red, Blue, & Yellow - Route 1 (00:16:20) - Pokémon Red, Blue, & Yellow - Battle! (Wild Pokémon) (00:17:41) - Pokémon Red, Blue, & Yellow - Victory! 
(Wild Pokémon) (00:18:16) - Pokémon Red, Blue, & Yellow - Fanfare: Item Obtained (00:18:17) - This is Longplay 2 (00:22:01) - Pokémon Red, Blue, & Yellow - Viridian City (00:23:03) - Pokémon Red, Blue, & Yellow - Pokémon Center (00:24:06) - Pokémon Red, Blue, & Yellow - Pokémon Healed (00:24:08) - Pokémon Red, Blue, & Yellow - Fanfare: Pokémon Caught (00:24:11) - Pokémon Red, Blue, & Yellow - Fanfare: Trade Pokémon Received (00:24:13) - Pokémon Red, Blue, & Yellow - Viridian Forest (00:25:59) - Pokémon Red, Blue, & Yellow - A Trainer Appears (Boy Version) (00:26:28) - Pokémon Red, Blue, & Yellow - Jigglypuff's Song (00:26:34) - Pokémon Red, Blue, & Yellow - Fanfare: Professor Oak's Evaluation (00:26:36) - Pokémon Red, Blue, & Yellow - Evolution (00:27:00) - Pokémon Red, Blue, & Yellow - Pokémon Gym (00:28:03) - Pokémon Red, Blue, & Yellow - Battle! (Gym Leader Battle) (00:29:49) - Pokémon Red, Blue, & Yellow - Victory! (Gym Leader Battle) (00:30:36) - This is Longplay 3 (00:33:06) - Pokémon Red, Blue, & Yellow - Route 3 (00:34:02) - Pokémon Red, Blue, & Yellow - A Trainer Appears (Girl Version) (00:34:27) - Pokémon Red, Blue, & Yellow - Mt. Moon (00:35:55) - Pokémon Red, Blue, & Yellow - A Trainer Appears (Bad Guy Version) (00:36:19) - Pokémon Red, Blue, & Yellow - Cerulean City (00:37:25) - Pokémon Red, Blue, & Yellow - Route 24 - Welcome to the World of Pokémon! (00:38:11) - Pokémon Red, Blue, & Yellow - Vermilion City (00:39:04) - Pokémon Red, Blue, & Yellow - The S.S. Anne (00:40:13) - Pokémon Red, Blue, & Yellow - Cycling (00:41:28) - This is Longplay 4 (00:47:56) - Pokémon Red, Blue, & Yellow - Route 11 (00:49:05) - Pokémon Red, Blue, & Yellow - Lavender Town (00:50:33) - Pokémon Red, Blue, & Yellow - Lavender Town (00:51:46) - Pokémon Red, Blue, & Yellow - Rocket Game Corner (00:53:06) - Pokémon Red, Blue, & Yellow - Rocket Hideout (00:54:32) - Pokémon Red, Blue, & Yellow - Silph Co. 
(00:56:23) - Pokémon Red, Blue, & Yellow - Pokémon Tower (00:57:34) - Pokémon Red, Blue, & Yellow - Poké Flute (00:57:41) - This is Longplay 5 (01:00:46) - Pokémon Red, Blue, & Yellow - Surf (01:02:06) - Pokémon Red, Blue, & Yellow - Cinnabar Island (01:02:54) - Pokémon Red, Blue, & Yellow - Pokémon Mansion (01:04:28) - Pokémon Red, Blue, & Yellow - Victory Road (01:05:38) - Pokémon Red, Blue, & Yellow - Final Battle! (Rival) (01:07:07) - Pokémon Red, Blue, & Yellow - Hall of Fame (01:08:02) - Pokémon Red, Blue, & Yellow - Ending (01:09:39) - Pokémon Red, Blue, & Yellow - Unused Track (01:10:27) - This is Longplay 6 (01:16:19) - Pokémon Red, Blue, & Yellow - Opening Movie (Yellow Version) (01:16:40) - Pokémon Red, Blue, & Yellow - Printer Menu (01:18:21) - Pokémon Red, Blue, & Yellow - A Trainer Appears (Rocket Duo Version) (01:18:59) - Pokémon Red, Blue, & Yellow - Pikachu's Beach

Brain Inspired
BI 170 Ali Mohebi: Starting a Research Lab

Brain Inspired

Play Episode Listen Later Jul 11, 2023 77:15


Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience In this episode I have a casual chat with Ali Mohebi about his new faculty position and his plans for the future. Ali's website. Twitter: @mohebial

Simon Ward, The Triathlon Coach Podcast Channel
From the research lab to the real world with Dr Kerry McGawley

Simon Ward, The Triathlon Coach Podcast Channel

Play Episode Listen Later May 24, 2023 104:56


One of my stated aims with the podcast is to interview folks who are experts in their field and try to tease out what's relevant for the listener without it sounding too scientific and overwhelming. There is so much research out there regarding every aspect of training, recovery, nutrition and more. While much of it is relevant to you the listeners, much of it isn't.

Fortunately today's guest, Dr Kerry McGawley, not only does a lot of research, she also distills it into easy-to-understand snippets, much of which she incorporates into her own training. Kerry was recently crowned ITU world long-distance champion in her age group, so she really does practice what she preaches.

Her work, mostly done at Mid Sweden University, has covered training at altitude, pacing strategies, block periodisation and some female-specific interventions around the monthly cycle.

We talk about a lot of this, so I'm sure you will find something that interests you:

- When looking at research, is there evidence that one approach is clearly better than another?
- The absurdity of age-group athletes attempting to follow pro athlete training routines
- Staying away from extreme approaches
- The dangers of group training for female athletes
- Kerry shares her DIY nutrition product
- Heat and altitude as training tools

To find out more about, or to connect with, Dr Kerry McGawley, please look for her on the following social media channels:
Twitter @kerrymcgawley
Instagram @kerrymcgawley

In line with many of our other guests, Kerry also recommended some of these books as useful and interesting reads:

- Invisible Women by Caroline Criado Perez
- Range by David Epstein
- The Sports Gene by David Epstein
- Faster by Michael Hutchinson
- Endurance Training: Science and Practice, edited by Iñigo Mujika (although best to wait for the 2nd edition, coming soon, with a new chapter written by Kerry!)
You can watch a brief video about the group by going to our website here, and join our SWAT High Performance Human tribe here. Purchase a copy of my High Performance Human e-book featuring more than 30 top tips on how to upgrade your life. If you would like to help offset the cost of our podcast production, we would be so grateful. Please click here to support the HPH podcast. Thank you! Visit Simon's website for more information about his coaching programmes. Links to all of Simon's social media channels can be found here.  For any questions please email Beth@TheTriathlonCoach.com.

Beekeeping Today Podcast
Amy Vu - University of Florida Extension (S5, E49)

Beekeeping Today Podcast

Play Episode Listen Later May 22, 2023 42:07


Amy Vu is the State Extension Entomologist for the State of Florida. She works out of the University of Florida's Honey Bee Research Lab, headed by Dr. Jamie Ellis. As part of the Research Lab she is involved in the many projects ongoing there, including the Bee College, the Master Beekeeper program and Bee Learning courses. She also works closely with the migratory beekeepers who spend winters in Florida, bringing with them about a million colonies each fall; this is certainly one of the reasons Florida is always among the top ten honey-producing states in the US. Another part of her activities is working with Dr. Ellis on the podcast they produce, Two Bees in a Podcast. She also works with the honey sources Florida provides, some of which, unfortunately, are not indigenous but invasive. Chinese tallow trees are a good example, and her work on this problem involves developing programs that manage the problem plants, keeping them in check so they don't run amok while remaining useful to beekeepers. We hope you enjoy the episode. Leave comments and questions in the Comments Section of the episode's website. Links and websites mentioned in this podcast: University of Florida Extension: https://entnemdept.ufl.edu/honey-bee/extension/ Two Bees in a Podcast: https://entnemdept.ufl.edu/honey-bee/podcast/ University of Florida Beekeeping Resources: https://entnemdept.ufl.edu/honey-bee/beekeeper-resources/ Beekeeping Today Podcast on YouTube: https://www.youtube.com/@beekeepingtodaypodcast Kim's Climate Change Blog: https://www.growingplanetmedia.com/blog Honey Bee Obscura: https://www.honeybeeobscura.com ______________ This episode is brought to you by Global Patties! Global offers a variety of standard and custom patties. Visit them today at http://globalpatties.com and let them know you appreciate them sponsoring this episode! We welcome Betterbee as sponsor of today's episode. 
Betterbee's mission is to support every beekeeper with excellent customer service, continued education and quality equipment. From their colorful and informative catalog to their support of beekeeper educational activities, including this podcast series, Betterbee truly is Beekeepers Serving Beekeepers. See for yourself at www.betterbee.com Thanks to Strong Microbials for their support of Beekeeping Today Podcast. Find out more about their line of probiotics in our Season 3, Episode 12 episode and from their website: https://www.strongmicrobials.com We welcome Blue Sky Bee Supply as a sponsor of the podcast! Check out blueskybeesupply.com for the best selection of honey containers, caps, lids, and customized honey labels. Enter coupon code PODCAST and receive 10% off an order of honey containers, caps, lids, or customized honey labels. Offer ends December 31, 2023. Some exclusions apply. Thanks to Northern Bee Books for their support. Northern Bee Books is the publisher of bee books available worldwide from their website or from Amazon and bookstores everywhere. They are also the publishers of The Beekeepers Quarterly and Natural Bee Husbandry. _______________ We hope you enjoy this podcast and welcome your questions and comments in the show notes of this episode or: questions@beekeepingtodaypodcast.com Thank you for listening! Podcast music: Be Strong by Young Presidents; Epilogue by Musicalman; Walking in Paris by Studio Le Bus; A Fresh New Start by Pete Morse; Wedding Day by Boomer; Original guitar background instrumental by Jeff Ott Beekeeping Today Podcast is an audio production of Growing Planet Media, LLC Copyright © 2023 by Growing Planet Media, LLC

The UCI Podcast
UCI Podcast: Outreach program brings high school students into cardiovascular research lab

The UCI Podcast

Play Episode Listen Later Apr 24, 2023 9:50


Arash Kheradvar, UC Irvine professor of biomedical engineering, is co-principal investigator on a project to study congenital heart defects. As part of the National Science Foundation-funded initiative, Dr. Kheradvar invited local area high school students into his laboratory on the UCI campus. The students got first-hand exposure to cardiovascular research activities, experts in the field and the latest medical research technologies. The outcome was a group of students well-prepared to pursue further education in biomedical engineering.

Kathy and Suzy's kids podcast
USM Research Lab Captain Joshua White #178

Kathy and Suzy's kids podcast

Play Episode Listen Later Mar 21, 2023 112:03


Captain Joshua White works for the USM Research Lab and grew up in Cedar Grove. Captain Josh is one of our childhood friends, and his favorite artists are Master P and Big Tymers. #KathyandSuzyskids #CrawfishandBeer #Stingem Podcast Link/Social Media: https://linktr.ee/jourdanandmatthew Merch: www.crawfishandbeer.com Sponsors: Gulf South Productions https://www.gulfsouthproductions.com/ Golden Gulf Insurance www.goldengulfins.com

Heel Talk
UNC's COVID-19 Research Lab, Esports Club and Krasno Events Series

Heel Talk

Play Episode Listen Later Mar 9, 2023 13:40


This week, Guillermo and Liv talk to Senior Writer Lauren Fichten about her story regarding COVID-19 research at the UNC Baric Lab, and the internet conspiracies that it has generated. They are also joined by Audio Staffers Evin Sahin and W. H. Hayes with information about the UNC Esports Club and the latest event in UNC's Krasno Series with the Polish Ambassador to the US. 

2 Be Blunt w/Peezy
The Curaleaf Scandal with Journalist Grant Smith-Ellis

2 Be Blunt w/Peezy

Play Episode Listen Later Jan 27, 2023 58:00


2 Be Blunt w/Peezy! Brought to you by The PartyCast Network and PodConX! Powered by Stashlogix.

Joining me is a very special guest! The man behind the breaking news everyone's talking about regarding Curaleaf, Grant Smith-Ellis! We're going to deep dive into what looks to be the beginning of the end for Curaleaf and more, along with Kristin Souza and Lou Rinaldi!

Grant Smith-Ellis is a disabled grassroots policy activist from Massachusetts. He serves as the Chairperson of the Board of the Massachusetts Cannabis Reform Coalition (MassCann), works as a legal intern for the non-partisan federal policy think tank Parabola Center, studies law at New England Law | Boston and covers developments in the cannabis industry on a freelance basis for DigBoston.

You can find more from Grant, and get early access to his on-the-ground reporting, via Patreon.com/GrantSmithEllis (including exclusive newly-breaking coverage of a regulatory investigation by Massachusetts officials into industry giant Curaleaf).

Subscribe to the YouTube channel! www.YouTube.com/2bebluntpodcast
www.2bebluntpodcast.com

EXP. Share: Pokemon Playthrough Podcast
Route 5, Dimensional Research Lab, Diglett's Tunnel, Route 9, Konikoni City, Memorial Hill, Ruins of Life, and Kahuna Olivia's Grand Trial (#145)

EXP. Share: Pokemon Playthrough Podcast

Play Episode Listen Later Nov 29, 2022 47:53


To partially quote Rob Thomas feat. Santana: man, this is a hot one. Like, hot as in stolen because there's a bit of a stir this week when it's uncovered that Tanner is playing on non-regulation equipment. But as Skull Grunt always says, "you can't make an omelet without breaking a few Exeggcutes and you can't catch a presumably neon pink or green Poliwrath without playing the game at 2x speed." Or was that an Elon Musk quote? We meet a handsome old friend along Route 9, some not-so-handsome old friends in Diglett tunnel, and five new friends who can be whatever you want them to be... even bad guys. Then we cap it all off with a roll in the mud with Kahuna Olivia. (Not a euphemism.)

Science Magazine Podcast
Snakes living the high-altitude life, and sending computing power to the edges of the internet

Science Magazine Podcast

Play Episode Listen Later Oct 20, 2022 20:28


On this week's show: How some snakes have adapted to the extremes of height and temperature on the Tibetan Plateau, and giving low-power sensors more processing power. First up on the podcast, tough snakes reveal their secrets. Host Sarah Crespi talks with Staff Writer Liz Pennisi about how snakes have adapted to the harsh conditions of the Tibetan Plateau. Next on the show, Producer Meagan Cantwell talks about moving more computing power to the edges of the internet. She is joined by Alexander Sludds, a graduate student at the Massachusetts Institute of Technology's Research Laboratory of Electronics. They discuss a faster, more energy-efficient approach to give edge devices—such as low-power smart sensors or tiny aerial drones—the computing power of far larger machines. This week's episode was produced with help from Podigy. [Image: JUN-FENG GUO; Music: Jeffrey Cook] [alt: photo of a Tibetan hot-spring snake near a geothermal pool with podcast overlay symbol] Authors: Sarah Crespi; Liz Pennisi; Meagan Cantwell Episode page: https://www.science.org/doi/10.1126/science.adf3782 About the Science Podcast: https://www.science.org/content/page/about-science-podcast See omnystudio.com/listener for privacy information.

The Larry Elder Show
NATIONAL SUICIDE: New Chinese Primate Research Lab Coming to Florida?!

The Larry Elder Show

Play Episode Listen Later Oct 7, 2022 48:54


Topics include: 1) Chinese company Joinn Laboratories buys farmland in Levy County, FL for primate research; 2) China launches police station in New York City; 3) Election software exec in Michigan arrested for alleged personal ID theft and storing data in China; and 4) 5 myths about China that will get Americans killed.

More: www.Carljacksonshow.com
Facebook: https://www.facebook.com/carljacksonradio
Twitter: https://twitter.com/carljacksonshow
Parler: https://parler.com/carljacksonshow
http://www.TheCarlJacksonPodcast.com
See omnystudio.com/listener for privacy information.