Hey yo hey yo, better late than never, right? Unfortunately, because I lost the live recording of last month's episode I had to re-record the set (fortunately for you, that means the set is nice and tidy). Episode 14 of The Bonus Stages is presented with minimal talk and maximal beats.

nekonoyounamono + Wasei “JJ” Chikada - 電子の海 [Bandcamp]
Noteblock - Studiopolis Zone (from "Sonic Mania") (Funk House Version) [Bandcamp]
Doni - Go Straight (RobKTA's Go Skate Remix) [Gamechops]
Fake Blood vs Stardust - Mars Sounds Better With Me (The Young Punx Mashup) [Fake Blood, The Young Punx, Stardust]
RoboRob - Chemical Plant Zone [Bandcamp]
Dirty Androids - Into the Wild [DA Recordings]
boshii - Bob-Omb Battlefield [Dance] (from "Super Mario 64") ft. Lucas Guimaraes [Bandcamp]
Chameo - Wow [Beatport]
Wontolla - Mantis Lords (from "Hollow Knight") (Electro House Version) [Bandcamp]
Wolfgang Gartner - Wolfgang's 5th Symphony [Beatport]
BT - Somnambulist [SNR Super Smash - Extended Mix] [SNR, BT]
RoBKTA - Cerulean Dancefloor VIP (from "Pokémon Red & Blue") (Disco House Version) [Bandcamp]
Ben Briggs - Beetle Brawl (Mega Man X Series) [Bandcamp]
Elevic - Underworld [Beatport]
BlueDrak3 - Warframe "The Last Sequence" [OC ReMix]
capsule - Never Let Me Go (Extended Mix) [AmazonMP3]
Thomas Feijk - Hi [Beatport]
Prime Ordnance - ElectriCity (from "SimCity 4: Rush Hour") (Electro House Version) [Bandcamp]
bLiNd - Wild Arms "Fireflight" [OC ReMix]
Flexstyle - Divekick "It's Okay, I Still Made Money" [OC ReMix]
Electric Soulside, Muzyc - Luvin' You [Beatport]
VGR - Super Smash Bros. Ultimate Main Theme [Bandcamp]
Jewbei - Wild Arms "Desirous Sacrifice" [OC ReMix]

Stay Funky - DJ LvL

Note 1: "Mars Sounds Better With Me" and the SNR Super Smash mix of "Somnambulist" are old bootlegs I can't find any links to; links are provided to Fake Blood, The Young Punx, Stardust, SNR and BT.
Note 2: Bandcamp links are provided whenever possible. Beatport tracks can usually also be found on AmazonMP3 at a lower price, with the advantage of unlimited cloud backup, but at a fixed fidelity.
Foundations of Amateur Radio

Recently I saw a social media post featuring a screenshot of some random website with pretty charts and indicators describing "current HF propagation". Aside from lacking a date, it helpfully included notations like "Solar Storm Imminent" and "Band Closed". It made me wonder, not for the first time, how reliable this type of notification is. Does it actually indicate what you might expect when you get on air to make noise, is it globally relevant, is the data valid or real-time? You get the idea. How do you determine the relationship between this pretty display and reality?

Immediately the WSPR or Weak Signal Propagation Reporter database came to mind. It's a massive collection of signal reports capturing time, band, station and other parameters, one of which is the Signal to Noise ratio or SNR. If the number of sunspots or a change in the geomagnetic index affects propagation, can we see an effect on the SNR? Although there are millions of records per day, I'll note in advance that my current approach, taking a daily average across all reports on a specific band, completely ignores the number of reports, the types and direction of antennas, the distance between stations, transmitter power, local noise, and any number of other variables.

Using the online "wspr.live" database, looking only at 2024, I linked the daily recorded WSPR SNR average per band to the Sunspot Number and Geomagnetic Index and immediately ran into problems. For starters, the daily Sunspot Number or SSN from the Royal Observatory of Belgium does not appear to be complete. I'm not yet sure why. For example, there are only 288 days of SSN data in 2024. Does this mean that the observers were on holiday on the other 78 days, or was the SSN zero? Curiously, there are 60 days with more than one recording and, as a bonus, on New Year's Eve 2024 there are three recordings, all with the same time stamp, midnight, with 181, 194 and 194 sunspots, so I took the daily average. Also, I ignored the timezone, since that's not apparent.

Similarly, the Geomagnetic Index data from the Helmholtz Centre for Geosciences in Potsdam, Germany has several weird artefacts in the 1970s data, but fortunately none within 2024 that I saw. The data is collected every three hours, so I averaged that, too.

After excluding days where the SSN was missing, I ran into the next issue: my database query was too big. Understandable, since there are many reports in this database, 2 billion, give or take, for 2024 alone. Normally I'd be running this type of query on my own hardware, but you might know that I lost my main research computer last year. Well, I didn't lose it as such, I can see it from where I am right now, but it won't power up. Money aside, I've been working on it, but being unceremoniously moved from Intel to ARM is not something I'd recommend.

So instead, I created a script that extracted the data, one day at a time, with 30 seconds between each query. Three hours later I had preliminary numbers.
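For the curious, here is roughly what such a day-at-a-time extraction script might look like. Everything below is a sketch built on assumptions: wspr.live's public ClickHouse HTTP endpoint, a table named wspr.rx, and time, band and snr columns. Verify all of these against the wspr.live documentation before relying on them.

```python
import csv
import time
import urllib.parse
import urllib.request

# Assumed wspr.live ClickHouse HTTP endpoint -- check the site's docs.
BASE = "https://db1.wspr.live/?query="

def daily_band_average(day: str) -> list[list[str]]:
    """Fetch the report count and average SNR per band for one UTC day."""
    query = (
        "SELECT band, count(*) AS reports, avg(snr) AS avg_snr "
        "FROM wspr.rx "                      # assumed table name
        f"WHERE toDate(time) = '{day}' "     # assumed column name
        "GROUP BY band ORDER BY band "
        "FORMAT CSV"
    )
    with urllib.request.urlopen(BASE + urllib.parse.quote(query)) as resp:
        return list(csv.reader(resp.read().decode().splitlines()))

with open("snr_2024.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["date", "band", "reports", "avg_snr"])
    for day in ("2024-01-01", "2024-01-02"):  # extend to all 366 days of 2024
        for band, reports, avg_snr in daily_band_average(day):
            writer.writerow([day, band, reports, avg_snr])
        time.sleep(30)  # be polite: 30 seconds between queries, as in the text
```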
The result was 6,239 records across 116 bands, which of course should immediately spark interest, since we don't really have that many bands. I sorted the output by the number of reports per band and discovered that the maximum number of days per band was 276. This in turn should surprise you, since there are 365 days in a year, well, technically a smidge more, but for now 365 is fine, not to mention that 2024 was a leap year. So, what happened to the other 90 days?

We know that 78 are missing because the SSN wasn't in the database, but the other 12 days? I'm going to ignore those too. I removed all the bands that had fewer than 276 days of reports, leaving 17 bands, including the well known 13 MHz band, the what, yeah, there's a few others like that. I removed the obvious weird band, but what's the 430 MHz band, when the 70cm band in WSPR is defined as 432 MHz?

I manually created 15 charts plotting dates against SNR, SSN, Kp and ap indices. Remember, this is a daily average of each of these, just to get a handle on what I'm looking at. Immediately several things become apparent. There are plenty of bands where the relationship between the average SNR and the other influences appears to be negligible. We can see the average SNR move up and down across the year, following the seasons, which raises a specific question. If the SNR is averaged across the whole planet from all WSPR stations, why are we seeing seasonal variation, given that while it's winter here in VK, it's summer on the other side of the equator?

If you compare the maximum average SNR of a band against the minimum average SNR of the same band, you get a sense of how much the sunspots and geomagnetic index influence the planet as a whole on that band. The band with the least variation is the 30m band. Said differently, with all the changes going on around propagation, the 30m band appears to be the most stable, followed by the 12m and 15m bands. The SNR across all of HF varies, on average, by no more than 5 dB. The higher the band, the more variation there is. Of course, it's also possible that there are fewer reports there, so we might be seeing the impact of individual station variables more keenly.

It's too early for conclusions, but I can tell you that this gives us plenty of new questions to ask.

I'm Onno VK6FLAB
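As a footnote to the analysis above, the band-stability comparison boils down to a few lines of pandas, assuming a CSV of daily per-band averages like the one the earlier sketch would produce (the 276-day cutoff and the column names are carried over from the text):

```python
import pandas as pd

# Assumes the snr_2024.csv layout from the earlier sketch:
# date, band, reports, avg_snr -- one row per band per day.
df = pd.read_csv("snr_2024.csv")

# Keep only bands with a full set of reporting days (276 in the text).
days_per_band = df.groupby("band")["date"].nunique()
full_bands = days_per_band[days_per_band >= 276].index
df = df[df["band"].isin(full_bands)]

# Spread between the best and worst daily average SNR per band:
# the smaller the spread, the more "stable" the band.
spread = (
    df.groupby("band")["avg_snr"]
    .agg(lambda s: s.max() - s.min())
    .sort_values()
)
print(spread)  # 30m should appear near the top if the text holds
```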
Snr. Bishop Clive Mould
Following on from the last episode, you are the VPHR for a manufacturing company. The Board and CEO have chartered you to manage the search and selection of a new Snr. VP Operations. A key part of the role will be to (1) specify AI priorities, (2) secure CEO and Board approval, (3) ensure projects are developed and completed in a timely and cost/quality manner, and (4) effectively lead program execution. Per our last recommendation, you have selected a rather high-profile, high-powered, highly talented, and diverse hiring team whom you will first lead through the process of identifying key job specifications: (1) Job Expectations, (2) Experience and Competency Requirements, and (3) Essential Personal Characteristics (relating to the AI executive profile I recommended in the last episode). (Remember, clear job specs are important for the hiring team, executive recruiter, and the assessment psychologist.) Watch here: https://youtu.be/8T_-1w5a6Lw
As we get closer to the NFL Draft, Michael is joined by former Steeler and current SNR announcer Max Starks to look ahead to the draft - and to the big news announced this week: the Steelers NFL Draft Party in Croke Park at the end of the month!
Michael is joined this week by SNR's Matt Williamson to break down free agency so far and to look ahead to the 2025 NFL Draft, which is only a few weeks away.
This sermon by Snr. Pastor Daniel Musonda Kaira, titled Contending for Your Hope, emphasizes the importance of living by example, growing spiritually, and maintaining hope as the foundation of faith. Key themes include the transformative power of hope, faith as a system of acquisition, and the necessity of internal renewal through God's word. The message highlights the dangers of unresolved issues, external influences, and spiritual stagnation. It also provides practical ways to strengthen hope, such as managing emotions, developing a clear vision, and understanding God's tests in appetite, pride, and power.
What's going on in Dayton? Leandra Carr, President of Sierra Nevada Realtors, tells us about the market in northern Nevada and gives us the latest statistics on home sales activity. Brian Cushing talks about interest rates on home mortgages and how they fluctuate based on numerous factors. Cheri Hill talks to us about the importance of incorporating your real estate investment properties. What happens if you don't? We'll talk about a couple of client emergencies and how they turned out for the better. SNR.Realtor LahontonProperties.com BCushing@afncorp.com Sageintl.com
Unlock the secrets of speech audiometry and speech perception with the renowned Dr. Lisa Lucks Mendel. With over 35 years of expertise, Dr. Mendel offers an enlightening exploration into the significance of choosing the right tests for speech perception assessments. Learn why classic tests like NU-6 and CID W-22 remain relevant and how full 50-item word lists provide a more authentic reflection of natural speech sounds. Discover the rationale behind shorter word lists and how they can streamline assessment without compromising their purpose.

Get ready to unravel the complexities of evaluating speech recognition in challenging auditory environments. The Signal-to-Noise Ratio 50 (SNR-50) test stands as a pivotal tool in understanding hearing loss and the benefits of hearing aids. As we examine the nuances of phoneme-focused scoring, particularly impactful for cochlear implant users, we offer fresh insights into setting realistic expectations for auditory device performance. This episode also delves into the scoring protocols that might just change the way we interpret hearing capabilities.

Join us as we compare the efficacy of modern MP3 recordings against traditional monitored live voice (MLV) in audiometric testing. Uncover the surprising findings from our student-led research and the implications for clinical practice moving forward. As we advocate for standardized methods in speech and noise assessments, Dr. Mendel reflects on the historical recommendations that still resonate today. This episode promises a comprehensive look at enhancing real-world hearing evaluations, leaving our listeners informed and inspired by Dr. Mendel's invaluable contributions.

Connect with the Hearing Matters Podcast Team
Email: hearingmatterspodcast@gmail.com
Instagram: @hearing_matters_podcast
Twitter: @hearing_mattas
Facebook: Hearing Matters Podcast
In this episode of The Ticket, Kevin Furlong, Customer Support Operations Analyst at Intercom, discusses his critical role, focusing on the balance between proactive and reactive work, the importance of data-driven decision-making, and the ongoing need for optimization in customer support tools. Kevin shares insights with Bobby Stapleton, Snr. Director of Human Support at Intercom, on change management, cross-functional collaboration with product teams, and the impact of AI on customer support.

Watch on YouTube: https://youtu.be/jbFv3XiNQhY?si=_AisMjjiuQTwIz--
Aubrey speaks to Dr Steven Zwane, Founder of Yled and Snr. lecturer at GIBS, about a book he just published with Yamkela Khoza Tywakadi called “Rising from the Township”. The book focuses on the importance of entrepreneurial role modelling in encouraging young people to choose entrepreneurship as a career.
Snr. Bishop Isaac Clive Mould
Eric and Dave sit down with Chris Donlan, Snr. Manager of Solutions Engineering, to better understand the cybersecurity threat landscape, help to simplify the security conversation for advisors and share best practices for cross-selling.
This year we are commemorating more than just the 80th anniversary of the Slovak National Uprising (SNP). On the territory held by the insurgents, power was taken over by the Slovak National Council (SNR), our parliament, which has continued to function in various forms to this day. For the occasion we have prepared a two-part interview with political scientist Juraj Marušiak, with whom we discuss what led to the creation of the National Council and how it changed over the course of history.

In the podcast you will also learn:
- What preceded the creation of the Slovak National Council
- What role the National Council played during the Uprising
- How the role of the SNR changed in post-war Czechoslovakia
- How the National Council operated under the communist regime
- What led to the creation of the federation and the strengthening of the SNR's powers
Welcome to the daily304 – your window into Wonderful, Almost Heaven, West Virginia. Today is Thursday, Aug. 29, 2024. Grab your paddle and get ready to tackle America's best whitewater when Gauley Season returns September 6th…the 2025 WV Wildlife Calendar is now available for purchase--grab one and help support important DNR Wildlife Resources Section programs…and a massive music festival featuring 50+ bands is coming to the Capital City in October…on today's daily304.

#1 – From WV NEWS – It's almost time! Spanning six weeks from early September to mid-October, the event known as Gauley Season draws thousands eager to take on the challenging rapids of the Gauley River in West Virginia. The Gauley River, set within a stunning and rugged canyon, offers some of the most intense rapids available anywhere. Over 100 rapids are crammed into just 25 miles, making the Gauley a must-visit for serious rafters. The excitement of Gauley Season is made possible by controlled water releases from the Summersville Lake Dam, managed by the US Army Corps of Engineers. For those looking to experience this thrill in 2024, Gauley Season runs from September 6th to October 20th, with water releases scheduled across several weekends. A whitewater trip on the Gauley pairs well with a visit to West Virginia's newest state park, Summersville Lake State Park, or to the many attractions in and around the nearby New River Gorge National Park. Visit wvtourism.com to learn more and help plan your itinerary in Almost Heaven. Read more: https://www.wvnews.com/news/wvnews/experience-the-thrill-of-gauley-season-west-virginias-premier-whitewater-rafting-adventure/article_cc26e276-5b2b-11ef-8495-ff6d51288c7f.html

#2 – From MY BUCKHANNON – The 2025 West Virginia Wildlife Calendar is now available to purchase online and in stores around the state. This year also marks the 40th anniversary of the West Virginia Division of Natural Resources publishing the popular calendar, which helps fund important Wildlife Resources Section programs. The calendar features beautiful paintings of state animals, important hunting and fishing dates, peak wildlife activity times and articles that will help people get the most out of their outdoor adventures in 2025. Calendars can be purchased online at WVstateparks.com. Read more: https://www.mybuckhannon.com/2025-west-virginia-wildlife-calendar-now-available-to-purchase/

#3 – From WV GAZETTE-MAIL – Collaborating with FestivALL's FestiFALL and the Risers Agency, radio station WTSQ 88.1 FM, “The Status Quo,” will moderate, participate in, and celebrate the first-time Risers Fest Charleston music and arts festival. The event takes place throughout various city venues from Oct. 11 through Oct. 13. Boasting 50-plus-and-counting local, regional and national acts (plus one from Mexico City), Risers Fest musicians and bands will run the gamut from “A” to, well, “Y” (unless a “Z”-named group signs on before the event). For updates and more info about the bands or venues, visit www.risersfest.com. For the latest news about FestivFALL events and activities, visit festivallcharleston.com/festivall-fall. Read more: https://www.wvgazettemail.com/townnews/radio/risers-fest-to-rock-and-rollick-in-october-in-charleston/article_5e936400-5a4f-11ef-9c0b-fff4fe4ba477.html

Find these stories and more at wv.gov/daily304. The daily304 curated news and information is brought to you by the West Virginia Department of Commerce: Sharing the wealth, beauty and opportunity in West Virginia with the world.
Follow the daily304 on Facebook, Twitter and Instagram @daily304. Or find us online at wv.gov and just click the daily304 logo. That's all for now. Take care. Be safe. Get outside and enjoy all the opportunity West Virginia has to offer.
Dale Lolley of Steelers.com, SNR, and the Steelers pregame show joins me to discuss what went wrong in the preseason, what went right as the regular season approaches, the state of the WR room, and we laugh at the alleged QB battle. Learn more about your ad choices. Visit megaphone.fm/adchoices
Leandra Carr, President-elect at Sierra Nevada Realtors gives us the market statistics for northern Nevada. Ben Galles, Sr. VP at CBRE NV# S.47543 talks about multi-family investment opportunities in northern Nevada. Yes! There are some outstanding opportunities for you while others sit on the sidelines. Ben Galles 775-750-6429 Leandra Carr www.SNR.com Peter Padilla www.Sageintl.com
Welcome to "May Karch" (shoutout Joe Ciupik), where all this month we feature one of the co-hosts, our own "Slick" Mick Karch! This week discuss booking guests for SNR, getting short changed by the AWA, broadcasters he would have liked to work with, after parties, Bobby Heenan calling Verne Gagne's daughter a horse and more! We have a new one stop shop for AWA Unleashed merch, it's https://www.teepublic.com/user/unleashed-plus. You can get t-shirts, hoodies, mugs, phone cases, and tons more.
With Paris-Roubaix, arguably the most hotly anticipated weekend on the pro cycling calendar, approaching fast around the next cobbled bend, episode 74 of the road.cc Podcast features two representatives of the past, present, and future of the Queen of the Classics: Canyon-Sram's father-daughter duo Magnus and Zoe Bäckstedt, 20 years on from Magnus' career-defining Roubaix victory.

The 2024 Paris-Roubaix not only marks the 20th anniversary of Bäckstedt Snr's victory at the Hell of the North, but also the first time the Canyon-Sram sports director will be taking on cycling's most famous one-day race with daughter Zoe as one of his charges, after the 19-year-old joined the German team from EF Education last autumn. The pair discuss Magnus' 2004 win, what it's like working together, Zoe's adjustment to the Women's World Tour after dominating as a junior, and why Roubaix is the race everyone wants to win.

Meanwhile, in part two, British adventurer and explorer Oli France joins us, mid-marathon packing session, just before setting off for the west coast of the United States, where he will be taking on phase two of his record-breaking attempt to travel from the lowest geographical point to the highest on every continent, by bike and on foot. He chats about his approach to training and preparing for extreme temperatures and the different physical demands of cycling and climbing, and why – after six weeks slogging through deserts, over tough, sapping roads, and in the freezing cold on his bike – climbing a mountain at the end of it all seems like the “easy part”…
An introduction to the five player positions in basketball with reference to legendary NBA players along with an explanation of the NBA conference and division structures and historical rivalries. William Lyttle is a sports commentary enthusiast with a special interest in basketball. Will presents the Community Armchair sports segment on community radio station 2SER and co-hosts the Jnr & Snr: 2 Views podcast with his father.
Tom and Jacob discuss some of the players the SNR guys have had their eyes on while at the combine in Indy as potential targets for the Steelers in this year's draft.
Leaders In Payments and FinTech - The EDC Podcast with Martin Koderisch
This week we are digging into low-code technology with Dave Wyatt, Snr. staff engineer at a large retailer. Dave is an expert Microsoft Power Platform developer. The Power Platform is a fascinating and evolving platform. It's an Azure cloud-based platform that integrates Microsoft's low-code solutions – Power BI, Power Automate, Power Apps, Power Pages – with Microsoft 365 and Dynamics 365. It's also where Microsoft's Copilot AI lives. It sounds like, and I think is, a massive deal. I think it's highly relevant to payments, particularly when you think about digital transformation and finance automation: automating the multitude of manual back-office processes and replacing, or at least upgrading, the industry's dependency on Excel. So, a lot to pay attention to in this podcast with Dave Wyatt. In our conversation we discuss how Dave got into engineering and low-code software development via early experience in project management and implementation of bespoke IT solutions for business processes using SharePoint, Power Apps, Power Automate and Excel VBA. We discuss a lot besides, so I do hope you enjoy this conversation with Dave.
For decades, supernova remnants (SNRs) have been considered the main sources of galactic cosmic rays. But whether SNRs can accelerate protons up to energies of the order of a PeV (which would make them PeVatrons) is currently the subject of intense debate. A team of astrophysicists has studied a potential production site, the young supernova remnant Cassiopeia A, through the ultra-high-energy gamma-ray photons it emits, which should be linked to cosmic-ray production. They publish their results in The Astrophysical Journal Letters.
https://www.ca-se-passe-la-haut.fr/2024/01/cassiopeia-pevatron-ou-pas-pevatron.html

Source: Does or Did the Supernova Remnant Cassiopeia A Operate as a PeVatron? Zhen Cao et al., The Astrophysical Journal Letters, Volume 961, Number 2 (30 January 2024) https://doi.org/10.3847/2041-8213/ad1d62
We are running an end of year survey for our listeners! Please let us know any feedback you have, what episodes resonated with you, and guest requests for 2024! Survey link here!

Before language models became all the rage in November 2022, image generation was the hottest space in AI (it was the subject of our first piece on Latent Space!) In our interview with Sharif Shameem from Lexica we talked through the launch of Stable Diffusion and the early days of that space. At the time, the toolkit was still pretty rudimentary: Lexica made it easy to search images, you had the AUTOMATIC1111 Web UI to generate locally, some HuggingFace spaces that offered inference, and eventually DALL-E 2 through OpenAI's platform, but not much beyond basic text-to-image workflows.

Today's guest, Suhail Doshi, is trying to solve this with Playground AI, an image editor reimagined with AI in mind. Some of the differences compared to traditional text-to-image workflows:

* Real-time preview rendering using consistency models: as you change your prompt, you can see changes in real-time before doing a final rendering of it.
* Style filtering: rather than having to prompt exactly how you'd like an image to look, you can pick from a whole range of filters both from Playground's model as well as Stable Diffusion (like RealVis, Starlight XL, etc). We talk about this at 25:46 in the podcast.
* Expand prompt: similar to DALL-E 3, Playground will do some prompt tuning for you to get better results in generation. Unlike DALL-E 3, you can turn this off at any time if you are a prompting wizard.
* Image editing: after generation, you have tools like a magic eraser, inpainting pencil, etc. This makes it easier to do a full workflow in Playground rather than switching to another tool like Photoshop.

Outside of the product, they have also trained a new model from scratch, Playground v2, which is fully open source and open weights and allows for commercial usage. They benchmarked the model against SDXL across 1,000 prompts and found that humans preferred the Playground generation 70% of the time. They had similar results on PartiPrompts.

They also created a new benchmark, MJHQ-30K, for “aesthetic quality”:

We introduce a new benchmark, MJHQ-30K, for automatic evaluation of a model's aesthetic quality. The benchmark computes FID on a high-quality dataset to gauge aesthetic quality. We curate the high-quality dataset from Midjourney with 10 common categories, each category with 3K samples. Following common practice, we use aesthetic score and CLIP score to ensure high image quality and high image-text alignment. Furthermore, we take extra care to make the data diverse within each category.
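As a rough illustration of what a benchmark like MJHQ-30K measures, here's a sketch using torchmetrics' FrechetInceptionDistance and CLIPScore. The tiny placeholder tensors, batch size, and CLIP checkpoint below are our assumptions for demonstration, not Playground's actual evaluation code; in practice you'd feed thousands of curated reference images and model samples per category.

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.multimodal.clip_score import CLIPScore

# Random uint8 tensors stand in for real images (N, 3, H, W) in [0, 255];
# a real run would load the curated reference set and your model's samples.
real = torch.randint(0, 256, (8, 3, 299, 299), dtype=torch.uint8)
fake = torch.randint(0, 256, (8, 3, 299, 299), dtype=torch.uint8)

# FID against a curated "high aesthetic quality" reference set.
fid = FrechetInceptionDistance(feature=2048)
fid.update(real, real=True)   # curated reference images (e.g. per category)
fid.update(fake, real=False)  # your model's generations
print("FID:", fid.compute().item())  # lower = closer to the reference set

# CLIP score to check image-text alignment of the generations.
clip = CLIPScore(model_name_or_path="openai/clip-vit-base-patch16")
prompts = ["a photo of a dog"] * 8  # one caption per generated image
print("CLIP score:", clip(fake, prompts).item())  # higher = better alignment
```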
Suhail was pretty open with saying that Midjourney is currently the best product for image generation out there, and that's why they used it as the base for this benchmark:

I think it's worth comparing yourself to maybe the best thing and try to find like a really fair way of doing that. So I think more people should try to do that. I definitely don't think you should be kind of comparing yourself on like some Google model or some old SD, Stable Diffusion model and be like, look, we beat Stable Diffusion 1.5. I think users ultimately care, how close are you getting to the thing that people mostly agree with? [00:23:47]

We also talked a lot about Suhail's founder journey from starting Mixpanel in 2009, then going through YC again with Mighty, and eventually sunsetting that to pivot into Playground.

Enjoy!

Show Notes
* Suhail's Twitter
* “Starting my road to learn AI”
* Bill Gates book trip
* Playground
* Playground v2 Announcement
* $40M raise announcement
* “Running infra dev ops for 24 A100s”
* Mixpanel
* Mighty
* “I decided to stop working on Mighty”
* Fast.ai
* Civit

Timestamps
* [00:00:00] Intros
* [00:02:59] Being early in ML at Mixpanel
* [00:04:16] Pivoting from Mighty to Playground and focusing on generative AI
* [00:07:54] How DALL-E 2 inspired Mighty
* [00:09:19] Reimagining the graphics editor with AI
* [00:17:34] Training the Playground V2 model from scratch to advance generative graphics
* [00:21:11] Techniques used to improve Playground V2 like data filtering and model tuning
* [00:25:21] Releasing the MJHQ30K benchmark to evaluate generative models
* [00:30:35] The limitations of current models for detailed image editing tasks
* [00:34:06] Using post-generation user feedback to create better benchmarks
* [00:38:28] Concerns over potential misuse of powerful generative models
* [00:41:54] Rethinking the graphics editor user experience in the AI era
* [00:45:44] Integrating consistency models into Playground using preview rendering
* [00:47:23] Interacting with the Stable Diffusion LoRAs community
* [00:51:35] Running DevOps on A100s
* [00:53:12] Startup ideas?

Transcript

Alessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO-in-Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI. [00:00:15]

Swyx: Hey, and today in the studio we have Suhail Doshi, welcome. [00:00:18]

Suhail: Yeah, thanks. Thanks for having me. [00:00:20]

Swyx: So among many things, you're a CEO and co-founder of Mixpanel, and I think about three years ago you left to start Mighty, and more recently, I think about a year ago, transitioned into Playground, and you've just announced your new round. How do you like to be introduced beyond that? [00:00:34]

Suhail: Just founder of Playground is fine, yeah, prior co-founder and CEO of Mixpanel. [00:00:40]

Swyx: Yeah, awesome. I'd just like to touch on Mixpanel a little bit, because it's obviously one of the more successful analytics companies (we previously had Amplitude on), and I'm curious if you had any reflections on the interaction of that amount of data that people would want to use for AI. I don't know if there's still a part of you that stays in touch with that world. [00:00:59]

Suhail: Yeah, I mean, the short version is that maybe back in like 2015 or 2016, I don't really remember exactly, because it was a while ago, we had an ML team at Mixpanel, and I think this is when maybe deep learning or something really just started getting kind of exciting, and we were thinking that maybe given that we had such vast amounts of data, perhaps we could predict things. So we built two or three different features, I think we built a feature where we could predict whether users would churn from your product. We made a feature that could predict whether users would convert, we built a feature that could do anomaly detection, like if something occurred in your product, that was just very surprising, maybe a spike in traffic in a particular region, can we tell you that that happened? Because it's really hard to like know everything that's going on with your data, can we tell you something surprising about your data? And we tried all of these various features, most of it boiled down to just like, you know, using logistic regression, and it never quite seemed very groundbreaking in the end.
And so I think, you know, we had a four or five person ML team, and I think we never expanded it from there. And I did all these Fast.ai courses trying to learn about ML. And that was the-

Swyx: That's the first time you did Fast.ai. [00:02:10]

Suhail: Yeah, that was the first time I did Fast.ai. Yeah, I think I've done it now three times, maybe. [00:02:12]

Swyx: Oh, okay. [00:02:13]

Suhail: I didn't know it was the third. No, no, just me reviewing it, it's maybe three times, but yeah. [00:02:16]

Swyx: You mentioned prediction, but honestly, like it's also just about the feedback, right? The quality of feedback from users, I think it's useful for anyone building AI applications. [00:02:25]

Suhail: Yeah. Yeah, I think I haven't spent a lot of time thinking about Mixpanel because it's been a long time, but sometimes I'm like, oh, I wonder what we could do now. And then I kind of like move on to whatever I'm working on, but things have changed significantly since. [00:02:39]

Swyx: And then maybe we'll touch on Mighty a little bit. Mighty was very, very bold. My framing of it was, you will run our browsers for us because everyone has too many tabs open. I have too many tabs open, and slowing down your machines, that you can do it better for us in a centralized data center. [00:02:51]

Suhail: Yeah, we were first trying to make a browser that we would stream from a data center to your computer at extremely low latency, but the real objective wasn't trying to make a browser or anything like that. The real objective was to try to make a new kind of computer. And the thought was just that like, you know, we have these computers in front of us today and we upgrade them or they run out of RAM or they don't have enough RAM or not enough disk or, you know, there's some limitation with our computers, perhaps like data locality is a problem. Why do I need to think about upgrading my computer ever? And so, you know, we just had to kind of observe that like, well, actually it seems like a lot of applications are just now in the browser, you know, it's like how many real desktop applications do we use relative to the number of applications we use in the browser? So it's just this realization that actually like, you know, the browser was effectively becoming more or less our operating system over time. And so then that's why we kind of decided to go, hmm, maybe we can stream the browser. Fortunately, the idea did not work for a couple of different reasons, but the objective was to try to make a new kind of computer. [00:03:50]

Swyx: Yeah, very, very bold. [00:03:51]

Alessio: Yeah, and I was there at YC Demo Day when you first announced it. It was, I think, the last or one of the last in-person ones, at Pier 34 in Mission Bay. How do you think about that now, when everybody wants to put some of these models in people's machines and some of them want to stream them in? Do you think there's maybe another wave of the same problem? Before it was like browser apps too slow, now it's like models too slow to run on device? [00:04:16]

Suhail: Yeah. I mean, I've obviously pivoted away from Mighty, but a lot of what I somewhat believed at Mighty, maybe why I'm so excited about AI and what's happening, a lot of what Mighty was about was like moving compute somewhere else, right? Right now, applications, they get limited quantities of memory, disk, networking, whatever your home network has, et cetera. You know, what if these applications could somehow, if we could shift compute, and then these applications have vastly more compute than they do today.
Right now it's just like client backend services, but you know, what if we could change the shape of how applications could interact with things? And it's changed my thinking. In some ways, AI is like a bit of a continuation of my belief that like perhaps we can really shift compute somewhere else. One of the problems with Mighty was that JavaScript is single-threaded in the browser. And what we learned, you know, the reason why we kind of abandoned Mighty was because I didn't believe we could make a new kind of computer. We could have made some kind of enterprise business, probably it could have made maybe a lot of money, but it wasn't going to be what I hoped it was going to be. And so once I realized that most of a web app is just going to be single-threaded JavaScript, then the only thing you could do, notwithstanding changing JavaScript, which is a fool's errand most likely, is make a better CPU, right? And there's like three CPU manufacturers, two of which sell, you know, big ones, you know, AMD, Intel, and then of course like Apple made the M1. And it's not like single-threaded CPU core performance, single-core performance, was increasing very fast; it's plateauing rapidly. And even these different companies were not doing as good of a job, you know, sort of with the continuation of Moore's law. But what happened in AI was that, if you think of the AI model as like a computer program, like just like a compiled computer program, it is literally built and designed to do massive parallel computations. And so if you could take like the universal approximation theorem to its like kind of logical complete point, you know, you're like, wow, I can make computation happen really rapidly and parallel somewhere else, you know, so you end up with these like really amazing models that can like do anything. It just turned out like perhaps the new kind of computer would just simply be shifted, you know, into these like really amazing AI models in reality. Yeah. [00:06:30]

Swyx: Like I think Andrej Karpathy has always been making a lot of analogies with the LLM OS. [00:06:34]

Suhail: I saw his video and I watched that, you know, maybe two weeks ago or something like that. I was like, oh man, I very much resonate with this like idea. [00:06:41]

Swyx: Why didn't I see this three years ago? [00:06:43]

Suhail: Yeah. I think, I think there still will be, you know, local models and then there'll be these very large models that have to be run in data centers. I think it just depends on kind of like the right tool for the job, like any engineer would probably care about. But I think that, you know, by and large, like if the models continue to kind of keep getting bigger, you're always going to be wondering whether you should use the big thing or the small, you know, the tiny little model. And it might just depend on like, you know, do you need 30 FPS or 60 FPS? Maybe that would be hard to do, you know, over a network. [00:07:13]

Swyx: You tackled a much harder problem latency wise than the AI models actually require. Yeah. [00:07:18]

Suhail: Yeah. You can do quite well. You can do quite well. We definitely did 30 FPS video streaming, did very crazy things to make that work. So I'm actually quite bullish on the kinds of things you can do with networking. [00:07:30]

Swyx: Maybe someday you'll come back to that at some point. But so for those that don't know, you're very transparent on Twitter. Very good to follow you just to learn your insights.
And you actually published a postmortem on Mighty that people can read up on if willing to. So there was a bit of an overlap. You started exploring the AI stuff in June 2022, which is when you started saying like, I'm taking Fast.ai again. Maybe, was there more context around that? [00:07:54]

Suhail: Yeah. I think I was kind of like waiting for the team at Mighty to finish up, you know, something. And I was like, okay, well, what can I do? I guess I will make some kind of like address bar predictor in the browser. So we had, you know, we had forked Chrome and Chromium. And I was like, you know, one thing that's kind of lame is that like this browser should be like a lot better at predicting what I might do, where I might want to go. It struck me as really odd that, you know, Chrome had very little AI actually, or ML, inside this browser. For a company like Google, you'd think there's a lot. The code is actually just, you know, a bunch of if-then statements; that's more or less the address bar. So it seemed like a pretty big opportunity. And that's also where a lot of people interact with the browser. So, you know, long story short, I was like, hmm, I wonder what I could build here. So I started to take some AI courses and review the material again and get back to figuring it out. But I think that was somewhat serendipitous because right around April was, I think, a very big watershed moment in AI because that's when DALL-E 2 came out. And I think that was the first truly big viral moment for generative AI. [00:08:59]

Swyx: Because of the avocado chair. [00:09:01]

Suhail: Yeah, exactly. [00:09:02]

Swyx: It wasn't as big for me as Stable Diffusion. [00:09:04]

Suhail: Really? [00:09:05]

Swyx: Yeah, I don't know. DALL-E was like, all right, that's cool. [00:09:07]

Suhail: I don't know. Yeah. [00:09:09]

Swyx: I mean, they had some flashy videos, but it didn't really register. [00:09:13]

Suhail: That moment of images was just such a viral novel moment. I think it just blew people's minds. Yeah. [00:09:19]

Swyx: I mean, it's the first time I encountered Sam Altman, because they had this DALL-E 2 hackathon and they opened up the OpenAI office for developers to walk in, back when it wasn't as much of a security issue as it is today. I see. Maybe take us through the journey to decide to pivot into this, and also choosing images. Obviously, you were inspired by DALL-E, but there could be any number of AI companies and businesses that you could start, and why this one, right? [00:09:45]

Suhail: Yeah. So I think at that time, OpenAI was not quite as popular as it is all of a sudden now these days, but back then they had a lot more bandwidth to kind of help anybody. And so we had been talking with the team there around trying to see if we could do really fast low latency address bar prediction with GPT-3 and 3.5 and that kind of thing. And so we were sort of figuring out how could we make that low latency. I think that just being able to talk to them and kind of being involved gave me a bird's eye view into a bunch of things that started to happen. The first was the DALL-E 2 moment, but then Stable Diffusion came out and that was a big moment for me as well. And I remember just kind of like sitting up one night thinking, I was like, you know, what are the kinds of companies one could build? Like what matters right now? One thing that I observed is that I find a lot of inspiration when I'm working in a field in something and then I can identify a bunch of problems.
Like for Mixpanel, I was an intern at a company and I just noticed that they were doing all this data analysis. And so I thought, hmm, I wonder if I could make a product and then maybe they would use it. And in this case, you know, the same thing kind of occurred. It was like, okay, there are a bunch of like infrastructure companies that put a model up and then you can use their API, like Replicate is a really good example of that. There are a bunch of companies that are like helping you with training, model optimization, Mosaic at the time, and probably still, you know, was doing stuff like that. So I just started listing out like every category of everything, of every company that was doing something interesting. I started listing out like Weights and Biases. I was like, oh man, Weights and Biases is like this great company. Do I want to compete with that company? I might be really good at competing with that company because of Mixpanel, because it's so much of like analysis. But I was like, no, I don't want to do anything related to that. That would, I think, be too boring now at this point. So I started to list out all these ideas and one thing I observed was that at OpenAI, they had like a playground for GPT-3, right? All it was is just like a text box, more or less. And then there were some settings on the right, like temperature and whatever. [00:11:41]

Swyx: Top K. [00:11:42]

Suhail: Yeah, top K. You know, what's your end stop sequence? I mean, that was like their product before ChatGPT, you know, really difficult to use, but fun if you're like an engineer. And I just noticed that their product kind of was evolving a little bit where the interface kind of was getting a little bit more complex. They had like a way where you could like generate something in the middle of a sentence and all those kinds of things. And I just thought to myself, I was like, everything is just like this text box and you generate something and that's about it. And Stable Diffusion had kind of come out and it was all like Hugging Face and code. Nobody was really building any UI. And so I had this kind of thing where I wrote prompt dash like question mark in my notes and I didn't know what was like the product for that at the time. I mean, it seems kind of trite now, but I just like wrote prompt. What's the thing for that? Manager. Prompt manager. Do you organize them? Like, do you like have a UI that can play with them? Yeah. Like a library. What would you make? And so then, of course, then you thought about what would the modalities be given that? How would you build a UI for each kind of modality? And so there are a couple of people working on some pretty cool things. And I basically chose graphics because it seemed like the most obvious place where you could build a really powerful, complex UI. That's not just only typing a box. It would very much evolve beyond that. Like what would be the best thing for something that's visual? Probably something visual. Yeah. I think that just that progression kind of happened and it just seemed like there was a lot of effort going into language, but not a lot of effort going into graphics. And then maybe the very last thing was, I think I was talking to Aditya Ramesh, who was the co-creator of DALL-E 2, and Sam. And I just kind of went to these guys and I was just like, hey, are you going to make like a UI for this thing? Like a true UI? Are you going to go for this? Are you going to make a product? For DALL-E. Yeah. For DALL-E. Yeah.
Are you going to do anything here? Because if you are going to do it, just let me know and I will stop and I'll go do something else. But if you're not going to do anything, I'll just do it. And so we had a couple of conversations around what that would look like. And then I think ultimately they decided that they were going to focus on language primarily. And I just felt like it was going to be very underinvested in. Yes. [00:13:46]

Swyx: There's that sort of underinvestment from OpenAI, but also it's a different type of customer than you're used to, presumably, you know, since Mixpanel is very good at selling to B2B, and developers will figure you out or not. Yeah. Was that not a concern? [00:14:00]

Suhail: Well, not so much, because I think that, you know, right now I would say graphics is in this very nascent phase. Like most of the customers are just like hobbyists, right? Yeah. Like it's a little bit of like a novel toy as opposed to being this like very high utility thing. But I think ultimately, if you believe that you could make it very high utility, probably the next customers will end up being B2B. It'll probably not be like a consumer. There will certainly be a variation of this idea that's in consumer. But if your quest is to kind of make like something that surpasses human ability for graphics, like ultimately it will end up being used for business. So I think it's maybe more of a progression. In fact, for me, it's maybe more like Mixpanel started out as SMB and then very much ended up growing up towards enterprise. So for me, I think it will be a very similar progression. But yeah, I mean, the reason why I was excited about it is because it was a creative tool. I make music and it's AI. It's like something that I know I could stay up till three o'clock in the morning doing. Those are kind of like very simple bars for me. [00:14:56]

Alessio: So you mentioned DALL-E, Stable Diffusion. You just had Playground V2 come out two days ago. Yeah, two days ago. [00:15:02]

Suhail: Two days ago. [00:15:03]

Alessio: This is a model you trained completely from scratch. So it's not a cheap fine tune on something. You open sourced everything, including the weights. Why did you decide to do it? I know you supported Stable Diffusion XL in Playground before, right? Yep. What made you want to come up with V2, and maybe some of the interesting, you know, technical research work you've done? [00:15:24]

Suhail: Yeah. So I think that we continue to feel like graphics, and these foundation models for anything really related to pixels, but also definitely images, continues to be very underinvested. It feels a little like graphics is in this GPT-2 moment, right? Even when GPT-3 came out, it was exciting, but it was like, what are you going to use this for? Yeah, we'll do some text classification and some semantic analysis and maybe it'll sometimes like make a summary of something and it'll hallucinate. But no one really had like a very significant business application for GPT-3. And in images, we're kind of stuck in the same place. We're kind of like, okay, I write this thing in a box and I get some cool piece of artwork and the hands are kind of messed up and sometimes the eyes are a little weird. Maybe I'll use it for a blog post, you know, that kind of thing. The utility feels so limited.
And so, you know, you sort of look at Stable Diffusion, and we definitely use that model in our product and our users like it and use it and love it and enjoy it, but it hasn't gone nearly far enough. So we were kind of faced with the choice of, you know, do we wait for progress to occur or do we make that progress happen? So yeah, we kind of embarked on a plan to just decide to go train these things from scratch. And I think the community has given us so much. The community for Stable Diffusion I think is one of the most vibrant communities on the internet. It's like amazing. It feels like, I hope this is what like Homebrew Club felt like when computers like showed up, because it's like amazing what that community will do and it moves so fast. I've never seen anything in my life, and heard other people's stories around this, where an academic research paper comes out and then like two days later, someone has sample code for it. And then two days later, there's a model. And then two days later, it's like in nine products, you know, they're all competing with each other. It's incredible to see like math symbols on an academic paper go to well-designed features in a product. So I think the community has done so much. So I think we wanted to give back to the community kind of on our way. Certainly we would train a better model than what we gave out on Tuesday, but we definitely felt like there needs to be some kind of progress in these open source models. The last kind of milestone was in July when Stable Diffusion XL came out, but there hasn't been anything really since. Right. [00:17:34]

Swyx: And there's XL Turbo now. [00:17:35]

Suhail: Well, XL Turbo is like this distilled model, right? So it's like lower quality, but fast. You have to decide, you know, what your trade off is there. [00:17:42]

Swyx: It's also a consistency model. [00:17:43]

Suhail: I don't think it's a consistency model. It's like, they did like a different thing. Yeah. I think it's like, I don't want to get quoted for this, but it's like something called, like, adversarial or something. [00:17:52]

Swyx: That's exactly right. [00:17:53]

Suhail: I've read something about that. Maybe it's like closer to GANs or something, but I didn't really read the full paper. But yeah, there hasn't been quite enough progress in terms of, you know, there's no multitask image model. You know, the closest thing would be something called like EmuEdit, but there's no model for that. It's just a paper that's within Meta. So we did that and we also gave out pre-trained weights, which is very rare. Usually you just get the aligned model and then you have to like see if you can do anything with it. So we actually gave out, there's like a 256 pixel pre-trained stage and a 512. And we did that for academic research, because we come across people all the time in academia, they have access to like one A100 or eight at best. And so if we can give them kind of like a 512 pre-trained model, our hope is that there'll be interesting novel research that occurs from that. [00:18:38]

Swyx: What research do you want to happen? [00:18:39]

Suhail: I would love to see more research around things that users care about, things like character consistency. [00:18:45]

Swyx: Between frames? [00:18:46]

Suhail: More like if you have like a face. Yeah, yeah. Basically between frames, but more just like, you know, you have your face and it's in one image and then you want it to be like in another.
And users are very particular and sensitive to faces changing, because we know we're trained on faces as humans. Not seeing a lot of innovation, enough innovation, around multitask editing. You know, there are two things like InstructPix2Pix and then the EmuEdit paper that are maybe very interesting, but we certainly are not pushing the fold on that in that regard. All kinds of things like around that rotation, you know, being able to keep coherence across images, style transfer is still very limited. Just even reasoning around images, you know, what's going on in an image, that kind of thing. Things are still very, very underpowered, very nascent. So therefore the utility is very, very limited. [00:19:32]

Alessio: On the 1K Prompt Benchmark, you are preferred 2.5x over Stable Diffusion XL. How do you get there? Is it better images in the training corpus? Can you maybe talk through the improvements in the model? [00:19:44]

Suhail: I think they're still very early on in the recipe, but I think it's a lot of like little things and, you know, every now and then there are some big important things, like certainly your data quality is really, really important. So we spend a lot of time thinking about that. But I would say it's a lot of things that you kind of clean up along the way as you train your model. Everything from captions to the data that you align with after pre-train to how you're picking your data sets, how you filter your data sets. I feel like there's a lot of work in AI that doesn't really feel like AI. It just really feels like data set filtering and systems engineering and just like, you know, and the recipe is all there, but it's like a lot of extra work to do that. I think we plan to do a Playground V2.1, maybe either by the end of the year or early next year. And we're just like watching what the community does with the model. And then we're just going to take a lot of the things that they're unhappy about and just like fix them. You know, so for example, like maybe the eyes of people in an image don't feel right. They feel like they're a little misshapen or they're kind of blurry feeling. That's something that we already know we want to fix. So I think in that case, it's going to be about data quality. Or maybe you want to improve the kind of the dynamic range of color. You know, we want to make sure that that's got a good range in any image. So what technique can we use there? There are different things like offset noise, pyramid noise, zero terminal SNR, like there are all these various interesting things that you can do. So I think it's like a lot of just like tricks. Some are tricks, some are data, and some is just like cleaning. [00:21:11]
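For readers who haven't met the noise tricks Suhail rattles off here, this is roughly what offset noise looks like inside a diffusion training step. A sketch, not Playground's actual recipe; the 0.1 strength is just the commonly cited default, and the latent shapes are placeholders.

```python
import torch

def offset_noise(latents: torch.Tensor, strength: float = 0.1) -> torch.Tensor:
    """Standard Gaussian noise plus a per-channel constant offset.

    The extra offset term lets the model learn to shift an image's
    overall brightness, improving dynamic range (very dark or very
    bright images) -- one of the tricks mentioned above.
    """
    noise = torch.randn_like(latents)
    # One random value per (sample, channel), broadcast over height/width.
    offset = torch.randn(latents.shape[0], latents.shape[1], 1, 1,
                         device=latents.device, dtype=latents.dtype)
    return noise + strength * offset

# Hypothetical usage inside a training step: swap this in for the
# plain randn_like noise before the scheduler's add_noise call.
latents = torch.randn(4, 4, 64, 64)  # placeholder latent batch
noise = offset_noise(latents)
```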
Swyx: Specifically for faces, it's very common to use a pipeline rather than just train the base model more. Do you have a strong belief either way on like, oh, they should be separated out to different stages for like improving the eyes, improving the face or enhance or whatever? Or do you think like it can all be done in one model? [00:21:28]

Suhail: I think we will make a unified model. Yeah, I think it will. I think we'll certainly, in the end, ultimately, make a unified model. There's not enough research about this. Maybe there is something out there that we haven't read. There are some bottlenecks, like for example, in the VAE, like the VAEs are ultimately compressing these things. And so you don't know. And then you might have like a big information bottleneck. So maybe you would use a pixel based model, perhaps. I think we've talked to people, everyone from like Rombach to various people, Rombach trained Stable Diffusion. I think there's like a big question around the architecture of these things. It's still kind of unknown, right? Like we've got transformers and we've got like a GPT architecture model, but then there's this like weird thing that's also seemingly working with diffusion. And so, you know, are we going to use vision transformers? Are we going to move to pixel based models? Is there a different kind of architecture? We don't really, I don't think there have been enough experiments. Still? Oh my God. [00:22:21]

Swyx: Yeah. [00:22:22]

Suhail: That's surprising. I think it's very computationally expensive to do a pipeline model where you're like fixing the eyes and you're fixing the mouth and you're fixing the hands. [00:22:29]

Swyx: That's what everyone does as far as I understand. [00:22:31]

Suhail: I'm not exactly sure what you mean, but if you mean like you get an image and then you will like make another model specifically to fix a face, that's fairly computationally expensive. And I think it's probably not the right way. Yeah. And it doesn't generalize very well. Now you have to pick all these different things. [00:22:45]

Swyx: Yeah. You're just kind of glomming things on together. Yeah. Like when I look at AI artists, like that's what they do. [00:22:50]

Suhail: Ah, yeah, yeah, yeah. They'll do things like, you know, I think a lot of artists will do ControlNet tiling to do kind of generative upscaling of all these different pieces of the image. Yeah. And I think these are all just like, they're all hacks ultimately in the end. I mean, to me, it's like, let's go back to where we were just three, four years ago with where deep learning was at and where language was at, you know, it's the same thing. It's like we were like, okay, well, I'll just train these very narrow models to try to do these things and kind of ensemble them or pipeline them to try to get to a best in class result. And here we are with like where the models are gigantic and like very capable of solving huge amounts of tasks when given like lots of great data. [00:23:28]

Alessio: You also released a new benchmark called MJHQ-30K for automatic evaluation of a model's aesthetic quality. I have one question. The data set that you use for the benchmark is from Midjourney. Yes. You have 10 categories. How do you think about the Playground model, Midjourney, like, are you competitors? [00:23:47]

Suhail: There are a lot of people, a lot of people in research, they like to compare themselves to something they know they can beat, right? Maybe this is the best reason why it can be helpful to not be a researcher also sometimes, like I'm not trained as a researcher, I don't have a PhD in anything AI related, for example. But I think if you care about products and you care about your users, then the most important thing that you want to figure out is like, everyone has to acknowledge that Midjourney is very good. They are the best at this thing. I'm happy to admit that. I have no problem admitting that. Just easy. It's very visual to tell. So I think it's incumbent on us to try to compare ourselves to the thing that's best, even if we lose, even if we're not the best. At some point, if we are able to surpass Midjourney, then we only have ourselves to compare ourselves to.
But at first blush, I think it's worth comparing yourself to maybe the best thing and trying to find a really fair way of doing that. So I think more people should try to do that. I definitely don't think you should be comparing yourself to some Google model or some old Stable Diffusion model and saying, look, we beat Stable Diffusion 1.5. I think users ultimately care: how close are you getting to the thing that people mostly agree is the best? So we put out that benchmark for no other reason than to say, this seems like a worthy thing for us to at least try, for people to try to get to. And then if we surpass it, great, we'll come up with another one. [00:25:06]Alessio: Yeah, no, that's awesome. And you beat Stable Diffusion XL and everything. In the benchmark chart, it says Playground V2 1024px dash aesthetic. Do you have style fine-tunes, or what's the dash-aesthetic for? [00:25:21]Suhail: We debated this, maybe we named it wrong or something, but we were like, how do we help people distinguish the model that's aligned from the models that weren't? Because we gave out pre-trained models, and we didn't want people to use those. So that's why those are called base. And then the aesthetic model, yeah, we wanted people to pick up the thing that makes things pretty. Who wouldn't want the thing that's aesthetic? But if there's a better name, we're definitely open to feedback. No, no, that's cool. [00:25:46]Alessio: I was using the product. You also have the style filter and you have all these different styles. And it seems like the styles are tied to the model. So there are some SDXL styles, there are some Playground V2 styles. Can you maybe give listeners an overview of how that works? Because in language, there's not this idea of style, right? Versus in vision models, there is, and you cannot get certain styles in different models. So how do styles emerge, and how do you categorize them and find them? [00:26:15]Suhail: Yeah, I mean, it's so fun having a community where people are just trying a model. It's only been two days for Playground V2. And we actually don't know what the model is capable of and not capable of. We certainly see problems with it. But we have yet to see what emergent behavior there is. We've just sort of discovered that it takes about a week before you start to see new things. I think a lot of that style kind of emerges after that week, where you start to see, you know, some styles that are very well known to us, like maybe pixel art is a well-known style. Photorealism is another one that's well known to us. But there are some styles that cannot be easily named. It's not as simple as, okay, that's an anime style. It's very visual. And in the end, you end up making up the name for what that style represents. And so the community kind of shapes itself around these different things. And if anyone is into Stable Diffusion and into building anything with graphics and stuff with these models, you might have heard of ProtoVision or DreamShaper, some of these weird names, but they're just invented by their authors. But they have a sort of je ne sais quoi that appeals to users. [00:27:26]Swyx: Because it roughly embeds to what you want. [00:27:29]Suhail: I guess so. I mean, there's one of my favorite ones that's fine-tuned. It's not made by us.
It's called Starlight XL. It's just this beautiful model. It's got really great color contrast and visual elements. And the users love it. I love it. And it's so hard to name. I think that's a very big open question with graphics that I'm not totally sure how we'll solve. I don't know. It's an evolving situation too, because styles get boring, right? They get fatigued. It's like listening to the same style of pop song. I try to relate graphics a little bit to music, because I think it gives you a different shape to things. It's not as if we just have pop music, rap music, and country music; the EDM genre alone has subgenres. And I think that's very true in graphics and painting and art and anything that we're doing. There are just these subgenres, even if we can't quite always name them. But I think they are emergent from the community, which is why we're always so happy to work with the community. [00:28:26]Swyx: That is a struggle. You know, coming back to this B2B versus B2C thing: B2C, you're going to have a huge amount of diversity, and then it's going to reduce as you get towards more B2B-type use cases. I'm making this up here. So you might be optimizing for a thing that you may eventually not need. [00:28:42]Suhail: Yeah, possibly. Yeah, possibly. I think a simple thing with startups is that I worry sometimes that by being overly ambitious and really scrutinizing what something is in its most nascent phase, you miss the most ambitious thing you could have done. Just having very basic curiosity about something very small can lead you to something amazing. Einstein definitely did that. And then he basically won all the prizes and got everything he wanted, and then kind of didn't, really. He dismissed quantum mechanics and just kept searching for the unifying theory. And he had this quest. I think that happens a lot with Nobel Prize people. I think there's a term for it that I forget. I actually wanted to go after a toy almost intentionally, so long as I could imagine that it would lead to something very, very large later. Like I said, it's very hobbyist, but you need to start somewhere. You need to start with something that has a big gravitational pull, even if these hobbyists aren't likely to be the people that have a way to monetize it or whatever; they're doing it for fun. So there's something there that I think is really important. But I agree with you that, in time, we will absolutely focus on more utilitarian things, like things that are more related to editing, features that are much harder. And a very simple use case is just, you know, I'm not a graphics designer. It seems very simple: if we could give you the ability to do really complex graphics without skill, wouldn't you want that? My wife the other day said, I wish Playground was better. When are you guys going to have a feature where we could make my son, his name's Devin, smile when he wasn't smiling in the picture for the holiday card? Right. Just being able to highlight his mouth and say, make him smile.
Like, why can't we do that with high fidelity and coherence? Little things like that, all the way to putting you in completely different scenarios. [00:30:35]Swyx: Is that true? Can we not do that with inpainting? [00:30:37]Suhail: You can do it with inpainting, but the quality is just so bad. Yeah. It's just really terrible quality. You'll do it five times and it'll still look kind of crooked or artifacted. Part of it is that the lips on a face carry so little information, they're so small, that the models really struggle with it. Yeah. [00:30:55]Swyx: Make the picture smaller and you don't see it. That's my trick. I don't know. [00:30:59]Suhail: Yeah. Yeah. That's true. Or you could take that region and make it really big and then say it's a mouth and then shrink it. It feels like you're wrestling with it more than it's doing something that kind of surprises you. [00:31:12]Swyx: Yeah. It feels like you are very much the internal tastemaker; you carry in your head this vision for what a good art model should look like. Do you find it hard to communicate it to your team and other people? Just because, obviously, it's hard to put into words, like we just said. [00:31:26]Suhail: Yeah. It's very hard to explain. Images have such high bitrate compared to words, and we don't have enough words to describe these things. But it's not terribly difficult. I think everyone on the team, if they don't have good judgment or taste, an eye for some of these things, they're steadily building it because they have no choice. So in that realm, I don't worry too much, actually. Everyone is kind of learning to get the eye, is what I would call it. But I also have my own narrow taste. I don't represent the whole population either. [00:31:59]Swyx: When you benchmark models, like this benchmark we're talking about, we use FID. Yeah. Fréchet Inception Distance. OK. That's one measure. But it doesn't capture anything you just said about smiles. [00:32:08]Suhail: Yeah. FID is generally a bad metric. It's good up to a point, and then it becomes irrelevant. Yeah. [00:32:14]Swyx: And so are there any other metrics that you like apart from vibes? I'm always looking for alternatives to vibes, because vibes don't scale, you know. [00:32:22]Suhail: It might be fun to talk about this, because it's actually kind of fresh. So up till now, we haven't needed to do a ton of benchmarking, because we hadn't trained our own model, and now we have. So now what? What does that mean? How do we evaluate it? And we're kind of living with the last 48, 72 hours of going, did the way that we benchmark actually succeed? [00:32:43]Swyx: Did it deliver? [00:32:44]Suhail: Right. I think Gemini just came out. They just put out a bunch of benchmarks. But all these benchmarks are just an approximation of how you think it's going to end up with real-world performance. And I think that's very fascinating to me. So if you game that benchmark, you'll still end up in a really bad scenario at the end of the day. And so, one of the benchmarks we did was we curated about a thousand prompts, and I think that's what we published in our blog post, of all these tasks, a lot of them curated by our team, where we know the models all suck at them.
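For readers who haven't met the metric mentioned a moment ago: FID fits one Gaussian to Inception-v3 features of real images and another to features of generated images, then reports the closed-form Fréchet distance between them. A small sketch of just that distance (feature extraction omitted); this is the standard formula, not anything Playground-specific.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * sqrt(sigma1 @ sigma2))."""
    diff = mu1 - mu2
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from numerical error
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

# mu and sigma come from stacked feature vectors:
#   mu = feats.mean(axis=0); sigma = np.cov(feats, rowvar=False)
# A low FID means the two feature distributions overlap, which, as Suhail
# notes, says nothing about smiles, lighting, or any specific failure mode.
```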
Like, my favorite prompt that no model is really capable of is a horse riding an astronaut, the inverse one. And it's really, really hard to do. [00:33:22]Swyx: Not in the data. [00:33:23]Suhail: Another one is a giraffe underneath a microwave. How does that work? Right. There are so many of these little funny ones. We also have prompts that are just misspellings of things, to see if the models will figure them out. [00:33:36]Swyx: They should embed to the same space. [00:33:39]Suhail: Yeah. And just all these very interesting weirdo things. And so we have so many of these, and then we evaluate whether the models are any good at them. And the reality is that they're all bad at it. And so then you're just picking the most aesthetic image. We're still at the beginning of building the best benchmark we can that aligns most with just user happiness, I think, because we're not putting these in papers and trying to win, I don't know, awards at ICCV or something, if they have awards. You could. [00:34:05]Swyx: That's absolutely a valid strategy. [00:34:06]Suhail: Yeah, you could. But I don't think it would necessarily correlate with the impact we want to have on humanity. I think we're still evolving whatever our benchmarks are. So the first benchmark was just very difficult tasks that we know the models are bad at. Can we come up with a thousand of these, some hand-curated and some generated? And then can we ask the users, how do we do? And then we wanted to use a benchmark like PartiPrompts. We mostly did that so people in academia could measure their models against ours versus others. But yeah, I mean, FID is pretty bad. And in terms of vibes, it's like, you put out the model and then you try to see what users make. And my sense is that we're going to take all the things that we notice the users kind of failing at and try to find new ways to measure that, whether that's a smile or color contrast or lighting. One benefit of Playground is that we have users making millions of images every single day. And so we can just ask them for post-generation feedback. Yeah, we can just ask them. We can just say, how good was the lighting here? How was the subject? How was the background? [00:35:06]Swyx: Like a proper form: you make it, you come to the site, you make [00:35:10]Suhail: an image, and then maybe randomly we just say, hey, how was the color and contrast of this image? And you say it was not very good; just tell us. So I think we can get tens of thousands of these evaluations every single day to truly measure real-world performance, as opposed to just benchmark performance. I would like to publish, hopefully next year, a benchmark that anyone could use, that we evaluate ourselves on and that other people can too, one that we think does a good job of approximating real-world performance, because we've tried it and noticed that it did. Yeah. I think we will do that. [00:35:45]Swyx: I personally have a few categories that I consider special. You have animals, art, fashion, food. There are some categories which I consider a different tier of image. Top among them is text in images. How do you think about that?
So one of the big wow moments for me, something I've been looking out for the entire year, is just the progress of text in images. Like, can you write in an image? Yeah. And Ideogram came out recently, which had decent but not perfect text in images. DALL-E 3 had improved some, and all they said in their paper was that they just included more text in the data set and it just worked. I was like, that's just lazy. But anyway, do you care about that? Because I don't see any of that in your samples. Yeah, yeah. [00:36:27]Suhail: The V2 model was mostly focused on image quality versus the feature of text synthesis. [00:36:33]Swyx: Well, as a business user, I care a lot about that. [00:36:35]Suhail: Yeah. Yeah. I'm very excited about text synthesis. And yeah, I think Ideogram has done a good job, maybe the best job. DALL-E has, like, a hit rate. Yes. Sometimes it's Egyptian letters. Yeah. I'm very excited about text synthesis. I don't have much to say on it just yet. You don't want just text effects. I think where this has to go is that you could write little tiny pieces of text, like on a milk carton, that are maybe not even the focal point of a scene. I think that's a very hard task, and if you could do something like that, then there are a lot of other possibilities. Well, you don't have to zero-shot it. [00:37:09]Swyx: You can just be like, here, focus on this. [00:37:12]Suhail: Sure. Yeah, yeah. Definitely. Yeah. [00:37:16]Swyx: Yeah. So I think text synthesis would be very exciting. I'll also flag Max Woolf, minimaxir, whose work you must have come across. He's done a lot of stuff with logo masks that then map onto food and vegetables. And it looks like text, which can be pretty fun. [00:37:29]Suhail: That's the wonderful thing about the open-source community: you get things like ControlNet, and then you see all these people do just amazing things with it. And then you wonder, I think from our point of view, we sort of go, that's really wonderful, but how do we end up with a unified model that can do that? What are the bottlenecks? What are the issues? The community ultimately has very limited resources, and so they need these kinds of workaround research ideas to get there. But yeah. [00:37:55]Swyx: Are techniques like ControlNet portable to your architecture? [00:37:58]Suhail: Definitely. Yeah. We kept Playground V2 exactly the same as SDXL, not out of laziness, but just because we knew that the community already had tools. All you have to do is maybe change a string in your code and then retrain a ControlNet for it. So it was very intentional to do that. We didn't want to fragment the community with different architectures. Yeah. [00:38:16]Swyx: So basically, I'm going to go over three more categories. One is UIs, like app UIs, like mock UIs. Third is not-safe-for-work, and then copyrighted stuff. I don't know if you care to comment on any of those. [00:38:28]Suhail: I think the NSFW kind of safety stuff is really important. I think one of the biggest risks going into maybe the U.S. election year will probably be very interrelated with graphics, audio, video. I think it's going to be very hard to explain to a family relative who's not in our world.
And our world, sometimes we think it's very big, but it's very tiny compared to the rest of the world. There are still lots of people who have no idea what ChatGPT is. And I think it's going to be very hard to explain to your uncle, aunt, whoever: hey, I saw President Biden say this thing in a video; I can't believe he said that. I think that's going to be a very troubling thing going into the world next year, the year after. [00:39:12]Swyx: That's more of a risk thing, like deepfakes, faking, political faking. But there are a lot of studies on how, for most businesses, you don't want to train on not-safe-for-work images, except that it makes you really good at bodies. [00:39:24]Suhail: Personally, we filter out NSFW types of images in our data set so that our safety filter stuff doesn't have to work as hard. [00:39:32]Swyx: But you've heard the argument that not-safe-for-work images are very good for learning human anatomy, which you do want to be good at. [00:39:38]Suhail: It's not necessarily a bad thing to train on that data. It's more about how you go and use it. That's why I was talking about safety, in part, because there are very terrible things that can happen in the world if you have an extremely powerful graphics model. You can imagine: if you can generate nudes and then you can do very character-consistent things with faces, what does that lead to? Yeah. And so I tend to think more about what occurs after that, right? Even if you train on, let's say, nude data, there's nothing wrong with the human anatomy; it's very valid for a model to learn that. But then it's about how that gets used. And I won't bring up all of the very, very unsavory, terrible things that we see on a daily basis on the site, but I think it's more about what occurs. And so we just recently did a big sprint on safety. It's very difficult with graphics and art, right? Because there is tasteful art that has nudity, right? It's all over museums; there are very valid situations for that. And then there are the things on the gray line of that: what I might not find tasteful, someone else might say is completely tasteful, right? And then there are things that are way over the line. And then there are things that maybe you or I would be okay with, but society isn't, you know? So where does that end up on the spectrum of things? I think it's really hard with art. Sometimes even if you have things that are not nude, if a child goes to your site and scrolls down some images (you know, classrooms of kids use our product), it's a really difficult problem. And it stretches across culture, society, politics, everything. [00:41:14]Alessio: Another favorite topic of our listeners is UX and AI. And I think you're probably one of the best all-inclusive editors for these things. It's not just: you have the prompt, images come out, you pray, and then you do it again. First, you let people pick a seed so they can have semi-repeatable generation. You also let them pick how many images, and then you leave all of them on the canvas. And then you have this box, the generation box, and you can even cross between them and outpaint. There are all these things.
How did you get here? Most people are kind of like, give me text, I give you an image. And you're like, here are all the tools for you. [00:41:54]Suhail: Even though we were trying to make a graphics foundation model, we're also trying to re-imagine what a graphics editor might look like given the change in technology. So, I don't think we're trying to build Photoshop, but it's the only thing that we can say that people are largely familiar with. Oh, okay, there's Photoshop. What would Photoshop compare itself to pre-computer? I don't know, right? It's kind of like a canvas, but there are these menu options and you can use your mouse. What's a mouse? So I think that we're trying to re-imagine what a graphics editor might look like, not just for the fun of it, but because we kind of have no choice. There's this idea in image generation where you can generate images. That's a super weird thing. What is that in Photoshop, right? You have to wait, for the time being, but the wait is often worth it for a lot of people, because they can't make that with their own skills. So I think it goes back to how we started the company, which was looking at GPT-3's Playground; the reason we're named Playground is an homage to that, actually. And, you know, shouldn't these products be more visual? These prompt boxes are like a terminal window, right? We're kind of at this weird point where it's just like MS-DOS. I remember my mom using MS-DOS and I memorized the keywords, like DIR, LS, all those things, right? It feels a little like we're there, right? Prompt engineering, parentheses around "beautiful" or whatever, which weights that token more in the model. That's super strange. I think a large portion of humanity would agree that that's not user-friendly, right? So how do we make the products more user-friendly? Well, sure, it would be nice if, when I wanted to get rid of the headphones on my head, I could just mask them and then say, can you remove the headphones? If I want to grow, expand the image, how can we make that feel easier without typing lots of words and being really confused? I don't even think we've nailed the UI/UX yet. Part of that is because we're still experimenting. And part of that is because the model and the technology are going to get better, and whatever felt like the right UX six months ago is going to feel very broken now. So that's a little bit of how we got there: asking, does everything have to be a prompt in a box? Or can we do things that make it very intuitive for users? [00:44:03]Alessio: How do you decide what to give access to? So you have things like an expand prompt, which DALL-E 3 just does; it doesn't let you decide whether it should or not. [00:44:13]Swyx: As in, it rewrites your prompts for you. [00:44:15]Suhail: Yeah, for that feature, I think once we get it to be cheaper, we'll probably just give it away. But we also noticed something that might be a little bit different: most image generation is just kind of casual. It's in WhatsApp. It's in a Discord bot somewhere with Midjourney. It's in ChatGPT. One of the differentiators I think we provide, at the expense of lots of users, necessarily
mainstream consumers, is that we provide as much power, tweakability, and configurability as possible. So the only reason it's a toggle is that we know some users might want to use it and some might not. There are some really powerful power-user hobbyists who know what they're doing, and then there are a lot of people who just want something that looks cool but don't know how to prompt. And so I think a lot of Playground is about going after that core user base that has a bit more savviness about how to use these tools. The average DALL-E user is probably not going to use ControlNet; they probably don't even know what that is. And so I think that, as the models get more powerful and as there's more tooling, hopefully you can imagine a new sort of AI-first graphics editor that's just as powerful and configurable as Photoshop. And you might have to master a new kind of tool. [00:45:28]Swyx: There are so many things I could bounce off of. One, you mentioned waiting. We have to somewhat address the elephant in the room: consistency models have been blowing up the past month. How do you think about integrating that? Obviously, there are a lot of other companies also trying to beat you to that space as well. [00:45:44]Suhail: I think we were the first company to integrate it. Ah, OK. [00:45:47]Swyx: Yeah. I didn't see your demo. [00:45:49]Suhail: Oops. Yeah, yeah. Well, we integrated it in a different way. OK. There are, like, 10 companies right now that have tried to do interactive editing, where you can draw on the left side and then you get an image on the right side. We decided to wait and see whether there's true utility in that. We have a different feature that's unique in our product called preview rendering. And so you go to the product and ask, what is the most common use case? The most common use case is you write a prompt and then you get an image. But what's the most annoying thing about that? The most annoying thing is that it feels like a slot machine, right? You're like, OK, I'm going to put it in and maybe I'll get something cool. So we did something that seemed a lot simpler, but a lot more relevant to how users already use these products, which is preview rendering. You toggle it on and it will show you a render of the image. Graphics tools already have this: if you use Cinema 4D or After Effects or something, it's called viewport rendering. And so we tried to take something that exists in the real world, that has familiarity, and say, OK, you're going to get a rough early preview of this thing, and then when you're ready to generate, we're going to try to be as coherent as possible with that image you saw. That way, you're not spending so much time just pulling down the slot machine lever. I think we were the first company to actually ship a quick LCM thing. Yeah, we were very excited about it, so we shipped it very quickly. Yeah. [00:47:03]Swyx: Well, the demos I've been seeing, it's not like a preview necessarily. They're almost using it to animate their generations, because you can kind of move shapes. [00:47:11]Suhail: Yeah, yeah, they're doing it. They're animating it. But they're sort of showing, like, if I move a moon, you know, can I? [00:47:17]Swyx: I don't know. To me, it unlocks video in a way. [00:47:20]Suhail: Yeah.
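As background on the "quick LCM thing" mentioned above: latent consistency models distill a diffusion model so it can sample in a handful of steps, which is what makes preview-style rendering cheap. A sketch using the public diffusers LCM-LoRA workflow; the model IDs are the open-source Hugging Face ones, not Playground's internal stack.

```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler

# Public SDXL base model plus the community LCM-LoRA distillation adapter.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

# Four steps at low guidance instead of the usual 25-50: fast enough that a
# rough "preview render" can be regenerated as the user edits the prompt.
image = pipe(
    "a lighthouse at dusk, watercolor",
    num_inference_steps=4,
    guidance_scale=1.0,
).images[0]
image.save("preview.png")
```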
But the video models are already so much better than that. Yeah. [00:47:23]Swyx: There's another one, which I think is the general ecosystem of LoRAs, right? Civitai is obviously the most popular repository of LoRAs. How do you think about interacting with that ecosystem? [00:47:34]Suhail: The guy that did LoRA, not the guy that invented LoRAs, but the person that brought LoRAs to Stable Diffusion, actually works with us on some projects. His name is Simo. Shout out to Simo. And I think LoRAs are wonderful. Obviously, fine-tuning all these DreamBooth models and such is just so heavy. And as is obvious in our conversation around styles and vibes, it's very hard to evaluate the artistry of these things. LoRAs give people this wonderful opportunity to create subgenres of art, and I think they're amazing. Any graphics tool, any kind of thing that's expressing art, has to provide some level of customization to its user base that goes beyond just typing Greg Rutkowski in a prompt. We have to give more than that. It's not that users want to type these real artist names; it's that they don't know how else to get an image that looks interesting. They truly want originality and uniqueness. And I think LoRAs provide that, and they provide it in a very nice, scalable way. I hope that we find something even better than LoRAs in the long term, because there are still weaknesses to LoRAs, but I think they do a good job for now. Yeah. [00:48:39]Swyx: And so you would never compete with Civitai? You would just kind of let people import? [00:48:43]Suhail: Civitai is a site where all these things get hosted by the community, right? And so, yeah, we'll often pull down some of the best things there. I think when we have a significantly better model, we will certainly build something that gets closer to that. Again, I go back to saying I still think this is very nascent. Things are very underpowered, right? LoRAs are not easy to train. They're easy for an engineer. It sure would be nicer if I could just pick five or six reference images, right? And they might even be five or six different reference images that are not... They're just very different. They communicate a style, but they're actually like... It's like a mood board, right? And you have to be kind of an engineer, or at least technically savvy, to train these LoRAs or go to some site to do it. It seems like it would be much better if I could say, I love this style, here are five images, and you tell the model, this is what I want. And the model gives you something that's very aligned with what your style is, what you're talking about. And it's a style you couldn't even communicate, right? There's n
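Since LoRAs anchor this whole exchange, here is the core idea in a toy PyTorch layer: freeze the pretrained weight and learn only a low-rank update, so a style fits in a few megabytes instead of a full fine-tune. Shapes and the alpha/r scaling follow the original LoRA paper; this is an illustrative layer, not any particular library's implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """y = W x + (alpha / r) * B(A x), with W frozen and only A, B trained."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)          # pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # starts as a no-op
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 12,288 trainable numbers vs. ~590k in the frozen base layer
```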
Michael, Jimmy and Dave recap the Steelers' loss in Indianapolis in Week 15. They are also joined by SNR analyst and former Steeler Craig Wolfley, who rejoins the show towards the end of the regular season. With a huge matchup against the Bengals on tap this weekend, the guys also hear from Coach Tomlin and make their predictions ahead of a key game for the 2023 season. You can listen to the Bengals game this Saturday evening on OTBRadio on the island of Ireland, while the game will be live on Sky Sports NFL from 9:30PM. Get involved with the podcast by following us at SteelersIreland on X and Instagram!
Today's episode of the Startitup Diskusný Klub welcomes Slovak politician František Mikloško, former chairman of the SNR (Slovak National Council) and long-serving member of the National Council of the Slovak Republic. With our host Šimon Žďárský he discussed, above all, the current political situation in the country. They looked at where Slovakia is heading after the elections, as well as at specific parties, including KDH. You'll learn more in the interview. "I am more and more tired of this kind of nationalism and shallow flag-waving ('hejslováctvo')." For more interviews with interesting personalities on the current political situation, follow the other episodes of Diskusný Klub.
Sara joins us today for a look at the latest statistics in northern Nevada on a market-by-market basis. You'll be amazed at the difference in home pricing between Reno and Fernley, Sparks and Fallon, Carson City and Douglas County. Storey County stands out as having the fewest transactions. Hear the rest and more! www.SNR.realtor www.Sageintl.com
Foundations of Amateur Radio Today I'd like to talk about noise, but before I do, I need to cover some ground. Recently I explored the idea that, on their own, neither antenna nor coax makes a big difference to the potential for a contact when compared to the impact of path loss between two stations. I went on to point out that you'd be unlikely to even notice the difference in normal communications. Only when you're working at the margins, when the signal is barely detectable, would adding a single dB here or there make any potential difference. In saying that, I skipped over one detail, noise. Noise is by definition an unwanted signal that arrives together with a wanted signal at the receiver. In HF communications, noise comes from many sources: the galaxy, our atmosphere, and man-made noise from things like electrical switches, motors, alternator circuits, inverters and computers. The example I used was my 10 dBm beacon being reported by an Antarctic station. My signal report was about 5 dB above the minimum decode level and, based on signal path calculations, -129 dBm, or around an S0 signal level. What that statement hides is that this is in the context of a noise level that's lower than -129 dBm. Remember, a negative dBm value means a fraction of a milliwatt. While you're considering that, think of the reality of an Antarctic station. This particular station, "Neumayer III", has three 75 kW diesel generators, a 30 kW wind turbine generator, 20 caterpillar trucks, 10 snowmobiles and 2 snow blowers, plus the computers and technology to support 60 people, in other words, plenty of local noise. This makes it all the more remarkable that my 10 dBm beacon was heard and that there was an amateur there to set up the receiver in the first place. Before I continue, picture mountain tops peeking through the top of a cloud layer as viewed from the window of an aeroplane. If the cloud layer increases in height, fewer and fewer mountain tops are visible, until at some point only clouds are visible. Alternatively, if the cloud layer descends, more and more of the peaks are visible, until at some point no cloud remains and you see the mountains in all their magnificent glory. In that analogy, mountains represent signals and the cloud layer is the equivalent of the noise floor, and in a similar way, signals can be heard or not, depending on the level of the noise in comparison to the level of the signal. There's a name for this. It's called the signal to noise ratio or SNR, where a value of 0 dB means that noise and signal are at the same level, negative SNR values mean that the signal is weaker than the noise, and positive SNR values mean that the signal is stronger than the noise. If you know the power level in dBm for both the noise and the signal, you can subtract the two and end up with the signal to noise ratio. In reality, all receiving stations have to contend with noise. If I arbitrarily set the local noise floor at -100 dBm, somewhere halfway between S4 and S5, I'll mostly get laughed at by many stations, either because it's too high or too low. In case you're wondering, I've worked my station in both S0 noise and S9 noise environments and it's fun trying either and comparing. It's one of the reasons I often use a mobile station, to get away from the urban noise around me, and you don't have to go far; a local park might be far enough from local noise to whet your appetite. Besides, -100 dBm is a nice round number to play with.
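Since every quantity here is in decibels, the bookkeeping is plain subtraction. A short sketch that reproduces the numbers above, assuming the common HF convention of S9 = -73 dBm with 6 dB per S-unit (conventions vary, so treat the S-unit mapping as an assumption):

```python
def snr_db(signal_dbm: float, noise_dbm: float) -> float:
    """SNR in dB is simply signal minus noise when both are in dBm."""
    return signal_dbm - noise_dbm

def dbm_to_s_units(level_dbm: float) -> float:
    """Map a power level to S-units using the common HF convention of
    S9 = -73 dBm and 6 dB per S-unit (an assumption; conventions vary)."""
    return 9 + (level_dbm + 73) / 6.0

print(dbm_to_s_units(-100))  # 4.5, i.e. halfway between S4 and S5
print(snr_db(-129, -100))    # -29 dB: a -129 dBm signal vanishes under a -100 dBm noise floor
```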
You might recall that a typical path loss number for a 2,500 km contact on HF on the 10m band is about 129 dB. With a noise floor of -100 dBm, we immediately know how much output power is required to be heard above the noise. If the received signal has to be above -100 dBm and we know that the path loss is 129 dB, then our transmitted signal needs to be at least enough to make up the difference. Said differently, if our output power is too low, the signal at the receiving station will fall below the noise and they won't be able to hear us. So, if we start at, say, 30 dBm and have a path loss of 129 dB, we'll end up at -99 dBm, which is 1 dB above -100 dBm. Said another way, the SNR for this is 1 dB. I'd like you to notice something. I've said nothing about the noise floor at the transmitter. We could have low noise or horrendous noise; either way, it makes no difference to the receiver. What it hears is entirely dependent on the noise floor at the receiving station. I wonder if that observation changes anything about what you think the impact might be of adding an 18 dBi Yagi to your station? I'm Onno VK6FLAB
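For anyone who wants to check the arithmetic, here is the link budget above as a few lines of Python, including the Yagi question at the end. The 18 dBi only helps if the gain actually points at the other station, so read the second figure as a best case:

```python
def received_dbm(tx_power_dbm: float, path_loss_db: float, antenna_gain_dbi: float = 0.0) -> float:
    """Receive level = transmit power - path loss + antenna gain, all in dB terms."""
    return tx_power_dbm - path_loss_db + antenna_gain_dbi

NOISE_FLOOR_DBM = -100.0  # the arbitrary receive-side noise floor used above
PATH_LOSS_DB = 129.0      # typical 2,500 km contact on the 10m band

rx = received_dbm(30.0, PATH_LOSS_DB)             # 30 dBm is 1 watt
print(rx, rx - NOISE_FLOOR_DBM)                   # -99.0 dBm, SNR = 1 dB

rx = received_dbm(30.0, PATH_LOSS_DB, antenna_gain_dbi=18.0)
print(rx, rx - NOISE_FLOOR_DBM)                   # -81.0 dBm, SNR = 19 dB, best case
```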
Life is one big initiation. Join Gervase and her Focalizing teacher, Jo Miller, as they discuss what it would look like to have 'enough' in the face of trauma, the ways in which we've been lied to as women, and how we can resource somatics and Focalizing to create a fuller, more robust sense of self in ourselves and our children. Jo Miller is a somatic teacher and mentor who helps the big-hearted helpers, overthinkers, and creatives in the world to live an empowered life with compassionate boundaries, free from trauma and shame. Jo is a member of the SNR leadership team of the Focalizing Institute and also works within her own private practice. Connect with Jo: Somatic Therapy | Joanna Miller | Margate (joanna-miller.com) Jo Miller | Somatic Teacher & Mentor (@joannamiller_healing) • Instagram The next cohort of The Higher Mastermind begins the first week of November. Find more info on this intimate, 3-month season of becoming here: https://www.gervasekolmos.com/higher It includes access to everything in the Gateway and a deep discount for the Phoenix Sessions Retreat in January.
Connecting with Blacklock's Reporter senior editor Tom Korski on this week's goings-on in the nation's capital. Guest: Tom Korski. Snr editor, Blacklock's Reporter.
-- Finches Diversify in Decades, Opals Form in Months, Man's Genetic Diversity in 200 Generations, C-14 Everywhere: Real Science Radio hosts Bob Enyart and Fred Williams present their classic program that led to the audience-favorite rsr.org/list-shows! See below and hear on today's radio program our list of Not So Old and Not So Slow Things! From opals forming in months to man's genetic diversity in 200 generations, and with carbon-14 everywhere it's not supposed to be (including in diamonds and dinosaur bones!), scientific observations fill the guys' most traditional list challenging those who claim that the earth is billions of years old. Many of these scientific finds demand a re-evaluation of supposed million and billion-year ages. * Finches Adapt in 17 Years, Not 2.3 Million: Charles Darwin's finches are claimed to have taken 2,300,000 years to diversify from an initial species blown onto the Galapagos Islands. Yet individuals from a single finch species on a U.S. bird reservation in the Pacific were introduced to a group of small islands 300 miles away, and in at most 17 years, like Darwin's finches, they had diversified their beaks, related muscles, and behavior to fill various ecological niches. Hear about this also at rsr.org/spetner. * Opals Can Form in "A Few Months" and Don't Need 100,000 Years: A leading authority on opals, Allan W. Eckert, observed that "scientific papers and textbooks have told that the process of opal formation requires tens of thousands of years, perhaps hundreds of thousands... Not true." A 2011 peer-reviewed paper in a geology journal from Australia, where almost all the world's opal is found, reported on a "new timetable for opal formation involving weeks to a few months and not the hundreds of thousands of years envisaged by the conventional weathering model." (And apparently, per a 2019 report from Entomology Today, opals can even form around insects!) More knowledgeable scientists resist the uncritical, group-think insistence on false super-slow formation rates (as also for manganese nodules, gold veins, stone, petroleum, canyons and gullies, and even guts, all below). Regarding opals, Darwinian bias led geologists to long ignore quick action, as from microbes, as a possible explanation for these mineraloids. Both in nature and in the lab, opals form rapidly, not in 10,000 years, but in weeks. See this also from creationists: a geologist, a paleobiochemist, and a nuclear chemist. * Finches Speciate in Two Generations vs. Two Million Years for Darwin's Birds: Darwin's finches on the Galapagos Islands are said to have diversified into 14 species over a period of two million years. But in 2017 the journal Science reported a newcomer to the islands which within two generations spawned a reproductively isolated new species. In another instance, as documented by Lee Spetner, a hundred birds of the same finch species introduced to an island cluster a thousand kilometers from Galapagos diversified into species with the typical variations in beak sizes, etc. "If this diversification occurred in less than seventeen years," Dr. Spetner asks, "why did Darwin's Galapagos finches [as claimed by evolutionists] have to take two million years?" * Blue Eyes Originated Not So Long Ago: Not a million years ago, nor a hundred thousand years ago, but based on a peer-reviewed paper in Human Genetics, a press release at Science Daily reports that "research shows that people with blue eyes have a single, common ancestor.
A team at the University of Copenhagen has tracked down a genetic mutation which took place 6-10,000 years ago and is the cause of the eye colour of all blue-eyed humans alive on the planet today." * Adding the Entire Universe to our List of Not So Old Things? Based on March 2019 findings from Hubble, Nobel laureate Adam Riess of the Space Telescope Science Institute and his co-authors in the Astrophysical Journal estimate that the universe is about a billion years younger than previously thought! Then in September 2019 in the journal Science, the age dropped precipitously, to as low as 11.4 billion years! Of course, these measurements also further squeeze the canonical story of the big bang chronology with its many already existing problems, including the insufficient time to "evolve" distant mature galaxies, galaxy clusters, superclusters, enormous black holes, filaments, bubbles, walls, and other superstructures. So, even though the latest estimates are still absurdly too old (Google: big bang predictions, and click on the #1 ranked article, or just go on over to rsr.org/bb), regardless, we thought we'd plop the whole universe down on our List of Not So Old Things! * After the Soft Tissue Discoveries, NOW Dino DNA: When a North Carolina State University paleontologist took the Tyrannosaurus rex photos (at right) of original biological material, that led to the 2016 discovery of dinosaur DNA. So far researchers have also recovered dinosaur blood vessels, collagen, osteocytes, hemoglobin, red blood cells, and various proteins. As of May 2018, twenty-six scientific journals, including Nature, Science, PNAS, PLoS One, Bone, and the Journal of Vertebrate Paleontology, have confirmed the discovery of biomaterial fossils from many dinosaurs! Organisms including T. rex, hadrosaur, titanosaur, triceratops, Lufengosaurus, mosasaur, and Archaeopteryx, and many others dated, allegedly, even hundreds of millions of years old, have yielded their endogenous, still-soft biological material. See the web's most complete listing of 100+ journal papers (screenshot, left) announcing these discoveries at bflist.rsr.org and see it in layman's terms at rsr.org/soft. * Rapid Stalactites, Stalagmites, Etc.: A construction worker in 1954 left a lemonade bottle in one of Australia's famous Jenolan Caves. By 2011 it had been naturally transformed into a stalagmite (below, right). Increasing scientific knowledge is arguing for rapid cave formation (see below: the Nat'l Park Service shrinks its Carlsbad Caverns formation estimates from 260M years, to 10M, to 2M, to it "depends"). Likewise, examples are growing of rapid formations with the typical chemical make-up (see bottle, left) of classic stalactites and stalagmites, including:
- in Nat'l Geo, the Carlsbad Caverns stalagmite that rapidly covered a bat
- the tunnel stalagmites at Tennessee's Raccoon Mountain
- hundreds of stalactites beneath the Lincoln Memorial
- those near Gladfelter Hall at Philadelphia's Temple University (send photos to Bob@rsr.org)
- hundreds of stalactites at Australia's zinc mine at Mt. Isa
- and those beneath Melbourne's Shrine of Remembrance.
* Most Human Mutations Arose in 200 Generations: From Adam until Real Science Radio, in only 200 generations! The journal Nature reports The Recent Origin of Most Human Protein-coding Variants. As summarized by geneticist co-author Joshua Akey, "Most of the mutations that we found arose in the last 200 generations or so" (the same number previously published by biblical creationists).
Another 2012 paper, in the American Journal of Physical Anthropology (Eugenie Scott's own field), on high mitochondrial mutation rates, shows that one mitochondrial DNA mutation occurs every other generation, which, as creationists point out, indicates that mtEve would have lived about 200 generations ago. That's not so old! * National Geographic's Not-So-Old Hard-Rock Canyon at Mount St. Helens: As our List of Not So Old Things (this web page) reveals, by a knee-jerk reaction evolutionary scientists assign ages of tens or hundreds of thousands of years (or at least just long enough to contradict Moses' chronology in Genesis). However, with closer study, routinely, more and more old ages get revised downward to fit the world's growing scientific knowledge. So the trend is not that more information lengthens ages, but rather, as data replaces guesswork, ages tend to shrink until they are consistent with the young-earth biblical timeframe. Consistent with this observation, the May 2000 issue of National Geographic quotes the U.S. Forest Service's scientist at Mount St. Helens, Peter Frenzen, describing the canyon on the north side of the volcano: "You'd expect a hard-rock canyon to be thousands, even hundreds of thousands of years old. But this was cut in less than a decade." And as for the volcano itself, while again the knee-jerk reaction of old-earthers would be to claim that most geologic features are hundreds of thousands or millions of years old, the atheistic National Geographic magazine acknowledges from the evidence that Mount St. Helens, the volcanic mount, is only about 4,000 years old! See below and more at rsr.org/mount-st-helens. * Mount St. Helens Dome Ten Years Old, Not 1.7 Million: Geochron Laboratories of Cambridge, Mass., using potassium-argon and other radiometric techniques, claims the rock sample they dated, from the volcano's dome, solidified somewhere between 340,000 and 2.8 million years ago. However, photographic evidence and historical reports document the dome's formation during the 1980s, just ten years prior to the samples being collected. With the age of this rock known, radiometric dating therefore overstates its age by a factor of at least 34,000. * Devils Hole Pupfish Isolated Not for 13,000 Years But for 100: Secular scientists default to knee-jerk, older-than-Bible-age dates. However, a tiny Mojave desert fish is having none of it. Rather than having been genetically isolated from other fish for 13,000 years (which would make this small school of fish older than the Earth itself), according to a paper in the journal Nature, actual measurements of mutation rates indicate that the genetic diversity of these pupfish could have been generated in about 100 years, give or take a few. * Polystrates like Spines and Rare Schools of Fossilized Jellyfish: Previously, seven sedimentary layers in Wisconsin had been described as taking a million years to form. And because jellyfish have no skeleton, as Charles Darwin pointed out, it is rare to find them among fossils. But now, as reported in the journal Geology, a school of jellyfish fossils has been found throughout those same seven layers. So, polystrate fossils that condense the time of strata deposition from eons to hours or months include:
- Jellyfish in central Wisconsin, which were not deposited and fossilized over a million years but during a single event quick enough to trap a whole school. (This fossil school, therefore, taken as a unit forms a polystrate fossil.)
Examples are everywhere that falsify the claims of strata deposition over millions of years.
- Countless trilobites buried in astounding three-dimensionality around the world are meticulously recovered from limestone, much of which is claimed to have been deposited very slowly. Contrariwise, because these specimens were buried rapidly in quickly laid down sediments, they show no evidence of greater erosion on their upper parts as compared to their lower parts.
- The delicacy of radiating spine polystrates, like tadpole and jellyfish fossils, especially clearly demonstrates the rapidity of such strata deposition.
- A second school of jellyfish, even though they rarely fossilize, exists in another locale with jellyfish fossils in multiple layers, in Australia's Brockman Iron Formation, constraining there too the rate of strata deposition. By the way, jellyfish are an example of evolution's big squeeze. Like galaxies evolving too quickly, galaxy clusters, and even human feet (which, like mummy DNA, challenge the Out of Africa paradigm), jellyfish have gotten into the act, squeezing evolution's timeline, here by 200 million years, when they were found in strata allegedly half a billion years old. Other examples, ironically referred to as Medusoid Problematica, are even found in pre-Cambrian strata.
- 171 tadpoles of the same species buried in diatoms.
- Leaves buried vertically through single-celled diatoms powerfully refute the claimed super-slow deposition of diatomaceous rock.
- Many fossils, including a mesosaur, have been buried in multiple "varve" layers, which are claimed to be annual depositions, yet they show no erosional patterns that would indicate gradual burial (as they claim, absurdly, over even thousands of years).
- A single whale skeleton preserved in California in dozens of layers of diatom deposits, thus forming a polystrate fossil.
- 40 whales buried in the desert in Chile. "What's really interesting is that this didn't just happen once," said Smithsonian evolutionist Dr. Nick Pyenson. "It happened four times." Why's that? Because "the fossil site has at least four layers," to which Real Science Radio's Bob Enyart replies: "Ha ha ha ha ha ha ha ha ha ha ha", with RSR co-host Fred Williams thoughtfully adding, "Ha ha!"
* Polystrate Trees: Examples abound around the world of polystrate trees:
- Yellowstone's petrified polystrate forest (with the NPS exhibit sign removed; see below) with successive layers of rootless trees demonstrating the rapid deposition of fifty layers of strata.
- A similarly formed polystrate fossil forest in France demonstrating the rapid deposition of a dozen strata.
- In a thousand locations, including famously the Fossil Cliffs of Joggins, Nova Scotia, polystrate fossils such as trees span many strata.
- These trees lack erosion: not only should such fossils, generally speaking, not even exist, but polystrates including trees typically show no evidence of erosion increasing with height. All of this powerfully disproves the claim that the layers were deposited slowly over thousands or millions of years. In the experience of your RSR radio hosts, evolutionists commonly respond to this hard evidence with mocking. See CRSQ June 2006, ICR Impact #316, and RSR 8-11-06 at KGOV.com.
* Yellowstone Petrified Trees Sign Removed: The National Park Service removed their incorrect sign (see left and more). The NPS had claimed that in dozens of different strata over a 40-square-mile area, many petrified trees were still standing where they had grown.
The NPS eventually removed the sign partly because those petrified trees had no root systems, which they would have had if they had grown there. Instead, the trees of this "fossil forest" have roots that are abruptly broken off two or three feet from their trunks. If these mature trees actually had been remnants of sequential forests that had grown up in strata layer on top of strata layer, 27 times on Specimen Ridge (and 50 times at Specimen Creek), such a natural history would imply the passage of more time than permitted by biblical chronology. So, don't trust the National Park Service on historical science, because they're wrong on the age of the Earth. * Wood Petrifies Quickly: Not surprisingly, by the common evolutionary knee-jerk claim of deep time, "several researchers believe that several millions of years are necessary for the complete formation of silicified wood". Our List of Not So Old and Not So Slow Things includes the work of five Japanese scientists who confirmed creationist research and published their results in the peer-reviewed journal Sedimentary Geology, showing that wood can and does petrify rapidly. With modern wood significantly petrified in 36 years, these researchers concluded that wood buried in strata could have been petrified in "a fairly short period of time, in the order of several tens to hundreds of years." * The Scablands: The primary surface features of the Scablands, which cover thousands of square miles of eastern Washington, were long believed to have formed gradually. Yet, against the determined claims of uniformitarian geologists, there is now overwhelming evidence, as presented even in a NOVA TV program, that the primary features of the Scablands formed rapidly from a catastrophic breach of Lake Missoula causing a massive regional flood. Of course, evolutionary geologists still argue that the landscape was formed over tens of thousands of years, now by claiming there must have been a hundred Missoula floods. However, the evidence that there was Only One Lake Missoula Flood has been powerfully reinforced by a University of Colorado Ph.D. thesis. So the Scablands themselves are no longer available to old-earthers as de facto evidence for the passage of millions of years. * The Heart Mountain Detachment: In Wyoming, just east of Yellowstone, this mountain did not break apart slowly by uniformitarian processes but in only about half an hour, as widely reported, including in the evolutionist LiveScience.com: "Land Speed Record: Mountain Moves 62 Miles in 30 Minutes." The evidence indicates that this mountain of rock covering 425 square miles rapidly broke into 50 pieces and slid apart over an area of more than 1,300 square miles in a biblical, not a "geological," timeframe. * "150 Million" Year-Old Squid Ink Not Decomposed: This still-writable ink had dehydrated but had not decomposed! The British Geological Survey's Dr. Phil Wilby, who excavated the fossil, said, "It is difficult to imagine how you can have something as soft and sloppy as an ink sac fossilised in three dimensions, still black, and inside a rock that is 150 million years old." And the Daily Mail states that "the black ink was of exactly the same structure as that of today's version", just desiccated. And Wilby added, "Normally you would find only the hard parts like the shell and bones fossilised but... these creatures... can be dissected as if they are living animals, you can see the muscle fibres and cells. It is difficult to imagine...
The structure is similar to ink from a modern squid so we can write with it..." Why is this difficult for evolutionists to imagine? Because, as Dr. Carl Wieland writes, "Chemical structures 'fall apart' all by themselves over time due to the randomizing effects of molecular motion." Decades ago Bob Enyart broadcast a geology program about Mount St. Helens' catastrophic destruction of forests and the hydraulic transportation and upright deposition of trees. Later, Bob met the chief ranger from Haleakala National Park on Hawaii's island of Maui, Mark Tanaka-Sanders. The ranger agreed to correspond with his colleague at Yellowstone to urge him to have the sign removed. Thankfully, it was then removed. (See also AIG, CMI, and all the original Yellowstone exhibit photos.) Groundbreaking research conducted by creation geologist Dr. Steve Austin in Spirit Lake after Mount St. Helens' eruption provided a modern-day analog to the formation of the Yellowstone fossil forest. A steam blast from that volcano blew over tens of thousands of trees, leaving them without attached roots. Many thousands of those trees were floating upright in Spirit Lake and began sinking at varying rates into rapidly and sporadically deposited sediments. Once Yellowstone's successive-forest interpretation was falsified (though, like with junk DNA, it's too big to fail, so many atheists and others still cling to it), the erroneous sign was removed. * Asiatic vs. European Honeybees: These two populations of bees have supposedly been separated for seven million years. A researcher decided to put the two together to see what would happen. What we should have here is a failure to communicate that would have resulted after their "language" evolved over millions of years. However, European and Asiatic honeybees are still able to communicate, casting doubt on the evolutionary claim that they were separated over "geologic periods." For more, see the Public Library of Science: Asiatic Honeybees Can Understand Dance Language of European Honeybees. (Oh yeah, and why don't fossils of poorly-formed honeycombs exist, from the millions of years before the bees and natural selection finally got the design right? Ha! Because they don't exist! :)
Nautiloid proves rapid limestone formation.
* Remember the Nautiloids: In the Grand Canyon there is a limestone layer averaging seven feet thick that runs the 277 miles of the canyon (and beyond), covers hundreds of square miles, and contains an average of one nautiloid fossil per square meter. Along with many other dead creatures in this one particular layer, 15% of these nautiloids were killed and then fossilized standing on their heads. Yes, vertically. They were caught in such an intense and rapid catastrophic flow that gravity was not able to cause all of their dead carcasses to fall over on their sides. Famed Mount St. Helens geologist Steve Austin is also the world's leading expert on nautiloid fossils and has worked in the canyon and presented his findings to the park's rangers at the invitation of National Park Service officials. Austin points out, as is true of many of the world's mass fossil graveyards, that this enormous nautiloid deposition provides indisputable proof of the extremely rapid formation of a significant layer of limestone near the bottom of the canyon, a layer like the others we've been told about, that allegedly formed at the bottom of a calm and placid sea with slow and gradual sedimentation. But a million nautiloids, standing on their heads, literally, would beg to differ.
At our sister site, RSR provides the relevant Geological Society of America abstract, links, and video.
* Now It's Allegedly Two-Million-Year-Old Leaves: "When we started pulling leaves out of the soil, that was surreal, to know that it's millions of years old..." sur-re-al: adjective: a bizarre mix of fact and fantasy. In this case, the leaves are the facts. Earth scientists from Ohio State and the University of Minnesota say that wood and leaves they found in the Canadian Arctic are at least two million years old, and perhaps more than ten million years old, even though the leaves are just dry and crumbly and the wood still burns!
* Gold Precipitates in Veins in Less than a Second: After geologists submitted for decades to the assumption that each layer of gold would deposit at the alleged super-slow rates of geologic processes, the journal Nature Geoscience reports that each layer of deposition can occur within a few tenths of a second. Meanwhile, at the Lihir gold deposit in Papua New Guinea, evolutionists assumed the more than 20 million ounces of gold in the Lihir reserve took millions of years to deposit, but as reported in the journal Science, geologists can now demonstrate that the deposit could have formed in thousands of years, or far more quickly!
Iceland's not-so-old Surtsey Island looks ancient.
* Surtsey Island, Iceland: Of the volcanic island that formed in 1963, New Scientist reported in 2007 that "geographers... marvel that canyons, gullies and other land features that typically take tens of thousands or millions of years to form were created in less than a decade." Yes. And Sigurdur Thorarinsson, Iceland's chief geologist, wrote in the months after Surtsey formed "that the time scale" he had been trained "to attach to geological developments is misleading." [For what is said to] take thousands of years... the same development may take a few weeks or even days here [including to form] a landscape... so varied and mature that it was almost beyond belief... wide sandy beaches and precipitous crags... gravel banks and lagoons, impressive cliffs… hollows, glens and soft undulating land... fractures and faultscarps, channels and screes… confounded by what met your eye... boulders worn by the surf, some of which were almost round... -Iceland's chief geologist
* The Palouse River Gorge: In the southeast of Washington State, the Palouse River Gorge is one of many features formed rapidly by 500 cubic miles of water catastrophically released with the breaching of a natural dam in the Lake Missoula Flood (which gouged out the Scablands as described above). So, hard rock can be breached and eroded rapidly.
* Leaf Shapes Identical for 190 Million Years? From Berkeley.edu: "Ginkgo biloba... dates back to... about 190 million years ago... fossilized leaf material from the Tertiary species Ginkgo adiantoides is considered similar or even identical to that produced by modern Ginkgo biloba trees... virtually indistinguishable..." The literature describes leaf shapes as "spectacularly diverse", sometimes within a species but especially across the plant kingdom. Because all kinds of plants survive with all kinds of different leaf shapes, the conservation of a single shape in one species over alleged deep time is a telling issue. Darwin's theory is undermined by a species' leaf shape remaining unchanged over millions of years.
This lack of change, stasis in what should be an easily morphable plant trait, supports the broader conclusion that chimp-like creatures did not become human beings and that all the other ambitious evolutionary creations of new kinds are simply imagined. (Ginkgo adiantoides and biloba are actually the same species. Wikipedia states, "It is doubtful whether the Northern Hemisphere fossil species of Ginkgo can be reliably distinguished." Oftentimes, as documented by Dr. Carl Werner in his Evolution: The Grand Experiment series, paleontologists falsely speciate identical specimens, giving different species names, even different genus names, to fossil and living animals that appear identical.)
* Box Canyon, Idaho: Geologists now think Box Canyon in Idaho, USA, was carved by a catastrophic flood and not slowly over millions of years, citing 1) huge plunge pools formed by waterfalls; 2) the almost complete removal of large basalt boulders from the canyon; 3) an eroded notch on the plateau at the top of the canyon; and 4) water scour marks on the basalt plateau leading to the canyon. Scientists calculate that the flood was so large that it could have eroded the whole canyon in as little as 35 days. See the journal Science, Formation of Box Canyon, Idaho, by Megaflood, and the Journal of Creation, and Creation Magazine.
* Manganese Nodules' Rapid Formation: Allegedly, as claimed at the Wikipedia entry from 2005 through 2021: "Nodule growth is one of the slowest of all geological phenomena – in the order of a centimeter over several million years." Wow, that would be slow! And a Texas A&M Marine Sciences technical slide presentation says, "They grow very slowly (mm/million years) and can be tens of millions of years old", with RWU's oceanography textbook also putting it at "0.001 mm per thousand years." But according to a World Almanac documentary they have formed "around beer cans," said marine geologist Dr. John Yates in the 1997 video Universe Beneath the Sea: The Next Frontier. There are also reports of manganese nodules forming around ships sunk in the First World War. See more at youngearth.com, at TOL, in the print edition of the Journal of Creation, and in this typical forum discussion with atheists (at the Chicago Cubs forum no less :).
* "6,000 Year-Old" Mitochondrial Eve: As the Bible calls "Eve... the mother of all living" (Gen. 3:20), genetic researchers have named the one woman from whom all humans have descended "Mitochondrial Eve." But in a scientific attempt to date her existence, they openly admit that they included chimpanzee DNA in their analysis in order to get what they viewed as a reasonably old date of 200,000 years ago (which is still surprisingly recent from their perspective, but old enough not to strain Darwinian theory too much). But then, as widely reported including by Science magazine, when they dropped the chimp data and used only actual human mutation rates, that process determined that Eve lived only six thousand years ago! In Ann Gibbons' Science article, "Calibrating the Mitochondrial Clock," rather than again using circular reasoning by assuming their conclusion (that humans evolved from ape-like creatures), they performed their calculations using actual measured mutation rates. This peer-reviewed journal then reported that if these rates have been constant, "mitochondrial Eve… would be a mere 6000 years old." See also the journal Nature and creation.com's "A shrinking date for Eve," and Walt Brown's assessment.
Expectedly though, evolutionists have found a way to reject their own unbiased finding (the conclusion contrary to their self-interest) by returning to their original method of using circular reasoning, as reported in the American Journal of Human Genetics, "calibrating against recent evidence for the divergence time of humans and chimpanzees," to reset their mitochondrial clock back to 200,000 years.
* Even Younger Y-Chromosomal Adam: (Although he should be called "Y-Chromosomal Noah.") While we inherit our mtDNA only from our mothers, only men have a Y chromosome (which incidentally genetically disproves the claim that the fetus is "part of the woman's body," since the little boy's Y chromosome could never be part of mom's body). Based on documented mutation rates and the extraordinary lack of mutational differences in this specifically male DNA, the Y-chromosomal Adam would have lived only a few thousand years ago! (He's significantly younger than mtEve because of the genetic bottleneck of the global flood.) Yet while the Darwinian camp wrongly claimed for decades that humans were 98% genetically similar to chimps, secular scientists today, using the same type of calculation only more accurately, have unintentionally documented that chimps are about as far genetically from what makes a human being a male as mankind itself is from sponges! Geneticists have now found that sponges are 70% the same as humans genetically, and separately, that human and chimp Y chromosomes are "horrendously" 30% different.
Unravel the mysteries of Signal-to-Noise Ratio (SNR) in copper testing alongside our guest for the day, Steve Cowles RCDD NTS, a product manager and technical services manager from AEM Precision Cable Test. Steve breaks down this complex topic, explaining the need for a higher signal-to-noise ratio for optimum performance, and the significance of channel operating margin testing. We also delve into the standards set by IEEE that ensure reliable and efficient cabling, with Steve advising on how to set testers for optimum results.
Ever wondered how the length of your cable affects insertion loss? Or why Cat 6A is the superior choice for higher data rates? Let Steve guide you through these intricacies, and discover why SNR testing is the fast, accurate method for determining if your cabling can support specific speeds. Gain an understanding of the role shielding and testing technologies play in mitigating crosstalk and interference. And it's not just about understanding the technique, but about troubleshooting too. We chat about potential issues causing signal losses, from excessive length and mismatches to heat and moisture. Steve stresses the importance of buying from reputable manufacturers, offering actionable advice on navigating the world of cabling. We end the conversation by discussing electromagnetic interference and the importance of adhering to industry standards. Join us for this enlightening conversation, packed with practical advice for anyone tackling cabling and connectivity issues.
Support the show
Knowledge is power! Make sure to stop by the webpage to buy me a cup of coffee or support the show at https://linktr.ee/letstalkcabling . Also if you would like to be a guest on the show or have a topic for discussion send me an email at chuck@letstalkcabling.com
Chuck Bowser RCDD TECH
#CBRCDD #RCDD
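As a companion to the cable-length question above, here is a minimal Python sketch of the usual first-order model (my illustration, not AEM's test method): insertion loss in dB accumulates roughly linearly with cable length at a given frequency, so a shorter run leaves more SNR headroom. The 19 dB per 100 m figure is an assumed ballpark for twisted-pair cable around 100 MHz, not a quoted standard limit.

```python
def insertion_loss_db(length_m: float, loss_db_per_100m: float) -> float:
    """First-order model: attenuation in dB grows linearly with length.
    Real category-cable limits also depend strongly on frequency."""
    return length_m * loss_db_per_100m / 100.0

# Assumed ballpark figure (~19 dB per 100 m near 100 MHz), for illustration only.
for length in (90, 45, 15):
    print(f"{length} m run: ~{insertion_loss_db(length, 19.0):.1f} dB insertion loss")
```

Under this simplified model, halving the run halves the loss in dB, which is one reason an over-length channel can fail an SNR-based test that a shorter one passes.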
Uncomfortable Conversations Podcast The Untold Stories of the 3HO Kundalini Yoga Community
Els Coenen (formerly Ravinder Kaur) lives in Belgium. She was 49 when she took her first Kundalini Yoga class, immediately fell in love with it and became a teacher in 2008. She taught Kundalini Yoga for ten years, gave multiple classes a week, and organized and assisted in workshops and teacher training programs in Belgium and East Africa. Every year she went to the European Yoga Festival, where she had a booth at the bazaar to collect funds for the seva-based teacher training programs in East Africa, combining this work with a job in the telecom sector. She organized a Sat Nam Rasayan (SNR) training in Belgium. In 2013, two women were sexually abused by an SNR instructor. Because Els insisted that the harm be recognized and that action be taken to prevent future abuse, she was excommunicated by Guru Dev Singh, master of this healing technique and student of Yogi Bhajan, and Belgium was declared "a no-go zone" for SNR for two years. From 2010 till 2018 Els presided over the Belgian Federation of Kundalini Yoga and represented her country at an international level. Disillusioned by many things, she stopped all Kundalini Yoga related activities in 2018. When in 2020 the book Premka triggered many survivors to tell their stories, Els stepped in again, hoping that the time for transparency and clarity had come. She was part of an advisory team of the Compassionate Reconciliation Program but stepped out at the end of 2022 as she experienced it as window dressing. In April 2021, after she had read or listened to survivor stories on many different platforms and had watched hours and hours of Uncomfortable Conversations, she contacted GuruNischan to collaborate. She made extracts of the interviews and put them together on a website (abuse-in-kundalini-yoga.com) to allow people with less free time to be informed. Now she has written a book in partnership with GuruNischan, including voices from survivor stories, called Under the Yoga Mat: The Dark History of Yogi Bhajan's Kundalini Yoga.
Song Credit: Because of You – by Gustaph
The book can be found via the website below, with purchase platforms available in Europe, the USA and many other countries throughout Asia as well.
www.undertheyogamat.com
www.abuse-in-kundalini-yoga.com
You can DONATE to this broadcast at: http://www.gurunischan.com/uncomfortableconversations
Uncomfortable Conversations Spotify Playlist: https://open.spotify.com/playlist/2lEfcoaDgbCCmztPZ4XIuN?si=vH-cH7HzRs-qFxzEuogOqg
Education On Fire - Sharing creative and inspiring learning in our schools
Tami Harel is the Chief Audiologist and Director of Clinical Research at Nuance Hearing. Tami is an Audiologist and Speech Pathologist with a Master's Degree in Communication Disorders and is currently working on her PhD in Gerontology. With more than 15 years of experience with hearing aids and auditory rehabilitation, in the public and private sectors, she joined Nuance Hearing in the early stages of the company's development.
Nuance Hearing was founded in 2015 with the fundamental goal of developing a technological solution for the ‘cocktail party problem' - the difficulty of understanding speech in noisy environments. Over the years the company has made significant technological developments and algorithmic advancements within the framework of acoustic beamforming, becoming a world leader in improving the signal-to-noise ratio (SNR) for directional hearing. These developments have led to an impressive array of collaborations with business partners in the hearing aid and assistive technology industry, clinical research scientists, educators and EdTech advocates. This includes a table microphone, designed to be used with hearing aids and developed in partnership with hearing aid industry leader Starkey.
Website
www.nuancehear.com
Social Media Information
www.linkedin.com/company/nuancehearing
twitter.com/nuancehearing
www.facebook.com/NuanceHearing
Resources Mentioned
The Boy, The Mole, The Fox and The Horse
Show Sponsor – National Association for Primary Education (NAPE)
Primary Education Summit – ‘Visions for the Future' – 2023
Get access NOW at www.nape.org.uk/summit
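For readers curious how beamforming raises SNR at all, here is a generic textbook delay-and-sum sketch in Python (my illustration, not Nuance Hearing's proprietary algorithm; all numbers are made up): microphone channels aligned on the target talker add coherently, while each microphone's independent noise partially cancels in the average.

```python
import numpy as np

rng = np.random.default_rng(0)
n_mics, n = 8, 4000
speech = np.sin(2 * np.pi * 0.01 * np.arange(n))  # stand-in for the target talker

# Each microphone hears the same (already time-aligned) speech
# plus its own independent noise.
channels = speech + rng.normal(0, 2.0, (n_mics, n))

def snr_db(clean, noisy):
    """SNR estimate: clean signal power versus residual noise power."""
    residual = noisy - clean
    return 10 * np.log10(np.mean(clean**2) / np.mean(residual**2))

print(f"one mic:      {snr_db(speech, channels[0]):.1f} dB")
# Delay-and-sum: averaging the aligned channels improves SNR
# by roughly 10*log10(n_mics), about 9 dB for eight microphones.
beamformed = channels.mean(axis=0)
print(f"8-mic output: {snr_db(speech, beamformed):.1f} dB")
```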
Foundations of Amateur Radio We describe the relationship between the power of a wanted signal and unwanted noise as the signal to noise ratio or SNR. It's often expressed in decibels or dB, which makes it possible to represent really big and really small numbers side-by-side, rather than using lots of leading and trailing zeros. For example, one million is the same as 60 on a dB scale and one millionth, or 0.000001, is -60. One of the potentially more perplexing ideas in communication is the notion of a negative signal to noise ratio. Before I dig into how that works and how we can still communicate, I should point out that in general, for communication to happen, there needs to be a way to distinguish unwanted noise from a desired signal, and how that is achieved is where the magic happens. Let's look at a negative SNR, let's say -20 dB. What that means is that the ratio between the wanted signal and the unwanted noise is equivalent to 0.01; said differently, the signal is 100 times weaker than the noise. In other words, all that a negative SNR means is that the ratio between signal and noise is a fraction, as in, more than zero, but less than one. It's simpler to say the SNR is -30 dB than saying the noise is 1000 times stronger than the signal. Numbers like this are not unusual. The Weak Signal Propagation Reporter or WSPR is often described as being able to work with an SNR of -29 dB, which indicates that the signal is about 800 times weaker than the noise. To see how this works behind the scenes, let's start with the idea of bandwidth. On a typical SSB amateur radio, voice takes up about 3000 Hz. For better readability, most radios filter out the lower and upper audio frequencies. For example, my Yaesu FT857d has a frequency response of 400 Hz to 2600 Hz for SSB, effectively keeping 2200 Hz of usable signal. Another way to say this is that the bandwidth of my voice is about 2200 Hz when I'm using single side band. That bandwidth is how much of the radio spectrum is used to transmit a signal. For comparison, a typical RTTY or radio teletype signal has a bandwidth of about 270 Hz. A typical Morse Code signal is about 100 Hz and a WSPR signal is about 6 Hz. Before I continue, I should point out that the standard bandwidth for measuring SNR in amateur radio is 2500 Hz. This is significant because when you're comparing wide and narrow signals to each other you'll end up with some interesting results, like negative signal to noise ratios. This happens because you can filter out the unwanted noise before you even start to decode the signal. That means that the signal stays the same, but the average noise reduces in comparison to the 2500 Hz standard. This adds up quickly. For a Morse Code signal, it means that turning on your 100 Hz filter will feel like improving the signal to noise ratio by 14 dB, a 25-fold improvement. Similarly, filtering the WSPR signal before you start decoding will give you roughly a 26 dB improvement before you even start. But there's more, since I started off by claiming that WSPR can operate with an SNR of -29 dB. I'll note that -29 dB is only one of many figures quoted. I have previously described testing the WSPR decoder on my system; it finally failed at about -34 dB. Even with a 26 dB gain from filtering we're still deep into negative territory, so our signal is still much weaker than the noise.
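To make the decibel arithmetic above concrete, here is a minimal Python sketch (my addition, not from the episode) that converts power ratios to dB and computes the apparent SNR improvement from filtering down to a signal's bandwidth, relative to the 2500 Hz reference just described:

```python
import math

def ratio_to_db(power_ratio: float) -> float:
    """Convert a power ratio to decibels."""
    return 10 * math.log10(power_ratio)

def filter_gain_db(signal_bw_hz: float, reference_bw_hz: float = 2500) -> float:
    """Apparent SNR improvement from narrowing the receive bandwidth.
    The signal is untouched; only the admitted noise power shrinks."""
    return ratio_to_db(reference_bw_hz / signal_bw_hz)

print(ratio_to_db(1_000_000))   # 60.0: one million is 60 dB
print(ratio_to_db(0.000001))    # -60.0: one millionth is -60 dB
print(filter_gain_db(100))      # ~14 dB for a 100 Hz Morse filter
print(filter_gain_db(6))        # ~26 dB for a 6 Hz WSPR signal
```

The last two lines reproduce the 14 dB and 26 dB figures quoted in the episode.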
There are several phenomena that affect the decoding of a signal. To give you a sense, consider using a limited vocabulary, like say the phonetic alphabet or a set of Morse characters: the smaller the set of valid symbols, the higher the chance of figuring out which letter you meant. This is why it's important that everyone uses the same alphabet and why there's a standard for it. To send a message, WSPR uses an alphabet of four characters, that is, four different tones or symbols. Another factor is how long you send each symbol for. A Morse dit sent at 6 words per minute or WPM lasts two tenths of a second, but sent at 25 WPM it lasts less than five hundredths of a second. This is why WSPR uses two minutes, actually 110.6 seconds, to send 162 symbols at just under one and a half symbols per second, each symbol lasting about two thirds of a second. If that's not enough, there's processing gain. One of the fun things about signal processing is that when you combine two noise signals, they don't reinforce each other, but when you combine two actual signals, they do. Said another way, signal adds coherently and noise adds incoherently. To explain that, imagine that you have an unknown signal and you pretended that it said VK6FLAB. If you combined the unknown signal with your first guess of VK6FLAB and you were right, the unknown signal would be reinforced by your guess. If it was wrong, it wouldn't. If your vocabulary is small, like say four symbols, you could try each in turn to see what was reinforced and what wasn't. There's plenty more, things like adding error correction so you can detect any potentially incorrect words. Think of it as a human understanding Bravo when the person at the other end said Baker. If you knew when to expect a signal, it would make it easier to decode, which is why a WSPR signal starts at one second into each even minute and each symbol contains information about when that signal was sent, and it's also why it's so important to set your computer clock accurately. Another trick is to shuffle the bits in your message in such a way that specific types of noise don't obscure your entire message. For example, if you had two symbols side-by-side that when combined represented the power level of your message, a brief burst of noise could obliterate the power level, but if they were stored in different parts of your message, you'd have a better chance of decoding the power level. I've only scratched the surface of this, but behind every seemingly simple WSPR message lies a whole host of signal processing magic that underlies much of the software defined radio world. These same techniques and plenty more are used in Wi-Fi communications, in your mobile phone, across fibre-optic links and the high speed serial cable connected to your computer. Who said that Amateur Radio stopped at the antenna connected to your radio? I'm Onno VK6FLAB
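As a footnote to the episode above, the try-each-symbol-and-see-what-reinforces idea can be demonstrated numerically. The following Python sketch is a toy demonstration of coherent combination, not the real WSPR protocol (which adds forward error correction, interleaving and longer integration): a four-tone alphabet is correlated against a received signal whose noise is 100 times stronger than the tone, a -20 dB SNR, and the correct symbol still stands out.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2048
t = np.arange(n)
# A four-symbol "alphabet": four distinct tones, like WSPR's four tones.
# Integer cycle counts make the tones mutually orthogonal over n samples.
alphabet = [np.sqrt(2) * np.sin(2 * np.pi * f * t / n) for f in (37, 53, 71, 89)]

sent = 2                                           # transmit symbol number 2
received = alphabet[sent] + rng.normal(0, 10, n)   # noise power 100x the signal

# Correlate the received mess against each guess: the right guess adds
# coherently with the buried tone; the wrong ones, and the noise, do not.
scores = [float(np.dot(received, guess)) / n for guess in alphabet]
print([f"{s:+.3f}" for s in scores])
print("decoded symbol:", int(np.argmax(scores)))   # recovers 2 despite -20 dB SNR
```

The correct symbol scores near 1 while the others hover near 0, which is the processing gain the episode describes: the longer the symbol, the more samples the correlation can average over, and the deeper into the noise the decoder can reach.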
Check out the Video --> GE Healthcare Revolutionary AIR™ Coils and AIR™ Recon DL | https://youtu.be/MbT_fTqL_oE
Join Reggie and Robert in this exciting episode of Zone3Podcast as they sit down with Bradley Tomlinson, GE Healthcare's Rocky Mountain Region Manager of MR operations. Listen as Brad shares his inspiring career journey and introduces the hosts to GE's award-winning AIR™ Coils, an industry first in MRI coil design. The trio discusses the benefits of these ultra-light, flexible, and overlapping coil elements that deliver exceptional image quality and a simplified, faster workflow.
But that's not all! Brad also dives into the game-changing AIR™ Recon DL algorithm, which transforms the MRI image reconstruction process. With its ability to perform ringing suppression and SNR improvement, AIR™ Recon DL offers clinical, operational, and financial benefits over conventional image reconstruction, including sharper images, reduced scan time, greater tolerance of protocol variations, and easier-to-read images.
Listen to this informative episode to learn about the latest advancements in MRI technology and how the AIR™ Coils and AIR™ Recon DL are changing the game for clinicians, patients, and the industry. Don't miss out on this exciting episode of Zone 3 Podcast!
GE AIR Technology - https://www.gehealthcare.com/products...
Own The Moment: NBA Top Shot, NFL All Day, and Sports NFT Podcast
The SNR is back for episode two, covering this week in the NFL and the NFL ALL DAY market, the upcoming Jolly Joker Sport Society mint, the latest on the poker scandal, and, with the NBA and NHL back in action, the greatest time of the year in sports! #OTM #JJSS #sports #NFTs
Websites: https://otmnft.com https://jollyjokersnft.com
The 365 Days of Astronomy, the daily podcast of the International Year of Astronomy 2009
https://www.youtube.com/watch?v=7fGXg5EQEvM
In this third Astronomy 101 video from AstroCamp, Dr. Jen (AKA Dr. Dust) takes a look at the most important list of deep sky objects for amateur astronomers: the Messier Catalogue. Created by French astronomer Charles Messier in the 18th century to help him find more comets, this catalogue is the most helpful list of the brightest and easiest-to-find galaxies, star clusters and nebulae, and it even starts with an SNR, a supernova remnant. This makes it the catalogue of objects for anyone with their first telescope or pair of binoculars. But for more experienced astronomers it's also an opportunity to do the Messier Marathon: trying to find and observe all 110 objects in a single night!
But please do help us out by subscribing to the show: https://www.youtube.com/awesomeastron...
We've added a new way to donate to 365 Days of Astronomy to support editing, hosting, and production costs. Just visit: https://www.patreon.com/365DaysOfAstronomy and donate as much as you can! Share the podcast with your friends and send the Patreon link to them too! Every bit helps! Thank you!
------------------------------------
Do go visit http://www.redbubble.com/people/CosmoQuestX/shop for cool Astronomy Cast and CosmoQuest t-shirts, coffee mugs and other awesomeness!
http://cosmoquest.org/Donate This show is made possible through your donations. Thank you! (Haven't donated? It's not too late! Just click!)
------------------------------------
The 365 Days of Astronomy Podcast is produced by the Planetary Science Institute. http://www.psi.edu
Visit us on the web at 365DaysOfAstronomy.org or email us at info@365DaysOfAstronomy.org.