POPULARITY
The day in NRW: attack in Bielefeld; NRW FDP leader Höne on the federal party conference; new constitutional-protection law; abuse in rowing; the final, turbulent Bundesliga matchday; paving over land in the Münsterland; punishment for volunteer hacking. Presented by Wolfgang Meyer. By Christoph Ullrich, WDR 5.
Fendärboerg. Either you take the risk or you work for somebody who did. Regulars' table / eavesdropping on the neighbours / inspiration / people-watching / the city, the square / Zytglogge / Kaffee+Wermut / anchor / Whoever always sits in the same place, and always at the same time, will always see the same people. The homeless man who recites the same sentence every evening. And the Black woman with those wild curls, a woman, a vessel full of abundance, a magnificent woman, although "Weib", no, sorry, that is not appropriate. She always comes past the Kornhaus at the same time, heading where? But today of all days she comes from that direction and walks towards the Kornhaus. Now I can't make sense of it any more. Conversations at the next table / the music students, and yes, you play the piano not only with your hands but with your whole body; that way you can avoid tendon inflammation in the wrists, especially when a lot of practice is required. I took that remark to heart. The very next day I was drawn to the grand piano and practised playing with my body, not just my hands. "Socially anxious", "self-sabotage": a few words drifted over from the next table, an interesting conversation between the music students, and at "Kaffee+Wermut", where the tables are pushed so close together, EAVESDROPPING on the guests works wonderfully. Fendärboerg, a fairy-tale sort of thing? I have genuinely clever regular customers, and the BEA fair was simply brilliant, a little surreptitious advertising for Radio Supersaxo and upcoming projects; I mean, when the customers themselves ask what's going on at the moment? Of course we also advertised the day of the open wine cellars at the end of the month, but still. Passion + Emotion = Friendship / I mean, every winemaker could, for example, dedicate an anniversary wine to an employee who has worked for them for, say, ten years. The employee creates their own wine together with the winemaker? The employee stands behind their creation and sells what they themselves helped to make all the better?
But Godi, the poster with the woman on the red background looks too much like the SP - that just won't do. Ideas like a pick-up event, e.g. combined with the day of the open wine cellars? Pimp your raclette and great sound? Idea, concept: Fendärboerg - and you don't drink it alone. The Parship of the wine industry. App. Community. / BBQ: Eringer sausage with Fendärboerg mustard by Horny Chef, herb bread, wine sauerkraut. The herb bread was made in the bakery above Ried-Brig - Father Biner from Zermatt, near the Restaurant-Hotel Simplon? The nice, quirky wooden duck in the shop window has been replaced - a nice dog stands there now. A SIGN? Sculpture of the month, and that inspired me just now: wool and flowers around the shop window = a Valais Blacknose sheep in a field of flowers. Eau du Chasselas, and yes, we don't all have the same size cage, but we all have a bird. A quote from Werner Zurbriggen - and what does that have to do with anything here and now? And yes, there was that nice couple at the stand - he originally from Belalp, she from Zandfort. They volunteer at the Waldau, organising art exhibitions and art therapy there, and yes, that business about painting as an ordinary person - "oh, he's crazy" - we've covered that already. And did they know the song from the Waldau? "An injection into the left or the right buttock?" We laughed. Ischl Gunti sits there behind the curtain - no, not the Swedish one, the one from IKEA - and hides from his sweetheart, in pyjamas from Temu? And on the radio Heino is playing - La Montanara and Fujimata - mountains are beautiful everywhere. My dear readers of this great contribution to world literature, my most faithful listeners of Radio Supersaxo: can you work out which Heino song is playing on the radio right now? Sehnsucht - Alpenruh - glorious mountains - Bergvagabunden - hearts aglow - lungs breathing free - rope and pick, death at your neck...
Brothers in life and death...! Feedback from Urs Jossen on the Fendärboerg label: the glacier fairy looks really great!!! And yes, his song - the tears of the glacier fairy...! Yes, I go with the wind / snow / meltwater / harvest / autumn / seasons / The tears of the glacier fairy - the Rhône at Brig, along the valley, into the lake and on into the sea. Every drop is a tear of the glacier fairy, carried on the wind. And perhaps a few of the glacier fairy's tears escaped, and the wind carried them into the glacier water of the elixir - the glacier-water concentrate refined with a few wild alpine herbs, all simmered over a Saas larch fire! No, no - the wine, the wine of the Alps - it doesn't just move you to dance and rock, it moves you to reflect, to sit and talk together, and of course it is also a homage to our glaciers. Reflect, turn inward, and then leave yourself again at the right time. Never stand still - keep going, straighten your crown, head up, keep dancing! But still - FENDäRBOERG, the Valais white wine that rocks - nobody drinks wine alone! The valley - mountaineering - river - winter - snow - forests - stream where the glacier water flows. Fields - water - stream - meadows - Suonen channels - summer - fruit - irrigating the fields - watering them - autumn - harvest, and yes, I go with the wind - the Woodstock dude - glacier fairy. And you are never lonely when you are with yourself, and I look into your Fendant-blue eyes!!! #ischifinuwina - fine wines - yes, our fine wines - best drunk in a bikini, no, not in Rimini of course, but always at Dimitri's. In Jesolo a piccolo - o ho, o ho - a piccolo in Jesolo - and come on, have you ever written or rhymed anything cleverer? Yes, how we laughed, dreamed things up, smiled at each other, but never did each other in. Rifle at prayers today? In workday dress? And then you let me in, and so nothing was ever truly in vain.
My car, travelling quite independently, reserved itself a non-binary parking space at Schloss Hünigen. All right, we have covered the subject of parking spaces before, but still: disabled parking spaces. For me it is a NO-GO when they are misused - for example by the actually still young woman here in the village, an active skier, who likes to glide through the village in her electric car. Granted, she lives at the other end of the village; she parks in the disabled space, takes a little stroll from there to the massage therapist, and later collects her electric car from the disabled space again - and there were plenty of free spaces to the LEFT and RIGHT! This is a NO-GO to me - it makes me angry...! But anyway: the voice of my navigation system has come out as a vegan transvestite, and the drowsiness assistant, like Trump, now wants to become pope too. How intimate is your relationship with your car? At Schloss Hünigen we treated ourselves to a fine meal. The venue has collected quite a few honours, including the Swiss Wine Award List - thanks in part, no doubt, to the wines of Cave Fin Bec Sion and my own expert advice? Great events at Schloss Hünigen: I love the Salon du Vin and the winemakers' dinner, where I enjoy talking with the winemakers and keep discovering that I know nothing about wine. Favourite winemaker: the Austrian who, during his journeyman years, ate and drank his way through two Rolls-Royces! As an appetiser there was a tea of birch and nettle; the butter was refined with an acacia oil. And the soup - a culinary masterpiece: fried mushrooms and mushroom cream with a wild-garlic espuma and a fine puff-pastry biscuit. With it came a very fine Sauvignon Blanc. For the main course, flank steak with an exquisite sauce and a herb oil, the steak in a thyme crust - superb - with wild-garlic gnocchi and crisp vegetables; the Blaufränkisch red wine went with it wonderfully.
Always happy to return. A huge compliment to Schloss Hünigen - to the TEAM - you are outstanding. And to close: And mine is not yours, and yours is not mine; if only yours wanted now to be mine, then I would let mine be yours. Mine isn't really mine anyway, and even in the Mini she's a fine one. Would mine perhaps rather be yours? And is yours even really yours? Wouldn't mine and yours rather be in a bikini, and would I perhaps rather have a bambino, from Signore Fellini?
This week, I sat down with Yev Broshevan, co-founder and Chief Business Development Officer at Hacken, a leading blockchain security firm focused on making Web3 a safer, more trustworthy place for everyone. Even better? Yev's birthday is the same day this episode drops - happy birthday, Yev!

Yev's story goes way back to her early days as a cybersecurity engineer in Ukraine, where weak institutions and a hacker's curiosity led her deep into the world of Bitcoin back in 2014. Since then, she and her Hacken co-founders have been building a mission-driven business around ethical hacking, using a mindset of creativity and values to secure everything from smart contracts to crypto exchanges.

In this episode, we get into:
- Why the hacker mindset isn't just about code, it's a way of living and solving problems
- How HackenProof, their bug bounty platform, now has over 45,000 ethical hackers keeping Web3 projects secure
- Why smart contract audits are just one layer - operational security and key management matter just as much
- How AI is reshaping both sides of the security equation: as a powerful ally and a new threat vector
- Why Yev is now channeling her hacker brain into biohacking - optimizing health and performance like it's just another system to tune

We also talk about Hacken's move into the US, how the security landscape has evolved since the early days of crypto, and what the industry still needs to get right to earn back trust. From Web2 to Web3 to Human 2.0, this one's got layers.

Connect with Yev Broshevan and Hacken:
Yev Broshevan: X/Twitter | LinkedIn
Hacken: X/Twitter | LinkedIn | Website

Leave a review and subscribe on Apple Podcasts | Spotify | MoneyNeverSleeps (website)
Email us: info@norioventures.com

Connect with Pete Townsend and MoneyNeverSleeps:
Pete Townsend: X/Twitter | LinkedIn
MoneyNeverSleeps: X/Twitter | LinkedIn | Newsletter
In the run-up to the NATO summit in The Hague, the government is tightening the penalties for digital espionage. That raises the question: who is actually out to uncover our state secrets, or to break into vital infrastructure such as the power grid, the internet or traffic lights? And why? Dave and Harm, joined by Jordi van den Breekel (ethical hacker at KPMG), explain this in this episode by getting inside the mind of a hacker. This week's cyber tips: Hack The Box; Wired - 'SolarWinds: The Untold Story of the Boldest Supply-Chain Hack Ever'; Book: 'Careless People: A story of where I used to work' - Sarah Wynn-Williams. Het Digitale Front is made possible in part by KPMG. See the privacy policy at https://art19.com/privacy and the California privacy statement at https://art19.com/privacy#do-not-sell-my-info.
Forget the idea that only federal ministries and big billion-euro corporations are targets: one in four municipalities in Germany - smaller district towns and communities - has been the victim of a cyberattack in the past two years. Which makes us ask: why? What are the hackers after? And what can be done about it? Small, rural and easy to hack - that's the SWR3 Topthema with Lisa Reister.
Flixbus is not as green as the company itself claims on its website. The FOD Economie systematically monitors false climate claims, so-called greenwashing. Lien Meurisse of the FOD is a guest. Anyone can become a hacker today; it has become dead simple. Cybercrime expert Peter Lahousse warns about this in the wake of the VRT NWS undercover investigation into pro-Russian saboteurs. Luc van Bakel fact-checks photos of Ukrainians with amputated limbs.
After last year's row, when Zermatt was dropped from the World Cup calendar, the mountain railways have reached a new agreement with Swiss-Ski and the international federation FIS: World Cup races are to take place in March 2028. Also in the programme: · A glitch in the city's vote count: Bern reported incomplete results for the cantonal and federal votes. The cause was a checkbox in the software that had not been ticked. · The trial concerning the scene venue Brasserie Lorraine has begun: everyone questioned from the cooperative pub's circle is refusing to testify. · World-champion mood in Boltigen: the municipality is celebrating Franjo von Allmen's World Championship downhill victory.
This episode is about a small but great revolution in our classrooms, reported by Professor Block of the IPN, who introduces the special measuring device "Laborino". Thore Tiemann of the University of Lübeck also reports on Lühack25, a fascinating computer-science project that can captivate entire school communities and is great fun to boot. Laborino / Lühack / Schule Aktuell / Traumberuf Lehrerin / Lehrer. Schule aktuell is published as an editorial contribution of the Ministry of General and Vocational Education, Science, Research and Culture of the State of Schleswig-Holstein. Opinions expressed do not necessarily reflect those of the minister, the editorial team or the production team. Production team: Beate Hinse, Patricia Zimnik, Kai-Ole Nissen, David Ermes, Jan Martensen. Speaker: Jan Martensen
Martin Tschirsich of the Chaos Computer Club managed, with relatively little effort, to gain access to a large number of electronic patient records. He says it is important to him that those affected know the risks of consenting to the e-record. From WDR 5.
Applications for the 2025 AI Engineer Summit are up, and you can save the date for AIE Singapore in April and AIE World's Fair 2025 in June.

Happy new year, and thanks for 100 great episodes! Please let us know what you want to see/hear for the next 100!

Full YouTube Episode with Slides/Charts. Like and subscribe and hit that bell to get notifs!

Timestamps:
* 00:00 Welcome to the 100th Episode!
* 00:19 Reflecting on the Journey
* 00:47 AI Engineering: The Rise and Impact
* 03:15 Latent Space Live and AI Conferences
* 09:44 The Competitive AI Landscape
* 21:45 Synthetic Data and Future Trends
* 35:53 Creative Writing with AI
* 36:12 Legal and Ethical Issues in AI
* 38:18 The Data War: GPU Poor vs. GPU Rich
* 39:12 The Rise of GPU Ultra Rich
* 40:47 Emerging Trends in AI Models
* 45:31 The Multi-Modality War
* 01:05:31 The Future of AI Benchmarks
* 01:13:17 Pionote and Frontier Models
* 01:13:47 Niche Models and Base Models
* 01:14:30 State Space Models and RWKV
* 01:15:48 Inference Race and Price Wars
* 01:22:16 Major AI Themes of the Year
* 01:22:48 AI Rewind: January to March
* 01:26:42 AI Rewind: April to June
* 01:33:12 AI Rewind: July to September
* 01:34:59 AI Rewind: October to December
* 01:39:53 Year-End Reflections and Predictions

Transcript

[00:00:00] Welcome to the 100th Episode!

[00:00:00] Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx for the 100th time today.

[00:00:12] swyx: Yay, um, and we're so glad that, yeah, you know, everyone has, uh, followed us in this journey. How do you feel about it? 100 episodes.

[00:00:19] Alessio: Yeah, I know.

[00:00:19] Reflecting on the Journey

[00:00:19] Alessio: Almost two years that we've been doing this. We've had four different studios. Uh, we've had a lot of changes. You know, we used to do this lightning round. When we first started that we didn't like, and we tried to change the question.
[00:00:32] swyx: The answer was Cursor and Perplexity.

[00:00:34] Alessio: Yeah, I love Midjourney. It's like, do you really not like anything else?

[00:00:38] Alessio: Like what's, what's the unique thing? And I think, yeah, we, we've also had a lot more research driven content. You know, we had like Tri Dao, we had, you know, Jeremy Howard, we had more folks like that.

[00:00:47] AI Engineering: The Rise and Impact

[00:00:47] Alessio: I think we want to do more of that too in the new year, like having, uh, some of the Gemini folks, both on the research and the applied side.

[00:00:54] Alessio: Yeah, but it's been a ton of fun. I think we both started, I wouldn't say as a joke, we were kind of like, Oh, we [00:01:00] should do a podcast. And I think we kind of caught the right wave, obviously. And I think your Rise of the AI Engineer post just kind of got people to congregate, and then the AI engineer summit.

[00:01:11] Alessio: And that's why when I look at our growth chart, it's kind of like a proxy for like the AI engineering industry as a whole, which is almost like, like, even if we don't do that much, we keep growing just because there's so many more AI engineers. So did you expect that growth or did you expect that would take longer for like the AI engineer thing to kind of like become, you know, everybody talks about it today.

[00:01:32] swyx: So, the sign of that, that we have won is that Gartner puts it at the top of the hype curve right now. So Gartner has called the peak in AI engineering. I did not expect, um, to what level. I knew that I was correct when I called it because I did like two months of work going into that. But I didn't know, you know, how quickly it could happen, and obviously there's a chance that I could be wrong.

[00:01:52] swyx: But I think, like, most people have come around to that concept. Hacker News hates it, which is a good sign.
But there's enough people that have defined it, you know. GitHub, when [00:02:00] they launched GitHub Models, which is the Hugging Face clone, they put AI engineers in the banner, like, above the fold, like, in big letters. So I think it's like kind of arrived as a meaningful and useful definition.

[00:02:12] swyx: I think people are trying to figure out where the boundaries are. I think that was a lot of the quote unquote drama that happens behind the scenes at the World's Fair in June. Because I think there's a lot of doubt or questions about where ML engineering stops and AI engineering starts. That's a useful debate to be had.

[00:02:29] swyx: In some sense, I actually anticipated that as well. So I intentionally did not put a firm definition there, because most of the successful definitions are necessarily underspecified, and it's actually useful to have different perspectives, and you don't have to specify everything from the outset.

[00:02:45] Alessio: Yeah, I was at, um, AWS re:Invent, and the line to get into like the AI engineering talk, so to speak, which is, you know, applied AI and whatnot, was like, there are like hundreds of people just in line to go in.

[00:02:56] Alessio: I think that's kind of what enabled these [00:03:00] people, right? Which is what you kind of talked about. It's like, Hey, look, you don't actually need a PhD, just, yeah, just use the model. And then maybe we'll talk about some of the blind spots that you get as an engineer with the earlier posts that we also had on the Substack.

[00:03:11] Alessio: But yeah, it's been a heck of a, heck of a two years.

[00:03:14] swyx: Yeah.

[00:03:15] Latent Space Live and AI Conferences

[00:03:15] swyx: You know, I was, I was trying to view the conference as like, so NeurIPS is I think like 16, 17,000 people. And the Latent Space Live event that we held there was 950 signups, I think. The AI world, the ML world, is still very much research heavy.
And that's as it should be, because ML is very much in a research phase.

[00:03:34] swyx: But as we move this entire field into production, I think that ratio inverts into becoming more engineering heavy. So at least I think engineering should be on the same level, even if it's never as prestigious, like it'll always be low status, because at the end of the day, you're manipulating APIs or whatever.

[00:03:51] swyx: But yeah, wrapping GPTs, but there's going to be an increasing stack and an art to doing these, these things well. And I, you know, I [00:04:00] think that's what we're focusing on for the podcast, the conference, and basically everything I do seems to make sense. And I think we'll, we'll talk about the trends here that apply.

[00:04:09] swyx: It's, it's just very strange. So, like, there's a mix of, like, keeping on top of research while not being a researcher and then putting that research into production. So, like, people always ask me, like, why are you covering NeurIPS? Like, this is an ML research conference, and I'm like, well, yeah, I mean, we're not going to, to like, understand everything or reproduce every single paper, but the stuff that is being found here is going to make it through into production at some point, you hope.

[00:04:32] swyx: And then actually like when I talk to the researchers, they actually get very excited, because they're like, oh, you guys are actually caring about how this goes into production, and that's what they really, really want. The measure of success was previously just peer review, right? Getting 7s and 8s on their, um, academic review conferences and stuff. Citations is one metric, but money is a better metric.

[00:04:51] Alessio: Money is a better metric. Yeah, and there were about 2,200 people on the live stream or something like that. Yeah, yeah. Twenty-two hundred on the live stream. So [00:05:00] I try my best to moderate, but it was a lot spicier in person with Jonathan and, and Dylan.
Yeah, than it was in the chat on YouTube.

[00:05:06] swyx: I would say that I actually also created [00:05:09] Latent Space Live in order to address flaws that are perceived in academic conferences. This is not NeurIPS specific, it's ICML, NeurIPS. Basically, it's very sort of oriented towards the PhD student, uh, market, job market, right? Like literally all, basically everyone's there to advertise their research and skills and get jobs.

[00:05:28] swyx: And then obviously all the, the companies go there to hire them. And I think that's great for the individual researchers, but for people going there to get info it's not great, because you have to read between the lines, bring a ton of context in order to understand every single paper. So what is missing is effectively what I ended up doing, which is domain by domain, go through and recap the best of the year.

[00:05:48] swyx: Survey the field. And there are, like, NeurIPS had a, uh, I think ICML had a, like, a position paper track, NeurIPS added a benchmarks, uh, datasets track. These are ways in which to address that [00:06:00] issue. Uh, there's always workshops as well. Every, every conference has, you know, a last day of workshops and stuff that provide more of an overview.

[00:06:06] swyx: But they're not specifically prompted to do so. And I think really, uh, organizing a conference is just about getting good speakers and giving them the correct prompts. And then they will just go and do that thing and they do a very good job of it. So I think Sarah did a fantastic job with the startups prompt.

[00:06:21] swyx: I can't list everybody, but we did best of 2024 in startups, vision, open models, post-transformers, synthetic data, small models, and agents. And then the last one was the, uh, and then we also did a quick one on reasoning with Nathan Lambert. And then the last one, obviously, was the debate that people were very hyped about.

[00:06:39] swyx: It was very awkward.
And I'm really, really thankful for Jonathan Frankle, basically, who stepped up to challenge Dylan. Because Dylan was like, yeah, I'll do it. But he was pro scaling. And I think everyone who is like in AI is pro scaling, right? So you need somebody who's ready to publicly say, no, we've hit a wall.

[00:06:57] swyx: So that means you're saying Sam Altman's wrong. [00:07:00] You're saying, um, you know, everyone else is wrong. It helps that this was the day before Ilya went on, went up on stage and then said pre-training has hit a wall. And data has hit a wall. So actually Jonathan ended up winning, and then Ilya supported that statement, and then Noam Brown on the last day further supported that statement as well.

[00:07:17] swyx: So it's kind of interesting that I think the consensus kind of going in was that we're not done scaling, like you should believe in the bitter lesson. And then, four straight days in a row, you had Sepp Hochreiter, who is the creator of the LSTM, along with everyone's favorite OG in AI, which is Juergen Schmidhuber.

[00:07:34] swyx: He said that, um, we're pre-training into a wall, or like, we've run into a different kind of wall. And then we have, you know, Jonathan Frankle, Ilya, and then Noam Brown all saying variations of the same thing, that we have hit some kind of wall in the status quo of what pre-trained, scaling large pre-trained models has looked like, and we need a new thing.

[00:07:54] swyx: And obviously the new thing for people is, either people are calling it inference time compute or test time [00:08:00] compute. I think the collective terminology has been inference time, and I think that makes sense, because test time, calling it test, has a very pre-trained bias, meaning that the only reason for running inference at all is to test your model.

[00:08:11] swyx: That is not true. Right. Yeah. So, so, I quite agree that.
OpenAI seems to have adopted, or the community seems to have adopted, this terminology of ITC instead of TTC. And that, that makes a lot of sense, because like now we care about inference, even right down to compute optimality. Like I actually interviewed this author who revisited or reviewed the Chinchilla paper.

[00:08:31] swyx: The Chinchilla paper is compute optimal training, but what is not stated in there is that it's pre-training compute optimal training. And once you start caring about inference compute optimal training, you have a different scaling law. And in a way that we did not know last year.

[00:08:45] Alessio: I wonder, because John is, he's also on the side of attention is all you need. [00:08:49] Like he had the bet with Sasha. So I'm curious, like he doesn't believe in scaling, but he thinks the transformer, I wonder if he's still... So, so,

[00:08:56] swyx: so he, obviously everything is nuanced, and you know, I told him to play a character [00:09:00] for this debate, right? So he actually does. Yeah. He still, he still believes that we can scale more.

[00:09:04] swyx: Uh, he just assumed the character to be very game for, for playing this debate. So even more kudos to him that he assumed a position that he didn't believe in and still won the debate.

[00:09:16] Alessio: Get rekt, Dylan. Um, do you just want to quickly run through some of these things? Like, uh, Sarah's presentation, just the highlights.

[00:09:24] swyx: Yeah, we can't go through everyone's slides, but I pulled out some things as a factor of, like, stuff that we were going to talk about. And we'll [00:09:30] Alessio: publish [00:09:31] swyx: the rest. Yeah, we'll publish on this feed the best of 2024 in those domains. And hopefully people can benefit from the work that our speakers have done.

[00:09:39] swyx: But I think it's, uh, these are just good slides.
And I've been, I've been looking for a sort of end of year recap from, from people.

[00:09:44] The Competitive AI Landscape

[00:09:44] swyx: The field has progressed a lot. You know, I think the max Elo in 2023 on LMSys used to be 1200 for LMSys Elos. And now everyone is at least at, uh, 1275 in their Elos, and this is across Gemini, ChatGPT, [00:10:00] Grok, [00:10:01] swyx: 01.AI, with their Yi-Large model, and Anthropic, of course. It's a very, very competitive race. There are multiple frontier labs all racing, but there is a clear tier zero frontier. And then there's like a tier one. It's like, I wish I had everything else. Tier zero is extremely competitive. It's effectively now a three horse race between Gemini, uh, Anthropic and OpenAI.

[00:10:21] swyx: I would say that people are still holding out a candle for xAI. xAI, I think, for some reason, because their API was very slow to roll out, is not included in these metrics. So it's actually quite hard to put on there. As someone who also does charts, xAI is continually snubbed because they don't work well with the benchmarking people.

[00:10:42] swyx: Yeah, yeah, yeah. It's a little trivia for why xAI always gets ignored. The other thing is market share. So these are slides from Sarah. We have it up on the screen. It has gone from very heavily OpenAI. So we have some numbers and estimates. These are from Ramp. Estimates of OpenAI market share in [00:11:00] December 2023.

[00:11:01] swyx: And this is basically, what is it, GPT being 95 percent of production traffic. And I think if you correlate that with stuff that we asked Harrison Chase on the LangChain episode, it was true. And then Claude 3 launched middle of this year. I think Claude 3 launched in March, Claude 3.5 Sonnet was in June-ish.

[00:11:23] swyx: And you can start seeing the market share shift towards, uh, towards Anthropic very, very aggressively. The more recent one is Gemini.
So if I scroll down a little bit, this is an even more recent dataset. So Ramp's dataset ends in September 2024. Gemini has basically launched a price war at the low end, uh, with Gemini Flash, uh, being basically free for personal use.

[00:11:44] swyx: Like, I think people don't understand the free tier. It's something like a billion tokens per day. Unless you're trying to abuse it, you cannot really exhaust your free tier on Gemini. They're really trying to get you to use it. They know they're in like third place, um, fourth place, depending how you, how you count.

[00:11:58] swyx: And so they're going after [00:12:00] the lower tier first, and then, you know, maybe the upper tier later, but yeah, Gemini Flash, according to OpenRouter, is now 50 percent of their OpenRouter requests. Obviously, these are the small requests. These are small, cheap requests that are mathematically going to be more.

[00:12:15] swyx: The smart ones obviously are still going to OpenAI. But, you know, it's a very, very big shift in the market. Like basically 2023, 2022, going into 2024, OpenAI has gone from 95 market share to, yeah, reasonably somewhere between 50 to 75 market share.

[00:12:29] Alessio: Yeah. I'm really curious how Ramp does the attribution to the model. [00:12:32] Alessio: If it's API, because I think it's all credit card spend. Well, but it's all, the credit card doesn't say. Maybe. Maybe the, maybe when they do expenses, they upload the PDF, but yeah, the, the Gemini one I think makes sense. I think that was one of my main 2024 takeaways, that like the best small model companies are the large labs, which is not something I would have thought, that the open source kind of like long tail would be like the small model.

[00:12:53] swyx: Yeah, different sizes of small models we're talking about here, right? Like so small model here for Gemini is 8B, [00:13:00] right? Uh, mini.
We don't know what the small model size is, but yeah, it's probably in the double digits or maybe single digits, but probably double digits. The open source community has kind of focused on the one to three B size.

[00:13:11] swyx: Mm-hmm. Yeah. Maybe [00:13:12] swyx: zero, maybe 0.5B, uh, that's Moondream, and if that is small for you, then, then that's great. It makes sense that we, we have a range for small now, which is like, may, maybe one to five B. Yeah. I'll even put that at, at, at the high end. And so this includes Gemma from Gemini as well. But also includes the Apple Foundation models, which I think Apple Foundation is 3B.

[00:13:32] Alessio: Yeah. No, that's great. I mean, I think at the start small just meant cheap. I think today small is actually a more nuanced discussion, you know, that people weren't really having before.

[00:13:43] swyx: Yeah, we can keep going. This is a slide that I slightly disagree with Sarah on. She's pointing to the Scale SEAL leaderboard. I think the researchers that I talked with at NeurIPS were kind of positive on this, because basically you need private test [00:14:00] sets to prevent contamination.

[00:14:02] swyx: And Scale is one of maybe three or four people this year that has really made an effort in doing a credible private test set leaderboard. Llama 405B does well compared to Gemini and GPT-4o. And I think that's good. I would say that, you know, it's good to have an open model that is that big, that does well on those metrics.

[00:14:23] swyx: But anyone putting 405B in production will tell you, if you scroll down a little bit to the Artificial Analysis numbers, that it is very slow and very expensive to infer. Um, it doesn't even fit on like one node of, uh, of H100s. Cerebras will be happy to tell you they can serve 405B on their super large chips.

[00:14:42] swyx: But, um, you know, if you need to do anything custom to it, you're still kind of constrained. So, is 405B really that relevant?
Like, I think most people are basically saying that they only use 405B as a teacher model to distill down to something. Even Meta is doing it. With Llama 3.3, they only launched the 70B, because they used 405B to distill the 70B.

[00:15:03] swyx: So I don't know if open source is keeping up. I think the open source industrial complex is very invested in telling you that the gap is narrowing. I kind of disagree. I think that the gap is widening with o1. I think there are very, very smart people trying to narrow that gap, and they should.

[00:15:22] swyx: I really wish them success, but you cannot use a chart where the benchmark is nearing 100 in your saturation chart and say, look, the distance between open source and closed source is narrowing. Of course it's going to narrow, because you're near 100. This is stupid. But in metrics that matter, is open source narrowing?

[00:15:38] swyx: Probably not for o1 for a while. And it's really up to the open source guys to figure out if they can match o1 or not.

[00:15:46] Alessio: I think inference-time compute is bad for open source, just because, you know, Zuck can donate the flops at training time, but he cannot donate the flops at inference time. So it's really hard to actually keep up on that axis.

[00:15:59] Alessio: Big, big business model shift. So I don't know what that means for the GPU clouds. I don't know what that means for the hyperscalers. But obviously the big labs have a lot of advantage, because it's not a static artifact that you're putting the compute into. You're kind of still doing that, but then you're putting a lot of compute at inference too.

[00:16:17] swyx: Yeah, yeah, yeah. I mean, Llama 4 will be reasoning oriented. We talked with Thomas Scialom. Kudos for getting that episode together. That was really nice. Good, well timed. Actually, I connected with the Meta AI guy at NeurIPS, and yeah, we're going to coordinate something for Llama 4.
Yeah, yeah.

[00:16:32] Alessio: And our friend, yeah, Clara Shih just joined to lead the business agent side. So I'm sure we'll have her on in the new year.

[00:16:39] swyx: Yeah. So, my comment on the business model shift, this is super interesting. Apparently it is wide knowledge that OpenAI wanted more than $6.6 billion for their fundraise. They wanted to raise more, and they did not.

[00:16:51] swyx: And what that means is basically, it's very convenient that we're not getting GPT-5, which would have been a larger pretrain with a lot of upfront money. Instead we're converting fixed costs into variable costs, right, and passing it on effectively to the customer. And it's so much easier to take margin there, because you can directly attribute it: oh, you're using this more.

[00:17:12] swyx: Therefore you pay more of the cost, and I'll just slap a margin in there. So that lets you control your gross margin and tie your inference spend accordingly. And it's just really interesting that this change in the inference paradigm has arrived exactly at the same time that the funding environment for pretraining is effectively drying up, kind of.

[00:17:36] swyx: I feel like maybe the VCs are very in tune with research anyway, so they would have noticed this, but it's just interesting.

[00:17:43] Alessio: Yeah, and I was looking back at our yearly recap of last year, and the big thing was the Mixtral price fights, you know. And I think now it's almost like there's nowhere to go. Gemini Flash is basically giving it away for free.

[00:17:55] Alessio: So I think this is a good way for the labs to generate more revenue and pass down some of the compute to the customer. I think they're going to

[00:18:02] swyx: Keep going. I think the $2,000 tier will come.

[00:18:05] Alessio: Yeah, I know.
Totally. I mean, next year, the first thing I'm doing is signing up for Devin. Signing up for ChatGPT Pro.

[00:18:12] Alessio: Just to try. I just want to see: what does it look like to spend a thousand dollars a month on AI?

[00:18:17] swyx: Yes. I think if your job is at least AI content creator, or VC, or, you know, someone whose job it is to stay on top of things, you should already be spending like a thousand dollars a month on stuff.

[00:18:28] swyx: And then, obviously, easy to spend, hard to use. You have to actually use it. The good thing is that Google actually lets you do a lot of stuff for free now. Like Deep Research, which they just launched, uses a ton of inference, and it's free while it's in preview.

[00:18:45] Alessio: Yeah, they need to put that in Lindy.

[00:18:47] Alessio: I've been using Lindy lately. I've built a bunch of things since they added flows, because I like the new thing. It's pretty good. I even did a phone call assistant. They just launched Lindy Voice. I think once they get advanced-voice-mode-like capability. Today it's still speech-to-text, and you can kind of tell.

[00:19:06] Alessio: But it's good for reservations and things like that. So I have a meeting prepper thing.

[00:19:13] swyx: It's good. Okay, I feel like we've covered a lot of stuff. I think we will go over the individual talks in a separate episode. I don't want to take too much time with this stuff, but suffice to say that there is a lot of progress in each field.

[00:19:28] swyx: We covered vision. Basically, this was all the audience voting for what they wanted, and then I just invited the best people I could find in each area, especially agents. Graham, who I talked to at ICML in Vienna, is currently still number one.
It's very hard to stay on top of SWE-Bench.

[00:19:45] swyx: OpenHands is currently still number one on SWE-Bench Full, which is the hardest one. He had very good thoughts on agents, which I'll highlight for people. Everyone is saying 2025 is the year of agents, just like they said last year. But he had thoughts on, like, eight of the frontier problems to solve in agents.

[00:20:03] swyx: And so I'll highlight that talk as well.

[00:20:05] Alessio: Yeah. Number six, which is having agents learn more about the environment, has been super interesting to us as well, just to think through. Because, yeah, how do you put an agent in an enterprise, where most things in an enterprise have never been public? You know, a lot of the tooling, like the code bases and things like that.

[00:20:23] Alessio: So there's no indexing and RAG. Well, yeah, but it's more like: you can't really RAG things that are not documented, but people know them based on how they've been doing it. So I think there's almost this institutional knowledge. The boring word for it is business process extraction.

[00:20:38] Alessio: It's like, how do you actually understand how these things are done? And I think today the problem is that the agents most people are building are good at following instructions, but not as good at extracting them from you. So I think that will be a big unlock. Just to touch quickly on the Jeff Dean thing:

[00:20:55] Alessio: I thought it was pretty good. I mean, we'll link it in the show notes, but I think the main focus was, how do you use ML to optimize the systems, instead of just focusing on ML to do something else? Yeah, I think speculative decoding, we had, you know, Eugene from RWKV on the podcast before; he's doing a lot of that with Featherless AI.

[00:21:12] swyx: Everyone is.
I would say it's the norm. I'm a little bit uncomfortable with how much it costs, because it does use more of the GPU per call. But because everyone is so keen on fast inference, then yeah, it makes sense.

[00:21:24] Alessio: Exactly. Um, yeah, but we'll link that. Obviously Jeff is great.

[00:21:30] swyx: Jeff's talk wasn't focused on Gemini.

[00:21:33] swyx: I think people got the wrong impression from my tweet. It's more about how Google approaches ML and uses ML to design systems, and then the systems feed back into ML. And I think this ties in with Loubna's talk

[00:21:45] Synthetic Data and Future Trends

[00:21:45] swyx: on synthetic data, where it's basically the story of the bootstrapping of humans and AI in AI research, or AI in production.

[00:21:53] swyx: So her talk was on synthetic data: how much synthetic data has grown in 2024 on the pretraining side, the post-training side, and the eval side. And I think Jeff then also extended it basically to chips, to chip design. So he spent a lot of time talking about AlphaChip. And most of us in the audience are like, we're not working on hardware, man.

[00:22:11] swyx: Like, you guys are great. TPU is great. Okay, we'll buy TPUs.

[00:22:14] Alessio: And then there was the earlier talk. Yeah. But then we have, uh, I don't know if we're calling them essays. What are we calling these?

[00:22:23] swyx: For me, it's just a bonus for Latent Space supporters, because I feel like they haven't been getting anything.

[00:22:29] swyx: And then I wanted a more high-frequency way to write stuff. Like, that one I wrote in an afternoon. I think basically we now have an answer to what Ilya saw. It's one year since the blip. And we know what he saw in 2014. We know what he saw in 2024. We think we know what he sees in 2025.
He gave some hints, and then we have vague indications of what he saw in 2023.

[00:22:54] swyx: So that was the... oh, and then 2016 as well, because of this lawsuit with Elon, OpenAI is publishing emails, like Sam's personal text messages to Shivon Zilis. So we have emails from Ilya saying, this is what we're seeing in OpenAI, and this is why we need to scale up GPUs. And I think it's very prescient in 2016 to write that.

[00:23:16] swyx: And so it is basically his insights. It's him and Greg basically just driving the scaling up of OpenAI, while they're still playing Dota. They're like, no, we see the path here.

[00:23:30] Alessio: Yeah, and it's funny, they even mention, you know, we can only train on 1v1 Dota. We need to train on 5v5, and that takes too many GPUs.

[00:23:37] swyx: Yeah, and at least for me, I can speak for myself: I didn't see the path from Dota to where we are today. I think even if you asked them, they wouldn't necessarily draw a straight line.

[00:23:47] Alessio: Yeah, no, definitely. But I think that was the whole idea of the RL thing, and we talked about this with Nathan on his podcast.

[00:23:55] Alessio: It's like, with RL you can get very good at specific things, but then you can't really generalize as much. And I think the language models are the opposite: you're going to throw all this data at them and scale them up, but then you really need to drive them home on a specific task later on.

[00:24:08] Alessio: And we'll talk about the OpenAI reinforcement fine-tuning announcement too, and all of that. But yeah, I think scale is all you need. That's kind of what Ilya will be remembered for. And I think just maybe to clarify on the "pre-training is over" thing that people love to tweet:
I think the point of the talk was: everybody, we're scaling these chips, we're scaling the compute, but the second ingredient, which is data, is not scaling at the same rate.

[00:24:35] Alessio: So it's not necessarily that pre-training is over. It's kind of like, what got us here won't get us there. In his email, he predicted like 10x growth every two years or something like that. And I think maybe now it's like, you know, you can 10x the chips again, but

[00:24:49] swyx: I think it's 10x per year. Was it? I don't know.

[00:24:52] Alessio: Exactly. And Moore's law is like 2x. So it's much faster than that. And yeah, I like the fossil fuel of AI analogy. It's kind of like, you know, the little background tokens thing. So the OpenAI reinforcement fine-tuning is basically: instead of fine-tuning on data, you fine-tune against a reward model.

[00:25:09] Alessio: So instead of being data driven, it's task driven. And I think people have tasks to do; they don't really have a lot of data. So I'm curious to see how that changes how many people fine-tune, because I think this is what people run into. It's like, oh, you can fine-tune Llama. And it's like, okay, where do I get the data to fine-tune it on? So it's great that we're moving that forward.

[00:25:27] Alessio: And then I really liked that he had this chart where, you know, the brain mass and body mass thing: mammals scaled linearly in brain versus body size, and then humans kind of broke off the slope. So it's almost like maybe the mammal slope is the pre-training slope, and the post-training slope is the human one.

[00:25:49] swyx: Yeah. I wonder what the... I mean, we'll know in 10 years, but I wonder what the y-axis is for Ilya's SSI. We'll try to get them on.

[00:25:57] Alessio: Ilya, if you're listening, you're welcome here.
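The "task driven instead of data driven" setup described above can be sketched as a tiny loop. This is a hypothetical illustration, not OpenAI's actual RFT API: the names `grader`, `rft_step`, and `toy_model` are invented for the example, and it takes a rejection-sampling view of the idea (sample completions, score them with a grader, and keep only the high-reward ones as fine-tuning targets).

```python
import random

def grader(prompt, completion):
    """Return a reward in [0, 1]. Here: does the answer match a known result?"""
    expected = {"2+2": "4", "3*5": "15"}
    return 1.0 if completion.strip() == expected.get(prompt) else 0.0

def rft_step(model, prompts, n_samples=4):
    """One conceptual step: sample completions, grade them, and keep only
    high-reward (prompt, completion) pairs as fine-tuning targets."""
    keep = []
    for p in prompts:
        for _ in range(n_samples):
            c = model(p)                # sample a completion
            if grader(p, c) >= 1.0:     # grade it against the task
                keep.append((p, c))     # winners become training data
    return keep

def toy_model(prompt):
    # Stand-in "model" that sometimes answers correctly.
    return random.choice(["4", "15", "banana"])

random.seed(0)
data = rft_step(toy_model, ["2+2", "3*5"])
# Every kept pair is guaranteed to score 1.0 under the grader.
```

The point of the sketch is the inversion Alessio describes: you supply a scoring function for your task rather than a labeled dataset, and the "data" falls out of grading the model's own samples.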
Yeah, and then he had, you know, what comes next: agents, synthetic data, inference compute. I thought all of that was, like...

[00:26:05] swyx: I don't think he was dropping any alpha there. Yeah, yeah, yeah.

[00:26:07] Alessio: Yeah. Any other NeurIPS highlights?

[00:26:10] swyx: I think that there was comparatively a lot more work. Oh, by the way, I need to plug that my friend Yi made this nice little paper list. She called it the must-read papers of 2024.

[00:26:20] swyx: Yeah, that was really nice.

[00:26:26] swyx: So I laid out some of these at NeurIPS, and it was just gone. Everyone just picked it up, because people are dying for a little guidance and visualization. And so I thought it was really super nice that we got there.

[00:26:38] Alessio: Should we do a Latent Space book for each year? I thought about it. For each year we should. A coffee table book. Yeah. Okay, put it in the will. Hi, Will. By the way, we haven't introduced you. He's our new, you know, generalist, our Jamie.

[00:26:52] swyx: You need to pull up more things. One thing I saw... okay, one fun one, and then one more general one. So the fun one is this paper on agent collusion. This is a paper on steganography.

[00:27:06] swyx: This is "Secret Collusion among AI Agents: Multi-Agent Deception via Steganography". I tried to go to NeurIPS in order to find these kinds of papers, because, like, NeurIPS this year has a lottery system. A lot of people actually go and don't buy tickets, because they just go and attend the side events.

[00:27:22] swyx: And then the people who do go end up crowding around the most popular papers, which you already know and already read before you showed up to NeurIPS. So the only reason you go is to talk to the paper authors. But there are something like 10,000 others.
All these papers out there that, you know, are just people's work that they did over the year and failed to get attention for one reason or another.

[00:27:42] swyx: And this was one of them. It was all the way at the back. And this is a DeepMind paper that actually focuses on collusion between AI agents by hiding messages in the text that they generate. So that's what steganography is. A very simple example would be the first letter of every word.

[00:27:57] swyx: If you pick those out, they send a different message than the text itself. But something I've always emphasized is: we read left to right; LLMs can read up, down, sideways, you know, in random character order, and it's the same to them as it is to us. So if we were ever to get, you know, self-motivated, unaligned LLMs that were trying to collaborate to take over the planet,

[00:28:19] swyx: this would be how they do it. They spread messages among us in the messages that we generate. And he developed a scaling law for that. So he marked, I'm showing it on screen right now, the emergence of this phenomenon. Basically, for cipher encoding, GPT-2, Llama 2, Mixtral, and GPT-3.5 show zero capability, and then suddenly GPT-4 does.

[00:28:40] swyx: And this is the kind of Jason Wei type emergence property that people look for. I think what made this paper stand out as well is that he developed the benchmark for steganographic collusion, and he also focused on Schelling point collusion, which is very low coordination. For agreeing on an encoding and decoding format, you kind of need to have some agreement on that.

[00:29:00] swyx: But Schelling point means very, very low or almost no coordination. So for example, if the only message I give you is "meet me in New York", and you're not aware of where or when, you would probably meet me at Grand Central Station.
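The first-letter scheme swyx describes is an acrostic, and it is simple enough to sketch in a few lines. A minimal illustration, with a made-up word list and payload (nothing here is from the paper itself):

```python
# Minimal acrostic steganography sketch: the hidden message rides in the
# first letter of each cover word. A human reads innocuous prose; a reader
# scanning "sideways" (first characters only) recovers the payload.

WORDS = {  # an innocuous cover word for each letter we need
    "a": "all", "e": "every", "g": "good", "n": "night", "t": "take",
}

def encode(secret: str) -> str:
    """Build cover text whose word initials spell out `secret`."""
    return " ".join(WORDS[c] for c in secret)

def decode(text: str) -> str:
    """Recover the payload from the first letter of every word."""
    return "".join(word[0] for word in text.split())

cover = encode("agent")       # "all good every night take"
assert decode(cover) == "agent"
```

The paper's concern is that two models can converge on a scheme like this with little or no explicit coordination, because an obvious rule such as "first letter of every word" is itself a Schelling point among possible encodings.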
That is... Grand Central Station is a Schelling point.

[00:29:16] swyx: And it's probably sometime during the day. The Schelling point of New York is Grand Central. To that extent, Schelling points for steganography are things like the common decoding methods that we talked about. It will be interesting at some point in the future, when we are worried about alignment.

[00:29:30] swyx: It is not interesting today, but it's interesting that DeepMind is already thinking about this.

[00:29:36] Alessio: I think that's one of the hardest things about NeurIPS. It's the long tail.

[00:29:41] swyx: I found a pricing guy. I'm going to feature him on the podcast. Basically, this guy from NVIDIA worked out the optimal pricing for language models.

[00:29:51] swyx: It's basically an econometrics paper at NeurIPS, where everyone else is talking about GPUs, and the guy with the GPUs is talking about economics instead.

[00:30:00] swyx: That was the sort of fun one. The pattern I saw is that model papers at NeurIPS are kind of dead. No one really presents models anymore. It's just datasets.

[00:30:12] swyx: This is what all the grad students are working on. There was a datasets track, and I was looking around like, you don't need a datasets track, because every paper is a datasets paper. And datasets and benchmarks are kind of flip sides of the same thing. So yeah, if you're a grad student, you're GPU poor, so you kind of work on that.

[00:30:30] swyx: And then the big model people walk around, pick the datasets they like, and use them in their models. And that's kind of how it develops. I feel like last year you had people like Haotian Liu, who worked on LLaVA, which is: take Llama and add vision.

[00:30:47] swyx: And then, obviously, xAI hired him, and he added vision to Grok. Now he's the vision Grok guy.
This year, I don't think there was any of those.

[00:30:55] Alessio: What were the most popular orals? Last year it was the Monarch Mixer, I think, that was the most attended. Yeah, I need to look it up. I mean, if nothing comes to mind, that's also kind of an answer in a way.

[00:31:10] Alessio: But I think last year there was a lot of interest in furthering models and different architectures and all of that.

[00:31:16] swyx: I will say that I felt the oral picks this year were not very good. Either that, or maybe it's just a reflection of how I have changed in terms of how I view papers.

[00:31:29] swyx: So, in my estimation, two of the best dataset papers this year were DataComp and RefinedWeb or FineWeb. These are two actually industrially used papers that weren't highlighted. I think DCLM got the spotlight; FineWeb didn't even get the spotlight. So it's just that the picks were different.

[00:31:48] swyx: But one thing that does get a lot of play, that a lot of people are debating, is the role of schedules. This is the Schedule-Free optimizer paper from Meta, from Aaron Defazio. And this year in the ML community there's been a lot of chat about Shampoo, SOAP, all the bathroom amenities for optimizing your learning rates.

[00:32:08] swyx: And most people at the big labs who I asked about this say that it's cute, but it's not something that matters. I don't know, but it's something that was discussed and very, very popular.

[00:32:19] Alessio: Four Wars of AI recap, maybe, just quickly. Where do you want to start? Data?

[00:32:26] swyx: So to remind people, this is the Four Wars piece that we did as one of our earlier recaps of this year.

[00:32:31] swyx: And the belligerents on the left are journalists, writers, artists, anyone who owns IP, basically: New York Times, Stack Overflow, Reddit, Getty, Sarah Silverman, George R. R. Martin.
Yeah, and I think this year we can add Scarlett Johansson to that side of the fence. So anyone suing OpenAI, basically. I actually wanted to get a snapshot of all the lawsuits.

[00:32:52] swyx: I'm sure some lawyer can do it. That's the data quality war. On the right hand side, we have the synthetic data people, and I think we talked about Loubna's talk really showing how much synthetic data has come along this year. I think there was a bit of a fight between Scale AI and the synthetic data community, because Scale AI published a paper saying that synthetic data doesn't work.

[00:33:09] swyx: Surprise, surprise: Scale AI is the leading vendor of non-synthetic data.

[00:33:17] Alessio: Only cage-free annotated data is useful.

[00:33:21] swyx: So I think there's some debate going on there, but I don't think it's much debate anymore that synthetic data, at least for the reasons addressed in Loubna's talk, makes sense.

[00:33:32] swyx: I don't know if you have any perspectives there.

[00:33:34] Alessio: I think, again, going back to the reinforcement fine-tuning, I think that will change a little bit how people think about it. I think today people mostly use synthetic data for distillation, kind of fine-tuning a smaller model from a larger model.

[00:33:46] Alessio: I'm not super aware of how the frontier labs use it, outside of the Rephrasing the Web thing that Apple also did. But yeah, I think it'll be useful.
I think whether or not that gets us the big next step is maybe TBD, you know. I think people love talking about data because it's a GPU poor thing. Synthetic data is something people can actually do, so they feel more opinionated about it, compared to, yeah, the optimizer stuff, which they don't really work on.

[00:34:18] swyx: I think that there is an angle to the reasoning synthetic data. So this year we covered, in the paper club, the STaR series of papers: STaR, Quiet-STaR, V-STaR. It basically helps you to synthesize reasoning steps, or at least distill reasoning steps from a verifier. And if you look at the OpenAI RFT API that they announced, basically they're asking you to submit graders, or they choose from a preset list of graders.

[00:34:49] swyx: Basically, it feels like a way to create valid synthetic data for them to fine-tune their reasoning paths on. So I think that is another angle where it starts to make sense. And it's very funny that basically all the data quality wars, between, let's say, the music industry or the newspaper publishing industry or the textbook industry and the big labs,

[00:35:11] swyx: are all of the pre-training era. And then in the new era, the reasoning era, nobody has any problem with all the reasoning data, especially because it's all sort of math and science oriented, with very reasonable graders. I think the more interesting next step is: how does it generalize beyond STEM?

[00:35:27] swyx: We've been using o1 for summarization and creative writing and instruction following, and I would say it's underrated. I started using o1 in our intro songs, before we killed the intro songs, but it's very good at writing lyrics.
You know, I can actually say, I think one of the o1 pro demos

[00:35:46] swyx: that Noam was showing was that you can write an entire paragraph, or three paragraphs, without using the letter A, right?

[00:35:53] Creative Writing with AI

[00:35:53] swyx: So, literally anything: not even token-level but character-level manipulation, and counting, and instruction following. It's very, very strong.

[00:36:02] swyx: And so, no surprises: when I ask it to rhyme and to create song lyrics, it's going to do that very much better than previous models. So I think it's underrated for creative writing.

[00:36:11] Alessio: Yeah.

[00:36:12] Legal and Ethical Issues in AI

[00:36:12] Alessio: What do you think is the rationale that they're going to have in court? They don't show you the thinking traces of o1, but then, like, they're getting sued for using other publishers' data, you know, but on their end they're like, well, you shouldn't be using my data to then train your model.

[00:36:29] Alessio: So I'm curious to see how that kind of plays out.

[00:36:32] swyx: Yeah, I mean, OpenAI has many ways to punish people without taking them to court. They already banned ByteDance for distilling their info. And so anyone caught distilling the chain of thought will just be disallowed to continue on the API.

[00:36:44] swyx: And it's fine. It's no big deal. Like, I don't even think that's an issue at all, just because the chains of thought are pretty well hidden. You have to work very, very hard to get it to leak. And then even when it leaks the chain of thought, you don't know if it's the real one. The bigger concern is actually that there's not that much IP hiding behind it: Cosine, which we talked to on Dev Day, can just fine-tune GPT-4o to beat o1.

[00:37:13] swyx: Claude Sonnet so far is beating o1, at least o1-preview, on coding tasks without being a reasoning model; same for Gemini Pro or Gemini 2.0. So, how much is reasoning important? How much of a moat is there in all this proprietary training data that they've presumably accumulated?

[00:37:34] swyx: Because even DeepSeek was able to do it, and they had, you know, two months' notice to do this, to do R1. So it's actually unclear how much moat there is. Obviously, if you talk to the Strawberry team, they'll be like, yeah, I mean, we spent the last two years doing this. So we don't know. And it's going to be interesting, because there'll be a lot of noise from people who say they have inference-time compute and actually don't, because they just have fancy chain of thought.

[00:38:00] swyx: And then there are other people who actually do have very good chain of thought, and you will not see them on the same level as OpenAI, because OpenAI has invested a lot in building up the mythology of their team. Which makes sense. The real answer is somewhere in between.

[00:38:13] Alessio: Yeah, I think that's kind of the main data war story developing.

[00:38:18] The Data War: GPU Poor vs. GPU Rich

[00:38:18] Alessio: GPU poor versus GPU rich. Where do you think we are? Again, going back to the small model thing, there was a time in which the GPU poor were kind of the rebel faction, working on these models that were open and small and cheap. And I think today people don't really care as much about GPUs anymore.

[00:38:37] Alessio: You also see it in the price of the GPUs. That market has kind of plummeted, because people don't want to be GPU poor; they want to be GPU free. They don't even want to be poor; they just want to be, you know, completely without them. Yeah. How do you think about this war?
[00:38:52] swyx: You can tell me about this, but I feel like the appetite for GPU rich startups, where the funding plan is "we will raise 60 million and we'll give 50 of that to NVIDIA",

[00:39:01] swyx: that is gone, right? No one's pitching that. This was literally the exact plan of, like, four or five startups I can name, you know, this time last year. So yeah, GPU rich startups: gone.

[00:39:12] The Rise of GPU Ultra Rich

[00:39:12] swyx: But I think the GPU ultra rich, the GPU ultra high net worth, is still going. So now, you know, we had Leopold's essay on the trillion-dollar cluster.

[00:39:23] swyx: We're not quite there yet. We have multiple labs: xAI very famously, you know, Jensen Huang praising them for being best boy number one in spinning up a 100,000 GPU cluster in like 12 days or something. Likewise at Meta, likewise at OpenAI, likewise at the other labs as well. So the GPU ultra rich are going to keep doing that, because I think partially it's an article of faith now that you just need it.

[00:39:46] swyx: You don't even know what you're going to use it for. You just need it. And it makes sense, especially if we're going into more researchy territory than we were. Let's say 2020 to 2023 was "let's scale big models" territory, because we had GPT-3 in 2020, and we were like, okay, we'll go from 175B to 1.8T. And that was GPT-3 to GPT-4.

[00:40:05] swyx: Okay, that's done. As far as everyone is concerned, Claude Opus 3.5 is not coming out, GPT-4.5 is not coming out, and for Gemini 2 we don't have Pro, whatever. We've hit that wall. Maybe I'll call it the 2 trillion parameter wall. We're not going to 10 trillion. No one thinks it's a good idea, at least from training costs, from the amount of data, or at least the inference.

[00:40:36] swyx: Would you pay 10x the price of GPT-4? Probably not.
Like, you want something else that is at least more useful. So it makes sense that people are pivoting in terms of their inference paradigm.

[00:40:47] Emerging Trends in AI Models

[00:40:47] swyx: And so when it's more researchy, you actually need more just general-purpose compute to mess around with, at the exact same time that production deployments of the previous paradigm are still ramping up pretty aggressively.

[00:40:59] swyx: So it makes sense that the GPU rich are growing. We have now interviewed Together and Fireworks and Replicate. We haven't done Anyscale yet. But I think Amazon may be kind of a sleeper one. Amazon, in the sense that at re:Invent, I wasn't expecting them to do so well, but they are now a foundation model lab.

[00:41:18] swyx: It's kind of interesting. I think, you know, David went over there and started just creating models.

[00:41:25] Alessio: Yeah, I mean, that's the power of prepaid contracts. I think a lot of AWS customers do these big reserved instance contracts, and now they've got to use their money. That's why so many startups get bought through the AWS marketplace: so they can kind of bundle them together and get preferred pricing.

[00:41:42] swyx: Okay, so maybe GPU super rich doing very well, GPU middle class dead, and then the GPU...

[00:41:48] Alessio: Poor. I mean, my thing is, everybody should just be GPU rich. There shouldn't really be... even for the GPU poorest, does it really make sense to be GPU poor?

[00:41:57] Alessio: If you're GPU poor, you should just use the cloud. You know, and I think there might be a future, once we figure out what the size and shape of these models is, where the tinybox and these things come to fruition, where you can be GPU poor at home.
But I think today it's like, why are you working so hard to get these models to run on very small clusters when it's so cheap to run them in the cloud?
[00:42:21] Alessio: Yeah, yeah.
[00:42:22] swyx: Yeah. I think mostly people think it's cool. People think it's a stepping stone to scaling up. So they aspire to be GPU rich one day and they're working on new methods. Like Nous Research, probably the most deep tech thing they've done this year is DisTrO, or whatever the new name is.
[00:42:38] swyx: There's a lot of interest in heterogeneous computing, distributed computing. I tend generally to de-emphasize that historically, but it may be coming to a time where it is starting to be relevant. I don't know. You know, SF Compute launched their compute marketplace this year, and like, who's really using that?
[00:42:53] swyx: Like, it's a bunch of small clusters, disparate types of compute, and if you can make that [00:43:00] useful, then that will be very beneficial to the broader community, but maybe still not the source of frontier models. It's just going to be a second tier of compute that is unlocked for people, and that's fine. But yeah, I mean, I think this year, I would say, a lot more on device. We are, I now have Apple Intelligence on my phone.
[00:43:19] swyx: Doesn't do anything apart from summarize my notifications. But still, not bad. Like, it's multimodal.
[00:43:25] Alessio: Yeah, the notification summaries are so-so in my experience.
[00:43:29] swyx: Yeah, but they add, they add juice to life. And then, um, Chrome Nano, uh, Gemini Nano is coming out in Chrome. Uh, it's still feature flagged, but you can, you can try it now if you, if you use the, uh, the alpha.
[00:43:40] swyx: And so, like, I, I think, like, you know, we're getting the sort of GPU poor version of a lot of these things coming out, and I think it's quite useful. Like Windows as well, rolling out RWKV in sort of every Windows deployment is super cool.
And I think the last thing that I never put in this GPU poor war, that I think I should now, [00:44:00] is the number of startups that are GPU poor but still scaling very well, as sort of wrappers on top of either a foundation model lab or a GPU cloud.
[00:44:10] swyx: GPU cloud, it would be Suno. Suno, which Ramp has rated as one of the top ranked, fastest growing startups of the year. Um, I think the last public number is like zero to 20 million in ARR this year, and Suno runs on Modal. So Suno itself is not GPU rich, but they're just doing the training on Modal, uh, who we've also talked to on, on the podcast.
[00:44:31] swyx: The other one would be Bolt, straight cloud wrapper. And, and, um, again, another, now they've announced 20 million ARR, which is another step up from the 8 million that we put in the title. So yeah, I mean, it's crazy that all these GPU poors are finding a way while the GPU riches are also finding a way. And then the only failures, I kind of call this the GPU smiling curve, where the edges do well, because you're either close to the machines, and you're like [00:45:00] number one on the machines, or you're close to the customers, and you're number one on the customer side.
[00:45:03] swyx: And the people who are in the middle, Inflection, um, Character, didn't do that great. I think Character did the best of all of them. Like, you have a note in here that we apparently said that Character's price tag was
[00:45:15] Alessio: 1B.
[00:45:15] swyx: Did I say that?
[00:45:16] Alessio: Yeah. You said Google should just buy them for 1B. I thought it was a crazy number.
[00:45:20] Alessio: Then they paid 2.7 billion. I mean, for like,
[00:45:22] swyx: yeah.
[00:45:22] Alessio: What do you pay for Noam? Like, I don't know what the going rate was. Maybe the starting price was 1B. I mean, whatever it was, it worked out for everybody involved.
[00:45:31] The Multi-Modality War
[00:45:31] Alessio: Multimodality war.
And this one, we never had text to video in the first version, which now is the hottest.
[00:45:37] swyx: Yeah, I would say it's a subset of image, but yes.
[00:45:40] Alessio: Yeah, well, but I think at the time it wasn't really something people were doing, and now we had Veo 2 just come out yesterday. Uh, Sora was released last month, last week. I've not tried Sora, because the day that I tried, it wasn't, yeah.
[00:45:54] swyx: I think it's generally available now, you can go to Sora.com and try it.
[00:45:58] Alessio: Yeah, they had the outage. Which I [00:46:00] think also played a part into it. Small things. Yeah. What's the other model that you posted today that was on Replicate? video-01-live?
[00:46:08] swyx: Yeah. Very, very nondescript name, but it is from MiniMax, which I think is a Chinese lab. The Chinese labs do surprisingly well at the video models.
[00:46:20] swyx: I'm not sure it's actually Chinese. I don't know. Don't hold me to that. Yep. China. It's good. Yeah, the Chinese love video. What can I say? They have a lot of training data for video. Or a more relaxed regulatory environment.
[00:46:37] Alessio: Uh, well, sure, in some way. Yeah, I don't think there's much else there. I think like, you know, on the image side, I think it's still open.
[00:46:45] Alessio: Yeah, I mean,
[00:46:46] swyx: ElevenLabs is now a unicorn. So basically, what is the multimodality war? The multimodality war is: do you specialize in a single modality, or do you have a God model that does all the modalities? So this is [00:47:00] definitely still going, in the sense of: ElevenLabs, you know, now a unicorn; Pika Labs doing well, they launched Pika 2.0 recently; HeyGen, I think, has reached 100 million ARR; Assembly, I don't know, but they have billboards all over the place, so I assume they're doing very, very well. So these are all specialist models, specialist models and specialist startups.
And then there's the big labs who are doing the sort of all in one play.
[00:47:24] swyx: And then here I would highlight Gemini 2 for having native image output. Have you seen the demos? Um, yeah, it's, it's hard to keep up. Literally they launched this last week, and a shout out to Paige Bailey, who came to the Latent Space event to demo on the day of launch. And she wasn't prepared. She was just like, I'm just going to show you.
[00:47:43] swyx: So they have voice. They have, you know, obviously image input, and then they obviously can code gen and all that. But the new one that OpenAI and Meta both have but they haven't launched yet is image output. So you can literally, um, I think their demo video was that you put in an image of a [00:48:00] car, and you ask for minor modifications to that car.
[00:48:02] swyx: They can generate you that modification exactly as you asked. So there's no need for the Stable Diffusion or ComfyUI workflow of like, mask here and then infill there, inpaint there, and all that stuff. That's small model nonsense. Big model people are like, huh, we just do everything in the transformer.
[00:48:21] swyx: This is the multimodality war, which is: do you, do you bet on the God model, or do you string together a whole bunch of, uh, small models like a, like a chump?
[00:48:29] Alessio: Yeah, I don't know, man. Yeah, that would be interesting. I mean, obviously I use Midjourney for all of our thumbnails. Um, they've been doing a ton on the product, I would say.
[00:48:38] Alessio: They launched a new Midjourney editor thing. They've been doing a ton. Because I think, yeah, maybe, you know, people say the Black Forest models are better than Midjourney on a pixel by pixel basis. But I think when you put it, put it together,
[00:48:53] swyx: Have you tried the same prompts on Black Forest?
[00:48:55] Alessio: Yes.
But the problem is just like, you know, on Black Forest, it generates one image, and then it's like, you've got to [00:49:00] regenerate. You don't have all these UI things. Like what I do, no, but it's a time issue, you know, it's like a Midjourney.
[00:49:06] swyx: Call the API four times.
[00:49:08] Alessio: No, but then there's no like Vary. Like the good thing about Midjourney is like, you just go in there and you're cooking. There's a lot of stuff that just makes it really easy. And I think people underestimate that. Like, it's not really a skill issue, because I'm paying Midjourney, so it's a Black Forest skill issue, because I'm not paying them, you know?
[00:49:24] Alessio: Yeah,
[00:49:25] swyx: So, okay, so, uh, this is a UX thing, right? Like, you, you, you understand that, at least, we think that Black Forest should be able to do all that stuff. I will also shout out, Recraft has come out on top of the image arena that, uh, Artificial Analysis has done, and has apparently taken Flux's place. Is this still true?
[00:49:41] swyx: So, Artificial Analysis is now a company. I highlighted them, I think, in one of the early AI Newses of the year. And they have launched a whole bunch of arenas. So, they're trying to take on LM Arena, Anastasios and crew. And they have an image arena. Oh yeah, Recraft v3 is now beating Flux 1.1. Which is very surprising [00:50:00] because Flux and Black Forest Labs are the old Stable Diffusion crew who left Stability after, um, the management issues.
[00:50:06] swyx: So Recraft has come from nowhere to be the top image model. Uh, very, very strange. I would also highlight that Grok has now launched Aurora, which is, it's very interesting dynamics between Grok and Black Forest Labs, because Grok's images were originally launched, uh, in partnership with Black Forest Labs as a, as a thin wrapper.
[00:50:24] swyx: And then Grok was like, no, we'll make our own. And so they've made their own.
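For context on what an "image arena" ranking is: leaderboards like these typically aggregate pairwise human votes into an Elo-style rating. As a minimal illustration only (not Artificial Analysis's actual methodology; the models and numbers here are hypothetical):

```python
def elo_update(r_a, r_b, score_a, k=32):
    """Update two Elo ratings after one pairwise comparison.

    score_a is 1.0 if model A's image won the vote,
    0.0 if it lost, and 0.5 for a tie. Ratings move
    symmetrically, so the total rating mass is conserved.
    """
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

# Two hypothetical models start at 1000; model A wins three straight votes.
a, b = 1000.0, 1000.0
for _ in range(3):
    a, b = elo_update(a, b, 1.0)
print(round(a), round(b))
```

Note how each successive win moves the ratings a bit less, because an already higher-rated model is expected to win; a surprising upset by the lower-rated model would move the ratings more.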
I don't know, there are no APIs or benchmarks about it. They just announced it. So yeah, that's the multimodality war. I would say that so far, the small model, the dedicated model people are winning, because they are just focused on their tasks.
[00:50:42] swyx: But the big model people are always catching up. And the moment I saw the Gemini 2 demo of image editing, where I can put in an image and just request it and it does it, that's how AI should work. Not like a whole bunch of complicated steps. So it really is something. And I think one frontier that we haven't [00:51:00] seen this year, like obviously video has done very well, and it will continue to grow.
[00:51:03] swyx: You know, we only have Sora Turbo today, but at some point we'll get full Sora. Or at least the Hollywood labs will get full Sora. We haven't seen video to audio, or video synced to audio. And so the researchers that I talked to are already starting to talk about that as the next frontier. But there's still maybe like five more years of video left to actually be solved.
[00:51:23] swyx: I would say that Gemini's approach, compared to OpenAI, Gemini's, or DeepMind's, approach to video seems a lot more fully fledged than OpenAI's. Because if you look at the ICML recap that I published, that so far nobody has listened to, um, that people have listened to. It's just a different, definitely different audience.
[00:51:43] swyx: It's only seven hours long. Why are people not listening? Anyway. So, so DeepMind is working on Genie. They also launched Genie 2 and VideoPoet. So, like, they have maybe four years' advantage on world modeling that OpenAI does not have. Because OpenAI basically only started [00:52:00] diffusion transformers last year, you know, when they hired, uh, Bill Peebles.
[00:52:03] swyx: So, DeepMind has, has a bit of advantage here, I would say, in, in, in showing, like, the reason that Veo 2, while, one, they cherry-pick their videos.
So obviously it looks better than Sora, but the reason I would believe that Veo 2, uh, when it's fully launched, will do very well is because they have all this background work in video that they've done for years.
[00:52:22] swyx: Like, at last year's NeurIPS, I was already interviewing some of their video people. I forget their model name, but for, for people who are dedicated fans, they can go to NeurIPS 2023 and see, see that paper.
[00:52:32] Alessio: And then last but not least, the LLM OS. We renamed it to RAG/Ops, formerly known as
[00:52:39] swyx: the RAG/Ops War. I put the latest chart on the Braintrust episode.
[00:52:43] swyx: I think I'm going to separate these essays from the episode notes. So the reason I used to do that, by the way, is because I wanted to show up on Hacker News. I wanted the podcast to show up on Hacker News. So I always put an essay inside of there, because Hacker News people like to read and not listen.
[00:52:58] Alessio: So episode essays,
[00:52:59] swyx: I remember [00:53:00] publishing them separately. You say LangChain, LlamaIndex is still growing.
[00:53:03] Alessio: Yeah, so I looked at the PyPI stats, you know. I don't care about stars. On PyPI you see, do you want to share your screen? Yes. I prefer to look at actual downloads, not at stars on GitHub. So if you look at, you know, LangChain still growing.
[00:53:20] Alessio: These are the last six months. LlamaIndex still growing. What I've basically seen is like things that, one, obviously these things have a commercial product. So there's like people buying this and sticking with it, versus kind of hopping in between things, versus, you know, for example, CrewAI, not really growing as much.
[00:53:38] Alessio: The stars are growing. If you look on GitHub, like the stars are growing, but kind of like the usage is kind of flat. In the last six months, have they done some[00:53:4
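The downloads-over-stars check described here is easy to reproduce yourself: pypistats.org exposes per-package download statistics (e.g. the `https://pypistats.org/api/packages/<package>/recent` endpoint). As a sketch of the comparison being made, using made-up monthly numbers rather than live data:

```python
# Sketch: "growing vs flat" from monthly download counts, the way you
# would with data pulled from pypistats.org. The series below are
# invented for illustration and are NOT real PyPI numbers.

def monthly_growth(downloads):
    """Month-over-month growth rates from a list of monthly download counts."""
    return [(b - a) / a for a, b in zip(downloads, downloads[1:])]

langchain = [40, 44, 49, 55, 62, 70]   # steadily growing usage
crewai    = [30, 31, 30, 29, 31, 30]   # stars up on GitHub, usage flat

print(sum(monthly_growth(langchain)) / 5)  # clearly positive -> growing
print(sum(monthly_growth(crewai)) / 5)     # near zero -> flat
```

The point of the exercise is exactly Alessio's: stars measure attention, downloads approximate actual usage, and the two can diverge sharply.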
The most common cause of death in the Western world? Cardiovascular disease. And Stefan's genetic test flagged a particularly high risk in exactly this area. So, prompted by that: what does Andreas recommend, and what does Stefan do? Ben Lynch's "Dirty Genes" is a standard reference for anyone who wants to learn more about genetics. (And genuinely accessibly written.) Available here. If you want to know why the topic of heart and blood vessels matters so much to Stefan: here is the episode of the Biohacking-Praxis in which genetics expert Stefan Wöhrer analyzed Stefan's genetic test. Genetic testing made simple, affordable and understandable: listeners of the Biohacking-Praxis get up to 299 euros off the Permedio genetic test and the Permedio MedikamenteCheck. What does such a genetic test report look like? Stefan's Permedio gene analyses on nutrition, heart and blood vessels, immunity and inflammation, and personality are available for download here. The X3 can be bought here (or, by the way, directly from Andreas). The classic, magnesium, is available here, here and here in recommendable products. A very solid B-vitamin product is this one. D3 and K2 go well together. This is a good product. Only suitable for irradiating reptiles or for technical use, unless competent medical personnel advise otherwise: a UV lamp. This one from Osram is a first-class product. Careful, strong effect, it gets very hot! Nattokinase is available here, for example. Q10 is available here and here. And the Paromit mentioned in the episode (Q10 as a spray) is available here. With omega-3, the goal is the optimal ratio of EPA to DHA. That comes from a mix of algae oil and Alaska oil from Norsan. There is a 15% discount on the first order with the code "biohacker15". Taurine is available here; the code "70033082" gets you 5% off. Taurine from Moleqlar is available here. Citrulline is available here. Arginine is available here.
Willow bark extract is available here. The term "baby aspirin" that Andreas mentioned refers to minimally dosed aspirin, around 70-100 mg daily. This one, at 50 mg of acetylsalicylic acid, is very low dosed. The taurine episode of the Biohacking-Praxis can be replayed here. Theanine is available here. The code "70033082" gets you 5% off. Tebonin (ginkgo leaf extract) is available here. A gratitude journal works wonders. Really. You can piece one together on the web, or buy a nicely designed one. For example this one. Why do Andreas and Stefan keep talking about INUSpherese? Because they have experienced for themselves what this special form of blood apheresis can do; listen to earlier episodes of the Biohacking-Praxis here and here. Stefan had his INUSpherese treatments at the New Energy Medical Center in Klagenfurt. HigherQi is the red-light company where Andreas oversees device quality. Info and shop here. The code "stefan10" gets you 10% off.
In this episode, podcast queen Maria Lorenz-Bokelberg (Pool Artists) talks about the four-day week and the porn videos you would find if you hacked her laptop. Nina from my team recommends the podcast series "Legion" about the hacking collective Anonymous. We ask the Chaos Computer Club and a data protection expert whether it really makes sense to tape over your laptop camera. And: "To be able to forget what it is supposed to forget, the system must not forget what it has to forget!" How do you like Jeannes Varieté? Which podcast would you recommend to me? Write to me by email at jeanne@ohwow.eu or on Instagram at @jeanne_drach! Subscribe to the Jeannes Varieté newsletter: ohwow.eu/newsletter. Links for this episode: podcast series "Legion: Hacking Anonymous"; Chaos Computer Club Wien c3w.at; author and data protection expert Klaudia Zotzmann-Koch; "Shut Up and Dance", Black Mirror (S03E03); "Niemand wird Verurteilt", a podcast by Maria Lorenz-Bokelberg and Nilz Bokelberg; Maria Lorenz-Bokelberg as a guest on "Alles Gesagt" from ZEIT-Magazin; Marc-Uwe Kling: "Qualityland" (Ullstein). Contributors to this episode: Jeanne Drach, Anna Muhr, Nina Schaefer, Jana Wiese; trumpet: Almut Schäfer-Kubelka. Photo: Christian Zagler. Graphics: Catharina Ballan. Strategic consulting: Milo Tesselaar. This podcast is presented by OH WOW. Hosted on Acast. See acast.com/privacy for more information.
Learn more here about our partner Scalable Capital, the broker with a flat rate and interest. All further info here: scalable.capital/oaws. Stocks + WhatsApp = sign up here. Prefer a newsletter? That works too. The book to the podcast? Read it now. Zalando is buying About You, but when is the best time to sell About You? Amazon, in any case, is selling cars, GM is stopping its robotaxi rides, and Broadcom is selling to Apple. On top of that, Musk cracks the 400 billion mark, and Carl Zeiss is suffering like everyone else. Christmas is coming. At tonies (WKN: A3CM2W) it's raining money. At least revenue is raining; there isn't much in the way of profit. And strong growth only happens in the US, where competition awaits. Alphabet cracks a quantum computing problem. When will quantum computers crack the Bitcoin problem? We explain. This podcast from December 12, 2024, 3:00 a.m. is brought to you by Podstars GmbH (Noah Leidinger).
Straightening up in the golf swing. Chopping in the golf swing is a rather unfavorable movement. How to avoid chopping and develop an effective golf swing is the topic of this episode. It also covers proper rotation, weight shift, and the importance of staying loose in the swing. Links for this episode: Book: Golfschwung DNA. Video: Golf swing without technique thoughts. Video: Rotating correctly in the golf swing. Video: Exercise with the stick. We appreciate feedback: if you enjoy our golf podcast, we would be delighted about a rating in the podcast app of your choice. For questions and suggestions, feel free to send us an email. Would you like to host your own podcast for free and earn money with it? Then visit www.kostenlos-hosten.de for all the information on our free podcast hosting offers. kostenlos-hosten.de is a product of Podcastbude. We are happy to support you with your podcast production.
How legal hacking gives you a deeper understanding of computer systems, builds IT skills, and is fun on top of it all: that is what this episode of "Update verfügbar" is about. Ute and Michael talk to the head of the Hacker School and a student about why even the so-called digital natives should keep expanding their IT skills, what role cybersecurity plays in digital education, and how hacking fits in. True to the Hacker School's motto: "Hack the world a better place". Don't miss this episode.
Hairless in the Cloud - Microsoft 365 - Security und Collaboration
We celebrated our episode 100 and are now looking ahead. Jan has been looking into hacking AI and also tried to trick the Microsoft 365 Copilot. Marco tried to clone his voice and got lost in the depths of hardware and software requirements. So we'll have to do everything ourselves for one more episode ;) Links to the topics in this episode: https://www.youtube.com/@ThorstenMueller
If you are an active listener of this podcast, you will have noticed that guests keep coming back to one word: decentralization. But the definition of decentralization is quite vague at the moment, with some companies using the term to avoid regulatory compliance oversight and others wanting to create a legal framework that allows for token issuance, smart contract enabled governance, and prescribed token holder rights. In this episode, Ian Andrews (CMO, Chainalysis) tries to get more clarity on what decentralization really means to regulators, as he speaks to Dmitry Fedotov, Head of DLT Foundations Oversight at ADGM, the Abu Dhabi regulator that created the world's first framework for blockchain foundations, DAOs and web3 entities. His journey as an entrepreneur served him well as he transitioned to the regulatory side, including his first days working for the Virtual Assets Regulatory Authority (VARA), when FTX collapsed and he had to help both the public and private sector understand the events. Dmitry and Ian talk about what decentralization really means and how ADGM created the world's first purpose-built framework for blockchain foundations, DAOs and web3 entities, offering regulatory clarity and legal frameworks that are attracting global entities and fostering innovation in the blockchain space.
Minute-by-minute episode breakdown
2 | Dmitry's early days of AI, decentralized systems and buying Bitcoin in 2014
6 | From Bitcoin enthusiast and entrepreneur to Dubai's Virtual Asset Regulatory Authority (VARA)
8 | Introducing the world's first purpose-built framework for blockchain foundations, DAOs and web3 entities
13 | Blockchain foundations' journey to becoming fully decentralized autonomous organizations (DAOs)
16 | Progressive decentralization in blockchain and the regulatory challenges that ensue
19 | DAOs: to regulate or not to regulate, that is the question
22 | What are the limitations of the EU MiCA regulations for Virtual Asset Service Providers (VASPs)
26 | Why Hacken and other DLT focused companies are eyeing ADGM as their new headquarters
Related resources
Check out more resources provided by Chainalysis that perfectly complement this episode of the Public Key.
Website: ADGM: The Financial Centre of Abu Dhabi
Framework: World's First Framework for Blockchain Foundations, DAOs, and Web3 Entities
Regulation: ADGM: Distributed Ledger Technology Foundations Regulations
Guidance: ADGM: Guidance on Permitted Activities for DLT Foundations
Article: Cointelegraph: Tether plans to launch dirham stablecoin with UAE partners
Announcement: ADGM, Hacken partner to set new benchmarks for blockchain security and compliance
Blog: Middle East & North Africa: Regulatory Momentum and DeFi Fuel Adoption
Blog: OFAC Designates Russian Exchange Cryptex and Fraud Shop Facilitator UAPS, FinCEN names PM2BTC
YouTube: Chainalysis YouTube page
Twitter: Chainalysis
Twitter: Building trust in blockchain
Speakers on today's episode
Ian Andrews *Host* (Chief Marketing Officer, Chainalysis)
Dmitry Fedotov (Head of DLT Foundations Oversight, ADGM)
This website may contain links to third-party sites that are not under the control of Chainalysis, Inc. or its affiliates (collectively "Chainalysis").
Access to such information does not imply association with, endorsement of, approval of, or recommendation by Chainalysis of the site or its operators, and Chainalysis is not responsible for the products, services, or other content hosted therein. Our podcasts are for informational purposes only, and are not intended to provide legal, tax, financial, or investment advice. Listeners should consult their own advisors before making these types of decisions. Chainalysis has no responsibility or liability for any decision made or any other acts or omissions in connection with your use of this material. Chainalysis does not guarantee or warrant the accuracy, completeness, timeliness, suitability or validity of the information in any particular podcast and will not be responsible for any claim attributable to errors, omissions, or other inaccuracies of any part of such material. Unless stated otherwise, reference to any specific product or entity does not constitute an endorsement or recommendation by Chainalysis. The views expressed by guests are their own and their appearance on the program does not imply an endorsement of them or any entity they represent. Views and opinions expressed by Chainalysis employees are those of the employees and do not necessarily reflect the views of the company.
A fully audio described programme for Arsenal Women v BK Hacken 26th September 2024 Hosted on Acast. See acast.com/privacy for more information.
In this edition of the Arsenal Women Arsecast, Tim and Jamie go back over the UWCL qualifying victories over Rangers and Rosenborg last week, the performances of Mariona Caldentey, as well as Laia Codina and Kyra Cooney-Cross and whether it will be difficult to dislodge them from the team. Tim and Jamie also look ahead to the next qualifying round against Hacken. Then there are listener questions about low defensive blocks and Arsenal's finishing, team selection and whether Arsenal will add before Friday's WSL transfer deadline. Get extra bonus content and help support Arseblog by becoming an Arseblog Member on Patreon: https://www.patreon.com/arseblog Hosted on Acast. See acast.com/privacy for more information.
In this episode of the Arsenal Women Arsecast, Tim talks to Swedish journalist Amanda Zaza about new signing Rosa Kafaji from Hacken. Amanda talks about the type of player she is, her rise to prominence in Sweden and in the Swedish national team, her best position, and why her attributes and her personality will be well suited to helping to solve Arsenal's issues against deep blocks. You can follow Tim @Stillmanator. Get extra bonus content and help support Arseblog by becoming an Arseblog Member on Patreon: https://www.patreon.com/arseblog Hosted on Acast. See acast.com/privacy for more information.
Ethical hacking? Oh yes, that's quite a business these days. How well protected is your company against a potential digital attack? That question is at the center of today's conversation with Kevin van den Eshof, co-owner of Tozetta. Listen to Kevin's insights and become aware of the risks that really do exist. Time for an audit? Want to know more about what I can do for you? - Search Cobra: outsource your Google Ads - Search Cobra: outsource your SEO - Spice Rebels: the tastiest spice mixes
The German Research Center for Artificial Intelligence is testing intelligent weed management near Osnabrück. Based on various criteria, the AI analyzes which weeds interfere with the crop. The weeds are then removed by automated hoes or nozzles.
25th April: Crypto & Coffee at 8
Russia is a threat to the EU, the Dutch intelligence services warn. Hacking, the deployment of trolls, political influence operations and other kinds of (digital) sabotage: Europe is increasingly becoming the shadowy battleground of a hybrid war. Investigative journalist Huib Modderkolk describes just how brazenly the Russians operate. Want to support our journalism? The best way is a (digital) subscription to de Volkskrant: www.volkskrant.nl/podcastactie. Presented by: Pieter Klok, Esma Linnemann. Editorial team: Corinne van Duin, Lotte Grimbergen, Julia van Alem, Jasper Veenstra, Nathalie Denie and Pim Huberts. Editing: Rinkie Bartels, Simone Eleveld. See omnystudio.com/listener for privacy information.
Starting today, small financial institutions can have themselves hacked to find the weak spots in their cybersecurity. They can sign up with De Nederlandsche Bank (DNB), which then sets ethical hackers loose on the companies. Michiel Jurrjens explains in the Tech Update. The program that large financial institutions such as banks and insurers must go through can take up to twelve months, according to DNB. Through the new program, the smaller institutions go through part of that process. The central bank pushes its attacks as far as foreign actors would, though DNB stresses that the attacks stop before critical systems would go dark. Also in the Tech Update: The fine of roughly 50 million euros that Google must pay in Russia stands even after the company's appeal. According to Russia, Google does too little against content it deems 'fake' or 'extremist', such as information about the war in Ukraine or LGBTI content. Google tried to challenge the fine, imposed in December, but without success. ByteDance, the parent company of the popular app TikTok, booked a mega profit equivalent to 37 billion euros in 2023. That is 60 percent more than the 25 billion euros of profit in 2022, insiders at ByteDance tell news agency Bloomberg. See omnystudio.com/listener for privacy information.
Aleksander Leonard Larsen is co-founder, chairman of the board and COO of the crypto gaming company Sky Mavis. At its peak, Sky Mavis was valued at 25 billion NOK, but then they got hacked. Topics:
0:03:02 Experience and collaboration in game development at Nintendo
0:09:29 A character's strength, in-game use and history as a selling point
0:10:36 Advantages and skills at the top of the game
0:14:16 Axie as a gaming platform and distribution channel
0:15:44 User growth after the hack and adaptations to the game
0:21:52 Revenue sources and value in the marketplace
0:27:12 Building a gaming platform with great value and network effects
0:33:29 The hack that led to a 600 million dollar loss
0:37:46 Discovering the hack
0:51:38 The challenges of building growth companies in today's Norway
1:02:03 From the public sector to building a global gaming company
1:06:00 From Norway to Vietnam: a unique adventure begins
1:10:22 A bitter and grumpy personality according to a Big Five test
1:13:43 From depending on investors to choosing them
1:15:24 Investment requires story, market, team and network
1:19:17 Reduce risk and make it easy for the investor to join
1:22:47 Emphasis on recruiting and being "high agency"
Hosted on Acast. See acast.com/privacy for more information.
Inside Wirtschaft - The podcast with Manuel Koch | Stock markets and the economy in focus
For the fourth year in a row, more than 12 million people in Germany are invested in stocks, equity funds and ETFs, despite crises, the interest-rate turnaround and recession fears. But there is a catch. "570,000 stock savers left the market in 2023, half a million of them young investors under 40. By contrast, 56,000 new investors joined among the over-60s. I think the young ones gambled and lost," says Manuel Koch (editor-in-chief of Inside Wirtschaft). All the details in the video from the Frankfurt Stock Exchange and at https://inside-wirtschaft.de
Today on the Ether we have Patrick hosting Secret Spaces with Hacken. You'll hear from Alex Zaidelson, Lisa Loud,
He is not only one of Germany's truly great chefs, but also one in whose career continuity has played a central role: Martin Herrmann is head chef at Hotel Dollenberg in the Black Forest and has remained loyal to the house since his apprenticeship there in 1982. Why consistency is no contradiction to creativity, how top-level cuisine makes guests lastingly happy today, and what he advises young chefs starting out: editor-in-chief Deborah Middelhoff discusses all this with Martin Herrmann in this episode.
Kloiber, Manfred | www.deutschlandfunk.de, Forschung aktuell
Modern cars are controlled by software. Researchers have now managed to crack the autopilot of a Tesla, giving them access to all the data it processes. Now it is the carmaker's move: it has to close the security gap. Kloiber, Manfred | www.deutschlandfunk.de, Forschung aktuell
Follow Rob! Twitter: https://twitter.com/Journalism_RP Follow The Blue Royalty Crew! Nick: https://twitter.com/nickverlaney Jessy: https://mobile.twitter.com/jessyjph Abdullah: https://mobile.twitter.com/KunAbd Follow London is Blue, Get in Touch!
In this episode the team are discussing their winners & losers from Chelsea's dramatic penalty shootout win over Newcastle United. The team also share their thoughts on Hacken 1-3 Chelsea in the Champions League as Emma Hayes' blues win their final game of 2023 and answer some listener questions. If you have a question for the team then contact them on Twitter at @AtTheBridgePod (This episode was recorded on 22nd December 2023) *** Please take the time to rate and review us on Apple Podcasts or wherever you get your pods. It means a great deal to the show and will make it easier for other potential listeners to find us. Thanks! *** Join our Discord: https://discord.gg/b3arBztQjn _______________________________________________ Get In Touch With Us: Twitter - twitter.com/AtTheBridgePod Instagram - Instagram.com/AtTheBridgePod #CFC #CHELSEA
Thanks to MANSCAPED for sponsoring today's video! Get The Lawn Mower® 5.0 Ultra for 20% OFF + Free International Shipping with promo code "LONDONISBLUE" at https://manscaped.com/londonisblue Follow The Blue Royalty Crew! Nick: https://twitter.com/nickverlaney Jessy: https://mobile.twitter.com/jessyjph Abdullah: https://mobile.twitter.com/KunAbd Follow London is Blue, Get in Touch!
Maryam and Andre break down the 3-0 victory over Bristol City, as well as reviewing and previewing the Champions League group stage fixture against Hacken...
Follow The Blue Royalty Crew! Nick: https://twitter.com/nickverlaney Jessy: https://mobile.twitter.com/jessyjph Abdullah: https://mobile.twitter.com/KunAbd Follow London is Blue, Get in Touch!
Putin simulates closeness to the people, the coalition government coolly kills a promised subsidy for replacing heating systems, and Tesla has a "Cyberfuck" at its heels. That is the state of play on Thursday evening. The articles to read: What Putin said, and what he did not. Federal government reduces subsidies for heating replacement. Ex-employee gets mail from Tesla: the payslips of 1,000 colleagues. +++ All discount codes and information about our advertising partners can be found here: https://linktr.ee/spiegellage +++ The SPIEGEL Group is not responsible for the content of this website. All SPIEGEL podcasts can be found here. More background on the topic is available at SPIEGEL+. Try it now for just €1 for the first four weeks at spiegel.de/abonnieren. Information on our privacy policy.
Emotet may well be the most notorious botnet in history. A botnet is a collection of malware-infected devices that is controlled remotely. The malware gives criminals a powerful tool, for example to steal passwords or to hold critical networks, such as those of hospitals, hostage with ransomware. The infection has spread across the whole world and has claimed millions of victims internationally. The estimated damage caused by Emotet has reached the surreal scale of at least 2.5 billion dollars. When Team High Tech Crime gets the botnet in its sights, they immediately see how complex it is. With the botnet itself, the entire infrastructure around it and command-and-control servers in Russia, taking Emotet down seems an almost hopeless task. But leaving the botnet alone is not an option either... Want to know more about this episode? Check it here. Team High Tech Crime aims to make the Netherlands safer and less attractive for cybercriminals. The police are always looking for new colleagues for that. Curious? Check out your options. We would love to know what you think of this podcast; fill in our short survey here. Takedown is an XTR branded podcast in collaboration with audio agency Airborne and the police.
Phishing is one of the most common forms of cybercrime, and therefore also one of the most professionalised. It is a form of 'cybercrime as a service', in which criminal services can be rented. Criminals lure people to fake banking websites in order to steal their login details and money. Among other things, they use phishing panels for this. Team High Tech Crime is alerted when they spot a provider who is very successfully renting out phishing panels. With this provider, cybercriminals can take out a subscription that gives them a highly advanced phishing panel, and about 40 percent of the phishing attacks in the Netherlands turn out to be carried out with these specific panels. The case gets high priority at Team High Tech Crime. There is just one problem... how do you find perpetrators who do everything they can to protect their identity? Want to know more about this episode? Check it here. Team High Tech Crime aims to make the Netherlands safer and less attractive for cybercriminals. The police are always looking for new colleagues for that. Curious? Check out your options. Let us know what you think of this podcast; fill in our short survey here. Takedown is an XTR branded podcast in collaboration with audio agency Airborne and the police.
In 2018, several Dutch banks and the Tax Administration are confronted with an exceptionally large number of DDoS attacks. It even results in a large part of the payment system being shut down. It is big news in the media, with all kinds of experts being asked who could be behind the attacks. The chance that it is hackers from Russia is quite high, so the speculation goes. Behind the scenes, Team High Tech Crime is meanwhile also working on the case, and they soon have a suspect in their sights. Unexpected things happen during the investigation. An alert server administrator at Tweakers is able to help the specialised detectives by combining and connecting several pieces of data. Want to know more about this episode? Check it here. Team High Tech Crime aims to make the Netherlands safer and less attractive for cybercriminals. The police are always looking for new colleagues for that. Curious? Check out the options here. We would love to know what you think of this podcast! Let us know and fill in our short survey here. Takedown is an XTR branded podcast in collaboration with audio agency Airborne and the police.
What do you do when, as a hacker, you have gained access to an online account of one of the most powerful people in the world? Victor Gevers lands in a remarkable position when he does what he always does: tracking down weaknesses in computer systems as an ethical hacker. And that has everything to do with his target: American president Donald Trump. Without too much trouble, Victor manages to break into Trump's Twitter account, in a period when it is just about the most important communication channel for the White House. With that, Victor unexpectedly ends up on the world stage, with all the risks that entails. From that moment on, Team High Tech Crime is involved in the case. How do the team members prevent this hack from leading to all kinds of unwanted reactions from the United States? And did he use his hacking skills within the legal boundaries? Want to know more about this episode? Check it here. Team High Tech Crime aims to make the Netherlands safer and less attractive for cybercriminals. The police are always looking for new colleagues for that. Curious? Check out your options. We would love to know what you think of this podcast; fill in our short survey and let us know. Takedown is an XTR branded podcast in collaboration with audio agency Airborne and the police.
Welcome to Campfire Classics, a Literary Comedy Podcast!! Whoa! What happened! I had two whole weeks of feeling like life made sense, and now... Oh, the show's back, that's what... Ah well. This week your hosts cover a new author with...cute hairy buns...? That can't be right. No! His name is Henry Cuyler Bunner. Heather has selected his story "The Nice People" for Ken to read. And I tell you what, he really does read some words. During the story, Ken and Heather chat about antiquated bit jokes, ruminate on pigeon genocide, and balk at the serious shade thrown at Mothers-in-Law by Hank Bunner. "The Nice People" is from Bunner's Short Sixes and also appeared in his periodical Puck in the July 30, 1890 issue. Email us at 5050artsproduction@gmail.com. Remember to tell five friends to check out Campfire Classics. Like, subscribe, leave a review. Now sit back, light a fire (or even a candle), grab a drink, and enjoy.
During a wave of contract killings around 2015, and also in the years before, the police kept finding large numbers of BlackBerry phones: clearly a favoured means of communication among criminals. THTC later opened a case to take a closer look at those frequently found BlackBerrys. This is the case into the worldwide crypto-communication service Ennetcom. Ennetcom turned out to use a platform that offered complete privacy, allowing users to send each other encrypted messages. A service where messages can never be retrieved and where, as a criminal, you can always communicate safely and anonymously. At least, that is the promise. Until the cyber detectives of Team High Tech Crime discover that the administrators have made one crucial mistake... Together with various partners, THTC investigated the provider of the crypto-communication service Ennetcom, dismantled the service, retrieved the data and managed to decrypt it. This dealt a blow to serious organised crime. How the police eventually rolled up the large criminal crypto-communication network, what role international partners played, and what the consequences of this case are for other crypto-communication services and the Marengo trial? You'll hear it in this episode of Takedown. Want to know more about this episode? Check this page. Team High Tech Crime aims to make the Netherlands safer and less attractive for cybercriminals. The police are always looking for new colleagues for that team, and for other cybercrime teams. Curious? Have a look at this site. Takedown is an XTR branded podcast in collaboration with audio agency Airborne and the police. We would love to know what you think of this podcast; fill in our short survey and let us know.
Knüll has had enough. Night after night he has to swing his pickaxe and dig new tunnels in the cellar where he lives with the cellar ghosts. But what is all this drudgery for? And what is the secret of the mysterious black wall, which is impenetrable and shatters every pickaxe? Knüll wants to find out! The Graboldin Luise helps him. All 7 episodes of the OHRENBÄR audio story: Kellergeist Knüll und das Rätsel der schwarzen Wand by Charlotte Richter-Peill. Read by Markus Meyer.
What if we tackled global warming on an interplanetary scale, for example by sending certain substances into the air and creating new clouds? Scientists disagree about the consequences, and that leads to debate. In Windows, Clippy, the digital assistant, is making a comeback of sorts. Hippos cannot chew very well with their gigantic mouths, and Mark Zuckerberg is now winning people over with his new VR headset. See omnystudio.com/listener for privacy information.
This time it's about respect, hacking in films, computers with a turbo button, shooters, the Red Ring of Death, Number 5, the Xbox Series, Baldur's Gate 3, Wanda, bike rental, e-scooters, too much sport, preventive check-ups, Benjamin Blümchen, Modern Family and scooters. We are also going on a small live tour in Bielefeld, Essen and Cologne: 25.08 Bielefeld (remaining tickets), 26.08 Cologne, 28.08 Essen, 02.09 Berlin (postponed), 05.09 Frankfurt (postponed). Tickets are available now at [Eventim!](https://www.eventim.de/eventseries/bastian-bielendorfer-reinhard-remfort-alliteration-am-arsch-aaa-podcast-3447644/) Want to learn more about our advertising partners? All info & discounts here: https://linktr.ee/AlliterationAmArsch
Reaction from Richard Gordon and the team to a busy night of European action with Aberdeen getting a draw in Sweden while Hearts lost at home