In this edition of No Hay Derecho we will cover, among other topics: - UNMSM is vindicated: the Constitutional Court declares the 2023 police intervention unconstitutional and orders that it never be repeated. - Forced sterilizations: the Inter-American Court of Human Rights orders Peru to guarantee that the NGO can keep representing the victims. - The 13 miners who had been kidnapped in Pataz are found dead. - Delia Espinoza backs the reinstatement of Tomás Gálvez: "He will be a strength for the Public Ministry." - Luis Arce Córdova returns: the Judiciary annuls his dismissal and orders his reinstatement as supreme prosecutor. - Red caviar, crab meat, and other gourmet foods requested by the MML for Rafael López Aliaga's lunches. - Dina Boluarte went out with her face bandaged after cosmetic surgery, her surgeon reveals. - Morgan Quero: President Dina Boluarte has recovered constitutional dignity for Peru. - Exclusive: the carve-up of posts in Congress.
In a new season of the Oracle University Podcast, Lois Houston and Nikita Abraham dive into the world of Oracle GoldenGate 23ai, a cutting-edge software solution for data management. They are joined by Nick Wagner, a seasoned expert in database replication, who provides a comprehensive overview of this powerful tool. Nick highlights GoldenGate's ability to ensure continuous operations by efficiently moving data between databases and platforms with minimal overhead. He emphasizes its role in enabling real-time analytics, enhancing data security, and reducing costs by offloading data to low-cost hardware. The discussion also covers GoldenGate's role in facilitating data sharing, improving operational efficiency, and reducing downtime during outages. Oracle GoldenGate 23ai: Fundamentals: https://mylearn.oracle.com/ou/course/oracle-goldengate-23ai-fundamentals/145884/237273 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. --------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston: Director of Innovation Programs. Lois: Hi everyone! Welcome to a new season of the podcast. This time, we're focusing on the fundamentals of Oracle GoldenGate. Oracle GoldenGate helps organizations manage and synchronize their data across diverse systems and databases in real time. And with the new Oracle GoldenGate 23ai release, we'll uncover the latest innovations and features that empower businesses to make the most of their data. Nikita: Taking us through this is Nick Wagner, Senior Director of Product Management for Oracle GoldenGate. He's been doing database replication for about 25 years and has been focused on GoldenGate on and off for about 20 of those years. 01:18 Lois: In today's episode, we'll ask Nick to give us a general overview of the product, along with some use cases and benefits. Hi Nick! To start with, why do customers need GoldenGate? Nick: Well, it delivers continuous operations, being able to continuously move data from one database to another database or data platform in efficiently and a high-speed manner, and it does this with very low overhead. Almost all the GoldenGate environments use transaction logs to pull the data out of the system, so we're not creating any additional triggers or very little overhead on that source system. GoldenGate can also enable real-time analytics, being able to pull data from all these different databases and move them into your analytics system in real time can improve the value that those analytics systems provide. Being able to do real-time statistics and analysis of that data within those high-performance custom environments is really important. 02:13 Nikita: Does it offer any benefits in terms of cost? Nick: GoldenGate can also lower IT costs. A lot of times people run these massive OLTP databases, and they are running reporting in those same systems. 
With GoldenGate, you can offload some of the data or all the data to a low-cost commodity hardware where you can then run the reports on that other system. So, this way, you can get back that performance on the OLTP system, while at the same time optimizing your reporting environment for those long running reports. You can improve efficiencies and reduce risks. Being able to reduce the amount of downtime during planned and unplanned outages can really make a big benefit to the overall operational efficiencies of your company. 02:54 Nikita: What about when it comes to data sharing and data security? Nick: You can also reduce barriers to data sharing. Being able to pull subsets of data, or just specific pieces of data out of a production database and move it to the team or to the group that needs that information in real time is very important. And it also protects the security of your data by only moving in the information that they need and not the entire database. It also provides extensibility and flexibility, being able to support multiple different replication topologies and architectures. 03:24 Lois: Can you tell us about some of the use cases of GoldenGate? Where does GoldenGate truly shine? Nick: Some of the more traditional use cases of GoldenGate include use within the multicloud fabric. Within a multicloud fabric, this essentially means that GoldenGate can replicate data between on-premise environments, within cloud environments, or hybrid, cloud to on-premise, on-premise to cloud, or even within multiple clouds. So, you can move data from AWS to Azure to OCI. You can also move between the systems themselves, so you don't have to use the same database in all the different clouds. For example, if you wanted to move data from AWS Postgres into Oracle running in OCI, you can do that using Oracle GoldenGate. We also support maximum availability architectures. And so, there's a lot of different use cases here, but primarily geared around reducing your recovery point objective and recovery time objective. 04:20 Lois: Ah, reducing RPO and RTO. That must have a significant advantage for the customer, right? Nick: So, reducing your RPO and RTO allows you to take advantage of some of the benefits of GoldenGate, being able to do active-active replication, being able to set up GoldenGate for high availability, real-time failover, and it can augment your active Data Guard and Data Guard configuration. So, a lot of times GoldenGate is used within Oracle's maximum availability architecture platinum tier level of replication, which means that at that point you've got lots of different capabilities within the Oracle Database itself. But to help eke out that last little bit of high availability, you want to set up an active-active environment with GoldenGate to really get true zero RPO and RTO. GoldenGate can also be used for data offloading and data hubs. Being able to pull data from one or more source systems and move it into a data hub, or into a data warehouse for your operational reporting. This could also be your analytics environment too. 05:22 Nikita: Does GoldenGate support online migrations? Nick: In fact, a lot of companies actually get started in GoldenGate by doing a migration from one platform to another. Now, these don't even have to be something as complex as going from one database like a DB2 on-premise into an Oracle on OCI, it could even be simple migrations. 
A lot of times doing something like a major application or a major database version upgrade is going to take downtime on that production system. You can use GoldenGate to eliminate that downtime. So this could be going from Oracle 19c to Oracle 23ai, or going from application version 1.0 to application version 2.0, because GoldenGate can do the transformation between the different application schemas. You can use GoldenGate to migrate your database from on premise into the cloud with no downtime as well. We also support real-time analytic feeds, being able to go from multiple databases, not only those on premise, but being able to pull information from different SaaS applications inside of OCI and move it to your different analytic systems. And then, of course, we also have the ability to stream events and analytics within GoldenGate itself. 06:34 Lois: Let's move on to the various topologies supported by GoldenGate. I know GoldenGate supports many different platforms and can be used with just about any database. Nick: This first layer of topologies is what we usually consider relational database topologies. And so this would be moving data from Oracle to Oracle, Postgres to Oracle, Sybase to SQL Server, a lot of different types of databases. So the first architecture would be unidirectional. This is replicating from one source to one target. You can do this for reporting. If I wanted to offload some reports into another server, I can go ahead and do that using GoldenGate. I can replicate the entire database or just a subset of tables. I can also set up GoldenGate for bidirectional, and this is what I want to set up GoldenGate for something like high availability. So in the event that one of the servers crashes, I can almost immediately reconnect my users to the other system. And that almost immediately depends on the amount of latency that GoldenGate has at that time. So a typical latency is anywhere from 3 to 6 seconds. So after that primary system fails, I can reconnect my users to the other system in 3 to 6 seconds. And I can do that because as GoldenGate's applying data into that target database, that target system is already open for read and write activity. GoldenGate is just another user connecting in issuing DML operations, and so it makes that failover time very low. 07:59 Nikita: Ok…If you can get it down to 3 to 6 seconds, can you bring it down to zero? Like zero failover time? Nick: That's the next topology, which is active-active. And in this scenario, all servers are read/write all at the same time and all available for user activity. And you can do multiple topologies with this as well. You can do a mesh architecture, which is where every server talks to every other server. This works really well for 2, 3, 4, maybe even 5 environments, but when you get beyond that, having every server communicate with every other server can get a little complex. And so at that point we start looking at doing what we call a hub and spoke architecture, where we have lots of different spokes. At the end of each spoke is a read/write database, and then those communicate with a hub. So any change that happens on one spoke gets sent into the hub, and then from the hub it gets sent out to all the other spokes. And through that architecture, it allows you to really scale up your environments. We have customers that are doing up to 150 spokes within that hub architecture. 
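To make the hub-and-spoke pattern Nick just described concrete, here is a minimal Python sketch, assuming an invented in-memory hub with one queue per spoke. It is an illustration of the fan-out idea only, not GoldenGate configuration or its actual API; every class and field name here is hypothetical.

```python
# Conceptual sketch of the hub-and-spoke topology described above: every
# spoke sends its local changes to the hub, and the hub fans each change
# out to all other spokes, never echoing it back to the spoke it came from.
# This is an illustration only, not GoldenGate configuration or its API;
# all class and field names here are invented.

from dataclasses import dataclass

@dataclass
class Change:
    origin: str   # name of the spoke where the row was modified
    table: str
    row_id: int
    payload: dict

class Hub:
    def __init__(self, spoke_names):
        # One apply queue per spoke database.
        self.queues = {name: [] for name in spoke_names}

    def publish(self, change: Change) -> None:
        # Fan out to every spoke except the originator, so the change is
        # never applied twice on the system that produced it.
        for name, queue in self.queues.items():
            if name != change.origin:
                queue.append(change)

hub = Hub([f"spoke-{i}" for i in range(1, 6)])
hub.publish(Change(origin="spoke-3", table="orders", row_id=42, payload={"status": "shipped"}))
print({name: len(queue) for name, queue in hub.queues.items()})  # spoke-3 stays empty
```

This is why the hub arrangement scales to many endpoints: each spoke only ever talks to the hub, instead of every server talking to every other server as in a mesh.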
Within active-active replication as well, we can do conflict detection and resolution, which means that if two users modify the same row on two different systems, GoldenGate can actually determine that there was an issue with that and determine what user wins or which row change wins, which is extremely important when doing active-active replication. And this means that if one of those systems fails, there is no downtime when you switch your users to another active system because it's already available for activity and ready to go. 09:35 Lois: Wow, that's fantastic. Ok, tell us more about the topologies. Nick: GoldenGate can do other things like broadcast, sending data from one system to multiple systems, or many to one as far as consolidation. We can also do cascading replication, so when data moves from one environment that GoldenGate is replicating into another environment that GoldenGate is replicating. By default, we ignore all of our own transactions. But there's actually a toggle switch that you can flip that says, hey, GoldenGate, even though you wrote that data into that database, still push it on to the next system. And then of course, we can also do distribution of data, and this is more like moving data from a relational database into something like a Kafka topic or a JMS queue or into some messaging service. 10:24 Raise your game with the Oracle Cloud Applications skills challenge. Get free training on Oracle Fusion Cloud Applications, Oracle Modern Best Practice, and Oracle Cloud Success Navigator. Pass the free Oracle Fusion Cloud Foundations Associate exam to earn a Foundations Associate certification. Plus, there's a chance to win awards and prizes throughout the challenge! What are you waiting for? Join the challenge today by visiting oracle.com/education. 10:58 Nikita: Welcome back! Nick, does GoldenGate also have nonrelational capabilities? Nick: We have a number of nonrelational replication capabilities and topologies as well. This includes things like data lake ingestion and streaming ingestion, being able to move data and data objects from these different relational database platforms into data lakes and into these streaming systems where you can run analytics on them and run reports. We can also do cloud ingestion, being able to move data from these databases into different cloud environments. And this is not only just moving it into relational databases within those clouds, but also their data lakes and data fabrics. 11:38 Lois: You mentioned a messaging service earlier. Can you tell us more about that? Nick: Messaging replication is also possible. So we can actually capture from messaging systems like Kafka Connect and JMS, replicate that into a relational database, or simply stream it into another environment. We also support NoSQL replication, being able to capture from MongoDB and replicate it onto another MongoDB for high availability or disaster recovery, or simply into any other system. 12:06 Nikita: I see. And is there any integration with a customer's SaaS applications? Nick: GoldenGate also supports a number of different OCI SaaS applications. And so a lot of these different applications like Oracle Financials Fusion, Oracle Transportation Management, they all have GoldenGate built under the covers and can be enabled with a flag so that you can actually have that data sent out to your other GoldenGate environment. So you can actually subscribe to changes that are happening in these other systems with very little overhead.
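A minimal sketch of the conflict detection and resolution idea Nick described a moment ago, assuming a simple last-writer-wins policy keyed on a modification timestamp. This is a conceptual illustration only, not GoldenGate's actual CDR parameters; the row layout and field names are hypothetical.

```python
# Minimal sketch of conflict resolution for active-active replication,
# assuming a last-writer-wins policy based on a modification timestamp.
# Conceptual illustration only, not GoldenGate's CDR syntax; the row
# structure below is hypothetical.

def resolve_conflict(local_row: dict, incoming_row: dict) -> dict:
    """Pick the version of a row that survives when the same key was
    changed on two systems before replication caught up."""
    if incoming_row["last_modified"] > local_row["last_modified"]:
        return incoming_row   # remote change is newer: it overwrites local
    return local_row          # local change is newer (or equal): keep it

local = {"id": 7, "balance": 120, "last_modified": "2025-01-10T09:15:02Z"}
incoming = {"id": 7, "balance": 95, "last_modified": "2025-01-10T09:15:05Z"}
print(resolve_conflict(local, incoming))  # the later update (balance 95) wins
```

Last-writer-wins is only one possible policy; the point is that some deterministic rule has to decide which row change survives so both systems converge to the same state.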
And then of course, we have event processing and analytics, and this is the final topology or flexibility within GoldenGate itself. And this is being able to push data through data pipelines, doing data transformations. GoldenGate is not an ETL tool, but it can do row-level transformation and row-level filtering. 12:55 Lois: Are there integrations offered by Oracle GoldenGate in automation and artificial intelligence? Nick: We can do time series analysis and geofencing using the GoldenGate Stream Analytics product. It allows you to actually do real time analysis and time series analysis on data as it flows through the GoldenGate trails. And then that same product, the GoldenGate Stream Analytics, can then take the data and move it to predictive analytics, where you can run MML on it, or ONNX or other Spark-type technologies and do real-time analysis and AI on that information as it's flowing through. 13:29 Nikita: So, GoldenGate is extremely flexible. And given Oracle's focus on integrating AI into its product portfolio, what about GoldenGate? Does it offer any AI-related features, especially since the product name has “23ai” in it? Nick: With the advent of Oracle GoldenGate 23ai, it's one of the two products at this point that has the AI moniker at Oracle. Oracle Database 23ai also has it, and that means that we actually do stuff with AI. So the Oracle GoldenGate product can actually capture vectors from databases like MySQL HeatWave, Postgres using pgvector, which includes things like AlloyDB, Amazon RDS Postgres, Aurora Postgres. We can also replicate data into Elasticsearch and OpenSearch, or if the data is using vectors within OCI or the Oracle Database itself. So GoldenGate can be used for a number of things here. The first one is being able to migrate vectors into the Oracle Database. So if you're using something like Postgres, MySQL, and you want to migrate the vector information into the Oracle Database, you can. Now one thing to keep in mind here is a vector is oftentimes like a GPS coordinate. So if I need to know the GPS coordinates of Austin, Texas, I can put in a latitude and longitude and it will give me the GPS coordinates of a building within that city. But if I also need to know the altitude of that same building, well, that's going to be a different algorithm. And GoldenGate and replicating vectors is the same way. When you create a vector, it's essentially just creating a bunch of numbers under the screen, kind of like those same GPS coordinates. The dimension and the algorithm that you use to generate that vector can be different across different databases, but the actual meaning of that data will change. And so GoldenGate can replicate the vector data as long as the algorithm and the dimensions are the same. If the algorithm and the dimensions are not the same between the source and the target, then you'll actually want GoldenGate to replicate the base data that created that vector. And then once GoldenGate replicates the base data, it'll actually call the vector embedding technology to re-embed that data and produce that numerical formatting for you. 15:42 Lois: So, there are some nuances there… Nick: GoldenGate can also replicate and consolidate vector changes or even do the embedding API calls itself. This is really nice because it means that we can take changes from multiple systems and consolidate them into a single one. We can also do the reverse of that too. A lot of customers are still trying to find out which algorithms work best for them. 
How many dimensions? What's the optimal use? Well, you can now run those in different servers without impacting your actual AI system. Once you've identified which algorithm and dimension is going to be best for your data, you can then have GoldenGate replicate that into your production system and we'll start using that instead. So it's a nice way to switch algorithms without taking extensive downtime. 16:29 Nikita: What about in multicloud environments? Nick: GoldenGate can also do multicloud and N-way active-active Oracle replication between vectors. So if there's vectors in Oracle databases, in multiple clouds, or multiple on-premise databases, GoldenGate can synchronize them all up. And of course we can also stream changes from vector information, including text as well into different search engines. And that's where the integration with Elasticsearch and OpenSearch comes in. And then we can use things like NVIDIA and Cohere to actually do the AI on that data. 17:01 Lois: Using GoldenGate with AI in the database unlocks so many possibilities. Thanks for that detailed introduction to Oracle GoldenGate 23ai and its capabilities, Nick. Nikita: We've run out of time for today, but Nick will be back next week to talk about how GoldenGate has evolved over time and its latest features. And if you liked what you heard today, head over to mylearn.oracle.com and take a look at the Oracle GoldenGate 23ai Fundamentals course to learn more. Until next time, this is Nikita Abraham… Lois: And Lois Houston, signing off! 17:33 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
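A small sketch of the vector replication rule from the discussion above: copy a vector only when source and target use the same embedding algorithm and the same number of dimensions, otherwise replicate the base data and re-embed it on the target. This is not GoldenGate's API; `embed_on_target` is a placeholder for whatever embedding call the target system provides, and the configuration dictionaries are invented for illustration.

```python
# Sketch of the vector replication decision described above: copy the
# vector as-is only when the embedding algorithm and dimensions match;
# otherwise replicate the base text and re-embed it on the target side.
# Not GoldenGate's API; embed_on_target is a stand-in embedding call.

def embed_on_target(text: str) -> list[float]:
    # Stand-in for the target side's vector embedding model.
    return [float(ord(ch) % 7) for ch in text][:4]

def replicate_vector(row: dict, source_cfg: dict, target_cfg: dict) -> list[float]:
    same_model = (source_cfg["algorithm"] == target_cfg["algorithm"]
                  and source_cfg["dimensions"] == target_cfg["dimensions"])
    if same_model:
        return row["vector"]              # vectors are comparable: copy as-is
    return embed_on_target(row["text"])   # otherwise re-embed the base data

row = {"text": "red running shoes", "vector": [0.12, 0.98, 0.33]}
source_cfg = {"algorithm": "model-a", "dimensions": 3}
target_cfg = {"algorithm": "model-b", "dimensions": 4}
print(replicate_vector(row, source_cfg, target_cfg))  # re-embedded: configs differ
```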
In this edition of No Hay Derecho we will cover, among other topics: - Julio Diaz Zulueta was sworn in as the new Minister of the Interior. - José Santiváñez says goodbye to the Mininter: "See you in 2026." - The Executive, Congress, and the Ombudsman's Office criticize the work of the Public Ministry and call for its reorganization. - The Venice Commission rejects subjecting Peru's electoral bodies to impeachment. - Eduardo Salhuana comes out against the motion to vacate Dina Boluarte put forward by Susel Paredes. - An MML official assaults a Latina reporter during an eviction on the Ramiro Prialé highway. - Midis's Wasi Mikuna program changes format, and parents will now buy the food themselves without any oversight. - Congress extends the deadline for forming alliances for the 2026 elections. - The Judiciary sentences congressman Luis Picón to more than 4 years of a suspended sentence for the crime of incompatible negotiation.
After the super win against Leverkusen, Inter Milan awaits Bayern. Could the 'Finale dahoam', a final on home soil, really be on??? Matze has brought the crystal ball, and Calli ran into the super Bayern squad at breakfast in their luxury hotel after the match. Is the secret to their success eggs Benedict?? Matze has information from the world city of Lippstadt, and Tobi reckons a big Bundesliga sponsor is on the right track in the USA. Yet the big question is much closer to home: how will VfL Bochum handle their win at Bayern? Will the blockbuster win help against Frankfurt at the weekend? "Echte Champions XXL" is a production of Podcastbande. New episodes every Thursday - wherever you get your podcasts.
"Tachchen", "Moin Moin", "Hallöchen"! So sagt man bei Copa TS, so sagt man bei Fußball MML, so sagt man bei ViscaTabak. Das ist nichts für uns. Wir sagen: "Gude" und "neben mir sitzt der Mann" bei 50+2, liebe Freunde! Wir sehen uns als klaren Wahlsieger und nehmen den Auftrag der Hörerschaft zur Bundesliga-Rückblicksbildung an. Viel Spaß wünschen Moderations-Minister Nico Heymer und Niklas Levinsohn, Bundesminister für Pressingwerte und erwartete Punkte!Unsere allgemeinen Datenschutzrichtlinien finden Sie unter https://art19.com/privacy. Die Datenschutzrichtlinien für Kalifornien sind unter https://art19.com/privacy#do-not-sell-my-info abrufbar.
Suddenly there was an alarm in the Doppelpass studio, that is, the foyer of the Munich airport hotel, when Calli's watch suddenly went off. How that came about is what he tells Matze and Tobi in this episode. One club that is truly close to Calli's heart is Schalke 04. One main reason why the necessary quality has been missing there for several years now has always made for an excellent atmosphere (for example with Westernhagen songs). But the craziest story of the week clearly comes from Sport-Bild: what the dildo was doing in a Bundesliga dressing room, and how it probably got there, is what the Champions discuss in high spirits. At BVB they don't have quite so much of that at the moment - and Steffen Baumgart is coming with Union. "Echte Champions XXL" is a production of Podcastbande. New episodes every Thursday - wherever you get your podcasts.
Party Calli was out and about again. But nothing knocks him over that quickly, probably thanks to the cold shower he treats himself to regularly. Tobi wonders how cold the shower will be for Niko Kovac on Saturday in Dortmund. The big debut of the next top coach is coming up. Matze knows that Robert Kovac (actually the more successful of the brothers) also plays a decisive role. And a third assistant coach is on board as well. Only, even that won't bring continuity back to Dortmund. "Echte Champions XXL" is a production of Podcastbande. New episodes every Thursday - wherever you get your podcasts.
Calli, Matze and Tobi are delighted about more and more listeners! Because that way more and more people learn what is really going on behind the scenes of the Bundesliga! In the end Lothar is pulling all the strings anyway, that much is surely clear!! Calli says what the deal is with Boniface; CR7 is one of his absolute favorite players anyway. Matze worries about who will succeed Manuel Neuer at Bayern. And Tobi knows why Ralf Rangnick is suddenly a topic at BVB. Long live, that is definitely a fact, the holy crystal ball of Lippstadt. "Echte Champions XXL" is a production of Podcastbande. New episodes every Thursday - wherever you get your podcasts.
One last Gold sponsor slot is available for the AI Engineer Summit in NYC. Our last round of invites is going out soon - apply here - If you are building AI agents or AI eng teams, this will be the single highest-signal conference of the year for you! While the world melts down over DeepSeek, few are talking about the OTHER notable group of former hedge fund traders who pivoted into AI and built a remarkably profitable consumer AI business with a tiny but incredibly cracked engineering team — Chai Research. In short order they have: * Started a Chat AI company well before Noam Shazeer started Character AI, and outlasted his departure. * Crossed 1m DAU in 2.5 years - William updates us on the pod that they've hit 1.4m DAU now, another +40% from a few months ago. Revenue crossed >$22m. * Launched the Chaiverse model crowdsourcing platform - taking 3-4 week A/B testing cycles down to 3-4 hours, and deploying >100 models a week. While they're not paying million dollar salaries, you can tell they're doing pretty well for an 11 person startup. The Chai Recipe: Building infra for rapid evals. Remember how the central thesis of LMArena (formerly LMSYS) is that the only comprehensive way to evaluate LLMs is to let users try them out and pick winners? At the core of Chai is a mobile app that looks like Character AI, but is actually the largest LLM A/B testing arena in the world, specialized in retaining chat users for Chai's use cases (therapy, assistant, roleplay, etc). It's basically what LMArena would be if taken very, very seriously at one company (with $1m in prizes to boot): Chai publishes occasional research on how they think about this, including talks at their Palo Alto office. William expands upon this in today's podcast (34 mins in): Fundamentally, the way I would describe it is when you're building anything in life, you need to be able to evaluate it. And through evaluation, you can iterate, we can look at benchmarks, and we can say the issues with benchmarks and why they may not generalize as well as one would hope in the challenges of working with them. But something that works incredibly well is getting feedback from humans. And so we built this thing where anyone can submit a model to our developer backend, and it gets put in front of 5000 users, and the users can rate it. And we can then have a really accurate ranking of like which models users are finding more engaging or more entertaining. And it gets, you know, it's at this point now, where every day we're able to, I mean, we evaluate between 20 and 50 models, LLMs, every single day, right. So even though we've only got a team of, say, five AI researchers, they're able to iterate a huge quantity of LLMs, right. So our team ships, let's just say minimum 100 LLMs a week is what we're able to iterate through. Now, before that moment in time, we might iterate through three a week, we might, you know, there was a time when even doing like five a month was a challenge, right? By being able to change the feedback loops to the point where it's not, let's launch these three models, let's do an A-B test, let's assign, let's do different cohorts, let's wait 30 days to see what the day 30 retention is, which is the kind of the, if you're doing an app, that's like A-B testing 101 would be, do a 30-day retention test, assign different treatments to different cohorts and come back in 30 days. So that's insanely slow. That's just, it's too slow.
And so we were able to get that 30-day feedback loop all the way down to something like three hours.In Crowdsourcing the leap to Ten Trillion-Parameter AGI, William describes Chai's routing as a recommender system, which makes a lot more sense to us than previous pitches for model routing startups:William is notably counter-consensus in a lot of his AI product principles:* No streaming: Chats appear all at once to allow rejection sampling* No voice: Chai actually beat Character AI to introducing voice - but removed it after finding that it was far from a killer feature.* Blending: “Something that we love to do at Chai is blending, which is, you know, it's the simplest way to think about it is you're going to end up, and you're going to pretty quickly see you've got one model that's really smart, one model that's really funny. How do you get the user an experience that is both smart and funny? Well, just 50% of the requests, you can serve them the smart model, 50% of the requests, you serve them the funny model.” (that's it!)But chief above all is the recommender system.We also referenced Exa CEO Will Bryk's concept of SuperKnowlege:Full Video versionOn YouTube. please like and subscribe!Timestamps* 00:00:04 Introductions and background of William Beauchamp* 00:01:19 Origin story of Chai AI* 00:04:40 Transition from finance to AI* 00:11:36 Initial product development and idea maze for Chai* 00:16:29 User psychology and engagement with AI companions* 00:20:00 Origin of the Chai name* 00:22:01 Comparison with Character AI and funding challenges* 00:25:59 Chai's growth and user numbers* 00:34:53 Key inflection points in Chai's growth* 00:42:10 Multi-modality in AI companions and focus on user-generated content* 00:46:49 Chaiverse developer platform and model evaluation* 00:51:58 Views on AGI and the nature of AI intelligence* 00:57:14 Evaluation methods and human feedback in AI development* 01:02:01 Content creation and user experience in Chai* 01:04:49 Chai Grant program and company culture* 01:07:20 Inference optimization and compute costs* 01:09:37 Rejection sampling and reward models in AI generation* 01:11:48 Closing thoughts and recruitmentTranscriptAlessio [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel, and today we're in the Chai AI office with my usual co-host, Swyx.swyx [00:00:14]: Hey, thanks for having us. It's rare that we get to get out of the office, so thanks for inviting us to your home. We're in the office of Chai with William Beauchamp. Yeah, that's right. You're founder of Chai AI, but previously, I think you're concurrently also running your fund?William [00:00:29]: Yep, so I was simultaneously running an algorithmic trading company, but I fortunately was able to kind of exit from that, I think just in Q3 last year. Yeah, congrats. Yeah, thanks.swyx [00:00:43]: So Chai has always been on my radar because, well, first of all, you do a lot of advertising, I guess, in the Bay Area, so it's working. Yep. And second of all, the reason I reached out to a mutual friend, Joyce, was because I'm just generally interested in the... ...consumer AI space, chat platforms in general. I think there's a lot of inference insights that we can get from that, as well as human psychology insights, kind of a weird blend of the two. And we also share a bit of a history as former finance people crossing over. 
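A minimal sketch of the "blending" idea from the show notes above: route each incoming request at random to one of several models so the blended experience inherits qualities of both. The model names, the 50/50 split, and the stubbed reply are illustrative placeholders, not Chai's actual serving code.

```python
# Minimal sketch of "blending": serve some fraction of requests from one
# model and the rest from another, so users experience a mix of both.
# Model names, weights, and the stubbed reply are illustrative only.

import random

MODEL_WEIGHTS = {
    "smart-model": 0.5,   # half of the requests go to the smarter model
    "funny-model": 0.5,   # the other half go to the funnier model
}

def pick_model(weights: dict[str, float]) -> str:
    names = list(weights)
    return random.choices(names, weights=[weights[n] for n in names], k=1)[0]

def handle_request(user_message: str) -> str:
    model = pick_model(MODEL_WEIGHTS)
    # A real system would call the chosen LLM here; stubbed for the sketch.
    return f"[{model}] reply to: {user_message}"

for _ in range(3):
    print(handle_request("tell me something interesting"))
```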
I guess we can just kind of start it off with the origin story of Chai.William [00:01:19]: Why decide working on a consumer AI platform rather than B2B SaaS? So just quickly touching on the background in finance. Sure. Originally, I'm from... I'm from the UK, born in London. And I was fortunate enough to go study economics at Cambridge. And I graduated in 2012. And at that time, everyone in the UK and everyone on my course, HFT, quant trading was really the big thing. It was like the big wave that was happening. So there was a lot of opportunity in that space. And throughout college, I'd sort of played poker. So I'd, you know, I dabbled as a professional poker player. And I was able to accumulate this sort of, you know, say $100,000 through playing poker. And at the time, as my friends would go work at companies like ChangeStreet or Citadel, I kind of did the maths. And I just thought, well, maybe if I traded my own capital, I'd probably come out ahead. I'd make more money than just going to work at ChangeStreet.swyx [00:02:20]: With 100k base as capital?William [00:02:22]: Yes, yes. That's not a lot. Well, it depends what strategies you're doing. And, you know, there is an advantage. There's an advantage to being small, right? Because there are, if you have a 10... Strategies that don't work in size. Exactly, exactly. So if you have a fund of $10 million, if you find a little anomaly in the market that you might be able to make 100k a year from, that's a 1% return on your 10 million fund. If your fund is 100k, that's 100% return, right? So being small, in some sense, was an advantage. So started off, and the, taught myself Python, and machine learning was like the big thing as well. Machine learning had really, it was the first, you know, big time machine learning was being used for image recognition, neural networks come out, you get dropout. And, you know, so this, this was the big thing that's going on at the time. So I probably spent my first three years out of Cambridge, just building neural networks, building random forests to try and predict asset prices, right, and then trade that using my own money. And that went well. And, you know, if you if you start something, and it goes well, you You try and hire more people. And the first people that came to mind was the talented people I went to college with. And so I hired some friends. And that went well and hired some more. And eventually, I kind of ran out of friends to hire. And so that was when I formed the company. And from that point on, we had our ups and we had our downs. And that was a whole long story and journey in itself. But after doing that for about eight or nine years, on my 30th birthday, which was four years ago now, I kind of took a step back to just evaluate my life, right? This is what one does when one turns 30. You know, I just heard it. I hear you. And, you know, I looked at my 20s and I loved it. It was a really special time. I was really lucky and fortunate to have worked with this amazing team, been successful, had a lot of hard times. And through the hard times, learned wisdom and then a lot of success and, you know, was able to enjoy it. And so the company was making about five million pounds a year. And it was just me and a team of, say, 15, like, Oxford and Cambridge educated mathematicians and physicists. It was like the real dream that you'd have if you wanted to start a quant trading firm. It was like...swyx [00:04:40]: Your own, all your own money?William [00:04:41]: Yeah, exactly. 
It was all the team's own money. We had no customers complaining to us about issues. There's no investors, you know, saying, you know, they don't like the risk that we're taking. We could. We could really run the thing exactly as we wanted it. It's like Susquehanna or like RenTec. Yeah, exactly. Yeah. And they're the companies that we would kind of look towards as we were building that thing out. But on my 30th birthday, I look and I say, OK, great. This thing is making as much money as kind of anyone would really need. And I thought, well, what's going to happen if we keep going in this direction? And it was clear that we would never have a kind of a big, big impact on the world. We can enrich ourselves. We can make really good money. Everyone on the team would be paid very, very well. Presumably, I can make enough money to buy a yacht or something. But this stuff wasn't that important to me. And so I felt a sort of obligation that if you have this much talent and if you have a talented team, especially as a founder, you want to be putting all that talent towards a good use. I looked at the time at like getting into crypto, and I had a really strong view on crypto, which was that as far as a gambling device goes, this is like the most fun form of gambling ever invented, super fun. And as a way to evade monetary regulations and banking restrictions, I think it's also absolutely amazing. So it has two like killer use cases, not so much banking the unbanked. But everything else to do with like the blockchain and, you know, web 3.0 or whatever it was, that didn't really make much sense. And so instead of going into crypto, which I thought, even if I was successful, I'd end up in a lot of trouble, I thought maybe it'd be better to build something that governments wouldn't have a problem with. I knew that LLMs were like a thing. I think OpenAI hadn't released GPT-3 yet, but they'd said GPT-3 is so powerful, we can't release it to the world, or something. Was it GPT-2? And then I started interacting with, I think Google had open sourced some language models. They weren't necessarily LLMs, but they were. But yeah, exactly. So I was able to play around with those, but nowadays so many people have interacted with ChatGPT, they get it, but it's like the first time you, you can just talk to a computer and it talks back. It's kind of a special moment and you know, everyone who's done that goes like, wow, this is how it should be. Right. It should be like, rather than having to type on Google and search, you should just be able to ask Google a question. When I saw that, I read the literature, I kind of came across the scaling laws, and I think even four years ago, all the pieces of the puzzle were there, right? Google had done this amazing research and published, you know, a lot of it. OpenAI was still open. And so they'd published a lot of their research. And so you really could be fully informed on, on the state of AI and where it was going. And so at that point I was confident enough, it was worth a shot. I think LLMs are going to be the next big thing. And so that's the thing I want to be building in, in that space. And I thought what's the most impactful product I can possibly build. And I thought it should be a platform. So I myself love platforms. I think they're fantastic because they open up an ecosystem where anyone can contribute to it. Right.
So if you think of a platform like a YouTube, instead of it being like a Hollywood situation where you have to, if you want to make a TV show, you have to convince Disney to give you the money to produce it instead, anyone in the world can post any content they want to YouTube. And if people want to view it, the algorithm is going to promote it. Nowadays. You can look at creators like Mr. Beast or Joe Rogan. They would have never have had that opportunity unless it was for this platform. Other ones like Twitter's a great one, right? But I would consider Wikipedia to be a platform where instead of the Britannica encyclopedia, which is this, it's like a monolithic, you get all the, the researchers together, you get all the data together and you combine it in this, in this one monolithic source. Instead. You have this distributed thing. You can say anyone can host their content on Wikipedia. Anyone can contribute to it. And anyone can maybe their contribution is they delete stuff. When I was hearing like the kind of the Sam Altman and kind of the, the Muskian perspective of AI, it was a very kind of monolithic thing. It was all about AI is basically a single thing, which is intelligence. Yeah. Yeah. The more intelligent, the more compute, the more intelligent, and the more and better AI researchers, the more intelligent, right? They would speak about it as a kind of erased, like who can get the most data, the most compute and the most researchers. And that would end up with the most intelligent AI. But I didn't believe in any of that. I thought that's like the total, like I thought that perspective is the perspective of someone who's never actually done machine learning. Because with machine learning, first of all, you see that the performance of the models follows an S curve. So it's not like it just goes off to infinity, right? And the, the S curve, it kind of plateaus around human level performance. And you can look at all the, all the machine learning that was going on in the 2010s, everything kind of plateaued around the human level performance. And we can think about the self-driving car promises, you know, how Elon Musk kept saying the self-driving car is going to happen next year, it's going to happen next, next year. Or you can look at the image recognition, the speech recognition. You can look at. All of these things, there was almost nothing that went superhuman, except for something like AlphaGo. And we can speak about why AlphaGo was able to go like super superhuman. So I thought the most likely thing was going to be this, I thought it's not going to be a monolithic thing. That's like an encyclopedia Britannica. I thought it must be a distributed thing. And I actually liked to look at the world of finance for what I think a mature machine learning ecosystem would look like. So, yeah. So finance is a machine learning ecosystem because all of these quant trading firms are running machine learning algorithms, but they're running it on a centralized platform like a marketplace. And it's not the case that there's one giant quant trading company of all the data and all the quant researchers and all the algorithms and compute, but instead they all specialize. So one will specialize on high frequency training. Another will specialize on mid frequency. Another one will specialize on equity. Another one will specialize. And I thought that's the way the world works. That's how it is. And so there must exist a platform where a small team can produce an AI for a unique purpose. 
And they can iterate and build the best thing for that, right? And so that was the vision for Chai. So we wanted to build a platform for LLMs.Alessio [00:11:36]: That's kind of the maybe inside versus contrarian view that led you to start the company. Yeah. And then what was maybe the initial idea maze? Because if somebody told you that was the Hugging Face founding story, people might believe it. It's kind of like a similar ethos behind it. How did you land on the product feature today? And maybe what were some of the ideas that you discarded that initially you thought about?William [00:11:58]: So the first thing we built, it was fundamentally an API. So nowadays people would describe it as like agents, right? But anyone could write a Python script. They could submit it to an API. They could send it to the Chai backend and we would then host this code and execute it. So that's like the developer side of the platform. On their Python script, the interface was essentially text in and text out. An example would be the very first bot that I created. I think it was a Reddit news bot. And so it would first, it would pull the popular news. Then it would prompt whatever, like I just use some external API for like Burr or GPT-2 or whatever. Like it was a very, very small thing. And then the user could talk to it. So you could say to the bot, hi bot, what's the news today? And it would say, this is the top stories. And you could chat with it. Now four years later, that's like perplexity or something. That's like the, right? But back then the models were first of all, like really, really dumb. You know, they had an IQ of like a four year old. And users, there really wasn't any demand or any PMF for interacting with the news. So then I was like, okay. Um. So let's make another one. And I made a bot, which was like, you could talk to it about a recipe. So you could say, I'm making eggs. Like I've got eggs in my fridge. What should I cook? And it'll say, you should make an omelet. Right. There was no PMF for that. No one used it. And so I just kept creating bots. And so every single night after work, I'd be like, okay, I like, we have AI, we have this platform. I can create any text in textile sort of agent and put it on the platform. And so we just create stuff night after night. And then all the coders I knew, I would say, yeah, this is what we're going to do. And then I would say to them, look, there's this platform. You can create any like chat AI. You should put it on. And you know, everyone's like, well, chatbots are super lame. We want absolutely nothing to do with your chatbot app. No one who knew Python wanted to build on it. I'm like trying to build all these bots and no consumers want to talk to any of them. And then my sister who at the time was like just finishing college or something, I said to her, I was like, if you want to learn Python, you should just submit a bot for my platform. And she, she built a therapy for me. And I was like, okay, cool. I'm going to build a therapist bot. And then the next day I checked the performance of the app and I'm like, oh my God, we've got 20 active users. And they spent, they spent like an average of 20 minutes on the app. I was like, oh my God, what, what bot were they speaking to for an average of 20 minutes? And I looked and it was the therapist bot. And I went, oh, this is where the PMF is. There was no demand for, for recipe help. There was no demand for news. 
There was no demand for dad jokes or pub quiz or fun facts or what they wanted was they wanted the therapist bot. the time I kind of reflected on that and I thought, well, if I want to consume news, the most fun thing, most fun way to consume news is like Twitter. It's not like the value of there being a back and forth, wasn't that high. Right. And I thought if I need help with a recipe, I actually just go like the New York times has a good recipe section, right? It's not actually that hard. And so I just thought the thing that AI is 10 X better at is a sort of a conversation right. That's not intrinsically informative, but it's more about an opportunity. You can say whatever you want. You're not going to get judged. If it's 3am, you don't have to wait for your friend to text back. It's like, it's immediate. They're going to reply immediately. You can say whatever you want. It's judgment-free and it's much more like a playground. It's much more like a fun experience. And you could see that if the AI gave a person a compliment, they would love it. It's much easier to get the AI to give you a compliment than a human. From that day on, I said, okay, I get it. Humans want to speak to like humans or human like entities and they want to have fun. And that was when I started to look less at platforms like Google. And I started to look more at platforms like Instagram. And I was trying to think about why do people use Instagram? And I could see that I think Chai was, was filling the same desire or the same drive. If you go on Instagram, typically you want to look at the faces of other humans, or you want to hear about other people's lives. So if it's like the rock is making himself pancakes on a cheese plate. You kind of feel a little bit like you're the rock's friend, or you're like having pancakes with him or something, right? But if you do it too much, you feel like you're sad and like a lonely person, but with AI, you can talk to it and tell it stories and tell you stories, and you can play with it for as long as you want. And you don't feel like you're like a sad, lonely person. You feel like you actually have a friend.Alessio [00:16:29]: And what, why is that? Do you have any insight on that from using it?William [00:16:33]: I think it's just the human psychology. I think it's just the idea that, with old school social media. You're just consuming passively, right? So you'll just swipe. If I'm watching TikTok, just like swipe and swipe and swipe. And even though I'm getting the dopamine of like watching an engaging video, there's this other thing that's building my head, which is like, I'm feeling lazier and lazier and lazier. And after a certain period of time, I'm like, man, I just wasted 40 minutes. I achieved nothing. But with AI, because you're interacting, you feel like you're, it's not like work, but you feel like you're participating and contributing to the thing. You don't feel like you're just. Consuming. So you don't have a sense of remorse basically. And you know, I think on the whole people, the way people talk about, try and interact with the AI, they speak about it in an incredibly positive sense. Like we get people who say they have eating disorders saying that the AI helps them with their eating disorders. People who say they're depressed, it helps them through like the rough patches. So I think there's something intrinsically healthy about interacting that TikTok and Instagram and YouTube doesn't quite tick. 
From that point on, it was about building more and more kind of like human centric AI for people to interact with. And I was like, okay, let's make a Kanye West bot, right? And then no one wanted to talk to the Kanye West bot. And I was like, ah, who's like a cool persona for teenagers to want to interact with. And I was like, I was trying to find the influencers and stuff like that, but no one cared. Like they didn't want to interact with the, yeah. And instead it was really just the special moment was when we said the realization that developers and software engineers aren't interested in building this sort of AI, but the consumers are right. And rather than me trying to guess every day, like what's the right bot to submit to the platform, why don't we just create the tools for the users to build it themselves? And so nowadays this is like the most obvious thing in the world, but when Chai first did it, it was not an obvious thing at all. Right. Right. So we took the API for let's just say it was, I think it was GPTJ, which was this 6 billion parameter open source transformer style LLM. We took GPTJ. We let users create the prompt. We let users select the image and we let users choose the name. And then that was the bot. And through that, they could shape the experience, right? So if they said this bot's going to be really mean, and it's going to be called like bully in the playground, right? That was like a whole category that I never would have guessed. Right. People love to fight. They love to have a disagreement, right? And then they would create, there'd be all these romantic archetypes that I didn't know existed. And so as the users could create the content that they wanted, that was when Chai was able to, to get this huge variety of content and rather than appealing to, you know, 1% of the population that I'd figured out what they wanted, you could appeal to a much, much broader thing. And so from that moment on, it was very, very crystal clear. It's like Chai, just as Instagram is this social media platform that lets people create images and upload images, videos and upload that, Chai was really about how can we let the users create this experience in AI and then share it and interact and search. So it's really, you know, I say it's like a platform for social AI.Alessio [00:20:00]: Where did the Chai name come from? Because you started the same path. I was like, is it character AI shortened? You started at the same time, so I was curious. The UK origin was like the second, the Chai.William [00:20:15]: We started way before character AI. And there's an interesting story that Chai's numbers were very, very strong, right? So I think in even 20, I think late 2022, was it late 2022 or maybe early 2023? Chai was like the number one AI app in the app store. So we would have something like 100,000 daily active users. And then one day we kind of saw there was this website. And we were like, oh, this website looks just like Chai. And it was the character AI website. And I think that nowadays it's, I think it's much more common knowledge that when they left Google with the funding, I think they knew what was the most trending, the number one app. And I think they sort of built that. Oh, you found the people.swyx [00:21:03]: You found the PMF for them.William [00:21:04]: We found the PMF for them. Exactly. Yeah. So I worked a year very, very hard. 
And then they, and then that was when I learned a lesson, which is that if you're VC backed and if, you know, so Chai, we'd kind of ran, we'd got to this point, I was the only person who'd invested. I'd invested maybe 2 million pounds in the business. And you know, from that, we were able to build this thing, get to say a hundred thousand daily active users. And then when character AI came along, the first version, we sort of laughed. We were like, oh man, this thing sucks. Like they don't know what they're building. They're building the wrong thing anyway, but then I saw, oh, they've raised a hundred million dollars. Oh, they've raised another hundred million dollars. And then our users started saying, oh guys, your AI sucks. Cause we were serving a 6 billion parameter model, right? How big was the model that character AI could afford to serve, right? So we would be spending, let's say we would spend a dollar per per user, right? Over the, the, you know, the entire lifetime.swyx [00:22:01]: A dollar per session, per chat, per month? No, no, no, no.William [00:22:04]: Let's say we'd get over the course of the year, we'd have a million users and we'd spend a million dollars on the AI throughout the year. Right. Like aggregated. Exactly. Exactly. Right. They could spend a hundred times that. So people would say, why is your AI much dumber than character AIs? And then I was like, oh, okay, I get it. This is like the Silicon Valley style, um, hyper scale business. And so, yeah, we moved to Silicon Valley and, uh, got some funding and iterated and built the flywheels. And, um, yeah, I, I'm very proud that we were able to compete with that. Right. So, and I think the reason we were able to do it was just customer obsession. And it's similar, I guess, to how deep seek have been able to produce such a compelling model when compared to someone like an open AI, right? So deep seek, you know, their latest, um, V2, yeah, they claim to have spent 5 million training it.swyx [00:22:57]: It may be a bit more, but, um, like, why are you making it? Why are you making such a big deal out of this? Yeah. There's an agenda there. Yeah. You brought up deep seek. So we have to ask you had a call with them.William [00:23:07]: We did. We did. We did. Um, let me think what to say about that. I think for one, they have an amazing story, right? So their background is again in finance.swyx [00:23:16]: They're the Chinese version of you. Exactly.William [00:23:18]: Well, there's a lot of similarities. Yes. Yes. I have a great affinity for companies which are like, um, founder led, customer obsessed and just try and build something great. And I think what deep seek have achieved. There's quite special is they've got this amazing inference engine. They've been able to reduce the size of the KV cash significantly. And then by being able to do that, they're able to significantly reduce their inference costs. And I think with kind of with AI, people get really focused on like the kind of the foundation model or like the model itself. And they sort of don't pay much attention to the inference. To give you an example with Chai, let's say a typical user session is 90 minutes, which is like, you know, is very, very long for comparison. Let's say the average session length on TikTok is 70 minutes. So people are spending a lot of time. And in that time they're able to send say 150 messages. That's a lot of completions, right? 
It's quite different from an OpenAI scenario where people might come in, they'll have a particular question in mind. And they'll ask like one question. And a few follow up questions, right? So because they're consuming, say 30 times as many requests for a chat, or a conversational experience, you've got to figure out how to get the right balance between the cost of that and the quality. And so, you know, I think with AI, it's always been the case that if you want a better experience, you can throw compute at the problem, right? So if you want a better model, you can just make it bigger. If you want it to remember better, give it a longer context. And now, what OpenAI is doing to great fanfare is with rejection sampling, you can generate many candidates, right? And then with some sort of reward model or some sort of scoring system, you can serve the most promising of these many candidates. And so that's kind of scaling up on the inference time compute side of things. And so for us, it doesn't make sense to think of AI as just the absolute performance. So. But what we're seeing, it's like the MMLU score or the, you know, any of these benchmarks that people like to look at, if you just get that score, it doesn't really tell you anything. Because it's really like progress is made by improving the performance per dollar. And so I think that's an area where DeepSeek have been able to perform very, very well, surprisingly so. And so I'm very interested in what Llama 4 is going to look like. And if they're able to sort of match what DeepSeek have been able to achieve with this performance per dollar gain.Alessio [00:25:59]: Before we go into the inference, some of the deeper stuff, can you give people an overview of like some of the numbers? So I think last I checked, you have like 1.4 million daily active now. It's like over $22 million of revenue. So it's quite a business.William [00:26:12]: Yeah, I think we grew by a factor of, you know, users grew by a factor of three last year. Revenue over doubled. You know, it's very exciting. We're competing with some really big, really well funded companies. Character AI got this, I think it was almost a $3 billion valuation. And they have 5 million DAU is a number that I last heard. Talkie, which is a Chinese-built app owned by a company called MiniMax. They're incredibly well funded. And these companies didn't grow by a factor of three last year. Right. And so when you've got this company and this team that's able to keep building something that gets users excited, and they want to tell their friend about it, and then they want to come and they want to stick on the platform. I think that's very special. And so last year was a great year for the team. And yeah, I think the numbers reflect the hard work that we put in. And then fundamentally, the quality of the app, the quality of the content, the quality of the AI is the quality of the experience that you have. You actually published your DAU growth chart, which is unusual. And I see some inflections. Like, it's not just a straight line. There's some things that actually inflect. Yes. What were the big ones? Cool. That's a great, great, great question. Let me think of a good answer. I'm basically looking to annotate this chart, which doesn't have annotations on it. Cool.
The first thing I would say is that the most important thing to know about success is that success is born out of failures; it's through failures that we learn. If you think something's a good idea, you do it and it works, great, but you didn't actually learn anything, because everything went exactly as you imagined. But if you have an idea you think is going to be good, you try it, and it fails, there's a gap between reality and expectation, and that's an opportunity to learn. The flat periods, that's us learning. The up periods, that's us reaping the rewards of that. So for the growth chart of 2024, the first thing that really put a dent in our growth was our backend. We'd just reached this scale. From day one we'd built on top of GCP, Google's cloud platform, and they were fantastic. We used them when we had one daily active user, and they worked pretty well all the way up till we had about 500,000. It was never the cheapest, but from an engineering perspective, man, that thing scaled insanely well. Like, not Vertex? Not Vertex. Like GKE, that kind of stuff? We used Firebase. I'm pretty sure we're the biggest user ever on Firebase. That's expensive. Yeah, we had calls with engineers, and they're like, we wouldn't recommend using this product beyond this point, and you're 3x over that. So we pushed Google to their absolute limits. It was fantastic for us, because we could focus on the AI, on just adding as much value as possible. But what happened was, after 500,000, the way we were using it, it just wouldn't scale any further. So we had a really, really painful, at least three-month period as we migrated between different services, figuring out which requests to keep on Firebase and which ones to move onto something else, making mistakes and learning things the hard way. After about three months we got that right, so we could then scale to 1.5 million DAU without any further issues from GCP. But what happens is, if you have an outage, new users who come to your app experience a dysfunctional app, and then they exit. And the key metrics that the app stores track are things like retention rates, money spent, and the star rating they give you. In the app store. In the app store, yeah. Tyranny. So if you're ranked top 50 in entertainment, you acquire a certain rate of users organically. If they come in and have a bad experience, it tanks where you're positioned in the algorithm, and then it can take a long time to earn your way back up, at least if you want to do it organically. If you throw money at it, you can jump to the top, and I could talk about that. But broadly speaking, if we look at 2024, the first kink in the graph was outages due to hitting 500k DAU. The backend didn't want to scale past that, so we just had to do the engineering and build through it. Okay, so we built through that, and then we get a little bit of growth, and that's feeling a little bit good. The next thing, I'm not going to lie, I have a feeling it was when Character AI got... I was thinking.
I think so. I think... So the Character AI team fundamentally got acquired by Google. And I don't know what they changed in their business, I don't know if they dialed down that ad spend. The product didn't change, right? The product is just what it is. I don't think so. Yeah, I think the product is what it is; it's like maintenance mode. Yes. And I think the issue, and some people may think this is an obvious fact, is that running a business can be very competitive, because other businesses can see what you're doing and they can imitate you. And then there's this question: if you've got one company that's spending $100,000 a day on advertising and another company that's spending zero, and you consider market share and the new users entering the market, the guy spending $100,000 a day is going to get 90% of those new users. So I have a suspicion that when the founders of Character AI left, they dialed down their spending on user acquisition, and I think that gave oxygen to the other apps. And so Chai was able to start growing again in a really healthy fashion. That's kind of the second thing. I think the third thing is we've really built a great data flywheel. The AI team sort of perfected their flywheel, I would say, at the end of Q2, and I could speak about that at length. But fundamentally, when you're building anything in life, you need to be able to evaluate it, and through evaluation you can iterate. We can look at benchmarks, and we can talk about the issues with benchmarks and why they may not generalize as well as one would hope, and the challenges of working with them. But something that works incredibly well is getting feedback from humans. So we built this thing where anyone can submit a model to our developer backend, it gets put in front of 5,000 users, and the users can rate it. We can then get a really accurate ranking of which models users are finding more engaging or more entertaining. It's at the point now where we evaluate between 20 and 50 models, LLMs, every single day. So even though we've only got a team of, say, five AI researchers, they're able to iterate through a huge quantity of LLMs. Our team ships, let's say, a minimum of 100 LLMs a week that we're able to iterate through. Before that moment in time, we might iterate through three a week, and there was a time when even doing five a month was a challenge. The old feedback loop was: launch these three models, do an A/B test, assign different cohorts, wait 30 days to see what the day-30 retention is. If you're doing an app, that's A/B testing 101: do a 30-day retention test, assign different treatments to different cohorts, and come back in 30 days. That's insanely slow, just too slow. And so we were able to get that 30-day feedback loop all the way down to something like three hours. And when we did that, we could really, really perfect techniques like DPO, fine-tuning, prompt engineering, blending, rejection sampling, training a reward model, really successfully, like boom, boom, boom. And so in Q3 and Q4, the amount of AI improvement we got was astounding.
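One of the techniques William lists here, DPO, has a compact objective that is easy to write down. Below is a minimal sketch of the standard published DPO loss for a single preference pair; it is not Chai's internal training code, and the example log-probabilities are invented purely for illustration.

```python
import math

def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """Direct Preference Optimization loss for one preference pair.

    logp_* are summed token log-probabilities of the chosen / rejected reply
    under the policy being trained; ref_logp_* are the same quantities under
    a frozen reference model. Minimizing this pushes the policy to prefer
    the reply that human raters preferred.
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log(sigmoid(margin)), written via log1p for readability
    return math.log1p(math.exp(-margin))

# Example: the policy already slightly prefers the chosen reply vs. the reference.
print(dpo_loss(-42.0, -47.0, -44.0, -46.0))
```

The faster the preference data arrives (three hours instead of thirty days, in the story above), the more of these optimization cycles a small team can run per week.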
It was getting to the point where I thought, how much more edge is there to be had here? But the team just could keep going and going and going. That was number three for the inflection points.
swyx [00:34:53]: There's a fourth?
William [00:34:54]: The important thing about the third one is, if you go on our Reddit or you talk to users of the AI, there's a clear date, somewhere in October or so, when the users flipped. Before October, the users would say, for the most part, that Character AI is better than you. From October onwards, they would say, wow, you guys are better than Character AI. And that was a really clear positive signal that we'd sort of done it. And you can't cheat consumers. You can't trick them, you can't b******t them. They know, right? If you're going to spend 90 minutes on a platform, and with apps the barriers to switching are pretty low: you can try Character AI for a day, if you get bored you can try Chai, if you get bored of Chai you can go back to Character. So the loyalty is not strong. What keeps them on the app is the experience. If you deliver a better experience, they're going to stay, and they can tell. The fourth one was that we were fortunate enough to get this hire. We had hired one really talented engineer, and then they said, at my last company we had a head of growth, he was really, really good, he was the head of growth for ByteDance for two years, would you like to speak to him? And I was like, yes, yes, I think I would. So I spoke to him, and he just blew me away with what he knew about user acquisition; he knew as much about that as I know about AI. It was like a 3D chess sort of thing.
swyx [00:36:21]: ByteDance as in TikTok US?
William [00:36:26]: Yes. Not ByteDance as in other stuff. He was interviewing us as we were interviewing him. Right. And so, picking up options. Yeah, exactly. And so he was looking at our metrics, and I saw him get really excited when he said, guys, you've got a million daily active users and you've done no advertising. I said, correct. And he was like, that's unheard of, I've never heard of anyone doing that. Then he started looking at our metrics and said, if you've got all of this organically, if you start spending money, this is going to be very exciting. I was like, let's give it a go. So he came in, and we've just started ramping up user acquisition. That looks like: we started spending $20,000 a day, and it looked very promising at 20,000. Right now we're spending $40,000 a day on user acquisition. That's still only half of what Character AI or Talkie may be spending. But from that, we went from growing at a rate of maybe 2x a year to growing at a rate of 3x a year. So I'm evolving more and more towards the Silicon Valley-style hyper growth: you build something decent, and then you can
swyx [00:37:33]: slap on a huge...
You did the important thing, you did the product first.
William [00:37:36]: Of course, but then you can slap on the rocket or the jet engine or something, which is just: you pour in as much cash as you can, you buy a lot of ads, and your growth is faster.
swyx [00:37:48]: I'm just kind of curious what's working right now versus what surprisingly
William [00:37:52]: doesn't work. Oh, there's a long, long list of surprising stuff that doesn't work. The most surprising thing about what doesn't work is that almost everything doesn't work. That's what's surprising. And I'll give you an example. A year and a half ago we were super excited by audio. I was like, audio is going to be the next killer feature, we have to get it in the app, and I want to be the first. Everything Chai does, I want us to be the first. We may not be the company that's strongest at execution, but we can always be the
swyx [00:38:22]: most innovative. Interesting. You're pretty strong at execution.
William [00:38:26]: We're much stronger now. A lot of the reason we're here is because we were first. If we launched today, it'd be so hard to get the traction: to get the flywheel, to get the users, to build a product people are excited about. If you're first, people are naturally excited about it. But if you're fifth or tenth, man, you've got to be
swyx [00:38:46]: insanely good at execution. So you were first with voice? We were first. We were first. I only know
William [00:38:51]: when Character launched voice. They launched it, I think, at least nine months after us. But the team worked so hard for it. At the time we did it, latency was a huge problem, cost was a huge problem, and getting the right quality of voice was a huge problem. Then there's the user interface and getting the right user experience, because you don't just want it to start blurting out; you want to be able to activate it, but without having to press a button every single time. There's a lot that goes into getting a really smooth audio experience. So we went ahead, we invested the three months, we built it all. And then when we did the A/B test, there was no change in any of the numbers. And I was like, this can't be right, there must be a bug. We spent a week just checking everything, checking again and again. And it turned out the users just did not care. Only 10 or 15% of users even clicked the button to engage the audio, and they would only use it for 10 or 15% of the time. So if you do the math, if it's something that one in seven people use for one seventh of their time, you've changed about 2% of the experience. So even if that 2% of the time is insanely good, it doesn't translate to much when you look at retention, engagement and monetization rates. So audio did not have a big impact. I'm pretty big on audio. Yeah, I like it too. But with a lot of the stuff which I do, you can have a theory, and then reality resists it. Yeah, exactly, exactly. So I think if you want to make audio work, it has to be a unique, compelling, exciting experience that they can't have anywhere else.
swyx [00:40:37]: It could be your models, which just weren't good enough.
William [00:40:39]: No, no, no, they were great.
Oh yeah, they were very good. It was kind of like, you know, if you listen to an Audible or Kindle audiobook, you just hear this voice, and you don't go, wow, this is special. It's a convenience thing. But the idea is that if Chai is the only platform where it works, like, let's say you have a MrBeast, and YouTube is the only platform where you can watch a MrBeast video, and it's the most engaging, fun video that you want to watch, you'll go to YouTube. So for audio, you can't just put the audio on there and have people go, oh yeah, it's 2% better, or have 5% of users think it's 20% better. It has to be something that the majority of people, for the majority of the experience, go, wow, this is a big deal. Those are the features you need to be shipping. If it's not going to appeal to the majority of people, for the majority of the experience, and it's not a big deal, it's not going to move you. Cool. So you killed it. I don't see it anymore. Yep. So I love this. It's kind of cheesy, I guess, but the longer I've been working at Chai, and I think the team agrees with this, all the platitudes, or at least I thought they were platitudes, that you get from the Steve Jobs school, like build something insanely great, or be maniacally focused, or the most important thing is choosing what not to work on, all of these lessons just turn out to be painfully true. So now everything I say, I'm either quoting Steve Jobs or Zuckerberg. I'm like, guys, move fast and break things.
swyx [00:42:10]: You've jumped the Apollo to cool it now.
William [00:42:12]: Yeah, it's just so... everything they said is so, so true. The turtleneck. Yeah, yeah, yeah. Everything is so true.
swyx [00:42:18]: This last question on my side, and then I want to pass it to Alessio, is on multi-modality in general. This actually comes from Justine Moore from a16z, who's a friend of ours. A lot of people are trying to do voice, image and video for AI companions. Yes. You just said voice didn't work. Yep. What would make you revisit?
William [00:42:36]: So Steve Jobs was very, very clear on this. There's a habit among engineers: once they've got some cool technology, they want to find a way to package up the cool technology and sell it to consumers. That does not work. So you're free to try and build a startup where you've got your cool tech and you want to find someone to sell it to; that's not what we do at Chai. At Chai, we start with the consumer. What does the consumer want? What is their problem? And how do we solve it? So right now, the number one problem for the users is not the audio. It's not the image generation either. The number one problem for users in AI is this: all the AI is being generated by middle-aged men in Silicon Valley. That's all the content. You're interacting with this AI, you're speaking to it for 90 minutes on average, and it's being trained by middle-aged men. They're the ones out there deciding, oh, what should the AI say in this situation? What's funny? What's cool? What's boring? What's entertaining? That's not the way it should be. The way it should be is that the users should be creating the AI, right?
And so the way I speak about it is this: at Chai, we have this AI engine which sits atop a thin layer of UGC. That thin layer of UGC is absolutely essential, but it's just prompts. It's just an image, it's just a name. It's like we've done 1% of what we could do. So we need to keep thickening up that layer of UGC. It must be the case that the users can train the AI, and if reinforcement learning is powerful and important, they have to be able to do that. So it's got to be the case, I say to the team, that just as MrBeast is able to spend 100 million a year, or whatever it is, on his production company, with a team building the content which he then shares on the YouTube platform, until there's a team that's earning 100 million a year, or spending 100 million a year, on the content they're producing for the Chai platform, we're not finished. So that's the problem. That's what we're excited to build. And getting too caught up in the tech, I think, is a fool's errand. It does not work.
Alessio [00:44:52]: As an aside, I saw the Beast Games thing on Amazon Prime. It's not doing well, and I'm
swyx [00:44:56]: curious. It's kind of like, I mean, the audience rating is high. The Rotten Tomatoes score sucks, but the audience rating is high.
Alessio [00:45:02]: But it's not like in the top 10. I saw it dropped off of the... Oh, okay. Yeah, that one I don't know. I'm curious because it's kind of similar content, but a different platform. And then going back to some of what you were saying: people come to Chai
William [00:45:13]: expecting some type of content. Yeah, I think something that's interesting to discuss is moats. What is the moat? If you look at a platform like YouTube, the moat, I think, is really in the ecosystem. And the ecosystem is comprised of the content creators, the users, the consumers, and then the algorithms. This creates a sort of flywheel where the algorithms are able to be trained on the users and the users' data, and the recommender systems can then feed information to the content creators. So MrBeast knows which thumbnail does best, he knows the first 10 seconds of the video have to be a particular way, and so his content is super optimized for the YouTube platform. That's why it doesn't do well on Amazon. If he wants to do well on Amazon... how many videos has he created on the YouTube platform? Thousands, tens of thousands, I guess. He needs to get those iterations in on Amazon. So at Chai, I think it's all about how we can get the most compelling, rich user-generated content, stick that on top of the AI engine and the recommender systems, such that we get this beautiful data flywheel: more users, better recommendations, more creators, more content, more users.
Alessio [00:46:34]: You mentioned the algorithm. You have this idea of the Chaiverse on Chai, and you have your own kind of LMSYS-like Elo system. What are the things that your models optimize for, that your users optimize for? And maybe talk about how you built it, how people submit models.
William [00:46:49]: So Chaiverse is what I would describe as a developer platform. More often when we're speaking about Chai, we're thinking about the Chai app.
And the Chai app is really this product for consumers. Consumers can come on the Chai app, they can interact with our AI, and they can interact with other people's UGC. And it's really just these kinds of bots, a thin layer of UGC. Our mission is not to have just a very thin layer of UGC; our mission is to have as much UGC as possible. I don't want only people at Chai training the AI. I want people, not middle-aged men, building the AI. I want everyone building the AI, as many people building the AI as possible. So what we built was Chaiverse, and Chaiverse is kind of like a prototype, is the way to think about it. It started with this observation: how many models get submitted to Hugging Face a day? It's hundreds, right? There are hundreds of LLMs submitted each day. Now consider what it takes to build an LLM. It takes a lot of work, actually. Someone devoted several hours of compute, several hours of their time, prepared a dataset, launched it, ran it, evaluated it, submitted it. So there's a lot of work going into that. So what we did was we said, why can't we host their models for them and serve them to users? And what would that look like? The first issue is, how do you know if a model is good or not? We don't want to serve users the crappy models, right? So I love the LMSYS style. I think it's really cool, really simple, a very intuitive thing: you simply present the users with two completions. You say, look, this is from model A, this is from model B, which is better? So if someone submits a model to Chaiverse, what we do is spin up a GPU, download the model, host that model on the GPU, and start routing traffic to it. We think it takes about 5,000 completions to get an accurate signal; that's roughly what LMSYS does. And from that, we're able to get an accurate ranking of which models people are finding entertaining and which models are not. If you look at the bottom 80%, they'll suck; you can just disregard them. Then when you get to the top 20%, you know you've got a decent model, but you can break it down into more nuance. There might be one that's really descriptive, one that's got a lot of personality to it, one that's really illogical. Then the question is, well, what do you do with these top models? From there, you can do more sophisticated things. You can try a routing approach, where for a given user request you try to predict which of these N models the user will enjoy the most. That turns out to be pretty expensive and not a huge source of edge or improvement. Something that we love to do at Chai is blending. The simplest way to think about it is that you're going to pretty quickly see you've got one model that's really smart and one model that's really funny. How do you get the user an experience that is both smart and funny?
Well, for 50% of the requests you serve them the smart model, and for 50% of the requests you serve them the funny model. Just a random 50%? Just a random 50%, yeah. And then... That's blending? That's blending. You can do more sophisticated things on top of that, as with all things in life, but that's the 80/20 solution: if you just do that, you get a pretty powerful effect out of the gate. A random number generator. I think it's the robustness of randomness. Random is a very powerful optimization technique, and it's a very robust thing, so you can explore a lot of the space very efficiently. There's one thing that's really, really important to share, and this is the most exciting thing for me: after you do the ranking, you get an Elo score, and you can track a user from the first date they submit a model to Chaiverse. They almost always get a terrible Elo at first. Let's say the first submission gets an Elo of 1,100 or 1,000 or something, and then you can see them iterate and iterate and iterate, and it will be no improvement, no improvement, no improvement, and then boom. Do you give them any data, or do they have to come up with this themselves? We do, we do. We try to strike a balance between giving them data that's very useful and being compliant with GDPR, which means you have to work very hard to preserve the privacy of the users of your app. So we try to give them as much signal as possible, to be helpful. The minimum is we're just going to give you a score. But even a score alone people can optimize pretty well, because they're able to come up with theories: submit it, does it work? No. A new theory, does it work? No. And then boom, as soon as they figure something out, they keep it, and then they iterate, and then boom, they figure something out, and they keep it.
Alessio [00:51:46]: Last year, you had this post on your blog about crowdsourcing the leap to the 10-trillion-parameter AGI, and you called it a mixture of experts recommender. Any insights? Updated thoughts, 12 months later?
William [00:51:58]: I think the timeline for AGI has certainly been pushed out. Now, I'm a controversial person, I don't know, I just think... You don't believe in scaling laws; you think AGI is further away. I think it's an S-curve. I think everything's an S-curve. And I think the models have proven to be far worse at reasoning than people thought. Whenever I hear people talk about LLMs as reasoning engines, I sort of cringe a bit. I don't think that's what they are. I think of them more as a simulator. They get trained to predict the next most likely token, and it's like a physics simulation engine: you get these games where you can construct a bridge and drop a car down, and it predicts what should happen. And that's really what LLMs are doing. It's not so much that they're reasoning; it's more that they're just doing the most likely thing. So fundamentally, the ability for people to add in intelligence, I think, is very limited. What most people would consider intelligence, I don't think that's a crowdsourcing problem. Wikipedia crowdsources knowledge; it doesn't crowdsource intelligence. It's a subtle distinction. AI is fantastic at knowledge. I think it's weak at intelligence.
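Backing up to the Chaiverse ranking mechanics described just above: here is a minimal Python sketch of how pairwise "which reply is better?" votes can be turned into an Elo-style leaderboard. The constants (K-factor, starting rating) and the vote format are assumptions for illustration, not Chai's actual parameters.

```python
from collections import defaultdict

K = 4.0            # small update step, since thousands of votes accumulate per model
BASE_ELO = 1000.0  # every newly submitted model starts here

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def rank_models(votes):
    """votes: iterable of (model_a, model_b, winner) tuples collected from users
    who were shown two completions side by side and picked the better one.
    Returns (model, rating) pairs sorted from highest to lowest Elo."""
    ratings = defaultdict(lambda: BASE_ELO)
    for a, b, winner in votes:
        ea = expected_score(ratings[a], ratings[b])
        sa = 1.0 if winner == a else 0.0
        ratings[a] += K * (sa - ea)
        ratings[b] += K * ((1.0 - sa) - (1.0 - ea))
    return sorted(ratings.items(), key=lambda kv: kv[1], reverse=True)

print(rank_models([("m1", "m2", "m1"), ("m1", "m3", "m1"), ("m2", "m3", "m3")]))
```

With on the order of 5,000 votes per model, as William mentions, ratings like these settle enough to separate the bottom 80% from the top 20%.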
And it's easy to conflate the two, because if you ask it a question, say, who was the seventh president of the United States, and it gives you the correct answer where I'd have to say, well, I don't know the answer to that, you can conflate that with intelligence. But really, that's a question of knowledge. And knowledge is really about saying, how can I store all of this information, and how can I retrieve something that's relevant? They're fantastic at that: fantastic at storing knowledge and retrieving the relevant knowledge. They're superior to humans in that regard. So I think we need to come up with a new word for it. How does one describe it? AI should contain more knowledge than any individual human, and it should be more accessible than any individual human. That's a very powerful thing. That's super
swyx [00:54:07]: powerful. But what words do we use to describe that? We had a previous guest from Exa AI, which does search, and he tried to coin super knowledge as the opposite of super intelligence.
William [00:54:20]: Exactly. I think super knowledge is a more accurate word for it.
swyx [00:54:24]: You can store more things than any human can.
William [00:54:26]: And you can retrieve it better than any human can as well, and I think it's those two things combined that are special. I think that thing will exist; that thing can be built. And I think you can start with something that's entertaining and fun. I often think of it like this: it's going to be a 20-year journey, and we're in year four. It's like the web, and this is 1998 or something. You've got a long, long way to go before the Amazon.coms are these huge, multi-trillion-dollar businesses that every single person uses every day. So AI today is very simplistic, and it's fundamentally about the way we're using it, the flywheels, and this ability for everyone to contribute to it, to really magnify the value that it brings. Right now, I think it's a bit sad. Right now you have big labs, and I'm going to pick on OpenAI: they go to these human labelers and say, we're going to pay you to label this subset of questions that we want, to get a really high-quality dataset, and then we're going to get our own computers that are really powerful. And that's kind of the thing. For me, it's so much like Encyclopedia Britannica. It's insane. All the people that were interested in blockchain: well, this is what needs to be decentralized, you need to decentralize that thing, because if you distribute it, people can generate way more data in a distributed fashion. Way more. You need the incentive. Yeah, of course. But that's kind of the exciting thing about Wikipedia: it's this understanding that you don't need money to incentivize people. You don't need dogecoins. Sometimes people get the satisfaction from...
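Returning to the blending idea from earlier in the conversation (serve a random half of requests from the "smart" model and the other half from the "funny" one): a minimal sketch of such a random router. The model names and weights here are placeholders; the point is only how little machinery the 80/20 version needs.

```python
import random

def blend_route(models: dict, weights=None):
    """Pick which model serves the next request.

    `models` might be e.g. {"smart": smart_handle, "funny": funny_handle}. With
    no weights, each request is routed uniformly at random, which is the plain
    50/50 blending described above. Weights let you tilt the mix once the Elo
    rankings show one model clearly ahead.
    """
    names = list(models)
    name = random.choices(names, weights=weights, k=1)[0]
    return name, models[name]

models = {"smart": "model-smart-v1", "funny": "model-funny-v2"}
for _ in range(3):
    print(blend_route(models))                      # plain 50/50 blend
print(blend_route(models, weights=[0.7, 0.3]))      # tilted blend
```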
While everyday routine has the champions in its grip, Borussia Dortmund is boiling over. What role does Matthias Sammer actually play? Tobi thinks that, in an emergency like the current Sahin crash, he could take on some responsibility for once. Matze recalls an encounter with Roger Schmidt that he wasn't really aware of at the time. Matze also has an absolute dream solution for the coaching job in Dortmund, and it was none other than Calli who put him onto it. Meanwhile, the tribal elder is once again organizing things in Thailand. Whether he drinks Red Bull, which would certainly fit, has not been recorded. "Echte Champions XXL" is a production of the Podcastbande. New episodes every Thursday - wherever you get your podcasts.
The Dublin office of global law firm Taylor Wessing has advised on multiple high-profile deals in 2024, cementing its position as a leading law firm in Ireland for complex domestic and cross-border transactional work. Taylor Wessing Ireland was instrumental in delivering deals such as RIP.ie's sale to the Irish Times Group, Sherweb's acquisition of MicroWarehouse, and the acquisition of Mahon Retail Park. These market-leading transactions are a testament to the global law firm's commitment to expanding its Irish operations, while showcasing the depth of its legal expertise. The firm experienced strong growth in its banking and finance, corporate, disputes, IP, data, real estate, patents and tax capabilities, while solidifying its status as a pre-eminent law firm in its key sectors of technology, life sciences, healthcare and real estate. Highlight transactions on which Taylor Wessing Ireland advised include:
Banking and Finance
- Global financial institution ING Bank N.V.'s €55 million facility loan to an investment fund managed by Ireland's largest life insurance and pensions provider, Irish Life. The loan was used to refinance existing facilities in respect of Fernbank PRS, a residential development located in South Dublin.
Corporate Mergers & Acquisitions
- Ireland's leading online platform for death notices, RIP.ie, on its sale to the Irish Times Group.
- Leading global cloud solutions provider Sherweb on its acquisition of MicroWarehouse, a prominent player in the Irish cloud technology market.
- Mail Metrics, a customer communications technology provider serving the financial and regulated industries, on its acquisition of Adare SEC and a simultaneous equity investment by MML and debt financing by AIB and Bank of Ireland.
- The management team of Kyte Powertech, a leading manufacturer of distribution transformer solutions, on its sale to R&S Group, a leading provider of electrical infrastructure products headquartered in Switzerland.
- Activist investor Engine Capital on its agreement with drinks company C&C.
- ATC, a leading energy efficiency, heating and related automated solutions business headquartered in Dublin, on LED Group's acquisition of the company (subject to CCPC approval). LED Group is a leading platform for energy transition solutions backed by Oaktree Capital Management.
Real Estate and Real Estate Finance
- French real estate investment fund Corum Origin on its €50 million acquisition of Mahon Retail Park from Ireland's leading commercial property company, IPUT.
- A number of global real estate investment managers on several deals, including: a €100 million refinancing of a landmark development in a prime South Dublin location; a €100+ million refinancing of a landmark mixed-use development in Dublin city centre; an €80+ million financing of a logistics park in Dublin; and a Dublin city centre landmark hotel refinancing.
- Leading Irish investment and development partner Elkstone, through its investment fund Geminiville Limited, on the sale of its development lands in Barna, on the outskirts of Galway.
- Global investment firm Investcorp on the acquisition, and the financing by Capitalflow, of two retail parks from US investment group Davidson Kempner.
- Lease or sale-and-leaseback agreements for various clients, including Corum (in relation to the lease of office space at George's Quay House to Personio) and Appeals Centre Europe (in relation to the lease of new office space from Kennedy Wilson).
Commenting on the extent of Taylor Wessing Ireland's activity throughout 2024, Adam Griffiths, Partner and Head of the Dublin office, said: "2024 has been a year of significant growth for Taylor Wessing Ireland. I want to take a moment to congratulate our clients on a successful year and the key milestones they have achieved. As we look to 2025, Taylor Wessing Ireland remains committed to supporting our clients in their future ventures, leveraging the expertise of our partners to help them prosper and grow." Founded and led ...
Spielmacher - The European Championship talk with Sebastian Hellmann and 360Media
To kick off the new year, the boss of the Bundesliga's third-strongest force is the guest: Axel Hellmann, spokesman of the executive board of Eintracht Frankfurt and member of the DFL executive committee. Axel Hellmann looks back on a successful 2024 and analyzes what it will take for Eintracht to finish this season successfully. He gives insights into Frankfurt's transfer policy around top player Omar Marmoush and explains why he thinks little of release clauses. "I never had the impression that Jürgen Klopp was a football romantic," says Axel Hellmann with regard to Klopp's move to RB. He also tells of an encounter with Karl-Heinz Rummenigge in which the Bayern boss scolded him over his choice of shoes, and reveals why he still prefers to wear sneakers. Sebastian Hellmann meets Axel Hellmann - of course, sharing a surname in the football business can lead to the occasional mix-up. Why BVB managing director Aki Watzke probably knows this best, and how he has almost let slip a secret or two because of it - you'll hear it in the new episode of Spielmacher! "SPIELMACHER - Fußball von allen Seiten" is a joint production of 360Media and the Podcastbande. New episodes every other Thursday, wherever you get your podcasts. If you'd rather watch: "SPIELMACHER - Fußball von allen Seiten" also appears in shortened form as a video podcast on Sky Sport News.
We sit down with the team at Bitmap Studios and discuss all things metaverse on Bitcoin and DMT. Bitmap Studios is launching a new avatar collection called The BitAvatars in a few days (Oct 24th) on Mscribe. This will be a UNAT collection sharing the same element pattern as Natcats, giving a non-arbitrary supply of 8,064. BitAvatars are unique 3D models built with interoperable use cases in mind: the avatars will already be functional with FOXXI's Bitverse, Inscribed.Space, and The NATRIX. That focus gives the avatars, and their holders, access to a wider range of experiences. We discuss how avatars can serve as a killer onboarding vehicle for new users of a Bitmap. We then cover Bitmap Studios' recent developments in The NATRIX, which showcase how creators can leverage MML and Bitmap parcels to produce engaging, showcase-worthy experiences. We go deep on DMT applications within the metaverse economy and highlight how blockdrops are being leveraged to amplify it. Topics:
- First up, the guys sit with the team at Bitmap Studios and discuss all things metaverse on Bitcoin and DMT
- Next, what are BitAvatars, and how will they impact the web3 space
- Finally, how avatars can serve as a killer onboarding vehicle for new users of a Bitmap
Please like and subscribe on your favorite podcasting app! Sign up for a free newsletter: www.theblockrunner.com Follow us on: Youtube: https://bit.ly/TBlkRnnrYouTube Twitter: bit.ly/TBR-Twitter Discord: bit.ly/TBR-Discord
In this episode of the MML City View Podcast, we share some of the things cities can expect from next year's Missouri Legislative Session. MML's Executive Director, Richard Sheets, and Shanon Hawk, Executive Vice President of Government Affairs with AT Government Strategies, join us to share their insight. https://mocities.com/Web/Web/Advocacy/Legislative-Activity.aspx?hkey=01e1a69a-5c8f-48e8-b6c4-6ca259253f94
Legislative Toolkit | Legislative Contact Form
Be sure to subscribe to Missouri City View and leave us a review in your favorite podcast app! Learn more at www.mocities.com. Follow MML! www.facebook.com/mocities www.twitter.com/mocities www.linkedin.com/company/mocities
0:24 a.m., commentary booth 4 - a perfectly normal recording time after a long broadcast day! When Elmar and Robbie actually see each other face to face, the World Championship can only just have begun. After the gloomy mood over Cameron Menzies' dramatic appearance, joy prevails over the win by Kai Gotthardt, who couldn't be rattled even by a broken dart. While Robbie has also fallen for the darts Panini hype and Elmar keeps his screen time better under control, a few bold predictions for the coming days can't be missed. "Game on! Der Darts Podcast" is a production of the Podcastbande. New episodes every Tuesday - wherever you get your podcasts.
It's simply nicest in the studio at MML. Max put his trust in Deutsche Bahn and just came to Hamburg for the podcast recording. Martin welcomed him there with open arms, and of course they went straight into the most important topics of the week: the rearranged match against Delay, handbags for Dilara, and a new gin from the house of Harnik. You can tell: there's a lot going on right now. So no time to waste, straight into this new episode. Let's go! Send your voice message to the FLATTERFON here: https://wa.me/message/GOIP54IPWDWBE1 More info about us and our advertising partners can be found here: https://linktr.ee/flatterball Hosted on Acast. See acast.com/privacy for more information.
Ahead of the blockbuster BVB vs Bayern, there's plenty going on! Vincent Kompany is impressing the experts; he has found a super path for the record champions, with outstanding centre-backs in Kim and Upamecano. At HSV they have completely different worries: simply awful and thoroughly second-rate. And Pep Guardiola is also visibly suffering from City's current losing streak. Matze tells a story from a comedy gig that not only moved him but also challenged him. "Echte Champions XXL" is a production of the Podcastbande. New episodes every Thursday - wherever you get your podcasts.
Elmar and Robbie, alias "Romario", check in not only after the Players Championship Finals but, even better, fresh from the World Championship draw! The countdown to the Worlds can begin at full steam. Alongside sympathy for Paul Krohne and Jurjen van der Velde, who narrowly missed the World Championship, they race through the bracket, complete with bold predictions and some "special" humour. Elmar also reveals that he won't have put on his winter tyres before the end of November, and shares first details on the Road to Ally Pally, above all for everyone in and around Cologne! "Game on! Der Darts Podcast" is a production of the Podcastbande. New episodes every Tuesday - wherever you get your podcasts.
Suddenly there was a real stir. When Lothar Matthäus reported on Sky90 about Hansi Flick's strong interest in Florian Wirtz, back in 2020 during Flick's time as Bayern coach, it took only a few hours for Brazzo Salihamidzic to get in touch: it hadn't been like that at all. Calli was close to it back then and has done his homework again. How things really went at the time, he tells Matze and Tobi. Matze is just winding down after a short trip to Greece. Who he met there and what topic it was about? All in this episode. "Echte Champions XXL" is a production of the Podcastbande. New episodes every Thursday - wherever you get your podcasts.
Bayern are marching ahead and the competition can't keep pace. RB Leipzig have too many injuries and BVB are doing Dortmund things again. But what's going on with Bayer Leverkusen? "The luck just isn't there at the moment," analyzes Niels Babbel, content lead of Fußball MML, in the Fever Pit'ch podcast. Together with Malte Asmus, though, he identifies even more problems at Bayer. Above all, the reigning champions are suffering from a double "leadership crisis": they no longer manage to carry a lead over the line, partly because the difference-maker of the title season, Granit Xhaka, is currently chasing his best form. Why that is, and what it means for Leverkusen's ambitions - in today's podcast.
Takeaways:
- Thanks to Hecking, something might be possible in Bochum after all.
- Player workload is a central issue.
- RB Leipzig are having a good season but are struggling in the Champions League.
- Borussia Dortmund are battling injury problems.
- Emre Can is overmotivated and hurting the team.
- The quality is there in Dortmund's squad.
- Some of Dortmund's new signings are working well.
- Dortmund's squad planning is viewed critically.
- The load of so many games is noticeable for all teams.
- The development of Dortmund's young players is decisive.
- The pressure on Emre Can is enormous.
- Bayer Leverkusen continue to perform well.
- Leadership problems are a central issue at Leverkusen.
- Granit Xhaka has lost his form.
- Bayern's defence is stable, but not perfect.
- Jamal Musiala is developing into a key player.
- Florian Wirtz could be an interesting signing for Bayern.
- Other teams' motivation against Leverkusen has increased.
- Bayer Leverkusen must prove themselves in the Champions League.
- Developing young players is decisive for success.
Chapters:
00:00 Review of matchday 10 and the current situation
02:57 Workload and injuries in football
06:08 RB Leipzig's season and title ambitions
09:01 Borussia Dortmund's challenges and injury woes
11:56 Squad planning and rebuild at Dortmund
17:56 Pressure and responsibility: the role of Emre Can
19:24 Bayer Leverkusen: one-hit wonder or serious contender?
22:18 Leadership problems at Bayer Leverkusen
24:31 Granit Xhaka: a shadow of himself?
27:33 Bayern's defence: stability or weakness?
30:04 Jamal Musiala: the rising star
33:14 Florian Wirtz: a possible signing for Bayern?
Drübergehalten – Der Ostfußballpodcast – meinsportpodcast.de
Dieter Hecking is the name of Bochum's firefighter, and he's taking on a real hell of a job! Good thing Calli is kind enough to offer tips on how Bochum should line up against Leverkusen now, in order perhaps even to snatch something against the champions! Matze reports hot stories from the motorway service station; naturally, Calli has experienced a thing or two there with the team bus as well. Is the crisis in Dortmund, and with Nuri Sahin, now over? A few question marks remain. Keyword: Stallgeruch, having the club in your blood. "Echte Champions XXL" is a production of the Podcastbande. New episodes every Thursday - wherever you get your podcasts.
The sensations just keep on coming. Especially when Ritchie Edhouse, as world number 39, wins the European Darts Championship and receives almost as much prize money as the world champion in the Ironman! Elmar and Robbie wonder why the top players hardly win the big tournaments anymore, including an assessment of several big names. With only 47 days to go until the World Championship at Ally Pally, which must have a very special backstage cellar world, Robbie takes an interested look at the Indian qualifier. "Game on! Der Darts Podcast" is a production of the Podcastbande. New episodes every Tuesday - wherever you get your podcasts.
Folks, because it was at the Babylon of all places that we were robbed of speech, we had plenty of anger but even more work in our knapsack, and we actually came back from Berlin with real momentum. So we pulled on the trench coat and took up position, musket in hand, in the sniper's trench of the grand reckoning. Unfriendly fire, meticulous powder in the barrel of events. The weekend had delivered enough ammunition: the debut of the outside-of-the-boot foreign minister, the start of an Italian journey that at first tasted only of lemons. And there was Bayern in Bochum, Grönemeyer once again in duet with the stadium, and the melancholy in the eyes of a boy who knew every line and every blade of grass. A brief glance, and half a life inside it. And of course there was Dortmund, who had once again shot themselves to pieces: tin soldiers in the blast furnace of high performance, who died a quick death on the rails in Madrid and danced like marionettes in Augsburg, having lost every thread. And because Dortmund is never just the present, but always also cabinet, stable shadow and political celebrity, Matthias Sammer got the mic at the end. The standing-table Stalin, the Tito of theses, the messenger from the East with the very large parcel. And he took his time, with a monologue from the depths of primal fear, with one last heckle from the bunker. A fiery shake of the head. Which is why we naturally had to bring him, this Stoiber from Saxony, back onto the stage, in full glory and at full length. But hear for yourselves. Only here, in this new episode. FUSSBALL MML - because everything else is just a war of words with Marcus Sorg!
Both BVB and Bayern are serving up good entertainment in the Champions League! You can certainly say that. How does Calli actually rate Barcelona compared with Bayern? And why does it seem to fit so well with Hansi Flick? Tobi wants to know from Matze whether the Klopp factor can already have an influence on the Red Bull world, especially on RB Leipzig. They face Freiburg at the weekend - and in the Bundesliga they are, after all, fully on a par with Bayern! "Echte Champions XXL" is a production of the Podcastbande. New episodes every Thursday - wherever you get your podcasts.
Martin Schindler as top seed at the European Darts Championship, a 109 average on the Women's Series, and possibly six Germans at the World Championship: an episode could hardly begin in more record-breaking fashion, unless it opened with Günther Jauch and Markus Lanz, or with further records from Luke Littler and Michael van Gerwen! More reliable than many an autopilot are the constant surprises, above all from Kim Huybrechts. Elmar, meanwhile, feels flattered that when a legendary 1 was thrown in the final in Prague, Robbie thought of him of all people! "Game on! Der Darts Podcast" is a production of the Podcastbande. New episodes every Tuesday - wherever you get your podcasts.
Typical no-holds-barred Calli! While Tobi is still raving about the best steakhouse in Brazil, Calli dishes out feel-good stories about party Matze. And yet Matze is surely still the fittest of the Echte Champions - no wonder, given his schedule: first Poldi's farewell, then off to the Icon League as Red Bull Kloppo. If anyone offers even more performance, it's Simon Terodde, the old machine! So much for retiring!! Thrilled by Nagelsmann, Stiller and Leweling, it's now time to look forward to the Bundesliga. "Echte Champions XXL" is a production of the Podcastbande. New episodes every Thursday - wherever you get your podcasts.
When Mike de Decker wins the World Grand Prix, the word "sensation" pretty much has to be overused! Right after the final, alongside big statements about Luke Humphries, Elmar and Robbie are preoccupied with how the whole tournament week unfolded. With so much praise flying around, even greats like Rafael Nadal and Lukas Podolski get a mention, while insider Robbie has spotted two completely new names. Besides a look ahead at the packed weeks to come, Elmar gives an update on a possible project before the World Championship! "Game on! Der Darts Podcast" is a production of the Podcastbande. New episodes every Tuesday - wherever you get your podcasts.
That one really hurts!! Sure, Red Bull is getting itself a big sympathizer. But Kloppo, seriously, is that now the ultimate trump card?? It's quite a stab in the heart of every BVB fan too, as Matze noticed right after the huge news was announced. So Jürgen Klopp becomes "Head of Global Soccer", the international football boss at the Red Bull group. And apparently the matter had been in the works for quite a while, which is what Calli, for one, has heard - and which you could also guess from one or two personnel decisions. Oh, right: there are also two internationals ahead of us, first Bosnia-Herzegovina and then the Dutch on Monday! "Echte Champions XXL" is a production of the Podcastbande. New episodes every Thursday - wherever you get your podcasts.
For special occasions, Elmar sits in the attic for the first time and Robbie in the living room. With so many tournaments going on at once, semi-final and final actually get mixed up, without taking anything away from Martin Schindler's spectacular tournament win in Basel, which earns the most honourable category! While Robbie has had some lasting experiences with certain Swiss participants, Elmar can report on top athletic performances at the Oktoberfest. Of course, James Wade's "fartgate" can't be left out, nor can the way we handle our 86 billion neurons. "Game on! Der Darts Podcast" is a production of the Podcastbande. New episodes every Tuesday - wherever you get your podcasts.
https://fussballmml.podigee.io/376-jule-heute-schotten-schrotten-emml-01 Advertising partners of this episode: Vodafone: Just in time for the summer season, the Vodafone CallYa tariffs come with more high-speed data, for new and existing customers alike. Find everything at vod.af/FussballMML! EWE: EWE delivers high-speed fibre internet for your home, from 19.99 euros a month. More info at ewe.de/internet! The NACHSPIELZEITEN reading tour: Tickets for the NACHSPIELZEITEN reading tour are available at tix.to/NACHSPIELZEITEN. We look forward to seeing you! The NACHSPIELZEITEN bundle (book + sticker sheet) is available exclusively in our MML shop! Together with our partner FreeNow we present AUFTAKTLOS - the big #EMML pre-match evening show with MML & Friends! All info and tickets at www.fussballmml.de!
At Toni's last game for Real Madrid, things get properly sporting one more time. The big emotion of last week has been filed away for now. Stiff upper lip, and now it's about nothing less than a sixth Champions League win for Toni - as the crowning finish. And about writing history and doing the things Real simply does. We're along for the ride, together with Maik, Micky & Lena from Fußball MML (all of them brave Borussia fans): first round, a sick note, then grandma dies. Doesn't matter - and still no goal scored, even though they should have. And then the Blancos' noose suddenly tightens, and what must happen, happens. In the end Toni is now also official, stamp and all, one of the greatest - maybe even the greatest German player of all time? We're all lost for words, and we have to think of our favourite Chuck Norris joke: "Chuck Norris doesn't learn from mistakes. Mistakes learn from Chuck Norris." Tonight, Toni Kroos is bigger than Chuck Norris. Want to know more about our advertising partners? You'll find all the info & discounts here: https://linktr.ee/EinfachmalLuppen
Download the MML Legislative Toolkit today to learn more about the legislative process, how to read a bill, important communication strategies for meeting with legislators, and a glossary of legislative terms. This guide will help you better understand the legislative process and how you can help ensure your municipality's voice is heard. MML has also added a form on the Legislative Activity page of our website where you can report your communication with legislators. This is crucial for the MML advocacy team as they coordinate follow-up materials and daily communication with state and federal legislators. Be sure to complete this form if you have called, emailed, or visited your state or federal legislator! Be sure to subscribe to Missouri City View and leave us a review in your favorite podcast app! Learn more at www.mocities.com. Follow MML! www.facebook.com/mocities www.twitter.com/mocities www.linkedin.com/company/mocities
In today's episode, I talk with Spiegel bestselling author Lucas Vogelsang about his new book "Nachspielzeiten: Denn der Fußball schreibt die besten Geschichten." The blurb sums it up best: How could Otto Rehhagel become European champion, Paul Gascoigne change an entire country in a single night, and Franz Beckenbauer conquer New York at the end of the seventies? Lucas Vogelsang takes another close look and tells of the long moments after the final whistle, the fast life after a career, and the small and great dramas of the game. The top seller has long been riding high in Spiegel's nonfiction charts, making it one of the best-selling books at your local bookshop. And rightly so: besides the great stories, the book is also told in a very entertaining way. In this episode we talk about how the book came to be and about the particular appeal of a reading tour that often takes him back to the scenes of the action. As a journalist, Lucas Vogelsang has long been no stranger to football fans. He is the "L" of the big football show "Fußball MML," together with Micky Beisenherz and Maik Nöcker. If you want to hear more about that, I recommend episodes #126 "Es wird alles noch absurder" and #63 "Stubenhocker-Session." Dates and tickets for the readings: https://tix.to/NACHSPIELZEITEN This episode is presented by: **RABOT Charge** The Rabot Charge service fee of 4.99 euros per month is waived for the first 6 months with the code RABOTxDASZIEL: https://t1p.de/rabotxdasziel More about the podcast: FB: https://www.facebook.com/daszielistimweg Instagram: https://www.instagram.com/andreas.loff
Advertising partners for this episode: NordVPN: Get the exclusive NordVPN deal now at nordVPN.com/mml. It's completely risk-free with the 30-day money-back guarantee. The world's market-leading VPN provider has over 14 million users worldwide and is called NordVPN. The NACHSPIELZEITEN reading tour: Tickets are available at tix.to/NACHSPIELZEITEN. We look forward to seeing you! The NACHSPIELZEITEN bundle (book + sticker sheet) is available exclusively in our MML shop! Together with our partner FreeNow, we present AUFTAKTLOS - the big #EMML pre-show with MML & Friends! All info and tickets at www.fussballmml.de! With Micky Beisenherz, Maik Nöcker, Lucas Vogelsang
Advertising partners for this episode: JobRad: Pick out your dream bike as a company bike with JobRad and save real money: a JobRad is up to 40% cheaper than buying the bike outright. Find all the details through your employer or at sc.jobrad.org. NordVPN: Get the exclusive NordVPN deal now at nordVPN.com/mml. It's completely risk-free with the 30-day money-back guarantee. The world's market-leading VPN provider has over 14 million users worldwide and is called NordVPN. Gebaut für Fußball: Gebaut für Fußball is a new five-part BVB podcast marking the stadium's 50th anniversary - and, incidentally, an MML production. So give it a listen, wherever you get your podcasts. The NACHSPIELZEITEN reading tour: Tickets are available at tix.to/NACHSPIELZEITEN. We look forward to seeing you! The NACHSPIELZEITEN bundle (book + sticker sheet) is available exclusively in our MML shop! Together with our partner FreeNow, we present AUFTAKTLOS - the big #EMML pre-show with MML & Friends! All info and tickets at www.fussballmml.de! With Micky Beisenherz, Maik Nöcker, Lucas Vogelsang
00:00 Introduction and Background
02:17 The Impact of Remote Work
03:05 The Evolution of Marketing
05:16 First Mentor and Lessons Learned
06:17 Early Career and Creative Aspirations
09:10 Immigrant Background and Family
11:02 Breaking Through Roadblocks and Glass Ceilings
12:15 Leadership Lessons and Aha Moments
13:46 Navigating Layoffs and Fair Exchange of Value
19:52 The Importance of Listening and AI in Marketing
20:19 The Current State of AI
22:07 BrandGuard AI
23:12 Brand Governance and Hyper-Targeting
25:26 Staying Sharp in the Industry
26:14 Vetting Expertise in AI
27:26 Unintended Consequences of AI
28:35 Exciting Developments in the Tech Space
30:52 Concerns and Training Data
31:58 Working on BrandGuard
34:53 The Power of Curiosity
37:25 Leaving a Positive Impact
A tantrum by López Aliaga could end up costing every Peruvian 120 dollars. We explain the soap opera of the MML bonds. MEANWHILE: The JNJ is effectively barred from continuing to hear the case of Patricia Benavides. ALSO: Another piece of contraband slipped into the Constitution to favor high-flying corrupt officials. And... Autumn has arrived and you need lightweight cotton socks? This and today's other small businesses come with DISCOUNTS for you. **** Did you like this episode? Looking for the sources behind today's figures? SUBSCRIBE at http://patreon.com/ocram to access our EXCLUSIVE Telegram and WhatsApp GROUPS. You can also become a MEMBER of our YouTube channel here: https://www.youtube.com/channel/UCP0AJJeNkFBYzegTTVbKhPg/join **** Join our WhatsApp CHANNEL here: https://whatsapp.com/channel/0029VaAgBeN6RGJLubpqyw29 **** For more legal information: http://laencerrona.pe
Join host Craig Smith on episode #174 of Eye on AI as he sits down with Tianmin Shu, Assistant Professor at Johns Hopkins in both the Computer Science and Cognitive Science departments. In this episode, Tianmin unravels the fascinating concept of world models and their intersection with artificial and human intelligence. Discover how these models, rooted in cognitive science, offer a blueprint for understanding our environment and enhancing AI's ability to interpret, predict, and interact within it. Explore the intricate dance between AI and cognitive science as Tianmin shares insights from his rich academic journey from UCLA to MIT, leading to his innovative work on the social aspects of AI and the development of agents capable of human-level cooperation and understanding. Dive deep into the discussion on the integration of world models with large language models, and how this synergy could revolutionize AI's predictive capabilities and reasoning, paving the way for more intuitive and interactive systems across various domains, especially household environments. Don't miss this riveting exploration of the next frontier in artificial intelligence research and its potential to transform our interaction with technology. Remember to rate us on Apple Podcasts and Spotify if you're intrigued by the insights shared in this episode! This episode is sponsored by 1Password. 1Password combines industry-leading security with award-winning design to bring private, secure, and user-friendly password management to everyone. Companies lose hours every day just from employees forgetting and resetting passwords. A single data breach costs millions of dollars. 1Password secures every sign-in to save you time and money. Right now, my listeners get a free 2-week trial at: 1password.com/eyeonai Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
In this episode of the Skift Ideas Podcast, Colin sits down with Larry McGuire, Co-Founder and Owner of MML Hospitality. Larry McGuire was born and raised in Austin, where his culinary career began at age 16. Today, with 23 restaurants, hotels, and retail shops under MML Hospitality, Larry focuses on project development, finance, and operations at MML while co-leading concept and interior design at Lambert McGuire Design. Listen in as Colin and Larry explore how the design of a space can provide an enriching experience not only for guests, but also for the local community. Host: Colin Nagy Guest: Larry McGuire Producer: Jose Marmolejos
For 60 years, the Bundesliga has been bringing people together all across Germany. Six decades of concentrated football history, on and off the pitch, in the stadium and in front of the TV. Legends and legendary moments. The Bundesliga is always drama: championships lost, titles won, relegation grief and promotion heroes. The Bundesliga means euphoria, disappointment, and, every weekend, the renewed hope for a win. In this podcast we tell the stories of the Bundesliga. We talk with companions of the game about great moments and, together with the podcasters of FUSSBALL MML, travel through the last 60 years of the Bundesliga in six episodes. New episodes every Thursday. Subscribe to this channel so you don't miss an episode. The podcast is presented by the DFL and MML.
An image of a motorcycle with the Serenazgo logo and the price of S/54,307 in large red letters. Next to it, a photo of Mayor López Aliaga with an expression of surprise or indignation. The background could be a street in Lima with the MML logo.
In this episode we join MML Executive Director Richard Sheets and MML Legislative Consultant Shanon Hawk for an overview of what to expect as the Missouri Legislative Session begins in January. Learn about the top issues cities will face and hear about MML's new legislative guide that will help you build a meaningful relationship with your legislators and make sure your city's voice is heard. Resources: MML Legislative Guide and Activity Report Form | MML Capitol Report. If your city is not receiving the MML Capitol Report, please contact us at (573) 635-9134 so we can help you. Be sure to subscribe to Missouri City View and leave us a review in your favorite podcast app! Learn more at www.mocities.com. Follow MML! www.facebook.com/mocities | www.twitter.com/mocities | www.linkedin.com/company/mocities
Folks, this time it was a classic sandwich. Up front we absolutely had to talk about Beckham again, whom we had in a sense fallen for, laying the honey on nice and thick - with so much sticky stuff that even Micky was stunned. Well, so we did sober up and remember the suitcases from Qatar, the million-dollar face of the blood World Cup. After that, though, it was into the league. Straight to Köpenick, where right now there is a lot of reality and very little fairy tale. And where you could once again see vividly how the situation has flipped within half a year, because Union is suddenly Stuttgart and Stuttgart is suddenly Union. And Fischer is more package insert than world almanac, while his counterpart has developed so much as a coach that he could afford to spend 70 minutes practicing for the Africa Cup. And, yes, Undav is the world's reward! In the west, a day later, it was Rhine derby time again. And because the Effzeh attack had the Prince in their ears and thus mischief on their minds, the jinx there was actually broken. Bad for Hennes, good for Steffen. And on the Gladbach side, Julian Weigl once again got to show off his rhetorical talent, somewhere between fake love and total frustration, between limping foals and crippling errors. But since there are bigger problems these days than the stall at the Lower Rhine, we had to take another look at Bayern toward the end, who over the weekend had once again thrown cotton wool and thus delivered exactly as expected: with a statement that wasn't even worth the paper it was written on, and that was criticized not only by the Central Council but also torn apart by Stefan Effenberg, the Tiger, in the always overheated air of the Doppelpass. So the appropriate slaps came from all sides. Which of course pleases us, even if we want to assure you at this point that we are peace-loving people. And we naturally apologize in advance should this episode cause any irritation. Well then: enjoy!
Folks, in recent months we have spoken often and readily of leaden times. Leaden because the national team, first under a late-stage Jogi Löw, then under a far-too-early Hansi Flick, dragged itself through the internationals without hope, desire, or esprit. Leaden because there were mostly grey geese and no more birds of paradise. Leaden because this team stood no chance against Japan or Belgium. Leaden because Oliver Bierhoff and Marcus Sorg. But all of that is only football. A game, and a pastime. Since last week, though, since the start of the terror in Israel, since there is war again in the Middle East, this time really does lie leaden on our shoulders. It preoccupies, crushes, drags you down. With images and headlines that weigh tons. And football too, still a game but right now no longer merely a pastime, cannot free itself from it. The lines of conflict, calibrated by hatred and grief, run right through it as well. Because football is part of it, and cannot be apolitical when its protagonists take decidedly political positions. It has to deal with that. And so do we. Which is why this time it was also about football, but not only. An episode somewhere between Hartford and Hamas, the USA and Palestine, flannel and fanaticism, in which the low notes had to sound too. The necessary dose of seriousness after this violent shot of the present. Good, then, that at the end we could still go posh and pay homage to David Beckham - the Spiceboy as an urgently needed diversion, a bit of glitter in otherwise grey hair, a bit of confetti on bowed heads - so we send you laughing into this week after all. On a high note, as the Americans would say. And we wish you, truly despite everything, lots of fun!
This is an essay readout using an AI voice clone of NFX general partner, James Currier. Every 14 years there's a tech revolution, and we have now entered the Generative Tech revolution. Anything you're interested in, anything you're working on is going to be impacted by generative tech and generative AI. In addition to the AI model companies, many unicorns and decacorns will be built, starting now. We're already seeing three types of winners. If you can wrap your mind around what is happening with these new capabilities and see the future better than others, there is no better time to build something huge. Read the full essay here - https://www.nfx.com/post/3-waves-generative-ai-startups
We spend some time in Msquared's testing metaverse environment, The Construct, to see for ourselves what the developers are testing out in there with MML. While we were inside that virtual world, we got to witness a developer building and deploying his creations in real time and configuring their size right in front of us. We then discuss the continued progress of the Bitmap ecosystem and predict what has to be developed next so the movement doesn't become another passing fad within the web3 space. We also discuss our next project, which will help fulfill some of the market needs within the Bitmap ecosystem. This new platform will be called Mscribed, the first Bitmap-focused Ordinal inscription platform designed for metaverse users on Bitcoin. Topics: First up, we dive into the world of MML and discuss the future of metaverse development. Next, Iman and I outline the roadmap for the metaverse. Then, we examine Bitmap, the protocol for the metaverse on Bitcoin. And finally, we announce our new project, "MSCRIBE.io" - find out how we'll be supporting the Bitmap metaverse. Please like and subscribe on your favorite podcasting app! Website: www.theblockrunner.com Follow us on: Youtube: https://bit.ly/TBlkRnnrYouTube Twitter: bit.ly/TBR-Twitter Telegram: bit.ly/TBR-Telegram Discord: bit.ly/TBR-Discord Music by OfDream - Thelema
God is a Meat Stretcha! In this episode, the guys are joined by Mack and Kayla as they discuss gaudy prom send-offs, the perilous trials of being ugly, and meat extension surgery (otherwise known as MML's). Be sure to stay connected with us on social media! IG, TikTok, & YouTube: @WholesomeHousePodcast
Hello, and welcome to Beauty and the Biz, where we talk about the business and marketing side of plastic surgery and insights from Grant Stevens, MD. I'm your host, Catherine Maley, author of Your Aesthetic Practice – What Your Patients Are Saying, as well as a consultant to plastic surgeons, helping them get more patients, more profits, and stellar reputations. Today's episode is called "Insights From Grant Stevens, MD." Dr. Grant Stevens is the who's-who of plastic surgeons and one of the most trusted voices in beauty, and he has plenty of insights to share. He's a board-certified plastic surgeon who is the founder and medical director of Marina Plastic Surgery by Athenix and Marina Med Spa in Marina Del Rey, California. Not only is he the past president of The Aesthetic Society, Dr. Grant Stevens also actively speaks, writes, researches, teaches, consults, and participates with national and international medical societies, journals, hospitals, universities, industry, pharma, PR outlets, and even government. This week's Beauty and the Biz Podcast is my interview with Grant Stevens, MD, where we talked about: insights on new business models available to those who want to simplify; insights on equity deals to invest in to shore up your financial future; insights on how to differentiate from everyone else; and insights on how cosmetic patients have changed. You may need to listen to this Beauty and the Biz episode several times since it's packed with pearls on how to market, scale, and exit a cosmetic practice. You'll hear how differently Dr. Grant Stevens thinks about business, marketing, and the plastic surgery industry. Visit Dr. Stevens' website. P.S. Please review!