Bohemian novelist and short-story writer (1883–1924)
Discussion recorded on November 19, 2023. "The Metamorphosis" (in German, Die Verwandlung) is a novella written by Franz Kafka in 1912 and first published in October 1915 in the magazine Die Weißen Blätter. It was republished in book form in December of the same year by the publisher Kurt Wolff. The novella's opening sentence, which abruptly confronts the reader with a shocking, surreal event, has become famous: "One fine morning, when Gregor Samsa woke in his bed after a night of troubled dreams, he found himself transformed into a monstrous vermin." Stylistically, Stanley Corngold considers the work the literalization of a metaphor, since Kafka turns figurative language (for example, the expression "to feel like a parasite") into literal language. The novella remains one of the most enigmatic works of the twentieth century, and has generated numerous interpretations, from the psychoanalytic and existentialist to the religious, Marxist, and ethno-historical. If you want me to keep these meetings regular, support me on Patreon: www.patreon.com/meditatii
▶DISCORD:
– The community for philosophy and literature enthusiasts: discord.gg/meditatii
▶PHILOSOPHICAL DIALOGUES:
– Romanian: soundcloud.com/meditatii/sets/dialoguri-pe-discord
– English: www.youtube.com/playlist?list=PLL…NYNkbJjNJeXrNHSaV
▶PODCAST INFO:
– Website: podcastmeditatii.com
– Newsletter: podcastmeditatii.com/aboneaza
– YouTube: youtube.com/c/meditatii
– Apple Podcasts: podcasts.apple.com/us/podcast/medi…ii/id1434369028
– Spotify: open.spotify.com/show/1tBwmTZQHKaoXkDQjOWihm
– RSS: feeds.soundcloud.com/users/soundclo…613/sounds.rss
▶SUPPORT ME:
– Patreon: www.patreon.com/meditatii
– PayPal: paypal.me/meditatii
▶TWITCH:
– LIVE: www.twitch.tv/meditatii
– Summaries: www.youtube.com/channel/UCK204s-jdiStZ5FoUm63Nig
▶SOCIAL MEDIA:
– Instagram: www.instagram.com/meditatii.podcast
– Facebook: www.facebook.com/meditatii.podcast
– Goodreads: goodreads.com/avasilachi
– Telegram (journal): t.me/andreivasilachi
– Telegram (chat): t.me/podcastmeditatii
▶EMAIL: email@example.com
Former Secretary of State Lord Peter Hain's remarkable life was forged in crisis. His parents' peaceful but determined activism against apartheid – and the drama that surrounded his family as a result – was the backdrop to Peter's upbringing in South Africa. The Hains were constantly harassed, and at one stage jailed by the South African security services. When a close family friend was convicted and executed for the bombing of a railway station – an attack which his family condemned – it was Peter, aged just 15, who spoke at the funeral. Peter's parents moved to the UK in 1966 … exiled from the country they loved. He joined the British anti-apartheid movement and, aged just 19, became the Chairman of the infamous Stop the 70 Tour, which organised direct action against South Africa's proposed cricket tour of England – a major success for the anti-apartheid movement. Peter's campaigning led to him being followed and bugged by MI5, receiving death threats, and becoming the subject of an assassination attempt. A life in British politics beckoned for Peter … but not before more extraordinary drama and crisis. As a Labour politician he held office as Welsh Secretary, Secretary of State for Work and Pensions, and Northern Ireland Secretary, playing a key role in negotiating the power-sharing settlement in 2007. His time in politics also brought more personal crisis: a donations scandal that he described as a 'soul-searing experience'. Now in the Lords, Peter continues to campaign, and as an author he's written 29 books, including biographies of Mandela, his own brilliant memoir A Pretoria Boy, and a series of novels focused on the crisis of animal conservation. The latest, The Elephant Conspiracy (see link below), has just been released.
A fascinating conversation with someone who has lived, breathed and experienced crisis from so many different angles.
Links:
Stream/Buy 'Allies' by Some Velvet Morning: https://ampl.ink/qp6bm
Some Velvet Morning Website: www.somevelvetmorning.co.uk
Your Daily Practice: Sleep by Myndstream: https://open.spotify.com/track/5OX9XgJufFz9g63o2Dv2i5?si=b2f9397c92084682
Buy Peter's latest book, The Elephant Conspiracy: Volume 2 – https://www.amazon.co.uk/Elephant-Conspiracy-Peter-Hain/dp/1739966058
Also read his earlier book, A Pretoria Boy: The Story of South Africa's 'Public Enemy Number One' – https://www.amazon.co.uk/Pretoria-Boy-Africas-Public-Number/dp/1785787632
Host – Andy Coulson
CWC production team: Louise Difford and Jane Sankey
With special thanks to Global
For all PR and guest approaches please contact – firstname.lastname@example.org
Back in August of 2022, we spoke with Matt Littrell, a picker at the Amazon warehouse in Campbellsville, Kentucky, and one of the lead organizers in an effort to unionize Amazon facilities in Kentucky. When we spoke with Matt, Amazon had just fired him in suspected retaliation for his organizing activities, citing "performance" issues. Since then, Matt has been dragged through a Kafka-esque legal process to hold Amazon, the second largest private employer in the US, accountable for violating workers' rights. In this episode of Working People, TRNN Editor-in-Chief Maximillian Alvarez checks back in with Matt to discuss recent developments in that process, including reaching a settlement with Amazon, which the National Labor Relations Board is now challenging, leaving Matt in legal limbo.
Matt's LinkTree
Read the transcript of this podcast here.
Post-Production: Jules Taylor
Help us continue producing radically independent news and in-depth analysis by following us and becoming a monthly sustainer:
Donate: https://therealnews.com/donate-pod
Sign up for our newsletter: https://therealnews.com/newsletter-pod
Like us on Facebook: https://facebook.com/therealnews
Follow us on Twitter: https://twitter.com/therealnews
In this episode, we revisit the Safe Ecto Migrations guide and get an update on improvements. We also discuss the role and importance of OpenSource AI models. We cover updates in the Elixir LangChain library, the advantages of self-hosted AI models like Mistral, and learning how to run Bumblebee on Fly.io GPUs. Tune in for an insightful blend of database best practices and the cutting-edge of AI in Elixir, plus more! Show Notes online - http://podcast.thinkingelixir.com/178 (http://podcast.thinkingelixir.com/178) Elixir Community News - https://www.youtube.com/playlist?list=PLqj39LCvnOWbHaZldxw_g02RaTQ4vQ1eY (https://www.youtube.com/playlist?list=PLqj39LCvnOWbHaZldxw_g02RaTQ4vQ1eY?utm_source=thinkingelixir&utm_medium=shownotes) – Playlist of 44+ ElixirConf US talks now available on YouTube. - https://www.youtube.com/watch?v=eCnfdHtgAN4&list=PLqj39LCvnOWbHaZldxw_g02RaTQ4vQ1eY&index=39 (https://www.youtube.com/watch?v=eCnfdHtgAN4&list=PLqj39LCvnOWbHaZldxw_g02RaTQ4vQ1eY&index=39?utm_source=thinkingelixir&utm_medium=shownotes) – Owen Bickford's talk on Elixir's Secret Ingredient at ElixirConf. - https://www.youtube.com/watch?v=gtCJ56GxKf0&list=PLqj39LCvnOWbHaZldxw_g02RaTQ4vQ1eY&index=43 (https://www.youtube.com/watch?v=gtCJ56GxKf0&list=PLqj39LCvnOWbHaZldxw_g02RaTQ4vQ1eY&index=43?utm_source=thinkingelixir&utm_medium=shownotes) – Jeffery Utter's ElixirConf presentation on Scaling Teams with Kafka on the BEAM. - https://www.youtube.com/watch?v=VLO0ma-1uD4&list=PLqj39LCvnOWbHaZldxw_g02RaTQ4vQ1eY&index=44 (https://www.youtube.com/watch?v=VLO0ma-1uD4&list=PLqj39LCvnOWbHaZldxw_g02RaTQ4vQ1eY&index=44?utm_source=thinkingelixir&utm_medium=shownotes) – Andrew Bennett discusses Erlang Dist Filtering and the WhatsApp Runtime System at ElixirConf. - https://www.youtube.com/watch?v=bBaZDAynM08 (https://www.youtube.com/watch?v=bBaZDAynM08?utm_source=thinkingelixir&utm_medium=shownotes) – Michael Lubas's insights into Elixir Security from a Business and Technical Perspective. 
- https://dockyard.com/blog/2023/11/01/the-road-toward-live-view-native-v-0-2-part-2 (https://dockyard.com/blog/2023/11/01/the-road-toward-live-view-native-v-0-2-part-2?utm_source=thinkingelixir&utm_medium=shownotes) – Update on the progress of LiveView Native, including multi-character sigils and Phoenix layouts. - https://sessionize.com/lambda-days-2024 (https://sessionize.com/lambda-days-2024?utm_source=thinkingelixir&utm_medium=shownotes) – Call for talks for the Lambda Days 2024 conference focused on functional programming in Kraków, Poland. - https://twitter.com/germsvel/status/1722221427112456533 (https://twitter.com/germsvel/status/1722221427112456533?utm_source=thinkingelixir&utm_medium=shownotes) – Elixir 1.16 introduces the ability to run multiple tests with line numbers as shown by German Velasco. - https://www.youtube.com/watch?v=bfrzGXM-Z88 (https://www.youtube.com/watch?v=bfrzGXM-Z88?utm_source=thinkingelixir&utm_medium=shownotes) – Theo's livestream with José Valim, discussing various topics for 2.5 hours. - https://peterullrich.com/test-an-external-read-only-repository-in-phoenix (https://peterullrich.com/test-an-external-read-only-repository-in-phoenix?utm_source=thinkingelixir&utm_medium=shownotes) – Peter Ullrich's method for testing an external, read-only repository in Phoenix. Do you have some Elixir news to share? Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at email@example.com (mailto:firstname.lastname@example.org) Discussion Resources - 7:43 - David introduces and explains Safe Ecto migrations. - Updates on Safe Ecto for additional safety features and latest improvements. - Review of the performance of using text columns in databases showing that they have the same performance as VARCHAR types. - Examples provided of non-immutable expressions within database contexts. - Highlighting an error that can occur when backfilling data without a sort order. 
- Suggestion that Common Table Expressions (CTEs) offer a more reliable method for certain database operations. - David's call for a library to assist with running database operations through a UI, indicating the desire for tooling improvements. - Considering the use cases in the development and implementation of safety tools for databases. - 18:47 - Mark discusses new Fly.io GPU hardware, model improvements, and the Bumblebee tool. - Mistral LLM and its capabilities in the AI space. - Insights into running Bumblebee on GPUs and performance considerations. - Importance of Mistral being self-hosted. - Explanation of why self-hosting AI models like Mistral is significant for developers and users. - OpenAI's outage interrupted Mark's AI-powered workout trainer. - Outlining the Elixir LangChain goals, its roadmap, and potential impact on AI and data processing. - Discussion on how Large Language Models (LLMs) are effectively used for data extraction tasks. - Discussion on what an AI router is and what problem it solves. Find us online - Message the show - @ThinkingElixir (https://twitter.com/ThinkingElixir) - Message the show on Fediverse - @ThinkingElixir@genserver.social (https://genserver.social/ThinkingElixir) - Email the show - email@example.com (mailto:firstname.lastname@example.org) - Mark Ericksen - @brainlid (https://twitter.com/brainlid) - Mark Ericksen on Fediverse - @email@example.com (https://genserver.social/brainlid) - David Bernheisel - @bernheisel (https://twitter.com/bernheisel) - David Bernheisel on Fediverse - @firstname.lastname@example.org (https://genserver.social/dbern) - Cade Ward - @cadebward (https://twitter.com/cadebward) - Cade Ward on Fediverse - @email@example.com (https://genserver.social/cadebward)
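The backfill pitfall noted above (batching rows without a stable sort order can skip or re-process them) is easiest to see outside any particular library. The snippet below is a hypothetical, in-memory Python illustration of the keyset-pagination fix, not code from Safe Ecto Migrations: each batch is ordered by primary key and the cursor advances past the last id seen, so every row is visited exactly once.

```python
# Minimal sketch of keyset-paginated backfilling over an in-memory
# "table" of rows with integer primary keys. Illustrates the idea
# discussed on the show; real backfills would issue SQL batches.

def backfill(rows, batch_size, update):
    """Visit every row exactly once, in batches ordered by primary key."""
    last_id = 0  # keyset cursor: strictly greater than any id seen so far
    while True:
        # Equivalent of: SELECT * FROM t WHERE id > ? ORDER BY id LIMIT ?
        batch = sorted(
            (r for r in rows if r["id"] > last_id), key=lambda r: r["id"]
        )[:batch_size]
        if not batch:
            break
        for row in batch:
            update(row)
        last_id = batch[-1]["id"]  # advance the cursor past this batch

rows = [{"id": i, "status": None} for i in (3, 1, 7, 5, 2)]
backfill(rows, batch_size=2, update=lambda r: r.update(status="done"))
print(all(r["status"] == "done" for r in rows))  # → True
```

Without the `ORDER BY` equivalent, offset-based batches over a changing table can return overlapping or disjoint row sets, which is exactly the error the episode highlights.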
Dan and Nick break down the All-22 coaches film of the Giants' Week 11 win over the Commanders, with an extended focus on the game plan, situational play calling, Tommy DeVito analysis, offensive line analysis (end of the show), and superlatives (end of the show), while running through and breaking down the All-22 tape of some of the biggest game-changing plays of the week. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Back in August of 2022, we spoke with Matt Littrell, a picker at the Amazon warehouse in Campbellsville, Kentucky, and one of the lead organizers in an effort to unionize Amazon facilities in Kentucky. When we spoke with Matt, Amazon had just fired him in suspected retaliation for his organizing activities, citing "performance" issues. Since then, Matt has been dragged through a Kafka-esque legal process to hold Amazon, the second largest private employer in the US, accountable for violating workers' rights. In this mini-cast, we check back in with Matt and discuss recent developments in that process, including reaching a settlement with Amazon, which the National Labor Relations Board is now challenging, leaving Matt in legal limbo.
Additional links/info below…
Matt's Twitter/X page and LinkTree
Maximillian Alvarez, The Real News Network / Working People, "Amazon Fires Another Organizer to Allegedly Stop Kentucky Facilities Unionizing"
Permanent links below...
Working People Patreon page
Leave us a voicemail and we might play it on the show!
Labor Radio / Podcast Network website, Facebook page, and Twitter page
In These Times website, Facebook page, and Twitter page
The Real News Network website, YouTube channel, podcast feeds, Facebook page, and Twitter page
Featured Music (all songs sourced from the Free Music Archive: freemusicarchive.org)
Jules Taylor, "Working People" Theme Song
After I had "read through" my district library at the age of 12 (not the whole thing, of course; for me only the utopian-fiction shelf counted!), I stumbled, in what passed for a feuilleton in the GDR, across the just-published novel "Der fremde Freund" by Christoph Hein. I can pin down the date so precisely because now, in my fifth decade, I can finally do passable mental arithmetic, and Wikipedia gives the novel's publication date as 1982. That I had read a book by Christoph Hein and was enormously fascinated by his language stayed in the back of my mind, but my rolling memory horizon of exactly seven years keeps me from remembering what it was actually about. Here, too, the volunteer encyclopedia jogs my memory, and the synopsis of "Der fremde Freund" both sets my memory synapses firing and leaves me shaking my head: what a strange teenager I must have been! The book, written from the first-person perspective of a 30-year-old doctor, is about love and alienation, and about photography. Love had not yet crossed my path at that point, and alienation as a word meant nothing to me, but in retrospect, and with a bit of kitchen psychology, it all makes sense. The only thing in the book I really and truly connected with was photography. And just as Claudia, the protagonist, alienated from the people who seem strange to her, photographs only lifeless things, so did I practice the art, and I probably recognized myself in her quite a bit. As I said, I am piecing all this together with electronic assistance, because the only thing I genuinely remember is Christoph Hein's strangely unpretentious, clear, unexcited language, which in its economy and affectlessness reminded me of Kafka.
Surely a bit of a stretch, but I was an outwardly troubled and inwardly excitable teenager. It was to remain the last book I read by Christoph Hein. The two or three works he still published in the GDR stayed under my radar, and after that there were Western books. Yet something recently washed Hein's latest work into my field of view, and a circle closed. It is called "Unterm Staub der Zeit", and once again it is a book whose subject matter should not really interest me. And once again it is the language, about which there is little more to say than that it is "exact", "unexcited", and "precise", that fascinates me, and the devil knows why. The novel tells the story of 13-year-old Daniel from the Eastern zone, who in 1958 is taken by his father to the boarding house of a West Berlin Gymnasium. In the GDR he had been refused admission to the Erweiterte Oberschule, so, like many talented teenagers, he was sent West by his parents to obtain an Abitur. This happened so often that the West Berlin Gymnasien ran special "C classes" that took account of the curricula of the schools in the Eastern zone in order to bring the new pupils up to Abitur level. For younger readers: 1958 is four years before the building of the Berlin Wall, and so over the book's 200 pages we follow Daniel up to that 13th of August, 1961. Even before the Wall went up, the GDR tried to stop the flow of the dissatisfied into the FRG: with checks, the confiscation of identity papers, and orders not to leave one's place of residence.
And so these quasi-refugee teenagers lived in a strange limbo: they could travel to East Berlin at any time, not least because with East marks everything there was cheaper by a factor of five, but they ran the risk of being caught and thereby gambling away their Abitur and their future. That the book's narrator, Daniel, is Christoph Hein in real life is never explicitly stated, but I'll eat a broom if he isn't. That makes the book the "grandpa tells war stories" of a 79-year-old writer. What more could you want? And if you do want more, go read your action trash: this is real life, reported exactly as you would wish from a serious, good storyteller who doesn't perform stunts. Hein recounts episodes from a youth in a period that may seem a little uninteresting: not far enough removed from the present, not particularly exciting compared with a Second World War that even then was long over. About that you can tell stories: violence, heroism, liberation! The late fifties in Berlin were surely exciting, but the biggest outburst of violence in the book is a brawl at the Bill Haley concert in the Sportpalast, and the most heroic deed the smuggling of musical instruments from East to West for fun and profit. And liberation: not so much. On the contrary. While the somewhat inconsequential anecdotes of the slightly nerdy, theater-mad Daniel ripple along, world politics is changing. The pupils know that their Abitur is precarious and hangs on their ability to cross the porous border between East and West Berlin inconspicuously and as rarely as possible.
What they do not suspect is that a US senator in faraway Washington, with a fateful speech, is signaling to the Russians that it would be acceptable to cut the Soviet zone off from those of the Western victorious powers. The news reaches Daniel during the summer holidays, in Dresden of all places (Lob- und Verriss is broadcast from there, in case that isn't clear to you...), and he hurries back to Berlin. For a few days it still looks as though this were a temporary measure. There are, after all, hundreds of streets and kilometers of green space around West Berlin; sealing all of it off seems unimaginable. Yet within weeks exactly that has happened. A few sympathetic officials in the Eastern part, who give the pupils hope of being able to continue toward their Abitur, are replaced by hardliners, and by the start of school in September '61 it is clear to Daniel and his brother, two years older and at the Gymnasium with him, that they will have to find apprenticeships in the East. Entirely true to form, Christoph Hein tells these dramatic- and traumatic-sounding events with such stoic composure that one has to ask whether that is really appropriate. After all, the building of the Wall fundamentally changed the lives of a few million people, represented here by the two teenagers, and by general consensus for the worse. Yes, the conversations with the newly installed, hard-line party cadres who make life difficult for young Daniel, the "refugee from the West", the "intellectual", are frustrating, and they still enrage someone who went through the same crap thirty years later. But Daniel adapts with the flexibility only a teenager has. He does not conform, God knows; for a few months he even helps people escape; but he stays in the GDR, for reasons of his own. He trains as a bookseller, and little Daniel becomes the great Christoph Hein, who refuses, at least in this book, any bitterness over a life he did not live.
Whether that is because he does not feel it, or because it has no place in this work: the little novella "Unterm Staub der Zeit" invites you to ponder that and, more importantly, to read Christoph Hein's life's work, now that it is almost complete, once more from the beginning. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lobundverriss.substack.com
In this week's "Giants vs. Commanders Preview Show," Drew and Rob break down the critical factors for the New York Giants as they gear up for their matchup against the Commanders. Before that, Rob and Drew discuss Tony Pauline's report on Sportskeeda, which says significant changes are anticipated in the coaching staff of the yet-to-be-competitive Giants, with offensive coordinator Mike Kafka and defensive coordinator Don Martindale slated to be relieved of their duties by the end of the season, if not sooner. On the offensive front, the hosts emphasize the importance of capitalizing on favorable field position, highlighting the need for efficient clock control and a robust running game. Stressing the significance of ball security, the Giants must avoid turnovers to secure a competitive edge. Shifting focus to the defensive strategy, the spotlight is on Dexter Lawrence, with the hosts underlining the pivotal role he needs to play in taking control of the game. The Giants' pass rush is expected to shine against the Commanders' offensive line, and the prevention of big plays is identified as a key defensive priority to avoid getting entangled in a high-scoring shootout. In the final thoughts segment, the hosts candidly discuss the critical importance of near-perfect execution for the Giants to secure a victory. A bold suggestion is made regarding quarterback Tommy DeVito, hinting at the possibility of a change if his current performance trajectory persists. Tune in for a comprehensive analysis, strategic insights, and bold predictions as Drew and Rob navigate the nuances of the upcoming Giants vs. Commanders showdown.
#giants #commanders #nfl
Support the show
All episodes are shot LIVE with fan interactions on YouTube, Facebook, Twitter, & Twitch
Sponsor the show at: https://www.buymeacoffee.com/2giantgoofballs
Interested in starting a podcast? We recommend using Buzzsprout: https://www.buzzsprout.com/?referrer_id=2012368
In the last year or two I started hearing a lot about cell-based architectures, usually in the form of "We had a lot of issues scaling our infrastructure, but then we moved to a cell-based architecture" and "I wish I'd learned about cell-based architectures earlier; it would have saved me a lot of pain." As a result, I've wanted to share knowledge about cell-based architectures with this community for a while now. I was lucky that Eno Thereska called me and suggested we do just that! Eno, currently at Alcion, is one of the most impressive technical leaders I've had the pleasure of working with. He has deep theoretical knowledge that he knows how to apply to very practical technical solutions. In this presentation and discussion, he shares both theory and practical advice. We discussed everything from the basics of cell-based architectures and their benefits all the way to different heuristics for assigning tenants to cells. Papers and talks we discussed:
AWS Fargate under the hood: https://www.youtube.com/watch?v=Hr-zOaBGyEA
Doordash - Journey to cell-based micro services architecture: https://www.youtube.com/watch?v=ReRrhU-yRjg
Slack's Migration to Cellular Architecture: https://slack.engineering/slacks-migration-to-a-cellular-architecture/
Kora: A cloud-native event streaming platform for Kafka: https://www.vldb.org/pvldb/vol16/p3822-povzner.pdf
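One concrete heuristic for assigning tenants to cells, in the spirit of what we discussed, is rendezvous (highest-random-weight) hashing: score every tenant/cell pair and place the tenant on its highest-scoring cell, so removing a cell relocates only the tenants that lived on it. The sketch below is a minimal Python illustration of my own; the cell names and the SHA-256 scoring are assumptions, not details from the talk.

```python
import hashlib

def assign_cell(tenant_id: str, cells: list) -> str:
    """Rendezvous hashing: pick the highest-scoring cell for this tenant."""
    def score(cell: str) -> int:
        # Hash the (tenant, cell) pair so scores are stable and uniform.
        digest = hashlib.sha256(f"{tenant_id}:{cell}".encode()).hexdigest()
        return int(digest, 16)
    return max(cells, key=score)

cells = ["cell-a", "cell-b", "cell-c"]
placement = {t: assign_cell(t, cells) for t in ("tenant-1", "tenant-2", "tenant-3")}

# Key property: draining one cell only relocates the tenants that lived on it.
survivors = [c for c in cells if c != "cell-b"]
for tenant, cell in placement.items():
    if cell != "cell-b":
        assert assign_cell(tenant, survivors) == cell
```

In practice, production systems layer capacity limits, tenant pinning, and migration tooling on top of a base heuristic like this.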
Dany Hoyos is the creator of Suso and the author of El árbol de Guayacán (https://bukz.co/products/el-arbol-de-guayacan-9786287634169)
Books mentioned:
El cantar de los Nibelungos - https://bukz.co/products/los-nibelungos-9788491043447
El otoño del patriarca - Gabriel García Márquez (https://bukz.co/products/el-otono-del-patriarca-estuche)
Temporada de huracanes - Fernanda Melchor - https://bukz.co/products/temporada-de-huracanes-mapa-de-las-lenguas-9788439733904
Aranjuez - Gilmer Mesa - https://bukz.co/products/aranjuez-9786287638167
Momentos estelares de la humanidad - Stefan Zweig - https://bukz.co/products/momentos-estelares-de-la-humanidad
María Estuardo - Zweig - https://bukz.co/products/biografias-estuche-con-dos-volumenes-9788418370601
María Antonieta - Zweig - https://bukz.co/products/biografias-estuche-con-dos-volumenes-9788418370601
Fouché - Zweig - https://bukz.co/products/biografias-estuche-con-dos-volumenes-9788418370601
El mundo de ayer - Zweig - https://bukz.co/products/el-mundo-de-ayer-9788495359490
La historia interminable - Michael Ende - https://bukz.co/products/la-historia-interminable-9788491220787
La metamorfosis - Kafka - https://bukz.co/products/la-metamorfosis-y-otros-relatos-de-animales-9788467043648
Un hombre - Oriana Fallaci
A sangre fría - Truman Capote - https://bukz.co/products/a-sangre-fria-2
Ensayo sobre la ceguera - José Saramago - https://bukz.co/products/copia-de-ensayo-sobre-la-ceguera
Join the guys as they dissect Franz Kafka's In the Penal Colony. Dive into the intricacies of the story, exploring its symbolism, existential themes, and the eerie apparatus, as the guys unravel the mysteries of justice, guilt, and societal structures in Kafka's masterpiece.
Hörmann, Andi | www.deutschlandfunkkultur.de, Lesart | Direct link to the audio file
A conversation about Kafka, Freud, and their Jewish origins. Literary critic Mikaela Blomqvist and psychoanalyst Per Magnus Johansson discuss two Jewish authors: Franz Kafka (1883-1924) and Sigmund Freud (1856-1939). Both have had a decisive influence on widely ramified artistic, scientific, and literary fields, across large parts of the world. Recorded at Bokmässan (the Gothenburg Book Fair) in 2023.
"Ordesa" de Manuel Vilas (Alfaguara) se reedita 5 años después con un capítulo inédito, con un nuevo final si es que la novela tenía un final. Y es esta reedición la que entra en las estanterías de la Biblioteca de Martínez Asensio en Hoy por Hoy. En el nuevo capítulo final, el post scriptum, se sitúa en el Hostal Don Juan de Cambrils, la localidad tarraconense en la que los padres de Manuel fueron más felices la última semana de julio y la primera de agosto de los veranos de los 70. El autor vuelva casi 50 años después al lugar que ya es como un Titanic de su memoria. Manuel Vilas nos ha confesado que escribió "Ordesa' justo para decir "que todos necesitamos que nos quieran, eso es Ordesa". También nos ha donado dos libros, "El castillo" de Kafka (DeBolsillo) y "Hojas de hierba" de Walt Whitman (Espasa). Además han entrado esta semana en nuestra biblioteca dos novelas del nuevo premio Cervantes, el leonés Luis Mateo Díez. Son "La fuente de la edad" (Alfaguara) y "El limbo de los cines" (Nórdica) . Dos novedades, "Maniac" de Nejamin Labatut (Anagrama) y "La ciudad de la piel de plata" de Félix G, Modroño (Destino) y por motivos de actualidad Antonio Martínez Asensio ha sacado de sus estanterías "M., el hijo del siglo" de Antonio Scurati (Alfaguara), para recordarnos que el fascismo no es una broma, y "Memorial drive (memorias de una hija)" de Natasha Tretheway (Errata Naturae)
Claire Keegan: Kleine Dinge wie diese | Read by Stefan Wilkening | 2 hrs 22 min | Bonnevoice || from 9:12 - Franz Kafka: Die große Hörspiel-Edition | Radio plays with Bruno Ganz, Gert Westphal, Gustl Halenke, and many others | 7 hrs 23 min | DAV / NDR, SWR, SRF, Radio Bremen, WDR || from 17:46 - Paolo Giordano: Tasmanien | Read by Torben Kessler | 8 hrs 49 min | Random House Audio || from 23:56 - Kirsten Boie: Thabo und Emma | Read by Karl Menrad | 1 hr 30 min | ages 6 and up | Jumbo
Highlights from this week's conversation include:
Johnny and David's background in working together (1:56)
The background story of Estuary (4:15)
The challenges of ad tech and the need for low latency (5:44)
Use cases for moving data at scale (10:35)
Real-time data replication methods (11:54)
Challenges with Kafka and the birth of Gazette (13:54)
Comparing Kafka and Gazette (20:22)
The importance of existing streaming tools (22:28)
Challenges of managing Kafka and the need for a different approach (23:40)
The role of compaction in streaming applications (26:54)
The challenge of relaxing state management (34:01)
Replication and the problem of data synchronization (36:48)
Incremental backfills and risk-free production databases (46:03)
Estuary as a platform and connectors (47:45)
The challenges of real-time streaming (57:56)
Orchestration in real-time streaming (1:00:51)
The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.
RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
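Compaction, one of the topics in the highlights above, means retaining only the newest record per key so a log stays a bounded snapshot of current state rather than unbounded history. The following is a rough Python sketch of the idea, not Kafka's or Gazette's actual implementation:

```python
def compact(log):
    """Keep only the newest record per key, preserving log order of survivors."""
    latest = {}
    for offset, (key, value) in enumerate(log):
        latest[key] = (offset, value)  # later records overwrite earlier ones
    # Re-emit the surviving records in their original log order.
    return [(key, value) for key, (offset, value) in
            sorted(latest.items(), key=lambda kv: kv[1][0])]

log = [("user:1", "a"), ("user:2", "b"), ("user:1", "c")]
print(compact(log))  # → [('user:2', 'b'), ('user:1', 'c')]
```

Real compaction runs incrementally in the background over log segments; the key/value shape here is just an illustrative stand-in.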
Season 4 Episode 7 - Analog Lens Review: 1974 Nikkor 35mm 2.8 AI (converted) K lens. Bringing It All Back Home is back with a lens review, two analog combos, and a search for Kafka in all the wrong places. Join us and tune in as this episode sings the praises of 1970s AI lenses, explores the perfect combo of deep yellow filters and a camera that meters perfectly, and continues the search for understanding how to compose with a 35mm lens intuitively. Topics: Nikon F100, Nikkormat FT3, K-lenses, yellow filters, ID-11, Tmax 100, FP4 bulk roll. Links: Is it Pre-AI? https://youtu.be/yYFbIzI7m3E?si=K2--qvL7qavsCwoT Nikkor serial lenses: http://www.photosynthesis.co.nz/nikon/lenses.html --- Send in a voice message: https://podcasters.spotify.com/pod/show/charles-kershenblatt8/message
Gennaro Serio
"Ludmilla e il corvo"
L'Orma Editore
www.lormaeditore.it
In September 1923, while walking in a park in Berlin, Dora and Franz came upon a little girl in tears, inconsolable even at the offer of a caress or an ice cream. Kafka asked her what could be upsetting her so. The girl said she could no longer find her doll, the one with whom she had shared so many hours of happiness. She thought she had lost it in the park. Kafka almost wept, Dora says, but without letting the girl notice. He said: I know where your doll is. How do you know, asked the girl. She has written you a letter, said Kafka; I have it at home; if you like, I'll go and fetch it. Yes? asked the girl, really? Fetch it, please. My name is Franz, Kafka introduced himself. I'm Ludmilla, said the girl.
An Icelandic scholar sits in the shade of a veranda overlooking the vineyards of Coimbra. His hand rests on a sheaf of yellowed pages long believed to exist only in the most reckless fantasies of literary critics the world over. If it is what it appears to be, it would contain the story of the long journey of a doll hunted by elusive figures who keep her far from the love of her life, a crow.
For decades, manuscript hunters and fanatics ready for anything have chased those phantom pages, pursued in vain among circus tents, impregnable safes, and approximate translations. It is rumored to be the legendary novel that Franz Kafka is said to have written to console a little girl in tears, met during a walk in the park in September 1923.
Gennaro Serio takes this real episode from the great Prague writer's life and, with iridescent prose and an inventiveness dense with humor, transforms it into a relentless narrative game.
Ludmilla e il corvo is a fable-like novel, gripping and stubbornly implausible, a feast of fiction that celebrates the imaginative power of literature. Gennaro Serio was born in Naples in 1989. He works on the editorial staff of «Alias D», the book supplement of «il manifesto», and contributes to various publications and cultural supplements. His debut, Notturno di Gibilterra (L'orma editore, 2020), a literary and parodic mystery, won the Premio Italo Calvino and was compared to the atmospheres of Bolaño and Eco, hailed by critics as one of the most surprising debuts of recent years. Ludmilla e il corvo is his second novel. IL POSTO DELLE PAROLE: listening makes you think. www.ilpostodelleparole.it. This show is part of the Spreaker Prime network. If you are interested in advertising on this podcast, contact us at https://www.spreaker.com/show/1487855/advertisement
What you do not need to do in real time, you should try not to do in real time, for a variety of reasons. In this course, we'll look at a specific implementation that uses RabbitMQ as a Message Broker to better understand the pros and cons of various alternatives, including but not limited to whether or not you need to use messaging at all to solve such a problem. We'll touch upon Kafka a tiny, tiny bit but keep our focus primarily on Messaging Architecture in general, and RabbitMQ as a broker in particular. By the end of this course, you should be in a position to tell when you need to use a Message Broker, which one you may want to use, and how you should go about using it. While what we'll look at is a Ruby Microservice implementation, the learning would be just as applicable to other brokers and other languages. Purchase the course in one of two ways: 1. Go to https://getsnowpal.com, and purchase it on the Web. 2. On your phone: (i) If you are an iPhone user, go to http://ios.snowpal.com, and watch the course on the go. (ii) If you are an Android user, go to http://android.snowpal.com.
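The course's core advice, that work which does not need to happen in real time should be taken off the request path, can be illustrated without any broker at all. This is a minimal in-process sketch using Python's standard library; the course itself uses RabbitMQ and Ruby, and the names below are hypothetical stand-ins, not part of any broker's API:

```python
import queue
import threading

# A stand-in for a message broker: work that need not happen in
# real time is enqueued and handled later by a background consumer.
work_queue = queue.Queue()
results = []

def consumer():
    # Drain the queue until a None sentinel arrives.
    while True:
        msg = work_queue.get()
        if msg is None:
            break
        # Simulate slow, non-real-time work (e.g., sending an email).
        results.append(f"processed:{msg}")
        work_queue.task_done()

worker = threading.Thread(target=consumer)
worker.start()

def handle_request(payload):
    # The request path only enqueues; it never waits for the work.
    work_queue.put(payload)
    return "accepted"

print(handle_request("order-42"))  # responds immediately
work_queue.put(None)  # shut the consumer down
worker.join()
print(results)
```

With a real broker such as RabbitMQ, the queue would live in a separate process and survive restarts, but the shape of the tradeoff is the same: the caller gets an immediate acknowledgment while the work completes asynchronously.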
With his second film, director Timm Kröger was invited to the competition in Venice: "Die Theorie von allem" ("The Theory of Everything"), a film noir in a Hitchcock look, delivers an extraordinary and gripping contribution to film history / The world in your ear: 100 years of radio, a conversation with media scholar Golo Föllmer about the future of radio / Discontent in the world: in the exhibition "Kafka 1924", Munich's Villa Stuck traces the Prague writer and Kafkaesque themes in the visual arts / The well-prepared piano: Oscar winner Volker Bertelmann releases his new solo album "Philanthropy" under his stage name Hauschka
MORNING DEVOTIONAL FOR ADULTS 2023, "I AM WITH YOU". Narrated by: Roberto Navarro. From: Montreal, Canada. A courtesy of DR'Ministries and Canaan Seventh-Day Adventist Church. OCTOBER 23: "MY PRESENCE WILL GO WITH YOU". The Lord said: "My presence will go with you, and I will give you rest." Moses replied: "If your presence does not go with me, do not bring us up from here" (Exodus 33:14, 15). In his Meditations, Franz Kafka wrote: "All human faults are impatience, a premature interruption of the methodical, an apparent fencing-in of the apparent matter." * If any group has followed Kafka's words to the letter, it was the Israelites encamped at the foot of Mount Sinai. When the people began to feel "that Moses delayed in coming down from the mountain, they gathered around Aaron and said to him: 'Rise up, make us gods that shall go before us; for as for Moses, the man who brought us up out of the land of Egypt, we do not know what has become of him'" (Exodus 32:1, 2). Impatient at Moses' absence, the people fell into a terrible act of idolatry, declaring the golden calf to be the god that had brought them out of Egypt. After a great exchange between the Lord and Moses, the people were forgiven and allowed to continue their journey to Canaan. After that terrible event, Moses asked the Lord who would accompany him on the long journey to the promised land; then "the Lord said: 'My presence will go with you, and I will give you rest.' Moses replied: 'If your presence does not go with me, do not bring us up from here'" (Exodus 33:14, 15). What God is saying is: "I myself will go with you." The people wanted a golden calf to accompany them; yet God overlooks the people's impatience and decides not only to accompany them but to go before them as the God who, as Exodus 34:7 literally puts it, "bears the sin" of his people.
The Lord does not go along as a mere spectator; he goes as the companion who will make the journey lighter for his people. Perhaps, like Israel, some of us want a visible god, one that resembles what we hold in our hands, a golden calf to which we can attribute our successes. Yet what we truly need is to believe that God has told us: "My presence will go with you." That Presence, invisible to the eyes and silent to the ears, yet felt in the soul; that Presence that fills our empty spaces, that speaks to us when every other voice falls silent; that Presence that takes up your burden and lightens your journey. * Cartas al padre, Meditaciones y otras obras (Madrid: Edimat Libros, 2005), p. 107.
Victoria is joined by guest co-host Joe Ferris, CTO at thoughtbot, and Seif Lotfy, the CTO and Co-Founder of Axiom. Seif discusses the journey, challenges, and strategies behind his data analytics and observability platform. Seif, who has a background in robotics and was a 2008 Sony AIBO robotic soccer world champion, shares that Axiom pivoted from being a Datadog competitor to focusing on logs and event data. The company even built its own logs database to provide a cost-effective solution for large-scale analytics. Seif is driven by his passion for his team and the invaluable feedback from the community, emphasizing that sales validate the effectiveness of a product. The conversation also delves into Axiom's shift in focus towards developers to address their need for better and more affordable observability tools. On the business front, Seif reveals the company's challenges in scaling across multiple domains without compromising its core offerings. He discusses the importance of internal values like moving with urgency and high velocity to guide the company's future. Furthermore, he touches on the challenges and strategies of open-sourcing projects and advises avoiding platforms like Reddit and Hacker News to maintain focus. Axiom (https://axiom.co/) Follow Axiom on LinkedIn (https://www.linkedin.com/company/axiomhq/), X (https://twitter.com/AxiomFM), GitHub (https://github.com/axiomhq), or Discord (https://discord.com/invite/axiom-co). Follow Seif Lotfy on LinkedIn (https://www.linkedin.com/in/seiflotfy/) or X (https://twitter.com/seiflotfy). Visit his website at seif.codes (https://seif.codes/). Follow thoughtbot on X (https://twitter.com/thoughtbot) or LinkedIn (https://www.linkedin.com/company/150727/). Become a Sponsor (https://thoughtbot.com/sponsorship) of Giant Robots! Transcript: VICTORIA: This is the Giant Robots Smashing Into Other Giant Robots Podcast, where we explore the design, development, and business of great products. 
I'm your host, Victoria Guido, and with me today is Seif Lotfy, CTO and Co-Founder of Axiom, the best home for your event data. Seif, thank you for joining me. SEIF: Hey, everybody. Thanks for having me. This is awesome. I love the name of the podcast, given that I used to compete in robotics. VICTORIA: What? All right, we're going to have to talk about that. And I also want to introduce a guest co-host today. Since we're talking about cloud, and observability, and data, I invited Joe Ferris, thoughtbot CTO and Director of Development of our platform engineering team, Mission Control. Welcome, Joe. How are you? JOE: Good, thanks. Good to be back again. VICTORIA: Okay. I am excited to talk to you all about observability. But I need to go back to Seif's comment on competing with robots. Can you tell me a little bit more about what robots you've built in the past? SEIF: I didn't build robots; I used to program them. Remember the Sony AIBOs, where Sony made these dog robots? And we would make them compete. There was an international competition where we made them play soccer, and they had to be completely autonomous. They only communicate via Bluetooth or via wireless protocols. And you only have the camera as your sensor as well as...a chest sensor throws the ball near you, and then yeah, you make them play football against each other, four versus four with a goalkeeper and everything. Just look it up: RoboCup AIBO. Look it up on YouTube. And I...2008 world champion with the German team. VICTORIA: That sounds incredible. What kind of crowds are you drawing out for a robot soccer match? Is that a lot of people involved with that? SEIF: You would be surprised how big the RoboCup competition is. It's ridiculous. VICTORIA: I want to go. I'm ready. I want to, like, I'll look it up and find out when the next one is. SEIF: No more Sony robots but other robots. Now, there's two-legged robots. 
So, they make them play as two-legged robots, much slower than four-legged robots, but works. VICTORIA: Wait. So, the robots you were playing soccer with had four legs they were running around on? SEIF: Yeah, they were dogs [laughter]. VICTORIA: That's awesome. SEIF: We all get the same robot. It's just a competition on software, right? On a software level. And some other competitions within the RoboCup actually use...you build your own robot and stuff like that. But this one was...it's called the Standard League, where we all have a robot, and we have to program it. JOE: And the standard robot was a dog. SEIF: Yeah, I think back then...we're talking...it's been a long time. I think it started in 2001 or something. I think the competition started in 2001 or 2002. And I compete from 2006 to 2008. Robots back then were just, you know, simple. VICTORIA: Robots today are way too complicated [laughs]. SEIF: Even AI is more complicated. VICTORIA: That's right. Yeah, everything has gotten a lot more complicated [laughs]. I'm so curious how you went from being a world-champion robot dog soccer player [laughs] programmer [laughs] to where you are today with Axiom. Can you tell me a little bit more about your journey? SEIF: The journey is interesting because it came from open source. I used to do open source on the side a lot–part of the GNOME Project. That's where I met Neil and the rest of my team, Mikkel Kamstrup, the whole crowd, basically. We worked on GNOME. We worked on Ubuntu. Like, most of them were working professionally on it. I was working for another company, but we worked on the same project. We ended up at Xamarin, which was bought by Microsoft. And then we ended up doing Axiom. But we've been around each other professionally since 2009, most of us. It's like a little family. But how we ended up exactly in observability, I think it's just trying to fix pain points in my life. VICTORIA: Yeah, I was reading through the docs on Axiom. 
And there's an interesting point you make about organizations having to choose between how much data they have and how much they want to spend on it. So, maybe you can tell me a little bit more about that pain point and what you really found in the early stages that you wanted to solve. SEIF: So, the early stages of what we wanted to solve we were mainly dealing with...so, the early, early stage, we were actually trying to be a Datadog competitor, where we were going to be self-hosted. Eventually, we focused on logs because we found out that's what was a big problem for most people, just event data, not just metrics but generally event data, so logs, traces, et cetera. We built out our own logs database completely from scratch. And one of the things we stumbled upon was: basically, you have three things when it comes to logging, which is low cost, low latency, and large scale. That's what everybody wants. But you can't get all three of them; you can only get two of them. And we opted...like, we chose large scale and low cost. And when it comes to latency, we say it should be just fast enough, right? And that's where we focused on, and this is how we started building it. And with that, this is how we managed to stand out by just having way lower cost than anybody else in the industry and dealing with large scale. VICTORIA: That's really interesting. And how did you approach making the ingestion pipeline for massive amounts of data more efficient? SEIF: Make it as coordination-free as possible, right? And get rid of Kafka because Kafka just, you know, drains your...it's where you throw in money. Like maintaining Kafka...it's like back then Elasticsearch, right? Elasticsearch was the biggest part of your infrastructure that would cost money. Now, it's also Kafka. So, we found a way to have our own internal way of queueing things without having to rely on Kafka. As I said, we wrote everything from scratch to make it work.
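The "fast enough" tradeoff Seif describes, giving up a little latency to cut cost, typically shows up in an ingest path as batching: events are buffered and written in groups, so each storage write is amortized over many events. This toy sketch is illustrative only, not Axiom's actual implementation, and every name in it is made up:

```python
class BatchingIngest:
    """Buffer events and flush them in batches. Each flush models one
    (expensive) storage write, so larger batches mean lower cost but
    higher latency for any individual event. Toy sketch only."""

    def __init__(self, batch_size, storage):
        self.batch_size = batch_size
        self.storage = storage  # a list standing in for object storage
        self.buffer = []

    def ingest(self, event):
        self.buffer.append(event)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.storage.append(list(self.buffer))  # one write per batch
            self.buffer.clear()

writes = []
ingest = BatchingIngest(batch_size=3, storage=writes)
for i in range(7):
    ingest.ingest({"event": i})
ingest.flush()  # flush the partial tail batch
print(len(writes))  # 7 events cost only 3 storage writes
```

A real pipeline would also flush on a timer so a slow trickle of events is never stuck in the buffer; that timer is precisely the "half a second or a second" of latency Seif says he is willing to pay.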
Like, every now and then, I think that we can spin this out of the company and make it a new product. But now, eyes on the prize, right? JOE: It's interesting to hear that somebody who spent so much time in the open-source community ended up rolling their own solution to so many problems. Do you feel like you had some lessons learned from open source that led you to reject solutions like Kafka, or how did that journey go? SEIF: I don't think I'm rejecting Kafka. The problem is how Kafka is built, right? Kafka is still...you have to set up all these servers. They have to communicate, et cetera, etcetera. They didn't build it in a way where it's stateless, and that's what we're trying to go to. We're trying to make things as stateless as possible. So, Kafka was never built for the cloud-native era. And you can't really rely on SQS or something like that because it won't deal with this high throughput. So, that's why I said, like, we will sacrifice some latency, but at least the cost is low. So, if messages show after half a second or a second, I'm good. It doesn't have to be real-time for me. So, I had to write a couple of these things. But also, it doesn't mean that we reject open source. Like, we actually do like open source. We open-source a couple of libraries. We contribute back to open source, right? We needed a solution back then for that problem, and we couldn't find any. And maybe one day, open source will have, right? JOE: Yeah. I was going to ask if you considered open-sourcing any of your high latency, high throughput solutions. SEIF: Not high latency. You make it sound bad. JOE: [laughs] SEIF: You make it sound bad. It's, like, fast enough, right? I'm not going to compete on milliseconds because, also, I'm competing with ClickHouse. I don't want to compete with ClickHouse. ClickHouse is low latency and large scale, right? But then the cost is, you know, off the charts a bit sometimes. I'm going the other route. Like, you know, it's fast enough. 
Like, how, you know, if it's under two, three seconds, everybody's happy, right? If the results come within two, three seconds, everybody is happy. If you're going to build a real-time trading system on top of it, I'll strongly advise against that. But if you're building, you know, you're looking at dashboards, you're more in the observability field, yeah, we're good. VICTORIA: Yeah, I'm curious what you found, like, which customer personas that market really resonated with. Like, is there a particular, like, industry type where you're noticing they really want to lower their cost, and they're okay with this just fast enough latency? SEIF: Honestly, with the current recession, everybody is okay with giving up some of the speed to reduce the money because I think it's not linear reduction. It's more exponential reduction at this point, right? You give up a second, and you're saving 30%. You give up two seconds, all of a sudden, you're saving 80%. So, I'd say in the beginning, everybody thought they need everything to be very, very fast. And now they're realizing, you know, with limitations you have around your budget and spending, you're like, okay, I'm okay with the speed. And, again, we're not slow. I'm just saying people realize they don't need everything under a second. They're okay with waiting for two seconds. VICTORIA: That totally resonates with me. And I'm curious if you can add maybe a non-technical or a real-life example of, like, how this impacts the operations of a company or organization, like, if you can give us, like, a business-y example of how this impacts how people work. SEIF: I don't know how, like, how do people work on that? Nothing changed, really. They're still doing the, like...really nothing because...and that aspect is you run a query, and, again, as I said, you're not getting the result in a second. You're just waiting two seconds or three seconds, and it's there. So, nothing really changed. I think people can wait three seconds. 
And we're still like–when I say this, we're still faster than most others. We're just not as fast as people who are trying to compete on a millisecond level. VICTORIA: Yeah, that's okay. Maybe I'll take it back even, like, a step further, right? Like, our audience is really sometimes just founders who almost have no formal technical training or background. So, when we talk about observability, sometimes people who work in DevOps and operations all understand it and kind of know why it's important [laughs] and what we're talking about. So, maybe you could, like, go back to -- SEIF: Oh, if you're asking about new types of people who've been using it -- VICTORIA: Yeah. Like, if you're going to explain to, like, a non-technical founder, like, why your product is important, or, like, how people in their organization might use it, what would you say? SEIF: Oh, okay, if you put it like that. It's more of if you have data, timestamp data, and you want to run analytics on top of it, so that could be transactions, that could be web vitals, rather than count every time somebody visits, you have a timestamp. So, you can count, like, how many visitors visited the website and what, you know, all these kinds of things. That's where you want to use something like Axiom. That's outside the DevOps space, of course. And in DevOps space, there's so many other things you use Axiom for, but that's outside the DevOps space. And we actually...we implemented as zero-config integration with Vercel that kind of went viral. And we were, for a while, the number one enterprise for self-integration because so many people were using it. So, Vercel users are usually not necessarily writing the most complex backends, but a lot of things are happening on the front-end side of things. And we would be giving them dashboards, automated dashboards about, you know, latencies, and how long a request took, and how long the response took, and the content type, and the status codes, et cetera, et cetera. 
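The non-DevOps use case Seif describes, running analytics over timestamped event data such as visit counts, reduces to grouping events into time buckets. A minimal sketch with made-up data (any event store, Axiom included, is doing a heavily optimized version of this):

```python
from collections import Counter
from datetime import datetime

# Hypothetical timestamped events, the shape of data Seif describes.
events = [
    {"ts": "2023-11-01T09:15:00", "path": "/"},
    {"ts": "2023-11-01T17:40:00", "path": "/docs"},
    {"ts": "2023-11-02T08:05:00", "path": "/"},
]

# Count visits per day by bucketing each event's timestamp to a date.
visits_per_day = Counter(
    datetime.fromisoformat(e["ts"]).date().isoformat() for e in events
)
print(visits_per_day)  # two visits on Nov 1, one on Nov 2
```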
And there's a huge user base around that. VICTORIA: I like that. And it's something, for me, you know, as a managing director of our platform engineering team, I want to talk more to founders about. It's great that you put this product and this app out into the world. But how do you know that people are actually using it? How do you know that people, like, maybe, are they all quitting after the first day and not coming back to your app? Or maybe, like, the page isn't loading or, like, it's not working as they expected it to. And, like, if you don't have anything observing what users are doing in your app, then it's going to be hard to show that you're getting any traction and know where you need to go in and make corrections and adjust. SEIF: We have two ways of doing this. Right now, internally, we use our own tools to see, like, who is sending us data. We have a deployment that's monitoring production deployment. And we're just, you know, seeing how people are using it, how much data they're sending every day, who stopped sending data, who spiked in sending data sets, et cetera. But we're using Mixpanel, and Dominic, our Head of Product, implemented a couple of key metrics to that for that specifically. So, we know, like, what's the average time until somebody starts going from building its own queries with the builder to writing APL, or how long it takes them from, you know, running two queries to five queries. And, you know, we just start measuring these things now. And it's been going...we've been growing healthy around that. So, we tend to measure user interaction, but also, we tend to measure how much data is being sent. Because let's keep in mind, usually, people go in and check for things if there's a problem. So, if there's no problem, the user won't interact with us much unless there's a notification that kicks off. We also just check, like, how much data is being sent to us the whole time. VICTORIA: That makes sense. 
Like, you can't just rely on, like, well, if it was broken, they would write a [chuckles], like, a question or something. So, how do you get those metrics and that data around their interactions? So, that's really interesting. So, I wonder if we can go back and talk about, you know, we already mentioned a little bit about, like, the early days of Axiom and how you got started. Was there anything that you found in the early discovery process that was surprising and made you pivot strategy? SEIF: A couple of things. Basically, people don't really care about the tech as much as they care [inaudible 12:51] and the packaging, so that's something that we had to learn. And number two, continuous feedback. Continuous feedback changed the way we worked completely, right? And, you know, after that, we had a Slack channel, then we opened a Discord channel. And, like, this continuous feedback coming in just helps with iterating, helps us with prioritizing, et cetera. And that changed the way we actually developed product. VICTORIA: You use Slack and Discord? SEIF: No. No Slack anymore. We had a community Slack. We had a community [inaudible 13:19] Slack. Now, there's no community Slack. We only have a community Discord. And the community Slack is...sorry, internally, we use Slack, but there's a community Discord for the community. JOE: But how do you keep that staffed? Is it, like, everybody is in the Discord during working hours? Is it somebody's job to watch out for community questions? SEIF: I think everybody gets involved now just...and you can see it. If you go on our Discord, you will just see it. Just everyone just gets involved. I think just people are passionate about what they're doing. At least most people are involved on Discord, right? Because there's, like, Discord the help sections, and people are just asking questions and other people answering. 
And now, we reached a point where people in the community start answering the questions for other people in the community. So, that's how we see it's starting to become a healthy community, et cetera. But that is one of my favorite things: when I see somebody from the community answering somebody else, that's a highlight for me. Actually, we hired somebody from that community because they were so active. JOE: Yeah, I think one of the biggest signs that a product is healthy is when there's a healthy ecosystem building up around it. SEIF: Yeah, and Discord reminds me of the old days of open source, like IRC, just with memes now. But because all of us come from the old IRC days, being on Discord and chatting around, et cetera, et cetera, just gives us this momentum back, gave us this momentum back, whereas Slack always felt a bit too businessy to me. JOE: Slack is like IRC with emoji. Discord is IRC with memes. SEIF: I would say Slack reminds me somehow of MSN Messenger, right? JOE: I feel like there's a huge slam on MSN Messenger here. SEIF: [laughs] What do you guys use internally, Slack or? I think you're using Slack, right? Or Teams. Don't tell me you're using Teams. JOE: No, we're using Slack. SEIF: Okay, good, because I'll sh*t talk here when I start talking about Teams, so...I remember that one thing Google did once, and that failed miserably. JOE: Google still has, like, seven active chat products. SEIF: Like, I think every department or every, like, group of engineers just uses one of them internally. I'm not sure. Never got to that point. But hey, who am I to judge? VICTORIA: I just feel like I end up using all of them, and then I'm just rotating between different tabs all day long. You maybe talked me into using Discord. I feel like I've been resisting it, but you got me with the memes. SEIF: Yeah, it's definitely worth it. It's more entertaining. More noise, but more entertaining.
You feel it's alive, whereas Slack is...also because there's no, like, history is forever. So, you always go back, and you're like, oh my God, what the hell is this? VICTORIA: Yeah, I have, like, all of them. I'll do anything. SEIF: They should be using Axiom in the background. Just send data to Axiom; we can keep your chat history. VICTORIA: Yeah, maybe. I'm so curious because, you know, you mentioned something about how you realized that it didn't matter really how cool the tech was if the product packaging wasn't also appealing to people. Because you seem really excited about what you've built. So, I'm curious, so just tell us a little bit more about how you went about trying to, like, promote this thing you built. Or was, like, the continuous feedback really early on, or how did that all kind of come together? SEIF: The continuous feedback helped us with performance, but actually getting people to sign up and pay money it started early on. But with Vercel, it kind of skyrocketed, right? And that's mostly because we went with the whole zero-config approach where it's just literally two clicks. And all of a sudden, Vercel is sending your data to Axiom, and that's it. We will create [inaudible 16:33]. And we worked very closely with Vercel to do this, to make this happen, which was awesome. Like, yeah, hats off to them. They were fantastic. And just two clicks, three clicks away, and all of a sudden, we created Axiom organization for you, the data set for you. And then we're sending it...and the data from Vercel is being forwarded to it. I think that packaging was so simple that it made people try it out quickly. And then, the experience of actually using Axiom was sticky, so they continued using it. And then the price was so low because we give 500 gigs for free, right? You send us 500 gigs a month of logs for free, and we don't care. And you can start off here with one terabyte for 25 bucks. So, people just start signing up. 
Now, before that, it was five terabytes a month for $99, and then we changed the plan. But yeah, it was cheap enough, so people just start sending us more and more and more data eventually. They weren't thinking...we changed the way people start thinking of “what am I going to send to Axiom” or “what am I going to send to my logs provider or log storage?” To how much more can I send? And I think that's what we wanted to reach. We wanted people to think, how much more can I send? JOE: You mentioned latency and cost. I'm curious about...the other big challenge we've seen with observability platforms, including logs, is cardinality of labels. Was there anything you had to sacrifice upfront in terms of cardinality to manage either cost or volume? SEIF: No, not really. Because the way we designed it was that we should be able to deal with high cardinality from scratch, right? I mean, there's open-source ways of doing, like, if you look at how, like, a column store, if you look at a column store and every dimension is its own column, it's just that becomes, like, you can limit on the amount of columns you're creating, but you should never limit on the amount of different values in a column could be. So, if you're having something like stat tags, right? Let's say hosting, like, hostname should be a column, but then the different hostnames you have, we never limit that. So, the cardinality on a value is something that is unlimited for us, and we don't really see it in cost. It doesn't really hit us on cost. It reflects a bit on compression if you get into technical details of that because, you know, high cardinality means a lot of different data. So, compression is harder, but it's not repetitive. 
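Seif's column-store model, where the number of columns is bounded but the number of distinct values inside a column is not, can be sketched in a few lines. This is a purely illustrative toy, with hypothetical names, not Axiom's storage engine:

```python
MAX_COLUMNS = 4  # the schema width is what gets limited...

columns = {}  # column name -> list of values

def insert(row):
    for key, value in row.items():
        if key not in columns and len(columns) >= MAX_COLUMNS:
            raise ValueError(f"too many columns: {key!r}")
        columns.setdefault(key, []).append(value)

# ...but the cardinality of values inside a column is not limited:
# 10,000 distinct hostnames still occupy just one column.
for i in range(10_000):
    insert({"hostname": f"host-{i}", "status": 200})

print(len(columns))                   # 2 columns
print(len(set(columns["hostname"])))  # 10,000 distinct values
```

High-cardinality values cost only compression efficiency here, exactly as Seif notes: many distinct strings compress worse than repetitive ones, but they never multiply the number of storage objects.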
But then if you look at, you know, oh, I want to send a lot of different types of fields, not values with fields, so you have hostname, and latency, and whatnot, et cetera, et cetera, yeah, that's where limitation starts because then they have...it's like you're going to a wide range of...and a wider dimension. But even that, we, yeah, we can deal with thousands at this point. And we realize, like, most people will not need more than three or four. It's like a Postgres table. You don't need more than 3,000 to 4,000 columns; else, you know, you're doing a lot. JOE: I think it's actually pretty compelling in terms of cost, though. Like, that's one of the things we've had to be most careful about in terms of containing cost for metrics and logs is, a lot of providers will...they'll either charge you based on the number of unique metric combinations or the performance suffers greatly. Like, we've used a lot of Prometheus-based solutions. And so, when we're working with developers, even though they don't need more than, you know, a few dozen metric combinations most of the time, it's hard for people to think of what they need upfront. It's much easier after you deploy it to be able to query your data and slice it retroactively based on what you're seeing. SEIF: That's the detail. When you say we're using Prometheus, a lot of the metrics tools out there are using, just like Prometheus, are using the Gorilla data structure. And the Gorilla data structure was never designed to deal with high-cardinality labels. So, basically, to put it in a simple way, every combination of tags you send for metrics is its own file on disk. That's, like, the very simple way of explaining this. And then, when you're trying to search through everything, right? And you have a lot of these combinations. I actually have to get all these files from this conversion back together, you know, and then they're chunked, et cetera. So, it's a problem.
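Seif's simplification, that in Prometheus-style storage every distinct tag combination becomes its own series (its own file on disk, in his telling), is easy to demonstrate with a toy series map. This also shows why his timestamp-as-a-label mistake, mentioned a little later, fails to scale; the code is illustrative only:

```python
series = {}  # (metric, frozenset of label pairs) -> list of samples

def record(metric, labels, value):
    # In Prometheus/Gorilla-style storage, each unique label
    # combination is effectively its own series.
    key = (metric, frozenset(labels.items()))
    series.setdefault(key, []).append(value)

# Bounded labels: 1,000 samples all land in one series.
for i in range(1000):
    record("latency_ms", {"host": "web-1"}, i)

# A timestamp as a label: every sample creates a brand-new series.
for i in range(1000):
    record("latency_ms", {"host": "web-1", "ts": str(i)}, i)

print(len(series))  # 1 + 1000 = 1001 series for 2000 samples
```

This is the cardinality explosion Joe alludes to: providers either bill per unique combination or slow down as the series count grows, which is why unbounded label values (timestamps, request IDs, response times) are the classic mistake.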
Generally, how metrics are doing it...most metrics products are using it, even VictoriaMetrics, et cetera. What they're doing is they're using either the Prometheus TSDB data structure, which is based on Gorilla. Influx was doing the same thing. They pivoted to using more and more like the ones we use, and Honeycomb uses, right? So, we might not be as fast on metrics side as these highly optimized. But then when it comes to high [inaudible 20:49], once we start dealing with high cardinality, we will be faster than those solutions. And that's on a very technical level. JOE: That's pretty cool. I realize we're getting pretty technical here. Maybe it's worth defining cardinality for the audience. SEIF: Defining cardinality to the...I mean, we just did that, right? JOE: What do you think, Victoria? Do you know what cardinality is now? [laughs] VICTORIA: All right. Now I'm like, do I know? I was like, I think I know what it means. Cardinality is, like, let's say you have a piece of data like an event or a transaction. SEIF: It's like the distinct count on a property that gives you the cardinality of a property. VICTORIA: Right. It's like how many pieces of information you have about that one event, basically, yeah. JOE: But with some traditional metrics stores, it's easy to make mistakes. For example, you could have unbounded cardinality by including response time as one of the labels -- SEIF: Tags. JOE: And then it's just going to -- SEIF: Oh, no, no. Let me give you a better one. I put in timestamp at some point in my life. JOE: Yeah, I feel like everybody has done that one. [laughter] SEIF: I've put a system timestamp at some point in my life. There was the actual timestamp, and there was a system timestamp that I would put because I wanted to know when the...because I couldn't control the timestamp, and the only timestamp I had was a system timestamp. 
I would always add the actual timestamp of when that event actually happened into a metric, and yeah, that did not scale. MID-ROLL AD: Are you an entrepreneur or start-up founder looking to gain confidence in the way forward for your idea? At thoughtbot, we know you're tight on time and investment, which is why we've created targeted 1-hour remote workshops to help you develop a concrete plan for your product's next steps. Over four interactive sessions, we work with you on research, product design sprint, critical path, and presentation prep so that you and your team are better equipped with the skills and knowledge for success. Find out how we can help you move the needle at tbot.io/entrepreneurs. VICTORIA: Yeah. I wonder if you could maybe share, like, a story about when it's gone wrong, and you've suddenly charged a lot of money [laughs] just to get information about what's happening in the system. Any, like, personal experiences with observability that kind of informed what you did with Axiom? SEIF: Oof, I have a very bad one, like, a very, very bad one. I used to work for a company. We had to deploy Elasticsearch on Windows Servers, and it was US-East-1. So, just a combination of Elasticsearch back in 2013, 2014 together with Azure and Windows Server was not a good idea. So, you see where this is going, right? JOE: I see where it's going. SEIF: Eventually, we had, like, we get all these problems because we used Elasticsearch and Kibana as our, you know, observability platform to measure everything around the product we were building. And funny enough, it cost us more than actually maintaining the infrastructure of the product. But not just that, it also kept me up longer because most of the downtimes I would get were not because of the product going down. It's because my Elasticsearch cluster started going down, and there's reasons for that. 
Because back then, Microsoft Azure thought that it was okay for any VM to lose connection with the rest of the VMs for 30 seconds per day. And then, all of a sudden, you have Elasticsearch with a split-brain problem. And there was a phase where I started getting alerted so much that back then, my partner threatened to leave me. So I bought a...what I think was a shock bracelet or a shock collar via Bluetooth, and I connected it to my phone for any notification. And I bought that off Alibaba, by the way. And I would charge it at night, put it on my wrist, and go to sleep. And then, when an alert happened, it would fully discharge the battery on me every time. JOE: Okay, I have to admit, I did not see where that was going. SEIF: Yeah, did that for a while; definitely did not save my relationship either. But eventually, that was the point where, you know, we started looking into other observability tools like Datadog, et cetera, et cetera, et cetera. And that's where the actual journey began, where we moved away from Elasticsearch and Kibana to look for something, okay, that we don't have to maintain ourselves and we can use, et cetera. So, it's not about the costs as much; it was just pain. VICTORIA: Yeah, pain is a real pain point, actual physical [chuckles] and emotional pain point [laughter]. What, like, motivates you to keep going with Axiom and to keep, like, the wind in your sails to keep working on it? SEIF: There's a couple of things. I love working with my team. So, honestly, I just wake up, and I compliment my team. I just love working with them. They're a lot of fun to work with. And they challenge me, and I challenge them back. And I upset them a lot. And they can't upset me, but I upset them. But I love working with them, and I love working with that team. And the other thing is getting, like, having this constant feedback from customers just makes you want to do more and, you know, close sales, et cetera. 
It's interesting, like, how I'm a very technical person, and I'm more interested in sales because sales means your product works, the product, the technical parts, et cetera. Because if technically it's not working, you can't build a product on top of it. And if you're not selling it, then what's the point? You only sell when the product is good, more or less, unless you're Oracle. VICTORIA: I had someone ask me about Oracle recently, actually. They're like, "Are you considering going back to it?" And I'm maybe a little allergic to it from having a federal consulting background [laughs]. But maybe they'll come back around. I don't know. We'll see. SEIF: Did you sell your soul back then? VICTORIA: You know, I feel like I just grew up in a place where that's what everyone did was all. SEIF: It was Oracle, IBM, or HP back in the day. VICTORIA: Yeah. Well, basically, when you're working on applications that were built in, like, the '80s, Oracle was, like, this hot, new database technology [laughs] that they just got five years ago. So, that's just, yeah, interesting. SEIF: Although, from a database perspective, they did a lot of the innovations. A lot of first innovations could have come from Oracle. From a technical perspective, they're ridiculous. I'm not sure from a product perspective how good they are. But I know their sales team is so big, so huge. They don't care about the product anymore. They can still sell. VICTORIA: I think, you know, everything in tech is cyclical. So, you know, if they have the right strategy and they're making some interesting changes over there, there's always a chance [laughs]. Certain use cases, I mean, I think that's the interesting point about working in technology is that you know, every company is a tech company. And so, there's just a lot of different types of people, personas, and use cases for different types of products. So, I wonder, you know, you kind of mentioned earlier that, like, everyone is interested in Axiom. 
But, you know, I don't know, are you narrowing the market? Or, like, how are you trying to kind of focus your messaging and your sales for Axiom? SEIF: I'm trying to focus on developers. So, we're really trying to focus on developers because the experience around observability is crap. It's stupid expensive. Sorry for being straightforward, right? And that's what we're trying to change. And we're targeting developers mainly. We want developers to like us. And we'll find all these different types of developers who are using it, and that's the interesting thing. And because of them, we start adding more and more features, like, you know, we added tracing, and now that enables, like, billions of events pushed through for, you know, again, for almost no money, again, $25 a month for a terabyte of data. And we're doing this with metrics next. And that's just to address the developers who have been giving us feedback and the market demand. I will sum it up, again, like, the experience is crap, and it's stupid expensive. I think that's the [inaudible 28:07] of observability is just that's how I would sum it up. VICTORIA: If you could go back in time and talk to yourself when you were still a developer, now that you're CTO, what advice would you give yourself? JOE: Besides avoiding shock collars. VICTORIA: [laughs] Yes. SEIF: Get people's feedback quickly so you know you're on the right track. I think that's very, very, very, very important. Don't just work in the dark, or don't go too long into stealth mode because, eventually, people catch up. Also, ship when you're 80% ready because 100% is too late. I think it's the same thing here. JOE: Ship often and early. SEIF: Yeah, even if it's not fully ready, it's still feedback. VICTORIA: Ship often and early and talk to people [laughs]. 
Just, do you feel like, as a developer, did you have the skills you needed to be able to get the most out of that feedback and out of those conversations you were having with people around your product? SEIF: I still don't think I'm good enough. You're just constantly learning, right? I just accepted I'm part of a team, and I have my contributions. But as an individual, I still don't think I know enough. I think there's more I need to learn at this point. VICTORIA: I wonder, what questions do you have for me or Joe? SEIF: How did you start your podcast, and why the name? VICTORIA: Oh, man, I hope I can answer. So, the podcast was started...I think we're actually about to be at our 500th episode. So, I've only been a host for the last year. Maybe Joe even knows more than I do. But what I recall is that one person at thoughtbot thought it would be a great idea to start a podcast, and then they did it. And it seems like the whole company is obsessed with robots. I'm not really sure where that came from. There used to be a tiny robot in the office, is what I remember. And people started using that as, like, the mascot. And then, yeah, that's it, that's the whole thing. SEIF: Was the robot doing anything useful or just being cute? JOE: It was just cute, and it's hard to make a robot cute. SEIF: Was it a real robot, or was it like a -- JOE: No, there was, at one point, a toy robot. The name...I actually forget the origin, the origin of the name, but the name Giant Robots comes from our blog. So, we named the podcast the same as the blog: Giant Robots Smashing Into Other Giant Robots. SEIF: Yes, it's called Transformers. VICTORIA: Yeah, I like it. It's, I mean, now I feel like -- SEIF: [laughs] VICTORIA: We got to get more, like, robot dogs involved [laughs] in the podcast. SEIF: Like, I wanted to add one thing when we talked about, you know, what gets me going. And I want to mention that I have a six-month-old son now. 
He definitely adds a lot of motivation for me to wake up in the morning and work. But he also makes me wake up regardless if I want to or not. VICTORIA: Yeah, you said you had invented an alarm clock that never turns off. Never snoozes [laughs]. SEIF: Yes, absolutely. VICTORIA: I have the same thing, but it's my dog. But he does snooze, actually. He'll just, like, get tired and go back to sleep [laughs]. SEIF: Oh, I have a question. Do dogs have a Tamagotchi phase? Because, like, my son, the first three months was like a Tamagotchi. It was easy to read him. VICTORIA: Oh yeah, uh-huh. SEIF: Noisy but easy. VICTORIA: Yes, yes. SEIF: Now, it's just like, yeah, I don't know, like, the last month he has opinions at six months. I think it's because I raised him in Europe. I should take him back to the Middle East [laughs]. No opinions. VICTORIA: No, dogs totally have, like, a communication style, you know, I pretty much know what he, I mean, I can read his mind, obviously [laughs]. SEIF: Sure, but that's when they grow a bit. But what when they were very...when the dog was very young? VICTORIA: Yeah, they, I mean, they also learn, like, your stuff, too. So, they, like, learn how to get you to do stuff or, like, I know she'll feed me if I'm sitting here [laughs]. SEIF: And how much is one dog year, seven years? VICTORIA: Seven years. SEIF: Seven years? VICTORIA: Yeah, seven years? SEIF: Yeah. So, basically, in one year, like, three months, he's already...in one month, he's, you know, seven months old. He's like, yeah. VICTORIA: Yeah. In a year, they're, like, teenagers. And then, in two years, they're, like, full adults. SEIF: Yeah. So, the first month is basically going through the first six months of a human being. So yeah, you pass...the first two days or three days are the Tamagotchi phase that I'm talking about. 
VICTORIA: [chuckles] I read this book, and it was, like, to understand dogs, it's like, they're just like humans that are trying to, like, maximize the number of positive experiences that they have. So, like, if you think about that framing around all your interactions about, like, maybe you're trying to get your son to do something, you can be like, okay, how do I, like, I don't know, train him that good things happen when he does the things I want him to do? [laughs] That's kind of maybe manipulative but effective. So, you're not learning baby sign language? You're just, like, going off facial expressions? SEIF: I started. I know how Mama looks like. I know how Dada looks like. I know how more looks like, slowly. And he already does this thing that I know that when he's uncomfortable, he starts opening and closing his hands. And when he's completely uncomfortable and basically that he needs to go sleep, he starts pulling his own hair. VICTORIA: [laughs] I do the same thing [laughs]. SEIF: You pull your own hair when you go to sleep? I don't have that. I don't have hair. VICTORIA: I think I do start, like, touching my head though, yeah [inaudible 33:04]. SEIF: Azure took the last bit of hair I had! Went away with Azure, Elasticsearch, and the shock collar. VICTORIA: [laughs] SEIF: I have none of them left. Absolutely nothing. I should sue Elasticsearch for this shit. VICTORIA: [laughs] Let me know how that goes. Maybe there's more people who could join your lawsuit, you know, with a class action. SEIF: [laughs] Yeah. Well, one thing I wanted to also just highlight is, right now, one of the things that also makes the company move forward is we realized that in a single domain, we proved ourselves very valuable to specific companies, right? So, that was a big, big thing, milestone for us. And now we're trying to move into a handful of domains and see which one of those work out the best for us. Does that make sense? VICTORIA: Yeah. 
And I'm curious: what are the biggest challenges or hurdles that you associate with that? SEIF: At this point, you don't want just feedback. You want constructive criticism. Like, you want to work with people who will criticize the applic...and you iterate with them based on this criticism, right? They're just not happy about you and trying to create design partners. So, for us, it was very important to have these small design partners who can work with us to actually prove ourselves as valuable in a single domain. Right now, we need to find a way to scale this across several domains. And how do you do that without sacrificing? Like, how do you open into other domains without sacrificing the original domain you came from? So, there's a lot of things [inaudible 34:28]. And we are in the middle of this. Honestly, I Forrest Gumped my way through half of this, right? Like, I didn't know what I was doing. I had ideas. I think it's more of luck at this point. And I had luck. No, we did work. We did work a lot. We did sleepless nights and everything. But I think, in the last three years, we became more mature and started thinking more about product. And as I said, like, our CEO, Neil, and Dominic, our head of product, are putting everything behind being a product-led organization, not just a tech-led organization. VICTORIA: That's super interesting. I love to hear that that's the way you're thinking about it. JOE: I was just curious what other domains you're looking at pushing into if you can say. SEIF: So, we are going to start moving into ETL a bit more. We're trying to see how we can fit in specific ML scenarios. I can't say more about the other, though. JOE: Do you think you'll take the same approaches in terms of value proposition, like, low cost, good enough latency? SEIF: Yes, that's definitely one thing. But there's also...so, this is the values we're bringing to the customer. But also, now, our internal values are different. 
Now it's more of move with urgency and high velocity, as we said before, right? Think big, work small. In terms of the values we're going to take to the customers, it's the same ones. And maybe we'll add some more, but it's still going to be low-cost and large-scale. And, internally, we're just becoming more, excuse my French, agile. I hate that word so much. Should be good with Scrum. VICTORIA: It's painful, but everyone knows what you're talking about [laughs], you know, like -- SEIF: See, I have opinions here about Scrum. I think Scrum should be only used in terms of iceScrum [inaudible 36:04], or something like that. VICTORIA: Oh no [laughter]. Well, it's a rugby term, right? Like, that's where it should probably stay. SEIF: I did not know it's a rugby term. VICTORIA: Yeah, so it should stay there, but -- SEIF: Yes [laughs]. VICTORIA: Yeah, I think it's interesting. Yeah, I like the being flexible. I like the just, like, continuous feedback and how you all have set up to, like, talk with your customers. Because you mentioned earlier that, like, you might open source some of your projects. And I'm just curious, like, what goes into that decision for you when you're going to do that? Like, what makes you think this project would be good for open source or when you think, actually, we need to, like, keep it? SEIF: So, we open source libraries, right? We actually do that already. And some other big organizations use our libraries; even our competitors use our libraries, that we do. The whole product itself, or at least a big part of the product, like the database, I'm not sure we're going to open source that, at least not anytime soon. And if we open source it, it's going to be at a point where the value-add it brings is nothing compared to how good our product is, right? So, if we can replace whatever's at the back...the storage engine we have in the back with something else and the product doesn't get affected, that's when we open source it. 
VICTORIA: That's interesting. That makes sense to me. But yeah, thank you for clarifying that. I just wanted to make sure to circle back. Since you have this big history in open source, yeah, I'm curious if you see... SEIF: Burning me out? VICTORIA: Burning you out, yeah [laughter]. Oh, that's a good question. Yeah, like, because, you know, we're about to be in October here. Do you have any advice or strategies as a maintainer for not getting burned out during the next couple of weeks besides, like, hide in a cave and without internet access [laughs]? SEIF: Stay away from Reddit and Hacker News. That's my goal for October now because I'm always afraid of getting too attached to an idea, or too motivated, or excited by an idea that I drift away from what I am actually supposed to be doing. VICTORIA: Last question is, is there anything else you would like to promote? SEIF: Yeah, check out our website; I think it's at axiom.co. Check it out. Sign up. And comment on Discord and talk to me. I don't bite, sometimes grumpy, but that's just because of lack of sleep in the morning. But, you know, around midday, I'm good. And if you're ever in Berlin and you want to hang out, I'm more than willing to hang out. VICTORIA: Whoo, that's awesome. Yeah, Berlin is great. I was there a couple of years ago but no plans to go back anytime soon, but maybe I'll keep that in mind. You can subscribe to the show and find notes along with a complete transcript for this episode at giantrobots.fm. If you have questions or comments, email us at firstname.lastname@example.org. And you could find me on Twitter @victori_ousg. And this podcast is brought to you by thoughtbot and produced and edited by Mandy Moore. Thanks for listening. See you next time. Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at tbot.io/referral. 
Or you can email us at email@example.com with any questions. Special Guests: Joe Ferris and Seif Lotfy.
Seif Lotfy, Co-Founder and CTO at Axiom, joins Corey on Screaming in the Cloud to discuss how and why Axiom has taken a low-cost approach to event data. Seif describes the events that led to him helping co-found a company, and explains why the team wrote all their code from scratch. Corey and Seif discuss their views on AWS pricing, and Seif shares his views on why AWS doesn't have to compete on price. Seif also reveals some of the exciting new products and features that Axiom is currently working on. About Seif: Seif is the bubbly Co-founder and CTO of Axiom, where he has helped build the next generation of logging, tracing, and metrics. His background is at Xamarin and Deutsche Telekom, and he is the kind of deep technical nerd that geeks out on white papers about emerging technology and then goes to see what he can build. Links Referenced: Axiom: https://axiom.co/ Twitter: https://twitter.com/seiflotfy Transcript: Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. This promoted guest episode is brought to us by my friends, and soon to be yours, over at Axiom. Today I'm talking with Seif Lotfy, who's the co-founder and CTO of Axiom. Seif, how are you? Seif: Hey, Corey, I am very good, thank you. It's pretty late here, but it's worth it. I'm excited to be on this interview. How are you today? Corey: I'm not dead yet. It's weird, I see you at a bunch of different conferences, and I keep forgetting that you do in fact live half a world away. Is the entire company based in Europe? And where are you folks? Where do you start and where do you stop geographically? Let's start there. 
We over—everyone dives right into product. No, no, no. I want to know where in the world people sit because apparently, that's the most important thing about a company in 2023. Seif: Unless you ask Zoom because they're undoing whatever they did. We're from New Zealand, all the way to San Francisco, and everything in between. So, we have people in Egypt and Nigeria, all around Europe, all around the US… and UK, if you don't consider it Europe anymore. Corey: Yeah, it really depends. There's a lot of unfortunate naming that needs to get changed in the wake of that. Seif: [laugh]. Corey: But enough about geopolitics. Let's talk about industry politics. I've been a fan of Axiom for a while and I was somewhat surprised to realize how long it had been around because I only heard about you folks a couple of years back. What is it you folks do? Because I know how I think about what you're up to, but you've also gone through some messaging iteration, and it is a near certainty that I am behind the times. Seif: Well, at this point, we just define ourselves as the best home for event data. So, Axiom is the best home for event data. We try to deal with everything that is event-based, so time-series. So, we can talk metrics, logs, traces, et cetera. And right now predominantly serving engineering and security. And we're trying to be—or we are—the first cloud-native time-series platform to provide streaming search, reporting, and monitoring capabilities. And we're built from the ground up, by the way. Like, we didn't actually—we're not using the Parquet [unintelligible 00:02:36] thing. We built everything from the ground up. Corey: When I first started talking to you folks a few years back, there were two points to me that really stood out, and I know at least one of them still holds true. The first is that at the time, you were primarily talking about log data. Just send all your logs over to Axiom. The end. 
And that was a simple message that was simple enough that I could understand it, frankly. Because back when I was slinging servers around and, you know, breaking half of them, logs were effectively how we kept track of what was going on, where. These days, it feels like everything has been repainted with a very broad brush called observability, and the takeaway from most company pitches has been, you must be smarter than you are to understand what it is that we're up to. And in some cases, you scratch below the surface and realize no, they have no idea what they're talking about either, and they're really hoping you don't call them on that. Seif: It's packaging. Corey: Yeah. It is packaging, and that's important. Seif: It's literally packaging. If you look at it, traces and logs, these are events. There's a timestamp and just data with it. It's a timestamp and data with it, right? Even metrics is all the way to that point. And a good example: now everybody's jumping on [OTel 00:03:46]. For me, OTel is nothing else but a different structure for time series, for different types of time series, and that can be used differently, right? Or at least not used differently, but you can leverage it differently. Corey: And the other thing that you did that was interesting and is a lot, I think, more sustainable as far as [moats 00:04:04] go, rather than things that can be changed on a billboard or whatnot, is your economic position. And your pricing has changed around somewhat, but I ran a number of analyses on your cost that you were passing on to customers, and my takeaway was that it was a little bit more expensive to store data for logs in Axiom than it was to store it in S3, but not by much. And it just blew away the price point of everything else focused around logs, including AWS; you're paying 50 cents a gigabyte to ingest CloudWatch logs data over there. 
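Corey's gap can be sanity-checked with quick arithmetic, using the two figures quoted in these conversations (50 cents per gigabyte for CloudWatch Logs ingest, and the roughly $25 per terabyte per month Axiom number Seif mentioned earlier). Prices change, so treat this purely as an illustration of the order of magnitude:

```python
# Illustrative cost comparison using the figures quoted in the episode.
TB_IN_GB = 1_000  # decimal terabyte, for round numbers

cloudwatch_ingest_per_gb = 0.50                         # $/GB, as quoted
cloudwatch_monthly = cloudwatch_ingest_per_gb * TB_IN_GB
axiom_monthly = 25.0                                    # $/TB/month, as quoted

print(f"CloudWatch ingest for 1 TB: ${cloudwatch_monthly:,.0f}")  # $500
print(f"Axiom for 1 TB:            ${axiom_monthly:,.0f}")        # $25
print(f"Ratio: {cloudwatch_monthly / axiom_monthly:.0f}x")        # 20x
```

Even before storage and query charges, the ingest fee alone is roughly a 20x difference on these quoted numbers, which is the "blew away the price point" claim in concrete terms.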
Other companies are charging multiples of that, and Cisco recently bought Splunk for $28 billion because it was cheaper than paying their annual Splunk bill. How did you get to that price point? Is it just a matter of everyone else being greedy, or have you done something different? Seif: We looked at it from the perspective of… so there's the three L's of logging. I forgot the name of the person at Netflix who talked about that, but basically, it's low cost, low latency, large scale, right? And you will never be able to fulfill all three of them. And we decided to work on low cost and large scale. And in terms of low latency, we won't be as low as others like ClickHouse, but we are low enough. Like, we're fast enough. The idea is to be fast enough because in most cases, I don't want to compete on milliseconds. I think if the user can see his data in two seconds, he's happy. Or three seconds, he's happy. I'm not going to be, like, one to two seconds and make the cost exponentially higher because I'm one second faster than the other. And that's, I think, the way we approached this from day one. And from day one, we also started utilizing the idea of the existence of Object Storage; we have our own compressions, our own encodings, et cetera, from day one, too, and we still stick to that. That's why we never converted to other existing things like Parquet. Also because we are Schema-On-Read, which Parquet doesn't really allow you to do. But other than that, it's… from day one, we wanted to save costs by also making coordination free. 
So, ingest has to be coordination free, right, because then we don't run a shitty Kafka. Like, honestly, a lot—a lot of the [logs 00:06:19] companies are running a Kafka in front of it, and the Kafka tax reflects in the bill that you're paying them. Corey: What I found fun about your pricing model is it gets to a point that for any reasonable workload, how much to log, or what to log, or sample, or keep everything is no longer an investment decision; it's just go ahead and handle it. And that was originally what you wound up building out. Increasingly, it seems like you're not just the place to send all the logs to, which, to be honest, I was excited enough about. That was replacing one of the projects I did a couple of times myself, which is building highly available, fault-tolerant rsyslog clusters in data centers. Okay, great, you've gotten that unlocked, the economics are great, I don't have to worry about that anymore. And then you started adding interesting things on top of it, analyzing things, replaying events that happen to other players, et cetera, et cetera. It almost feels like you're not just a storage depot, but you also can forward certain things on under a variety of different rules or guises and format them as whatever the other side is expecting them to be. So, there's a story about integrating with other observability vendors, for example, and only sending the stuff that's germane and relevant to them, since everyone loves to charge by ingest. Seif: Yeah. So, we did this one thing called endpoints, number one. Endpoints was the beginning, where we said, “Let's let people send us data using whatever API they like using, let's say Elasticsearch, Datadog, Honeycomb, Loki, whatever, and we will just take that data and multiplex it back to them.” So, that's how part of it started. 
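The "endpoints" idea Seif describes (accept data in whatever API shape a tool already speaks, then normalize it into plain events) can be sketched roughly like this. The payload shapes and field names here are simplified hypotheticals for illustration, not Axiom's or any vendor's actual API:

```python
# Hypothetical sketch: normalize vendor-flavored payloads into one
# common event shape (a timestamp plus attributes).
def from_loki_like(payload):
    # Loki-style push: streams of [timestamp, line] pairs plus labels.
    events = []
    for stream in payload["streams"]:
        for ts, line in stream["values"]:
            events.append({"_time": ts, "message": line, **stream["stream"]})
    return events

def from_elastic_like(doc):
    # Elasticsearch-style: one JSON document per event.
    ts = doc.pop("@timestamp")
    return [{"_time": ts, **doc}]

loki_payload = {
    "streams": [
        {"stream": {"app": "api"},
         "values": [["1700000000", "GET /health 200"]]}
    ]
}
print(from_loki_like(loki_payload))
# [{'_time': '1700000000', 'message': 'GET /health 200', 'app': 'api'}]
```

Once every ingest format funnels into the same timestamp-plus-attributes shape, the store only has to deal with one kind of event, and translating back out to another tool's format is just the reverse mapping.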
This allows us to see, like, how—allows customers to see how we compare to others. But then we took it a bit further, and now, it's still in closed invite-only, but we have Pipelines—codenamed Pipelines—which allows you to send data to us and we will keep it as a source of truth. Then, given specific rules, we can ship it anywhere to a different destination, right? And this allows you, on the fly, to send specific filtered things out to, I don't know, a different vendor, or even to S3, or you could send it to Splunk. But at the same time, because we have all your data, you can go back in the past if an incident happens and replay that completely into a different product. Corey: I would say that there's a definite approach to observability, from the perspective of every company tends to visualize stuff a little bit differently. And one of the promises of OTel that I'm seeing as it grows is the idea of, oh, I can send different parts of what I'm seeing off to different providers. But the instrumentation story for OTel is still very much emerging. Logs are kind of eternal, and the only real change we've seen to logs over the past decade or so has been instead of just being plain text where their positional parameters would define what was what—if it's in this column, it's an IP address, and if it's in this column, it's a return code, and that just wound up being ridiculous—now you see them having schemas; they are structured in a variety of different ways. Which, okay, it's a little harder to wind up just cat'ing a file together and piping it to grep, but there are trade-offs that make it worth it, in my experience. This is one of those transitional products that not only is great once you get to where you're going, from my playing with it, but also it meets you where you already are to get started, because everything you've got is emitting logs somewhere, whether you know it or not. Seif: Yes. And that's why we picked up on OTel, right? 
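Corey's contrast above (positional columns whose meaning you have to know in advance, versus structured logs that carry their own schema) can be illustrated in a couple of lines. The log line itself is made up:

```python
import json

raw = '203.0.113.7 GET /index.html 200'

# Positional: meaning is implied purely by column order.
ip, method, path, status = raw.split()
print(status)  # '200' -- breaks silently if the format ever shifts

# Structured: the event names its own fields, so tools don't have to
# guess what each column means.
structured = '{"ip": "203.0.113.7", "method": "GET", "path": "/index.html", "status": 200}'
event = json.loads(structured)
print(event["status"])  # 200
```

The trade-off is exactly the one described: a JSON line is harder to eyeball or grep than whitespace-separated columns, but it survives format changes and can be queried by field name.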
Like, one of the first things, we now support… we have an OTel endpoint natively bec—or as a first-class citizen, because we wanted to build this experience around OTel in general. Whether we like it or not, and there's more reasons to like it, OTel is a standard that's going to stay, and it's going to move us forward. I think OTel will have the same effect, if not bigger, as [unintelligible 00:10:11] back in the day, but now it just went beyond metrics, to metrics, logs, and traces. Traces is, for me, very interesting because I think OTel is the first one to push it in a standard way. There were several attempts to make standardized [logs 00:10:25], but I think traces was something that OTel really pushed into a proper standard that we can follow. It annoys me that everybody uses different bits and pieces of it and adds something to it, but I think it's also because it's not that mature yet, so people are trying to figure out how to deliver the best experience and package it in a way that it's actually interesting for a user. Corey: What I have found is that there's a lot that's in this space that is just simply noise. Whenever I spend a protracted time period working on basically anything and I'm still confused by the way people talk about that thing, months or years later, I'm starting to get the realization that maybe I'm not the problem here. And I'm not—I don't mean this to be insulting, but one of the things I've loved about you folks is I've always understood what you're saying. Now, you can hear that as, “Oh, you mean we talk like simpletons?” No, it means what you're talking about resonates with at least a subset of the people who have the problem you solve. That's not nothing. Seif: Yes. We've tried really hard because one of the things we've tried to do is actually bring observability to people who are not always busy or it's not part of their day to day. 
So, we try to bring in [Vercel 00:11:37] developers, right, by doing a Vercel integration. And all of a sudden, now they have their logs, and they have metrics, and they have some traces. So, all of a sudden, they're doing the observability work. Or they have actual observability for their Vercel-based, [unintelligible 00:11:54]-based product.

And we try to meet people where they are. So instead of telling people, "You should send us data"—I mean, that's what they do now—we try to find out, okay, what product are you using, and how can we grab data from there and send it to us to make your life easier? You saw that we did that with Vercel, we did that with Cloudflare. AWS, we have extensions—Lambda extensions, et cetera—but we're doing it for more things. For Netlify, it's a one-click integration, too, and that's what we're trying to do to actually make the experience and the journey easier.

Corey: I want to change gears a little bit, because something that we've spent a fair bit of time talking about—it's why we became friends, I would think, anyway—is that we have a shared appreciation for several things. One of which, most notable to anyone around us, is that whenever we hang out, we greet each other effusively and then immediately begin complaining about the costs of cloud services. What is your take on the way that clouds charge for things? And I know it's a bit of a leading question, but it's core and foundational to how you think about Axiom, as well as how you serve customers.

Seif: They're ripping us off. I'm sorry [laugh]. They just—the amount of money they make, like, it's crazy. I would love to know what margins they have. That's a big question I've always had. I'm like, what are the margins they have at AWS right now?

Corey: Across the board, it's something around 30 to 40%, last time I looked at it.

Seif: That's a lot, too.

Corey: Well, that's also across the board of everything, to be clear.
It is very clear that some services are subsidized by other services. As it should be. If you start charging me per IAM call, we're done.

Seif: And also, I mean, the machine learning stuff. Like, they won't be making that much on top of it right now, right, [else nobody 00:13:32] will be using it.

Corey: But data transfer? Yeah, there's a significant upcharge on that. But I hear you. I would moderate it a bit. I don't think that I would say it's necessarily an intentional ripoff. My problem with most cloud services they offer is not usually that they're too expensive—though there are exceptions to that—but rather that the dimensions are unpredictable in advance. So, you run something for a while and see what it costs. From where I sit, if a customer uses your service and then at the end of usage is surprised by how much it cost them, you've kind of screwed up.

Seif: Look, if they can make egress free—like, you saw how Cloudflare just made egress on R2 free? Because I am still stuck with AWS, because let's face it, for me, it is still my favorite cloud, right? Cloudflare is my next favorite because of all the features they're trying to develop, and the pace they're picking up, the pace at which they're trying to catch up. But again, one of the biggest things I liked is R2, and R2 egress is free. Now, that's interesting, right?

But I never saw anything coming back from AWS on S3 for that, you know. I think Amazon is so comfortable because, from a product perspective, they're simple, they have the tools, et cetera. The UI is not the flashiest one, but you know what you're doing, right? The CLI is not the flashiest one, but you know what you're doing. It is so cool that they don't really need to compete with others yet.

And I think they're still dominantly the biggest cloud out there. I think you know more than me about that, but [unintelligible 00:14:57], like, I think they are the biggest one right now in terms of data volume.
Like, how many customers are using them, and even in terms of profiles of people using them, it's very broad. I know, like, a lot of the Microsoft Azure people who are using it are using it because they come from enterprises that have always been very Microsoft-friendly. And eventually, Microsoft also came into Europe in all these different weird ways. But I feel sometimes ripped off by AWS, because I see Cloudflare trying to reduce prices and AWS just looking on, like, "Yeah, you're not a threat to us, so we'll keep our prices as they are."

Corey: I have it on good authority from folks who know that there are reasons behind the economic structures of both of those companies, based on the primary direction the traffic flows and the rest. But across the board, they've done such a poor job of articulating this that, frankly, I think the confusion is on them to clear up, not us.

Seif: True. True. And the reason I picked R2 and S3 to compare there, and not Workers and Lambda, is that I look at R2 as S3-compatible from an API perspective, right? So, they're giving me something that I already use. Everything else I'm using, I'm using inside Amazon, so it's in a VPC—but just the idea. Let me dream. Let me dream that S3 egress will be free at some point.

Corey: I can dream.

Seif: That's like Christmas. It's better than Christmas.

Corey: What I'm surprised about is how reasonable your pricing is in turn. You wind up charging on the basis of ingest, which is basically the only thing that really makes sense for how your company is structured. But it's predictable in advance; the free tier is, what, 500 gigs a month of ingestion, and before people think, "Oh, that doesn't sound like a lot," I encourage you to just go back and think how much data that really is in the context of logs for any toy project.
Like, "Well, our production environment spits out way more than that." Yes, and by the word production that you just used, you probably shouldn't be using a free trial of anything as your critical-path observability tooling. Become a customer, not a user. I'm a big believer in that philosophy, personally. For all of my toy projects that are ridiculous, this is ample.

Seif: People always tend to overestimate how many logs they're going to be sending. Like, so there's one thing. You said it right: people who already have something going on, they already know how many logs they'll be sending around. But then eventually they're sending too much, and that's why we're back here and they're talking to us. Like, "We want to try your tool, but you know, we'll be sending more than that." So, if you don't like our pricing, go find something else, because I think we are the cheapest out there right now. We're competitively the cheapest out there right now.

Corey: If there is one that is less expensive, I'm unaware of it.

Seif: [laugh].

Corey: And I've been looking, let's be clear. That's not just me saying, "Well, nothing has skittered across my desk." No, no, no, I pay attention to this space.

Seif: Hey, where's—Corey, we're friends. Loyalty.

Corey: Exactly.

Seif: If you find something, you tell me.

Corey: Oh, if I find something, I'll tell everyone.

Seif: No, no, no, you tell me first, and you tell me in a nice way so I can reduce the prices on my site [laugh].

Corey: This is how we start a price war, industry-wide, and I would love to see it.

Seif: [laugh]. But there are enough channels that we share at this point, across different Slacks and messaging apps, that you should be able to ping me if you find one. Also, get me the name of the CEO and the CTO while you're at it.

Corey: And where they live. Yes, yes, of course. The dire implications will be awesome.

Seif: That was you, not me.
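The 500 GB/month free ingest tier discussed above can be put in perspective with a little arithmetic. The average event size used here is an illustrative assumption, not a figure from Axiom.

```python
# Rough sizing of a 500 GB/month free ingest tier (the figure stated in
# the conversation). Assumes an average structured log event of ~500 bytes,
# which is a guess for illustration only.
FREE_TIER_GB_PER_MONTH = 500
AVG_EVENT_BYTES = 500  # assumed average event size

free_tier_bytes = FREE_TIER_GB_PER_MONTH * 10**9
events_per_month = free_tier_bytes // AVG_EVENT_BYTES
events_per_second = events_per_month / (30 * 24 * 3600)

print(f"{events_per_month:,} events/month "
      f"(~{events_per_second:,.0f} events/sec sustained)")
```

Under that assumption, the free tier absorbs on the order of a billion events a month—a few hundred per second, sustained—which supports Corey's point that it is ample for any toy project.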
That was your suggestion.

Corey: Exactly.

Seif: I will not—[laugh].

Corey: Before we turn into a bit of an old thud and blunder, let's talk about something else that I'm curious about here. You've been working on Axiom for something like seven years now. You come from a world of databases and events and the like. Why start a company in the model of Axiom? Even back then, when I looked around, my big problem with the entire observability space could never have been described as, "You know what we need? More companies that do exactly this." What was it that you saw that made you say, "Yeah, we're going to start a company, because that sounds easy"?

Seif: So, I'll be very clear. Like, I'm not going to, like, sugarcoat this. We kind of got into a position where it [forced counterweighted 00:19:10]. And [laugh] by that I mean, we came from a company where we were dealing with logs. Like, we actually wrote an event crash-analytics tool for a company, but then we ended up wanting to use stuff like Datadog, and we didn't have the budget for that, because Datadog was killing us.

So, we ended up hosting our own Elasticsearch. And it cost us more to maintain our Elasticsearch cluster for the logs than to maintain our own little infrastructure for the crash events, when we were getting, like, 1 billion crashes a month at that point. So eventually—that was the first burn. And then you had alert fatigue, and then you had consolidating events and timestamps, and whatnot.
The whole thing just seemed very messy.

So, after the company got sold, we started off by saying, "Okay, let's go work on a new self-hosted version of the [unintelligible 00:20:05] where we do metrics and logs." And then that didn't go as well as we thought it would, but we ended up—because from day one we were self-hosted and wanted to keep costs low—working on making it stateless and work against an object store. And this is kind of how we started. We realized, oh, we can host this and make it scale, and it won't cost us that much.

So, we did that. And that started gaining more attention. But the reason we started this was we wanted to build a self-hosted version of Datadog that is not costly, and we ended up doing Software as a Service. I mean, you can still come and self-host, but you'll have to pay money for it, like, proper money for that. But we do a SaaS version of this, and instead of trying to be a self-hosted Datadog, we are now trying to compete—or we are competing—with Datadog.

Corey: Is the technology that you've built this on top of actually that different from everything else out there, or is this effectively what you see in a lot of places: "Oh, yeah, we're just going to manage Elasticsearch for you because that's annoying"? Do you have anything that distinguishes you from, I guess, the rest of the field?

Seif: Yeah. So, very bluntly, like, I think Scuba was the first thing that started standing out, and then Honeycomb came onto the scene and they started building something based on Scuba, the [unintelligible 00:21:23] principles of Scuba. Then one of the authors of the actual Scuba reached out to me when I told him I was trying to build something, and he gave me some ideas, and I started building that. And from day one, I said, "Okay, everything in S3. All queries have to be serverless."

So, all the queries run on functions. There are no real disks.
It's just all on S3 right now. And the biggest achievement we made to lower our cost was to get rid of Kafka—behind the scenes we have our own coordination-free mechanism—but the idea is not to have to use Kafka at all, and thus reduce the costs incredibly. In terms of technology, no, we don't use Elasticsearch.

We wrote everything from the ground up, from scratch, even the query language. Like, we have our own query language that's modeled after Kusto—KQL by Microsoft—so everything we have is built absolutely from the ground up. And no Elastic. I'm not using Elastic anymore. Elastic is a horror for me. An absolute horror.

Corey: People love the API, but no, I've never met anyone who likes managing Elasticsearch or OpenSearch, or whatever we're calling your particular flavor of it. It is a colossal pain, it is subject to significant trade-offs regardless of how you work with it, and Amazon's managed offering doesn't make it better; it makes it worse in a bunch of ways.

Seif: And the green status of Elasticsearch is a myth. You'll only see it once: the first time you start that cluster, that's when the Elasticsearch cluster is green. After that, it's just orange or red. And you know what? I'm happy when it's orange. Elasticsearch kept me up for so long. And we actually had a very interesting situation where we had Elasticsearch running on Azure, on Windows machines, and I would have server [unintelligible 00:23:10]. And I'd have to log in every day—you remember, what's it called—RP… RP-something. What was it called?

Corey: RDP? Remote Desktop Protocol, or something else?

Seif: Yeah, yeah. Where you have to log in, like, you actually have a visual thing, and you have to go in and—

Corey: Yep.

Seif: And visually go in and say, "Please don't restart." Every day, I'd have to do that. Please don't restart, please don't restart.
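The "everything in S3, all queries serverless" design Seif describes can be caricatured in a few lines. The object layout, field names, and query are invented for illustration; a real system layers indexing, compression, and coordination-free ingest on top of this idea.

```python
import json

# Toy model of a stateless query over immutable log "objects". The dict
# stands in for an object store like S3; each query function receives the
# store as input and keeps no state of its own between calls, so it could
# run anywhere, including in a short-lived serverless function.
object_store = {
    "logs/2023/11/01.ndjson": '{"status": 200}\n{"status": 503}',
    "logs/2023/11/02.ndjson": '{"status": 500}\n{"status": 200}',
}

def count_errors(store: dict) -> int:
    """Scan every object, parse each newline-delimited JSON event,
    and count server errors (HTTP 5xx)."""
    errors = 0
    for _, body in sorted(store.items()):
        for line in body.splitlines():
            if json.loads(line)["status"] >= 500:
                errors += 1
    return errors

print(count_errors(object_store))  # → 2
```

The design trade-off is the one Seif hints at: local disks and brokers like Kafka disappear from the cost model, at the price of doing all the clever work (partitioning, caching, pruning) over a dumb, durable object store.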
And also a lot of weird issues, and also at that point, Azure would decide to disconnect the pod and try to bring in a new pod, and all these weird things were happening back then. So, eventually, we ended up with a [unintelligible 00:23:39] decision. I'm talking 2013, '14, so it was back in the day when Elasticsearch was very young. And so, that was just a bad start for me.

Corey: I will say that Azure is the most cost-effective cloud because their security is so clown shoes, you can just run whatever you want in someone else's account and it's free to you. Problem solved.

Seif: Don't tell people how we save costs, okay?

Corey: [laugh]. I love that.

Seif: [laugh]. Don't tell people how we do this. Like, Corey, come on [laugh], you're exposing me here. Let me tell you one thing, though. Elasticsearch is the reason I literally used a shock collar—a shock bracelet—on myself every time it went down, which was almost every day, instead of having PagerDuty, like, ring my phone.

And, you know, I'd wake up, and my partner back then would wake up. I bought a Bluetooth collar off of Alibaba that would tase me every time I got a notification, regardless of the notification. So, some were false alarms, but I got tased for at least two, three weeks before I gave up. Every night I'd wake up, like, to a full discharge.

Corey: I would never hook myself up to a shocker tied to outages, even if I owned the company. There are pleasant ways to wake up, unpleasant ways to wake up, and even worse. So, you're getting shocked so someone else can, more or less, drive the future of the business. You're the monkey that gets shocked awake to go fix the thing that just broke.

Seif: [laugh]. Well, the fix to that was moving from Azure to AWS without telling anybody. That got us in a lot of trouble.
Again, that wasn't my company.

Corey: They didn't notice that you did this, or it caused a lot of trouble because suddenly nothing worked where they thought it would work?

Seif: They—no, no, everything worked fine on AWS. That's how my love story began. But they didn't notice for, like, six months.

Corey: That's kind of amazing.

Seif: [laugh]. That was specta—we rewrote everything from C# to Node.js and moved everything away from Elasticsearch, started using Redshift, Redis, and—you name it. We went AWS all the way, and they didn't even notice. We took the budget from another department to start filling that in.

But we cut the costs from $100,000 down to, like, 40, and then eventually down to $30,000 a month.

Corey: More than a little wild.

Seif: Oh, God, yeah. Good times, good times. Next time, just ask me to tell you the full story about this. I can't go into details on this podcast. I'll get in a lot—I think I'll get in trouble. I didn't sign anything, though.

Corey: Those are the best stories. But no, I hear you. I absolutely hear you. Seif, I really want to thank you for taking the time to speak with me. If people want to learn more, where should they go?

Seif: So, axiom.co—not dot com. Dot C-O. That's where they can learn more about Axiom. And other than that, I think I have a Twitter somewhere. And if you know how to write my name—it's just one word—you'll find me on Twitter.

Corey: We will put that all in the [show notes 00:26:33]. Thank you so much for taking the time to speak with me. I really appreciate it.

Seif: Dude, that was awesome. Thank you, man.

Corey: Seif Lotfy, co-founder and CTO of Axiom, who has brought this promoted guest episode our way. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud.
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry comment that one of these days, I will get around to aggregating in some horrifying custom homebrew logging system, probably built on top of rsyslog.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.