Nick Nicely in conversation with David Eastaugh. https://nicknicely1.bandcamp.com/track/or-a-brockley-afternoon Nick Nicely is an English singer-songwriter who records psychedelic and electronic music. He is best known for his 1982 single "Hilly Fields (1892)". Nicely released only one other record in the early 1980s, the single "D.C.T. Dreams", before retreating from the music industry. The influence of "Hilly Fields" has been noted on the Bevis Frond, Robyn Hitchcock, Robert Wyatt, and XTC's psychedelic alter egos the Dukes of Stratosphear, as well as the hypnagogic pop movement of the 2000s. In September 2014, Lo Records released Nicely's second full-length album, Space of a Second. A third Nick Nicely album, Sleep Safari, was released on 26 September 2017 through Tapete Records. In 2018, "Hilly Fields" appeared in the Timothée Chalamet film Hot Summer Nights. Since the start of 2018, Nicely has been working on live performances accompanied by the musician Bug Lover, generating new versions of old tracks and adding visuals. First came secret gigs at Frappant in Hamburg (February and April), then a show on 14 June 2018 at the Electric Ballroom in London supporting John Maus, followed by a December show in Moscow and then a US East Coast tour, again with Maus, in 2019.
Stay informed with Fuels Focus, the essential podcast from Argus that delivers timely analysis and key highlights from the US refined products marketplace. Each week, our experts break down the most significant developments affecting gasoline, diesel, jet fuel, RINs and ethanol prices – equipping you with the knowledge to navigate the market complexities with confidence. This week's episode features: • US retail diesel prices hit 10-week high • PBF plans '25 turnaround season • Tentative deal for US East and Gulf Coast port workers. Tune in every Monday for your weekly recap of the stories shaping the road fuels market.
Dockworkers at US East and Gulf coast ports agreed to start moving cargo again while they continue collective bargaining with their employers on a new contract. For instant reaction and analysis, host Doug Krizner spoke with Bloomberg managing editor for US economic policy Kate Davidson. See omnystudio.com/listener for privacy information.
US equities finished fractionally higher overall in Wednesday trading, with the Dow Jones, S&P 500, and Nasdaq closing up 9bps, 1bp, and 8bps respectively. Stocks stabilized today, largely shrugging off recent Middle East developments, while the work stoppage at US East and Gulf Coast ports remained a widely flagged potential growth and supply-chain risk. ADP private payrolls of 143K came in above the 125K consensus. Richmond Fed's Barkin said the Fed may have trouble covering the "last mile" on inflation.
Hurricane Helene: Capitalism turns a natural disaster into social catastrophe / Israel launches ground operations in Lebanon amid ongoing air bombardment / Tens of thousands of dockworkers launch strike on US East and Gulf coasts, joining Boeing workers in growing strike wave
Dockworkers walked out of every major port on the US East and Gulf coasts for the first time in nearly 50 years. Port of Los Angeles Executive Director Gene Seroka spoke to Bloomberg's Carol Massar and Tim Stenovec about the strike that's threatening to ripple across the world's largest economy and cause political turmoil just weeks before the presidential election. See omnystudio.com/listener for privacy information.
Your morning briefing, the business news you need in just 15 minutes. On today's podcast: (1) Israel said it had begun "targeted ground raids" in southern Lebanon, escalating a campaign to root out Hezbollah despite international appeals for restraint. (2) Federal Reserve Chair Jerome Powell said the central bank will lower interest rates "over time," while again emphasizing that the overall US economy remains on solid footing. (3) Christine Lagarde said the European Central Bank is becoming more optimistic that it will be able to get inflation under control, and will reflect on that at its October interest-rate decision. (4) Business chiefs are the most pessimistic they have been about Britain's economy since late 2022, when the country was still reeling from the effects of Liz Truss's short spell as prime minister. (5) Dockworkers have walked out of every major port on the US East and Gulf coasts, marking the beginning of a strike that could ripple through the world's largest economy and cause political turmoil just weeks before the presidential election. See omnystudio.com/listener for privacy information.
kiko dinucci e juçara marçal | ½ dúzia de 3 ou 4 feat. ligiana, andré abujamra, arrigo barnabé e tom zé | judas | cartola | jorge costa | clementina de jesus, pixinguinha e joão da baiana | darcy da mangueira | carlos roberto da mangueira | martinho da vila e velha guarda de vila isabel | grupo rumo | lúcia menezes | garganta profunda | joão coração | rômulo froes | china | rita lee | erasmo carlos | tommy standen | eduardo dusek | marlene | carmen miranda | tom mcdermott | di melo | jigaboo | henrick fuentes | dj grafite | dj dolores.
From today, a range of Chinese goods entering the US will have to pay steep tariffs. The Biden administration has imposed a 50% duty on solar cells and 25% on steel, aluminium, EV batteries and key minerals. There is a 100% duty on electric vehicles to protect the American auto industry. So, can the American auto companies fill the gap by making affordable electric cars? Meanwhile, trouble is brewing on the US East Coast. Ten of the busiest North American ports are at risk of shutdown next week due to a union strike. And the international gold price recently soared to record highs. How long will the gold boom last?
Deputy Pat Buckley speaks to PJ about the ongoing fears people in East Cork have around flooding as the 1st anniversary of Storm Babet approaches Hosted on Acast. See acast.com/privacy for more information.
Editors Jimmy Lovaas and Joe Veyera discuss Hurricane Helene, plus more on Ukraine's Zelenskyy visiting the White House, a leadership election for Japan's ruling Liberal Democratic Party, parliamentary elections in Austria and a possible strike by dockworkers across the US East and Gulf coasts. Subscribe to the show: Apple Podcasts, Spotify and many more. These stories and others are also available in our free weekly Forecast newsletter. This episode includes work from Factal editors Joe Veyera, James Morgan, Hua Hsieh, Awais Ahmad and Matthew Hipolito. Produced and edited by Jimmy Lovaas. Music courtesy of Andrew Gospe. Have feedback, suggestions or events we've missed? Drop us a note: hello@factal.com. What's Factal? Created by the founders of Breaking News, Factal alerts companies to global incidents that pose an immediate risk to their people or business operations. We provide trusted verification, precise incident mapping and a collaboration platform for corporate security, travel safety and emergency management teams. If you're a company interested in a trial, please email sales@factal.com. To learn more, visit Factal.com, browse the Factal blog or email us at hello@factal.com. Read the full episode description and transcript on Factal's blog. Copyright © 2024 Factal. All rights reserved.
In this week's Simply Trade News Roundup, we dive into the latest developments impacting international trade, importing, and exporting. From new legislation targeting Chinese e-commerce to labor negotiations that could disrupt the supply chain, this episode is a must-listen for trade professionals. Main Points & Takeaways: 1. US Bill Targets Chinese E-Commerce Imports: The US Senate has introduced the "Fighting for America Act" to enhance scrutiny on low-value shipments from China. This legislation aims to address the influx of illicit goods and counterfeit products entering the US market. As consumers, we're encouraged to be more mindful of where our purchases come from and support ethical supply chains. 2. EU Investigates Chinese EV Subsidies: The European Commission remains confident that its measures against state subsidies for Chinese electric vehicles comply with WTO regulations. However, China has challenged the investigation outcomes, leading to a formal WTO consultation. This dispute highlights the ongoing tensions around fair trade practices and the use of forced labor. 3. Potential Port Strikes Loom: Labor negotiations between the US Maritime Alliance and the International Longshoremen's Association have stalled, increasing the likelihood of a strike at US East and Gulf Coast ports. This could significantly disrupt the upcoming holiday shipping season, prompting importers to reroute cargo and explore alternative transportation options. 4. Henry Ford's Supply Chain Legacy: Did you know that Henry Ford was a pioneer of modern supply chain management? His innovations, such as centralized operations, improved coordination, and multimodal transportation, laid the foundation for many of the practices we see today. Even the iconic Kingsford charcoal briquettes have their roots in Ford's efforts to repurpose byproducts from his car manufacturing. This episode of Simply Trade News Roundup is a must-listen for anyone involved in international trade, importing, or exporting. By staying informed on the latest developments, trade professionals can better navigate the evolving global landscape and make informed decisions to optimize their supply chains. Don't miss out on these insights – tune in now and share with your network! Enjoy the show! Sign up for the upcoming Forced Labor training (Supply Chain Tracing) here: https://globaltrainingcenter.com/forced-labor-supply-chain-tracing/ Find us on YouTube: https://www.youtube.com/@SimplyTradePod Host: Annik Sobing: https://www.linkedin.com/in/annik-sobing-mba-b226251a2/ Producer: Lalo Solorzano: https://www.linkedin.com/in/lalosolorzano/ Co-Producer: Mara Marquez: https://www.linkedin.com/in/mara-marquez-a00a111a8/ Contact SimplyTrade@GlobalTrainingCenter.com or message @SimplyTradePod for: Advertising and sponsoring on Simply Trade Requests to be on the show as guest Suggest any topics you would like to hear about Simply Trade is not a law firm or an advisor. The topics and discussions conducted by Simply Trade hosts and guests should not be considered, and are not intended to substitute for, legal advice. You should seek appropriate counsel for your own situation. These conversations and information are directed towards listeners in the United States for informational, educational, and entertainment purposes only and should not be a substitute for legal advice. No listener or viewer of this podcast should act or refrain from acting on the basis of information on this podcast without first seeking legal advice from counsel.
Information on this podcast may not be up to date depending on the time of publishing and the time of viewership. The content of this posting is provided "as is"; no representations are made that the content is error-free. The views expressed in or through this podcast are those of the individual speakers, not those of their respective employers or Global Training Center as a whole. All liability with respect to actions taken or not taken based on the contents of this podcast is hereby expressly disclaimed.
In this episode of "Sleepless in Singapore," I recount the third and potentially final part of a road trip with my friend Marcus along the US East Coast. We begin in Jacksonville, where we miss meeting an old friend but enjoy a Brazilian barbecue at Terra Gaucha. Our journey continues with a hearty breakfast at Cracker Barrel in Kingsland, Georgia, a delightful city tour in Savannah, and a restful night at a golf club in Litchfield Beach. We explore Wilmington and the Outer Banks, taking a ferry from Cedar Island to Ocracoke, marveling at the remote beauty. Reaching Kitty Hawk, we stay in a charming Airbnb and experience local culture, including a memorable evening meeting a NASCAR driver. The trip concludes with a brisket feast in King of Prussia, a visit to Gettysburg National Military Park, and a glimpse into Amish life, before I finally return to Singapore.
In this episode of "Sleepless in Singapore," I take you along the second leg of my US road trip with Markus, starting with a visit to the Mammoth Cave system in Kentucky. Despite its fame as the world's longest cave system, the accessible parts left us slightly underwhelmed. Yet, the experience was unique, especially with the measures to protect the bats, like the special shoe-cleaning stations. Our journey continued with a memorable lunch at Ski Daddy's and concluded the day in the vibrant nightlife of Nashville, Tennessee, where we immersed ourselves in the local honky-tonk culture and danced the night away. The adventure didn't stop there. We made our way to Memphis, exploring Elvis Presley's Graceland from the outside and indulging in a classic American barbecue. Our stay in a former recording studio-turned-Airbnb added a quirky twist to our trip. The journey took us further south to New Orleans, where we reveled in the French Quarter's charm and embarked on a thrilling swamp tour, encountering massive alligators up close. The culinary delights continued with alligator bites and a taste of the local nightlife, capping off an unforgettable segment of our road trip.
In this episode of "Sleepless in Singapore," I recount the first epic road trip through the United States with my friend Markus in 2022. Our journey began in King of Prussia, Pennsylvania, where Markus lives, and we mapped out an ambitious route that took us from Philadelphia down to New Orleans, then along the southern and eastern coast and back up to Pennsylvania. The adventure kicked off with the longest flight in the world from Singapore to New York, followed by a quick train ride to Philadelphia. With Markus's 470-horsepower Mustang, we set out on our American odyssey, starting with a hearty diner breakfast and a visit to George Washington's Mount Vernon. Driving along the scenic Blue Ridge Mountain Parkway, we soaked in the breathtaking views, stayed in rustic cabins, and indulged in local delicacies like Philadelphia cheesesteak and grits with butter. The journey was a blend of serene landscapes and quintessential American experiences, from flying drones in the park to navigating through small towns with sketchy motels. Each day was a new chapter, filled with exploration, and the simple joy of the open road. As we ventured through Tennessee, Alabama, and beyond, the road trip became a tapestry of unforgettable moments that captured the essence of a true American adventure.
Editor's note: One of the top reasons we have hundreds of companies and thousands of AI Engineers joining the World's Fair next week is, apart from discussing technology and being present for the big launches planned, to hire and be hired! Listeners loved our previous Elicit episode and were so glad to welcome 2 more members of Elicit back for a guest post (and bonus podcast) on how they think through hiring. Don't miss their AI engineer job description, and the template you can use to create your own hiring plan!

How to Hire AI Engineers

James Brady, Head of Engineering @ Elicit (ex Spring, Square, Trigger.io, IBM)
Adam Wiggins, Internal Journalist @ Elicit (Cofounder of Ink & Switch and Heroku)

If you're leading a team that uses AI in your product in some way, you probably need to hire AI engineers. As defined in this article, that's someone with conventional engineering skills in addition to knowledge of language models and prompt engineering, without being a full-fledged Machine Learning expert.

But how do you hire someone with this skillset? At Elicit we've been applying machine learning to reasoning tools since 2018, and our technical team is a mix of ML experts and what we can now call AI engineers. This article will cover our process from job description through interviewing. (You can also flip the perspective here and use it just as easily as a guide to getting hired as an AI engineer!)

My own journey

Before getting into the brass tacks, I want to share my journey to becoming an AI engineer.

Up until a few years ago, I was happily working my job as an engineering manager of a big team at a late-stage startup. Like many, I was tracking the rapid increase in AI capabilities stemming from the deep learning revolution, but it was the release of GPT-3 in 2020 which was the watershed moment. At the time, we were all blown away by how the model could string together coherent sentences on demand. (Oh how far we've come since then!)

I'd been a professional software engineer for nearly 15 years—enough to have experienced one or two technology cycles—but I could see this was something categorically new. I found this simultaneously exciting and somewhat disconcerting. I knew I wanted to dive into this world, but it seemed like the only path was going back to school for a master's degree in Machine Learning. I started talking with my boss about options for taking a sabbatical or doing a part-time distance-learning degree.

In 2021, I instead decided to launch a startup focused on productizing new research ideas on ML interpretability. It was through that process that I reached out to Andreas—a leading ML researcher and founder of Elicit—to see if he would be an advisor. Over the next few months, I learned more about Elicit: that they were trying to apply these fascinating technologies to the real-world problems of science, and with a business model that aligned it with safety goals. I realized that I was way more excited about Elicit than I was about my own startup ideas, and wrote about my motivations at the time.

Three years later, it's clear this was a seismic shift in my career on the scale of when I chose to leave my comfy engineering job at IBM to go through the Y Combinator program back in 2008.
Working with this new breed of technology has been more intellectually stimulating, challenging, and rewarding than I could have imagined.

Deep ML expertise not required

It's important to note that AI engineers are not ML experts, nor is that their best contribution to a tech team. In our article Living documents as an AI UX pattern, we wrote:

It's easy to think that AI advancements are all about training and applying new models, and certainly this is a huge part of our work in the ML team at Elicit. But those of us working in the UX part of the team believe that we have a big contribution to make in how AI is applied to end-user problems. We think of LLMs as a new medium to work with, one that we've barely begun to grasp the contours of. New computing mediums like GUIs in the 1980s, web/cloud in the 90s and 2000s, and multitouch smartphones in the 2000s/2010s opened a whole new era of engineering and design practices. So too will LLMs open new frontiers for our work in the coming decade.

To compare to the early era of mobile development: great iOS developers didn't require a detailed understanding of the physics of capacitive touchscreens. But they did need to know the capabilities and limitations of a multi-touch screen, the constrained CPU and storage available, the context in which the user is using it (very different from a webpage or desktop computer), etc.

In the same way, an AI engineer needs to work with LLMs as a medium that is fundamentally different from other compute mediums. That means an interest in the ML side of things, whether through their own self-study, tinkering with prompts and model fine-tuning, or following along in #llm-paper-club. But this understanding is there so that they can work with the medium effectively, versus, say, spending their days training new models.

Language models as a chaotic medium

So if we're not expecting deep ML expertise from AI engineers, what are we expecting? This brings us to what makes LLMs different.

We'll assume that our ideal candidate is already inspired by, and full of ideas about, all the new capabilities AI can bring to software products. But the flip side is all the things that make this new medium difficult to work with. LLM calls are annoying due to high latency (sometimes measured in tens of seconds rather than milliseconds), extreme variance in latency, and high error rates even under normal operation. Not to mention getting extremely different answers to the same prompt provided to the same model on two subsequent calls!

The net effect is that an AI engineer, even working at the application development level, needs to have a skillset comparable to distributed systems engineering. Handling errors, retries, asynchronous calls, streaming responses, parallelizing and recombining model calls, the halting problem, and fallbacks are just some of the day-in-the-life of an AI engineer. Chaos engineering gets new life in the era of AI.
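To make that concrete, here is a minimal sketch of the kind of defensive wrapper this implies: a timeout, retries with backoff, and a fallback model. The completion functions are abstract async callables rather than any particular provider's SDK, and every name here is illustrative, not Elicit's actual code.

```python
# A minimal sketch of defensive LLM calling: timeout, retries with
# exponential backoff and jitter, and a fallback model as a last resort.
import asyncio
import random
from typing import Awaitable, Callable

CompletionFn = Callable[[str], Awaitable[str]]

async def complete_with_defenses(
    prompt: str,
    primary: CompletionFn,
    fallback: CompletionFn,
    *,
    timeout_s: float = 30.0,   # LLM latency is measured in seconds, not milliseconds
    max_retries: int = 3,
) -> str:
    for attempt in range(max_retries):
        try:
            return await asyncio.wait_for(primary(prompt), timeout=timeout_s)
        except (asyncio.TimeoutError, ConnectionError):
            # Back off with jitter so retries don't stampede the provider.
            await asyncio.sleep(2 ** attempt + random.random())
    # Primary exhausted: degrade to the fallback model rather than failing.
    # (As the podcast discussion below notes, a fallback model generally
    # needs its own prompt to perform comparably.)
    return await asyncio.wait_for(fallback(prompt), timeout=timeout_s)
```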
Skills and qualities in candidates

Let's put together what we don't need (deep ML expertise) with what we do (work with the capabilities and limitations of the medium). Thus we start to see what Elicit looks for in AI engineers:

* Conventional software engineering skills. Especially back-end engineering on complex, data-intensive applications.
  * Professional, real-world experience with applications at scale.
  * Deep, hands-on experience across a few back-end web frameworks.
  * Light devops and an understanding of infrastructure best practices.
  * Queues, message buses, event-driven and serverless architectures… there's no single "correct" approach, but having a deep toolbox to draw from is very important.
* A genuine curiosity and enthusiasm for the capabilities of language models.
  * One or more serious projects (side projects are fine) using them in interesting ways on a unique domain.
  * …ideally with some level of factored cognition, e.g. breaking the problem down into chunks, making thoughtful decisions about which things to push to the language model and which stay within the realm of conventional heuristics and compute capabilities (sketched in code after this section).
  * Personal studying with resources like Elicit's ML reading list. Part of the role is collaborating with the ML engineers and researchers on our team. To do so, the candidate needs to "speak their language" somewhat, just as a mobile engineer needs some familiarity with backends in order to collaborate effectively on API creation with backend engineers.
* An understanding of the challenges that come along with working with large models (high latency, variance, etc.), leading to a defensive, fault-first mindset.
  * Careful and principled handling of error cases, asynchronous code (and the ability to reason about and debug it), streaming data, caching, logging and analytics for understanding behavior in production.
  * This is a mindset one can develop working on conventional apps which are complex, data-intensive, or large-scale. The difference is that an AI engineer will need this mindset even when working at relatively small scales!

On net, a great AI engineer will combine two seemingly contrasting perspectives: knowledge of, and a sense of wonder for, the capabilities of modern ML models; but also the understanding that this is a difficult and imperfect foundation, and the willingness to build resilient and performant systems on top of it.
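The factored-cognition bullet is easiest to see in code. Here's a toy sketch in which chunking and recombining stay in ordinary deterministic code, and only the genuinely fuzzy step goes to the model; llm_summarize is a hypothetical stand-in for whatever completion call you use, not a real library function.

```python
# A toy example of factored cognition: decide which steps need the model
# and which stay in conventional code.
from typing import Callable

def summarize_document(
    text: str,
    llm_summarize: Callable[[str], str],  # hypothetical completion wrapper
    chunk_chars: int = 4_000,
) -> str:
    if not text:
        return ""
    # Deterministic: splitting into chunks is plain string handling.
    chunks = [text[i : i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    # Fuzzy: summarizing each chunk is the part the model is actually for.
    partials = [llm_summarize(chunk) for chunk in chunks]
    if len(partials) == 1:
        return partials[0]
    # Fuzzy again, but over a much smaller input: combine the partial summaries.
    return llm_summarize("\n".join(partials))
```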
Here's the resulting AI engineer job description for Elicit. And here's a template that you can borrow from for writing your own JD.

Hiring process

Once you know what you're looking for in an AI engineer, the process is not too different from other technical roles. Here's how we do it, broken down into two stages: sourcing and interviewing.

Sourcing

We're primarily looking for people with (1) a familiarity with and interest in ML, and (2) proven experience building complex systems using web technologies. The former is important for culture fit and as an indication that the candidate will be able to do some light prompt engineering as part of their role. The latter is important because language model APIs are built on top of web standards and, as noted above, aren't always the easiest tools to work with.

Only a handful of people have built complex ML-first apps, but fortunately the two qualities listed above are relatively independent. Perhaps they've proven (2) through their professional experience and have some side projects which demonstrate (1).

Talking of side projects, evidence of creative and original prototypes is a huge plus as we're evaluating candidates. We've barely scratched the surface of what's possible to build with LLMs, even with the current generation of models, so candidates who have been willing to dive into crazy "I wonder if it's possible to…" ideas have a huge advantage.

Interviewing

The hard skills we spend most of our time evaluating during our interview process are on the "building complex systems using web technologies" side of things. We check that the candidate is familiar with asynchronous programming, defensive coding, and distributed systems concepts and tools, and that they display an ability to think about scaling and performance. They needn't have 10+ years of experience doing this stuff: even junior candidates can display an aptitude and thirst for learning which gives us confidence they'll be successful tackling the difficult technical challenges we'll put in front of them.

One anti-pattern, something which makes my heart sink when I hear it from candidates, is that they have no familiarity with ML but claim that they're excited to learn about it. The amount of free and easily accessible resources available is incredible, so a motivated candidate should have already dived into self-study.

Putting all that together, here's the interview process that we follow for AI engineer candidates:

* 30-minute introductory conversation. Non-technical: explaining the interview process, answering questions, understanding the candidate's career path and goals.
* 60-minute technical interview. This is a coding exercise, where we play product manager and the candidate is making changes to a little web app. Here are some examples of topics we might hit upon through that exercise (the second is sketched in code after this list):
  * Update API endpoints to include extra metadata. Think about appropriate data types. Stub out frontend code to accept the new data.
  * Convert a synchronous REST API to an asynchronous streaming endpoint.
  * Cancellation of asynchronous work when a user closes their tab.
  * Choose an appropriate data structure to represent the pending, active, and completed ML work which is required to service a user request.
* 60–90 minute non-technical interview. Walk through the candidate's professional experience, identifying high and low points, getting a grasp of what kinds of challenges and environments they thrive in.
* On-site interviews. Half a day in our office in Oakland, meeting as much of the team as possible: more technical and non-technical conversations.
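To give a flavor of that second topic, here is a minimal before/after sketch of converting a synchronous endpoint into a streaming one. FastAPI and the endpoint shape are assumptions made for illustration; the exercise itself uses "a little web app" that isn't specified further.

```python
# Hypothetical before/after for the "sync REST to streaming" exercise topic.
from typing import AsyncIterator

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

async def generate_answer(question: str) -> AsyncIterator[str]:
    # Stand-in for streamed model output; a real handler would forward
    # tokens from the LLM provider as they arrive.
    for token in ["Partial ", "output ", "beats ", "a ", "long, ", "variable ", "wait."]:
        yield token

# Before: the client blocks until the whole (slow, high-variance) completion exists.
# @app.get("/answer")
# async def answer(question: str) -> dict:
#     return {"answer": "".join([tok async for tok in generate_answer(question)])}

# After: tokens are flushed to the client as they are produced.
@app.get("/answer")
async def answer(question: str) -> StreamingResponse:
    return StreamingResponse(generate_answer(question), media_type="text/plain")
```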
The frontier is wide open

Although Elicit is perhaps further along than other companies on AI engineering, we also acknowledge that this is a brand-new field whose shape and qualities are only just now starting to form. We're looking forward to hearing how other companies do this and being part of the conversation as the role evolves.

We're excited for the AI Engineer World's Fair as another next step for this emerging subfield. And of course, check out the Elicit careers page if you're interested in joining our team.

Podcast version

Timestamps
* [00:00:24] Intros
* [00:05:25] Defining the Hiring Process
* [00:08:42] Defensive AI Engineering as a chaotic medium
* [00:10:26] Tech Choices for Defensive AI Engineering
* [00:14:04] How do you Interview for Defensive AI Engineering
* [00:19:25] Does Model Shadowing Work?
* [00:22:29] Is it too early to standardize Tech stacks?
* [00:32:02] Capabilities: Offensive AI Engineering
* [00:37:24] AI Engineering Required Knowledge
* [00:40:13] ML First Mindset
* [00:45:13] AI Engineers and Creativity
* [00:47:51] Inside of Me There Are Two Wolves
* [00:49:58] Sourcing AI Engineers
* [00:58:45] Parting Thoughts

Transcript
[00:00:00] swyx: Okay, so welcome to the Latent Space Podcast. This is another remote episode that we're recording. This is the first one that we're doing around a guest post. And I'm very honored to have two of the authors of the post with me, James and Adam from Elicit. Welcome, James. Welcome, Adam.
[00:00:22] James Brady: Thank you. Great to be here.
[00:00:23] Adam Wiggins: Hey there.
[00:00:24] Intros
[00:00:24] swyx: Okay, so I think I will do this kind of in order. I think James, you're sort of the primary author. So James, you are head of engineering at Elicit. You were also VP Eng at Teespring and Spring as well, and you have a long history in engineering. How did you find your way into something like Elicit, where you are basically a traditional sort of VP Eng, VP technology type person moving into more of an AI role?
[00:00:53] James Brady: Yeah, that's right. It definitely was something of a sideways move, if not a left turn. So the story there was I'd been doing, as you said, VP technology, CTO type stuff for around about 15 years or so, and noticed that there was this crazy explosion of capability and interesting stuff happening within AI and ML and language models, that kind of thing.
[00:01:16] I guess this was in 2019 or so, and I decided that I needed to get involved: this is a kind of generational shift. I spent maybe a year or so trying to get up to speed on the state of the art, reading papers, reading books, practicing things, that kind of stuff. I was going to found a startup, actually, in the space of interpretability and transparency, and through that met Andreas, who has obviously been on the podcast before. I asked him to be an advisor for my startup, and he countered with: maybe you'd like to come and run the engineering team at Elicit, which it turns out was a much better idea.
[00:01:48] And yeah, I kind of quickly changed in that direction. So I think some of the stuff that we're going to be talking about today is how actually a lot of the work when you're building applications with AI and ML looks and smells and feels much more like conventional software engineering, with a few key differences, rather than really deep ML stuff.
[00:02:07] And I think that's one of the reasons why I was able to transfer skills over from one place to the other.
[00:02:12] swyx: Yeah, I definitely agree with that. I do often say that I think AI engineering is about 90 percent software engineering, with like the 10 percent of really strong, really differentiated AI engineering.
[00:02:22] And obviously that number might change over time. I want to also welcome Adam onto my podcast, because you welcomed me onto your podcast two years ago.
[00:02:31] Adam Wiggins: Yeah, that was a wonderful episode.
[00:02:32] swyx: That was a fun episode. You famously founded Heroku. You just wrapped up a few years working on Muse.
[00:02:38] And now you've described yourself as a journalist, an internal journalist, working on Elicit.
[00:02:43] Adam Wiggins: Yeah, well, I'm kind of a little bit in a wandering phase here, trying to take this time in between ventures to see what's out there in the world, and some of my wandering took me to the Elicit team. And I found that they were some of the folks who were doing the most interesting, really deep work in terms of taking the capabilities of language models and applying them to what I feel like are really important problems.
[00:03:08] So in this case, science and literature search and that sort of thing. It fits into my general interest in tools and productivity software. I think of it as a tool for thought in many ways, but a tool for science, obviously; if we can accelerate that discovery of new medicines and things like that, that's just so powerful.
[00:03:24] But to me, it's kind of also an opportunity to learn at the feet of some real masters in this space, people who have been working on it since before it was cool, if you want to put it that way. So for me, the last couple of months have been this crash course, and why I sometimes describe myself as an internal journalist is I'm helping to write some posts, including supporting James in this article here we're doing for Latent Space, where I'm just bringing my writing skill and that sort of thing to bear on their very deep domain expertise around language models and applying them to the real world, and kind of surfacing that in a way that's, I don't know, accessible, legible, that sort of thing.
[00:04:03] And so the great benefit to me is I get to learn this stuff in a way that I don't think I would, or I haven't, just kind of tinkering with my own side projects.
[00:04:12] swyx: I forgot to mention that you also run Ink & Switch, which is one of the leading research labs, in my mind, of the tools-for-thought productivity space, or maybe future of programming even, a little bit of that.
[00:04:24] As well. I think you guys definitely started the local-first wave. I think there was just the first conference that you guys held. I don't know if you were personally involved.
[00:04:31] Adam Wiggins: Yeah, I was one of the co-organizers, along with a few other folks, for, yeah, it's called Local First Conf, here in Berlin.
[00:04:36] Huge success from my point of view. Local-first, obviously, is a whole other topic we can talk about on another day. I think there actually is a lot more, what would you call it, handshake emoji between kind of language models and the local-first data model. And that was part of the topic of the conference here, but yeah, topic for another day.
[00:04:55] swyx: Not necessarily. I mean, I selected as one of my keynotes Justine Tunney, working on llamafile at Mozilla, because I think there's a lot of people interested in that stuff. But we can focus on the headline topic. And just to not bury the lead, which is we're talking about how to hire AI engineers. This is something that I've been looking for a credible source on for months.
[00:05:14] People keep asking me for my opinions. I don't feel qualified to give an opinion, and it's not like I have some kind of defined hiring process that I'm super happy with, even though I've worked with a number of AI engineers.
[00:05:25] Defining the Hiring Process
[00:05:25] swyx: I'll just leave it open to you, James. How was your process of defining your hiring roles?
[00:05:31] James Brady: Yeah. So I think the first thing to say is that we've effectively been hiring for this kind of a role since before you coined the term and tried to kind of build this understanding of what it was.
[00:05:42] Which is not a bad thing. Like, it was a good thing: a concept that was coming to the fore and effectively needed a name, which is what you did. So the reason I mention that is I think it was something that we kind of backed into, if you will. We didn't sit down and come up with a brand-new role from scratch, of "this is a completely novel set of responsibilities and skills that this person would need."
[00:06:06] However, it is a kind of particular blend of different skills and attitudes and curiosities and interests, which I think makes sense to kind of bundle together. So in the post, the three things that we say are most important for a highly effective AI engineer are, first of all, conventional software engineering skills, which is kind of a given, but definitely worth mentioning.
[00:06:30] The second thing is a curiosity and enthusiasm for machine learning, and maybe in particular language models. That's certainly true in our case. And then the third thing is to do with basically a fault-first mindset: being able to build systems that can handle things going wrong, in some sense.
[00:06:49] And yeah, I think the kind of middle point, the curiosity about ML and language models, is probably fairly self-evident. They're going to be working with, and prompting, and dealing with the responses from these models, so that's clearly relevant. The last point, though, maybe takes the most explaining.
[00:07:07] It's to do with this fault-first mindset and the ability to build resilient systems. The reason that is so important is because, compared to normal APIs (think of something like a Stripe API or a search API or something like this), the latency when you're working with language models is wild; like, you can get 10x variation.
[00:07:32] I mean, I was looking at the stats before the podcast. We do often, normally in fact, see a 10x variation in the P90 latency over the course of half an hour or an hour when we're prompting these models, which is way higher than if you're working with a more conventionally backed API.
[00:07:49] And the responses that you get, the actual content of the responses, are naturally unpredictable as well. They come back with different formats. Maybe you're expecting JSON; it's not quite JSON. You have to handle this stuff. And also the semantics of the messages are unpredictable too, which is a good thing.
[00:08:08] Like, this is one of the things that you're looking for from these language models. But it all adds up to needing to build a resilient, reliable, solid-feeling system on top of this fundamentally (well, certainly currently) shaky foundation. The models do not behave in the way that you would like them to.
[00:08:28] And yeah, the ability to structure the code around them such that it does give the user this warm, reassuring, snappy, solid feeling is really what we're driving for there.
[00:08:42] Defensive AI Engineering as a chaotic medium
[00:08:42] Adam Wiggins: What really struck me as we dug in on the content for this article was that third point there. The language models are this kind of chaotic medium, this dragon, this wild horse you're riding and trying to guide in the direction that is going to be useful and reliable to users. Because I think
[00:08:58] so much of software engineering is about making things not only high-performance and snappy, but really just making it stable, reliable, predictable, which is literally the opposite of what you get from the language models. And yet the output is so useful, and indeed some of their creativity, if you want to call it that, is precisely their value.
[00:09:19] And so you need to work with this medium. And I guess the nuance, or the thing that came out of Elicit's experience that I thought was so interesting, is quite a lot of working with that is things that come from distributed systems engineering. But the AI engineers, as we're defining them or labeling them on the Elicit team, are really application developers.
[00:09:39] You're building things for end users. You're thinking about, okay, I need to populate this interface with some response to user input that's useful to the tasks they're trying to do. But you have this medium that you're working with, where in some ways you need to apply some of this chaos engineering, distributed systems engineering, and typically the people with those engineering skills are not kind of the application-level developers with the product mindset or whatever; they're more deep in the guts of a system.
[00:10:07] And so those skills and knowledge do exist throughout the engineering discipline, but sort of putting them together into one person, that feels like a unique thing, and working with the folks on the Elicit team who have that blend, I'm quite struck by it.
[00:10:23] I haven't really seen that before in my 30-year career in technology.
[00:10:26] Tech Choices for Defensive AI Engineering
[00:10:26] swyx: Yeah, that's fascinating. I like the reference to chaos engineering. I have some appreciation; I think when you had me on your podcast, I was still working at Temporal, and that was like a nice framework: if you live within Temporal's boundaries, you can pretend that all those faults don't exist, and you can code in a sort of very fault-tolerant way.
[00:10:47] What are you guys' solutions around this, actually? Like, I think you're emphasizing having the mindset, but maybe naming some technologies would help? Not saying that you have to adopt these technologies, but they're just quick vectors into what you're talking about when you're talking about distributed systems.
[00:11:03] Like, that's such a big, chunky word. Are we talking Kubernetes? And I suspect we're not; we're talking something else now.
[00:11:10] James Brady: Yeah, that's right. It's more at the application level rather than at the infrastructure level, at least the way that it works for us.
[00:11:17] So there's nothing kind of radically novel here. It is more a careful application of existing concepts. So the kinds of tools that we reach for, to handle these kind of slightly chaotic objects that Adam was just talking about, are retries and fallbacks and timeouts and careful error handling. And, yeah, the standard stuff, really.
[00:11:39] There's also a great degree of dependence on parallelization: we rely heavily on it because these language models are not innately very snappy, and there's just a lot of I/O going back and forth. All these things I'm talking about, from when I was in the earlier stages of my career, these are kind of the difficult parts that most senior software engineers will be better at.
[00:12:01] It is careful error handling, and concurrency, and fallbacks, and distributed systems, and eventual consistency, and all this kind of stuff. And as Adam was saying, the kind of person that is deep in the guts of some kind of distributed system, a really high-scale backend kind of a problem, would probably naturally have these kinds of skills.
[00:12:21] But you'll find you need them on day one if you're building an ML-powered app, even if it's not got massive scale. One thing that I would mention that we do, yeah, maybe two related things, actually. The first is we're big fans of strong typing. We share the types all the way from the backend Python code to the front end in TypeScript, and find that (I mean, we'd probably do this anyway, but) it really helps us reason about the shapes of the data which are going to be going back and forth. That's really important when you can't rely upon them: you're going to have to coerce the data that you get back from the ML if you want it to be structured, basically speaking. The second thing, which is related, is we use checked exceptions inside our Python code base, which means that we can use the type system to make sure we are properly handling all of the various things that could be going wrong, all the different exceptions that could be getting raised.
[00:13:16] So, checked exceptions are not really particularly popular; actually, there's not many people that are big fans of them. But for our particular use case, to really make sure that we've not just forgotten to handle a particular type of error, we have found them useful to force us to think about all the different edge cases that can come up.
[00:13:32] swyx: Fascinating. Just a quick note of technology: how do you share types from Python to TypeScript? Do you use GraphQL? Do you use something else?
[00:13:39] James Brady: We don't use GraphQL. Yeah. So we've got the types defined in Python; that's the source of truth. And we go from the OpenAPI spec, and there's a tool that we use to generate types dynamically, like TypeScript types, from those OpenAPI definitions.
[00:13:57] swyx: Okay, excellent. Okay, cool. Sorry for diving into that rabbit hole a little bit. I always like to spell out technologies for people to dig their teeth into.
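For readers who do want to dig their teeth in: a minimal sketch of the flow James describes, with Pydantic models on the Python side as the source of truth and TypeScript types generated from the OpenAPI spec. The models, endpoint, and the openapi-typescript tool named in the comment are illustrative assumptions; the conversation doesn't name Elicit's exact generator.

```python
# A sketch of Python-as-source-of-truth type sharing via OpenAPI.
from enum import Enum
from typing import Optional

from fastapi import FastAPI
from pydantic import BaseModel

class TaskState(str, Enum):
    PENDING = "pending"
    ACTIVE = "active"
    COMPLETED = "completed"

class MlTask(BaseModel):
    id: str
    state: TaskState
    result: Optional[str] = None

app = FastAPI()

@app.get("/tasks/{task_id}")
async def get_task(task_id: str) -> MlTask:
    return MlTask(id=task_id, state=TaskState.PENDING)

# FastAPI serves the derived schema at /openapi.json. A codegen step such as
#   npx openapi-typescript http://localhost:8000/openapi.json -o schema.d.ts
# (one possible tool, not necessarily Elicit's) then gives the TypeScript
# frontend types that cannot silently drift from the Python definitions.
```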
[00:14:04] How do you Interview for Defensive AI Engineering
[00:14:04] swyx: One thing I'll mention quickly is that a lot of the stuff that you mentioned is typically not part of the normal interview loop.
[00:14:10] It's actually really hard to interview for, because this is the stuff that you polish out as you go into production; the coding interviews are typically about the happy path. How do we do that? How do you look for a defensive, fault-first mindset?
[00:14:24] Because you can write defensive code all day long and not add functionality to your application.
[00:14:29] James Brady: Yeah, it's a great question, and I think that's exactly true. Normally the interview is about the happy path, and then there's maybe a box-checking exercise at the end where the candidate says, "Of course, in reality I would handle the edge cases," or something like this. And that unfortunately isn't quite good enough when the happy path is very, very narrow and there's lots of weirdness on either side. So basically speaking, it's just a case of foregrounding those kinds of concerns through the interview process.
[00:14:58] There's no magic to it. We talk about this in the post that we're going to be putting up on Latent Space. There are two main technical exercises that we do through our interview process for this role. The first is more coding-focused, and the second is more system-designy:
[00:15:16] whiteboarding a potential solution. Without giving too much away: in the coding exercise, you do need to think about edge cases. You do need to think about errors. The exercise consists of adding features and fixing bugs inside the code base. And in both of those two cases, it does demand, because of the way that we set the application up and the interview up, that you think about something other than the happy path.
[00:15:41] But you're right; the prompt is how do we get the candidate thinking outside of the kind of normal sweet spot, the smoothly paved path. In terms of the system design interview, it's a little easier to prompt this kind of fault-first mindset, because it's very easy in that situation just to say: let's imagine that this node dies; how does the app still work?
[00:16:03] Let's imagine that this network is going super slow. Let's imagine that, I don't know, you run out of capacity in this database that you've sketched out here; how do you handle that, that sort of stuff. So in both cases, they're not firmly anchored to and built specifically around language models and ways language models can go wrong, but we do exercise the same muscles of thinking defensively and, yeah, foregrounding the edge cases, basically.
[00:16:32] Adam Wiggins: James, earlier there you mentioned retries. And this is something that I think I've seen some interesting debates internally about. First of all, retries can be costly, right? In general, this medium, in addition to having this incredibly high variance in response rate and being non-deterministic, is actually quite expensive.
[00:16:50] And so, in many cases, doing a retry when you get a failure does make sense, but actually that has an impact on cost. And so there is some sense in which, at least, I've seen the AI engineers on our team worry about that.
They worry about, okay, how do we give the best user experience, but balance that against what the infrastructure is going to cost our company. Which I think is, again, an interesting mix: it's a little bit the distributed-systems mindset, but it's also a product perspective. You're thinking about the end-user experience, but also
[00:17:22] the bottom line for the business. You're bringing together a lot of qualities there. And there's also the fallback case, which is kind of a related or adjacent one. I think there was also a discussion on that internally. I think it maybe was search; there was something recently where one of the frontline search providers was having some, yeah, slowness and outages, and essentially then we had a fallback. But essentially that gave people, for a while, especially new users that come in that don't know the difference, worse results for their search.
[00:17:52] And so then you have this debate about, okay, there's sort of what is correct to do from an engineering perspective, but then there's also what actually is the best result for the user. Is giving them a kind of worse answer to their search better, or is it better to kind of give them an error and be like, "Yeah, sorry, it's not working right at the moment, try again
[00:18:12] later"? Both are obviously non-optimal, but this is the kind of thing I think that you run into, or the kind of thing we need to grapple with, a lot more than you would with other kinds of mediums.
[00:18:24] James Brady: Yeah, that's a really good example. I think it brings to the fore the two different things that you could be optimizing for: uptime and a response at all costs, on one end of the spectrum, and then effectively fragility, but if you get a response, it's the best response we can come up with, at the other end of the spectrum.
[00:18:43] And where you want to land there, well, it certainly depends on the app, obviously depends on the user, and I think it depends on the feature within the app as well. So in the search case that you mentioned there, in retrospect, we probably didn't want to have the fallback. And we've actually just recently, on Monday, changed that to show an error message rather than giving people a kind of degraded experience. In other situations, we could use, for example, a large language model from provider B rather than provider A and get something which is within a few percentage points of performance, and that's just a really different situation.
[00:19:21] So yeah, like any interesting question, the answer is: it depends.
[00:19:25] Does Model Shadowing Work?
[00:19:25] swyx: I do hear a lot of people suggesting, let's call this model shadowing, as a defensive technique, which is: if OpenAI happens to be down, which happens more often than people think, then you fall back to Anthropic or something.
[00:19:38] How realistic is that, right? Don't you have to develop completely different prompts for different models, and won't the performance of your application suffer for whatever reason? Like, it may behave differently, or it's not maintained in the same way. I think people raise this idea of fallbacks to models, but I don't see it practiced very much.
[00:20:02] James Brady: Yeah, it is. You definitely need to have a different prompt if you want to stay within a few percentage points of degradation, like I said before, and that certainly comes at a cost. Like, fallbacks and backups and things like this, it's really easy for them to go stale and kind of flake out on you, because they're off the beaten track. And in our particular case inside of Elicit, we do have fallbacks for a number of kind of crucial functions where it's going to be very obvious if something has gone wrong, but we don't have fallbacks in all cases.
[00:20:40] It really depends, on a task-to-task basis throughout the app. So I can't give you a single, simple rule of thumb of "in this case, do this, and in the other, do that." But yeah, it's a little bit easier now that the APIs between the Anthropic models and OpenAI are more similar than they used to be.
[00:20:59] So we don't have two totally separate code paths with different protocols, wire protocols so to speak, which makes things easier. But you're right: you do need to have different prompts if you want to have similar performance across the providers.
[00:21:12] Adam Wiggins: I'll also note, just observing again as a relative newcomer here, I was surprised, impressed, not sure what the word is for it, at the blend of different backends that the team is using.
[00:21:24] The product presents as kind of one single interface, but there's actually several dozen kind of main paths. There's, for example, the search, versus a data extraction of a certain type, versus chat with papers, versus... And for each one of these, the team has worked very hard to pick the right model for the job and craft the prompt there, but also is constantly testing new ones.
[00:21:48] So a new one comes out, from either the big providers or, in some cases, our own models that are running on essentially our own infrastructure. And sometimes that's more about cost or performance, but the point is, kind of switching very fluidly between them, and very quickly, because this field is moving so fast and there's new ones to choose from all the time, is like part of the day-to-day, I would say.
[00:22:11] So it isn't more of a, like, there's a main one, it's been kind of the same for a year, there's a fallback, but it's got cobwebs on it. It's more like which model and which prompt is changing weekly. And so I think it's quite reasonable to have a fallback that you can expect might work.
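To make the "different prompts per provider" point concrete, here is a minimal sketch of model shadowing under the constraint James raises: each provider carries its own prompt template, maintained separately so the fallback path doesn't go stale. The Provider shape and the callables are illustrative stand-ins, not any real SDK.

```python
# A sketch of model shadowing with per-provider prompts.
import asyncio
from dataclasses import dataclass
from typing import Awaitable, Callable, Optional

@dataclass
class Provider:
    name: str
    complete: Callable[[str], Awaitable[str]]  # stand-in for a provider client
    prompt_template: str  # maintained per provider so quality stays comparable

async def complete_with_shadowing(task_input: str, providers: list[Provider]) -> str:
    last_error: Optional[Exception] = None
    for provider in providers:  # ordered: primary first, then shadow providers
        prompt = provider.prompt_template.format(input=task_input)
        try:
            return await asyncio.wait_for(provider.complete(prompt), timeout=30.0)
        except Exception as exc:  # down, rate-limited, timed out, malformed output...
            last_error = exc
    raise RuntimeError("all providers failed") from last_error
```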
[00:22:29] Is it too early to standardize Tech stacks?
[00:22:29] swyx: I'm curious, because you guys have had experience working at both Elicit, which is a smaller operation, and larger companies. A lot of companies are looking at this with a certain amount of trepidation, as it's very chaotic. When you have one engineering team that knows everyone else's names and they meet constantly in Slack and know what's going on,
[00:22:50] it's easier to sync on technology choices. When you have a hundred teams, all shipping AI products and all making their own independent tech choices, it can be very hard to control. One solution I'm hearing from, like, the Salesforces and Walmarts of the world is that they are creating their own AI gateway, right?
[00:23:05] An internal AI gateway. This is the one model hub that controls all the things and has our standards. Is that a feasible thing? Is that something that you would want? Is that something you have and you're working towards? What are your thoughts on this stuff? Like, centralization of control, or like an AI platform internally.
[00:23:22] James Brady: Certainly for larger organizations, and organizations that are doing things which maybe are running into HIPAA compliance or other legislative tools like that, it could make a lot of sense. Yeah. I think the TL;DR for something like Elicit is: we are small enough, as you indicated, and need to have full control over all the levers available and to switch between different models and different prompts and whatnot, as Adam was just saying, that that kind of thing wouldn't work for us.
[00:23:52] But yeah, I've spoken with and advised a couple of companies that are trying to sell into that kind of a space, or at a larger stage, and it does seem to make a lot of sense for them. So, for example, if you're looking to sell to a large enterprise and they cannot have any data leaving the EU, then you need to be really careful about someone just accidentally putting in the sort of us-east-1 GPT-4 endpoints or something like this.
[00:24:22] I'd be interested in understanding better what the specific problem is that they're looking to solve with that, whether it is to do with data security, or centralization of billing, or if they have a kind of suite of prompts or something like this that people can choose from so they don't need to reinvent the wheel again and again. I wouldn't be able to say, without understanding the problems and their proposed solutions, which kinds of situations it would be a better or worse fit for. But yeah, for Elicit, where really the secret sauce, if there is a secret sauce, is which models we're using, how we're using them, how we're combining them, how we're thinking about the user problem, how we're thinking about all these pieces coming together:
[00:25:02] you really need to have all of the affordances available to you to be able to experiment with things and iterate rapidly. And generally speaking, whenever you put these kinds of layers of abstraction and control and generalization in there, that gets in the way. So for us, it would not work.
[00:25:19] Adam Wiggins: Do you feel like there's always a tendency to want to reach for standardization and abstractions pretty early in a new technology cycle?
[00:25:26] There's something comforting there, or you feel like you can see them, or whatever. I feel like there's some of that discussion around LangChain right now. But yeah, this is not only so early, but also moving so fast. I think it's tough to ask for that. That's not the space we're in.
[00:25:19] Adam Wiggins: Do you feel like there's always a tendency to want to reach for standardization and abstractions pretty early in a new technology cycle? [00:25:26] There's something comforting there, or you feel like you can see them, or whatever. I feel like there's some of that discussion around LangChain right now. But this is not only so early, it's also moving so fast, and I think it's tough to ask for that. That's not the space we're in. But the larger an organization, the more your default is to want to reach for that. [00:25:48] It's a sort of comfort.

[00:25:51] swyx: Yeah, I find it interesting that you would say that, being a founder of Heroku, where you were one of the first platforms-as-a-service that more or less standardized what that sort of early developer experience should look like.

[00:26:04] And I think basically people are feeling the difference between calling various model lab APIs and having an actual AI platform where all their development needs are thought of for them. I defined this in my AI engineer post as well: [00:26:19] the model labs just see their job ending at serving models, and that's about it. But actually, the responsibility of the AI engineer is to fill in a lot of the gaps beyond that.

[00:26:31] Adam Wiggins: Yeah, that's true. I think a huge part of the exercise with Heroku, which was largely inspired by Rails, itself one of the first frameworks to standardize on the SQL database: [00:26:42] people had been building apps like that for many, many years. I had built many apps; I had made my own templates based on that; I think others had done the same. And Rails came along at the right moment. We had been doing it long enough that you could see the patterns and then say, look, let's extract those into a framework that's going to make it easier to build, not only for the experts but for people who are relatively new, because the best practices are encoded into that framework. [00:27:07] Model-View-Controller, to take one example. But then, once you see that, and once you experience the power of a framework, and again, it's so comforting, and you can develop faster, and it's easier to onboard new people because you have these standards and this consistency, then folks want that for something new that's evolving.

[00:27:29] Now here I'm thinking, maybe fast forward a little to when React came on the scene a decade ago or whatever. And then, okay, we need to do state management. What's that? And then there's a new library every six months: okay, this is the one, this is the gold standard. [00:27:42] And then, six months later, that's deprecated. Because, of course, it's evolving; you need to figure it out. The tacit knowledge and the experience of putting it in practice and seeing what those real needs are, those are critical. And so it really is about finding the right time to say: yes, we can generalize, we can make standards and abstractions, whether it's for a company, for an open source library, or for a whole class of apps. It's very much a judgment call slash a sense of taste or experience to be able to say, yeah, we're at the right point, we can standardize this. [00:28:16] But my sense, and I'm so new to this world compared to you both, is that it's still the wild west. That's what makes it so exciting, and it feels too early for too much in the way of standardized abstractions.
Not that it's not interesting to try, but you can't necessarily get there the way Rails did until you've got that decade of experience of building different classes of apps with that technology.

[00:28:45] James Brady: Yeah, it's interesting to think about what is going to stay static and what is expected to change over the coming five years, say. When I think about it through an ML lens, that's an incredibly long time; and yet, if you just said five years, it doesn't seem that long at all. [00:29:01] I think that speaks to part of the problem here: the things that are moving are moving incredibly quickly. My hot take, rather than some kind of official, carefully-thought-out position, would be that you'll be able to get to good-quality apps without doing really careful prompt engineering. [00:29:21] I don't think that prompt engineering is going to be a durable, differentiating skill that people will hold. I do think that the way you set up the ML problem, so that you're asking the right questions, if you see what I mean, rather than the specific phrasing of exactly how you're doing chain-of-thought or few-shot in the prompt, is probably going to remain tricky for longer.

[00:29:47] And then there are the operational challenges that we've been talking about: wild variations in latency, and handling all of this unreliability. I mean, one way to think about these models is that the first lesson you learn as a software engineer is that you need to sanitize user input, right? [00:30:05] I think it was the top OWASP security threat for a while: you have to sanitize and validate user input. And we got used to that. It kind of feels like that's the shell around the app, and then everything else inside you're in control of; you can grasp it, you can debug it, etc.

[00:30:22] And what we've effectively done is, through some kind of weird rearguard action, we've injected these slightly chaotic things, I think of them more as complex adaptive systems, which is related but a bit different, though they definitely have some of the same dynamics, into the foundations of the app. And you now need to think with this defensive mindset downwards as well as upwards, if you see what I mean.

[00:30:46] So I think it will take a while for us to truly wrap our heads around that. And for these kinds of problems, where you have to handle things being unreliable and slow sometimes, even if it doesn't happen very often, there isn't some industry-wide accepted way of handling that at massive scale. [00:31:10] There are definitely patterns and anti-patterns and tools, but it's not as if this is a solved problem, so I wouldn't expect it to go down easily as a solvable problem at the ML scale either.
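One way to picture the "defensive mindset downwards" James describes is to treat model output with exactly the same suspicion as user input. A small sketch, with an invented JSON schema and field names:

```python
# Sketch of applying the "sanitize user input" reflex to model output:
# validate before the rest of the app touches it. The schema is invented.

import json

def parse_extraction(raw_model_output: str) -> dict:
    """Validate a model's JSON answer the way you'd validate a form post."""
    try:
        data = json.loads(raw_model_output)
    except json.JSONDecodeError:
        raise ValueError("model did not return JSON; retry or fall back")
    if not isinstance(data.get("sample_size"), int) or data["sample_size"] < 0:
        raise ValueError("sample_size missing or implausible")
    if data.get("confidence") not in ("low", "medium", "high"):
        raise ValueError("confidence outside the allowed vocabulary")
    return data

print(parse_extraction('{"sample_size": 120, "confidence": "high"}'))
```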
[00:31:23] swyx: Yeah, excellent. In the terminology of the stuff that I've written in the past, I describe this inversion of architecture as LLM-at-the-core versus code-at-the-core. [00:31:34] We're very used to code at the core; actually, we can scale that very well. When we build LLM-core apps, we have to realize that the central part of our app, the part that's orchestrating things, is actually a prompt, prone to prompt injections and non-determinism and all that good stuff.

[00:31:48] I did want to move the conversation a little bit from the defensive side of things to the more offensive, or, let's say, the fun side of things, the capabilities side, because that is the other part of the job description that we kind of skimmed over. So I'll repeat what you said earlier.

[00:32:02] Capabilities: Offensive AI Engineering

swyx: You want people to have a genuine curiosity and enthusiasm for the capabilities of language models. We're recording this the day after Anthropic dropped Claude 3.5. And I was wondering, maybe this is a good exercise: how do people have curiosity and enthusiasm for the capabilities of language models when, for example, the research paper for Claude 3.5 is four pages?

[00:32:23] James Brady: Maybe that's not a bad thing, actually, in this particular case. If you really want to know exactly how the sausage was made, that hasn't been possible for a few years now for these new models. But from our perspective, when we're building Elicit, what we primarily care about is: what can these models do? How do they perform on the tasks that we already have set up, and on the evaluations we have in mind? And then, on a slightly more expansive note, what kinds of new capabilities do they seem to have that we can elicit, no pun intended, from the models? For example, there are very obvious ones like multimodality: there wasn't that, and then there was that. Or it could be something a bit more subtle: it seems to be getting better at reasoning, or at metacognition, or at marking its own work and giving calibrated confidence estimates, things like this.

[00:33:19] So there's plenty to be excited about there. It's just that, rightly or wrongly, there's been this shift over the last few years to not give all the details. But from an application development perspective, every time there's a new model release, there's a flow of activity in our Slack, and we try to figure out what's going on: [00:33:38] what it can do, what it can't do, run our evaluation frameworks, and yeah, it's always an exciting, happy day.

[00:33:44] Adam Wiggins: Yeah, from my perspective, what I'm seeing from the folks on the team is, first of all, just awareness of the new stuff that's coming out, so that's an enthusiasm for the space and following along, and then being able to very quickly, and partially that's having Slack to do this, map that to: okay, what does this do for our specific case?

[00:34:07] And the simple version of that is: let's run the evaluation framework, of which Elicit has quite a comprehensive one. I'm actually working on an article on that right now, which I'm very excited about, because it's a very interesting world of things. But basically, you can try the new model in the evaluations framework. [00:34:27] Run it. It has a whole slew of benchmarks, which include not just accuracy and confidence but also things like performance, cost, and so on. And all of these things may trade off against each other.
Maybe it's actually very slightly worse, but it's way faster and way cheaper, so it might be a net win, for example.

[00:34:46] Or it's way more accurate, but that comes with being slower and higher cost, and so now you need to think about those trade-offs. And so to me, coming back to the qualities of an AI engineer, especially when you're trying to hire for them, it is very much an application developer in the sense of having a product mindset: what are our users or our customers trying to do? What problem do they need solved? What does our product solve for them? And how do the capabilities of a particular model potentially solve that better for them than what exists today? And by the way, what exists today is becoming an increasingly gigantic cornucopia of things, right? So you say, okay, this new model has these capabilities; the simple version of that is to plug it into our existing evaluations, look at the results, and see if it's better as a straight swap-out. But when you get, for example, multimodal capabilities, then you say, okay, wait a minute, maybe there's a new feature, or a whole bunch of ways we could be using it: not just a simple model swap-out, but a different thing we could do that we couldn't do before, because it would have been too slow, or too inaccurate, or something like that, and that now we do have the capability to do.

[00:35:58] I think of that as being a great thing. I don't even know if I want to call it a skill; maybe it's more an attitude or a perspective, which is a desire to be excited about the new technology, the new models as they come along, while also holding in mind: what does our product do?

[00:36:16] Who is our user? And how can we connect the capabilities of this technology to how we're helping people in whatever it is our product does?
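To make that trade-off concrete, here is a toy scoring pass over two invented models. The numbers and weights are made up, and Elicit's actual evaluation framework is certainly richer than this; the point is only that accuracy, latency, and cost get weighed together.

```python
# Invented numbers and a crude scoring rule, to make the trade-off concrete:
# a slightly-less-accurate model can still win once speed and cost count.

candidates = {
    "current-model": {"accuracy": 0.91, "latency_s": 4.0, "cost_per_1k": 0.030},
    "new-model":     {"accuracy": 0.89, "latency_s": 1.2, "cost_per_1k": 0.006},
}

def score(m: dict) -> float:
    # The weights are a product decision, not a constant of nature.
    return m["accuracy"] - 0.02 * m["latency_s"] - 2.0 * m["cost_per_1k"]

for name, metrics in sorted(candidates.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: score={score(metrics):.3f}", metrics)
```

Under these invented weights, the slightly-less-accurate model wins on the combined score, which is exactly the "net win" case Adam describes.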
[00:36:25] James Brady: Yeah, I'm just looking at one of our internal Slack channels, where we talk about things like new model releases and that kind of thing, and it is notable, looking through these, the kinds of things that people are excited about and not. It's not "the context window is much larger" or "look at how many parameters it has" or something like this. [00:36:44] It's always framed in terms of: maybe this could be applied to that part of Elicit, or maybe this would open up this new possibility for Elicit. And, as Adam was saying, I don't think it's really a novel or separate skill; it's the kind of attitude I would like all engineers to have at a company our stage, actually. [00:37:05] And maybe even more generally: not getting nerd-sniped by some fancy metric or technology number, but asking how this is actually going to be applicable to the thing which matters in the end. How is this going to help users? How is this going to help move things forward strategically? That kind of thing.

[00:37:24] AI Engineering Required Knowledge

swyx: Yeah, applying it, I think, is the key here. Getting hands-on as well. I would recommend a few resources for people listening along. The first is Elicit's ML reading list, which I found so delightful after talking with Andreas about it. [00:37:38] It looks like that's part of your onboarding. We've actually set up an asynchronous paper club inside of my Discord for people following along with that reading list. I love that you separate things out into tiers one, two, and three, and that gives people a factored-cognition way of looking at the corpus, right? [00:37:55] The corpus of things to know is growing, and the water is slowly rising as far as what the bar for a competent AI engineer is. But I think having some structured thought as to which are the big ones that everyone must know is key. It's something I haven't really defined for people, and I'm glad that there's actually something out there that people can refer to.

[00:38:15] I wouldn't necessarily make it required for the job interview, maybe, but it'd be interesting to see what would be a red flag if an AI engineer didn't know it. I don't know where we would set the bar for required knowledge, for being part of the cool kids club, but there increasingly is something like that, right? [00:38:33] Like, not knowing what context is, is a black mark, in my opinion.

[00:38:40] James Brady: I think it does connect back to what we were saying before, of this genuine curiosity. Well, maybe it's actually that combined with something else which is really important: a self-starting, bias-towards-action kind of mindset, which, again, everybody needs.

[00:38:56] swyx: Exactly. Yeah. Everyone needs that.

James Brady: So if you put those two together, if I'm truly curious about this and I'm going to figure out how to make things happen, then you end up with people reading reading lists, reading papers, doing side projects, this kind of thing. So it isn't something that we explicitly included. [00:39:14] We don't have an ML-focused interview for the AI engineer role at all, actually; it doesn't really seem helpful. The skills which we are checking for, as I mentioned before, are this kind of fault-first mindset and conventional software engineering, points one and three on the list that we talked about. [00:39:32] In terms of checking for ML curiosity, and how familiar they are with these concepts, that's more through talking interviews and culture-fit types of things. We want them to have a take on what Elicit is doing, certainly as they progress through the interview process.

[00:39:50] They don't need to be completely up to date on everything we've ever done on day zero, although that's always nice when it happens. But for them to really engage with it, ask interesting questions, and be bought into our view on how we want ML to proceed, I think that is really important, and that would reveal that they have this interest, this ML curiosity.

[00:40:13] ML First Mindset

James Brady: There's a second aspect to that. I don't know if now's the right time to talk about it, which is, I do think that an ML-first approach to building software is something of a different mindset. I could describe that a bit now if that seems good.

swyx: Yeah, I'm game. Okay.

James Brady: So I think when I joined Elicit, this was the biggest adjustment that I had to make personally. [00:40:37] As I said before, I'd been building conventional software for 15 years or so, well, for longer actually, but professionally for about 15 years.
And I had a lot of pattern matching built into my brain, a kind of muscle memory: if you see this kind of problem, then you do that kind of thing.

[00:40:56] And I had to unlearn quite a lot of that when joining Elicit, because we truly are ML-first and try to use ML to the fullest. And part of what that means is this relinquishing of control, almost. At some point you are calling into this fairly opaque black-box thing, hoping it does the right thing, and dealing with whatever it sends back to you. [00:41:17] And that's very different from interacting with APIs and databases. You can't just keep on debugging; at some point you hit this obscure wall.

And I think the second part of this is that the pattern I was used to is that the external parts of the app are where most of the messiness is, not necessarily in terms of code, but in terms of degrees of freedom, almost. [00:41:44] The user can and will do anything at any point: they'll put all sorts of wonky stuff inside of text inputs, they'll click buttons you didn't expect them to click, all this kind of thing. But then, by the time you're down into your SQL queries, for example, as long as you've done your input validation, things are pretty well defined.

[00:42:01] And that, as we said before, is not really the case when you're working with language models. There is this kind of intrinsic uncertainty when you get down to the kernel, down to the core. [00:42:18] Even beyond that, all of that is somewhat defensive; these are things to be wary of to some degree. The flip side, though, the really positive part of taking an ML-first mindset when you're building applications, is that once you get comfortable taking your hands off the wheel at a certain point, relinquishing control, letting go, then really unexpected, powerful things can happen if you lean on the capabilities of the model, without trying to overly constrain and slice and dice problems to the point where you're not really wringing the most capability out of the model that you might.

[00:42:47] So, I was trying to think of examples of this earlier, and one that came to mind was from really early on, just after I joined Elicit. We were working on something where we wanted to generate text and include citations embedded within it. So it would have a claim, and then a square-bracketed one, in superscript, something like this.

[00:43:07] And every fiber in my being was screaming that we should have some way of forcing this to happen, or structured output, such that we could guarantee that this citation was always going to be present later on, that the indication of a footnote would actually match up with the footnote itself. I went into this symbolic, I-need-full-control kind of mindset. [00:43:28] And it was notable that Andreas, who's our CEO, and again has been on the podcast, was the opposite. He was just kind of: give it a couple of examples and it'll probably be fine, and then we can figure it out with a regular expression at the end.

[00:43:46] And it really did not sit well with me, to be honest. I was like, but it could say anything. It could literally say anything. And I don't know about just using a regex to handle this; this is a potent feature of the app. But that was my first, starkest introduction to this ML-first mindset, I suppose, which Andreas has been cultivating for much longer than me, much longer than most. Yes, there might be some surprises in the stuff you get back from the model, but it's about finding the sweet spot, I suppose, where you don't want to give a completely open-ended prompt to the model and expect it to do exactly the right thing.

[00:44:25] You can ask it too much, and it gets confused and starts repeating itself, or goes around in loops, or just goes off in a random direction or something like this. But you can also over-constrain the model and not really make the most of the capabilities. And I think that is a mindset adjustment that most people coming into AI engineering afresh would need to make: giving up control, and expecting that there's going to be a little bit of extra pain and defensive stuff on the tail end, but the benefits that you get as a result are really striking.
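The "couple of examples plus a regular expression at the end" approach might look something like the sketch below. The [n] marker format is taken from the description above; the strings and footnote names are placeholders, not Elicit's data.

```python
# Sketch of "few-shot the format, then recover it with a regex": let the
# model write prose with [n] markers, then extract and cross-check them.
# The marker format is assumed from the anecdote above.

import re

model_output = (
    "One claim from the literature [1]. A second, related claim "
    "reported elsewhere [2]."
)

CITATION = re.compile(r"\[(\d+)\]")

markers = [int(m) for m in CITATION.findall(model_output)]
print(markers)  # [1, 2]

# The defensive check: do the markers match the footnotes we actually have?
footnotes = {1: "first retrieved paper", 2: "second retrieved paper"}
missing = [m for m in markers if m not in footnotes]
if missing:
    raise ValueError(f"model cited footnotes we don't have: {missing}")
```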
[00:44:58] swyx: The ML-first mindset, I think, is something that I struggle with as well, because the errors, when they do happen, are bad. They will hallucinate, and your systems will not catch it sometimes if you don't have a large enough sample set.

[00:45:13] AI Engineers and Creativity

swyx: I'll leave it open to you, Adam. What else do you think about when you think about curiosity and exploring capabilities? Are there reliable ways to get people to push themselves on capabilities? Because I think a lot of times we have this implicit overconfidence, maybe, where we think we know what a thing is, when actually we don't, and we need to keep a more open mind. I think you do a particularly good job of always having an open mind, and I want to get that out of more engineers that I talk to, but I struggle sometimes.

[00:45:45] Adam Wiggins: I suppose being an engineer is, at its heart, this sort of contradiction of, on one hand, yeah,
Victoria is joined by guest co-host Joe Ferris, CTO at thoughtbot, and Seif Lotfy, the CTO and Co-Founder of Axiom. Seif discusses the journey, challenges, and strategies behind his data analytics and observability platform. Seif, who has a background in robotics and was a 2008 Sony AIBO robotic soccer world champion, shares that Axiom pivoted from being a Datadog competitor to focusing on logs and event data. The company even built its own logs database to provide a cost-effective solution for large-scale analytics. Seif is driven by his passion for his team and the invaluable feedback from the community, emphasizing that sales validate the effectiveness of a product. The conversation also delves into Axiom's shift in focus towards developers to address their need for better and more affordable observability tools. On the business front, Seif reveals the company's challenges in scaling across multiple domains without compromising its core offerings. He discusses the importance of internal values like moving with urgency and high velocity to guide the company's future. Furthermore, he touches on the challenges and strategies of open-sourcing projects and advises avoiding platforms like Reddit and Hacker News to maintain focus. Axiom (https://axiom.co/) Follow Axiom on LinkedIn (https://www.linkedin.com/company/axiomhq/), X (https://twitter.com/AxiomFM), GitHub (https://github.com/axiomhq), or Discord (https://discord.com/invite/axiom-co). Follow Seif Lotfy on LinkedIn (https://www.linkedin.com/in/seiflotfy/) or X (https://twitter.com/seiflotfy). Visit his website at seif.codes (https://seif.codes/). Follow thoughtbot on X (https://twitter.com/thoughtbot) or LinkedIn (https://www.linkedin.com/company/150727/). Become a Sponsor (https://thoughtbot.com/sponsorship) of Giant Robots! Transcript: VICTORIA: This is the Giant Robots Smashing Into Other Giant Robots Podcast, where we explore the design, development, and business of great products. I'm your host, Victoria Guido, and with me today is Seif Lotfy, CTO and Co-Founder of Axiom, the best home for your event data. Seif, thank you for joining me. SEIF: Hey, everybody. Thanks for having me. This is awesome. I love the name of the podcast, given that I used to compete in robotics. VICTORIA: What? All right, we're going to have to talk about that. And I also want to introduce a guest co-host today. Since we're talking about cloud, and observability, and data, I invited Joe Ferris, thoughtbot CTO and Director of Development of our platform engineering team, Mission Control. Welcome, Joe. How are you? JOE: Good, thanks. Good to be back again. VICTORIA: Okay. I am excited to talk to you all about observability. But I need to go back to Seif's comment on competing with robots. Can you tell me a little bit more about what robots you've built in the past? SEIF: I didn't build robots; I used to program them. Remember the Sony AIBOs, where Sony made these dog robots? We would make them compete. There was an international competition where we made them play soccer, and they had to be completely autonomous. They only communicate via Bluetooth or via wireless protocols. And you only have the camera as your sensor, as well as a chest sensor. They throw the ball near you, and then, yeah, you make them play football against each other, four versus four, with a goalkeeper and everything. Just look it up: RoboCup AIBO. Look it up on YouTube. And I was the 2008 world champion with the German team. VICTORIA: That sounds incredible.
What kind of crowds are you drawing out for a robot soccer match? Is that a lot of people involved with that? SEIF: You would be surprised how big the RoboCup competition is. It's ridiculous. VICTORIA: I want to go. I'm ready. I want to, like, I'll look it up and find out when the next one is. SEIF: No more Sony robots, but other robots. Now, there are two-legged robots, so they make them play as two-legged robots, much slower than four-legged robots, but it works. VICTORIA: Wait. So, the robots you were playing soccer with had four legs they were running around on? SEIF: Yeah, they were dogs [laughter]. VICTORIA: That's awesome. SEIF: We all get the same robot. It's just a competition on software, right? On a software level. And some other competitions within the RoboCup actually have you build your own robot and stuff like that. But this one was...it's called the Standard League, where we all have the same robot, and we have to program it. JOE: And the standard robot was a dog. SEIF: Yeah. I think back then...we're talking...it's been a long time. I think the competition started in 2001 or 2002, and I competed from 2006 to 2008. Robots back then were just, you know, simple. VICTORIA: Robots today are way too complicated [laughs]. SEIF: Even AI is more complicated. VICTORIA: That's right. Yeah, everything has gotten a lot more complicated [laughs]. I'm so curious how you went from being a world-champion robot dog soccer player [laughs] programmer [laughs] to where you are today with Axiom. Can you tell me a little bit more about your journey? SEIF: The journey is interesting because it came from open source. I used to do open source on the side a lot, as part of the GNOME Project. That's where I met Neil and the rest of my team: Mikkel Kamstrup, the whole crowd, basically. We worked on GNOME. We worked on Ubuntu. Most of them were working professionally on it; I was working for another company, but we worked on the same project. We ended up at Xamarin, which was bought by Microsoft. And then we ended up doing Axiom. But we've been around each other professionally since 2009, most of us. It's like a little family. But how we ended up exactly in observability, I think, is just me trying to fix pain points in my life. VICTORIA: Yeah, I was reading through the docs on Axiom. And there's an interesting point you make about organizations having to choose between how much data they have and how much they want to spend on it. So, maybe you can tell me a little bit more about that pain point and what you really found in the early stages that you wanted to solve. SEIF: So, in the early, early stage, we were actually trying to be a Datadog competitor, where we were going to be self-hosted. Eventually, we focused on logs, because we found out that's what was a big problem for most people: event data generally, not just metrics, so logs, traces, et cetera. We built out our own logs database completely from scratch. And one of the things we stumbled upon was: basically, you have three things when it comes to logging, which are low cost, low latency, and large scale. That's what everybody wants, but you can't get all three of them; you can only get two. And we opted...like, we chose large scale and low cost. And when it comes to latency, we say it should be just fast enough, right? That's what we focused on, and this is how we started building it.
And with that, this is how we managed to stand out: by having way lower cost than anybody else in the industry while dealing with large scale. VICTORIA: That's really interesting. And how did you approach making the ingestion pipeline for massive amounts of data more efficient? SEIF: Just make it as coordination-free as possible, right? And get rid of Kafka, because Kafka just, you know, drains your...it's where you throw in money. Maintaining Kafka...it's like Elasticsearch back then, right? Elasticsearch was the biggest part of your infrastructure that would cost money. Now, it's also Kafka. So, we found a way to have our own internal way of queueing things without having to rely on Kafka. As I said, we wrote everything from scratch to make it work. Every now and then, I think that we could spin this out of the company and make it a new product. But for now, eyes on the prize, right? JOE: It's interesting to hear that somebody who spent so much time in the open-source community ended up rolling their own solution to so many problems. Do you feel like you had some lessons learned from open source that led you to reject solutions like Kafka, or how did that journey go? SEIF: I don't think I'm rejecting Kafka. The problem is how Kafka is built, right? You still have to set up all these servers, and they have to communicate, et cetera. It wasn't built in a way where it's stateless, and that's what we're trying to get to: we're trying to make things as stateless as possible. Kafka was never built for the cloud-native era. And you can't really rely on SQS or something like that, because it won't deal with this high throughput. So, that's why I said we will sacrifice some latency, but at least the cost is low. If messages show up after half a second or a second, I'm good; it doesn't have to be real-time for me. So, I had to write a couple of these things. But that doesn't mean we reject open source. We actually do like open source: we've open-sourced a couple of libraries, and we contribute back to open source. We needed a solution back then for that problem, and we couldn't find any. And maybe one day open source will have one, right? JOE: Yeah. I was going to ask if you considered open-sourcing any of your high-latency, high-throughput solutions. SEIF: Not high latency. You make it sound bad. JOE: [laughs] SEIF: You make it sound bad. It's, like, fast enough, right? I'm not going to compete on milliseconds, because I'm also competing with ClickHouse, and I don't want to compete with ClickHouse. ClickHouse is low latency and large scale, right? But then the cost is, you know, off the charts a bit sometimes. I'm going the other route: it's fast enough. If the results come within two, three seconds, everybody is happy. If you're going to build a real-time trading system on top of it, I'll strongly advise against that. But if you're looking at dashboards, if you're more in the observability field, yeah, we're good.
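Purely as an illustration of that trade, and not Axiom's actual design: an ingest buffer that flushes when a batch fills, or when roughly a second has passed, turns many tiny writes into a few large, cheaper ones, at the cost of results showing up a moment later.

```python
# Illustrative only, not Axiom's implementation: a simple ingest buffer that
# flushes when a batch fills or after ~1 second, trading a little latency
# for far fewer (and cheaper) storage writes, with no coordination service.

import time

class BatchBuffer:
    def __init__(self, max_events: int = 1000, max_age_s: float = 1.0):
        self.max_events = max_events
        self.max_age_s = max_age_s
        self.events = []
        self.opened_at = time.monotonic()

    def add(self, event: dict) -> None:
        self.events.append(event)
        age = time.monotonic() - self.opened_at
        if len(self.events) >= self.max_events or age >= self.max_age_s:
            self.flush()

    def flush(self) -> None:
        if self.events:
            # A real system would write one compressed block to object
            # storage here; we just report the batch size.
            print(f"flushing {len(self.events)} events as one write")
        self.events = []
        self.opened_at = time.monotonic()

buf = BatchBuffer(max_events=3)
for i in range(7):
    buf.add({"msg": f"log line {i}"})
buf.flush()  # drain whatever is left
```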
VICTORIA: Yeah, I'm curious what you found, like, which customer personas that market really resonated with. Is there a particular industry type where you're noticing they really want to lower their cost, and they're okay with this just-fast-enough latency? SEIF: Honestly, with the current recession, everybody is okay with giving up some of the speed to reduce the money, because I think it's not a linear reduction. It's more an exponential reduction at this point, right? You give up a second, and you're saving 30%. You give up two seconds, and all of a sudden you're saving 80%. So, I'd say in the beginning, everybody thought they needed everything to be very, very fast. And now they're realizing, with the limitations you have around your budget and spending, okay, I'm okay with the speed. And, again, we're not slow. I'm just saying people realize they don't need everything under a second; they're okay with waiting two seconds. VICTORIA: That totally resonates with me. And I'm curious if you can add maybe a non-technical or a real-life example of how this impacts the operations of a company or organization, like a business-y example of how this impacts how people work. SEIF: I don't know, like, how do people work with that? Nothing changed, really, because in that respect, you run a query, and, again, as I said, you're not getting the result in a second; you're just waiting two or three seconds, and it's there. So, nothing really changed. I think people can wait three seconds. And when I say this, we're still faster than most others; we're just not as fast as people who are trying to compete on a millisecond level. VICTORIA: Yeah, that's okay. Maybe I'll take it back even a step further, right? Our audience is really sometimes just founders who almost have no formal technical training or background. So, when we talk about observability, sometimes people who work in DevOps and operations all understand it and kind of know why it's important [laughs] and what we're talking about. So, maybe you could, like, go back to -- SEIF: Oh, if you're asking about new types of people who've been using it -- VICTORIA: Yeah. Like, if you're going to explain to a non-technical founder why your product is important, or how people in their organization might use it, what would you say? SEIF: Oh, okay, if you put it like that. It's more that if you have timestamped data and you want to run analytics on top of it, that could be transactions, that could be web vitals...every time somebody visits, you have a timestamp, so you can count how many visitors visited the website and, you know, all these kinds of things. That's where you want to use something like Axiom. That's outside the DevOps space, of course. And in the DevOps space, there are so many other things you use Axiom for, but that's outside the DevOps space. And we actually implemented a zero-config integration with Vercel that kind of went viral. And we were, for a while, the number one enterprise self-integration, because so many people were using it. So, Vercel users are usually not necessarily writing the most complex backends, but a lot of things are happening on the front-end side. And we would be giving them automated dashboards about latencies, how long a request took, how long the response took, the content type, the status codes, et cetera, et cetera. And there's a huge user base around that. VICTORIA: I like that. And it's something, for me, as a managing director of our platform engineering team, that I want to talk more to founders about. It's great that you put this product and this app out into the world. But how do you know that people are actually using it?
How do you know that people aren't, like, all quitting after the first day and not coming back to your app? Or maybe the page isn't loading, or it's not working as they expected it to. And if you don't have anything observing what users are doing in your app, then it's going to be hard to show that you're getting any traction, and to know where you need to go in, make corrections, and adjust. SEIF: We have two ways of doing this. Right now, internally, we use our own tools to see who is sending us data. We have a deployment that's monitoring the production deployment. And we're just seeing how people are using it: how much data they're sending every day, who stopped sending data, who spiked in sending data sets, et cetera. But we're also using Mixpanel, and Dominic, our Head of Product, implemented a couple of key metrics for that specifically. So, we know, for example, the average time until somebody goes from building their own queries with the builder to writing APL, or how long it takes them to go from running two queries to five queries. We just started measuring these things, and we've been growing healthily around that. So, we tend to measure user interaction, but we also tend to measure how much data is being sent. Because let's keep in mind: usually, people go in and check for things if there's a problem. So, if there's no problem, the user won't interact with us much unless a notification kicks off. We also just check how much data is being sent to us the whole time. VICTORIA: That makes sense. Like, you can't just rely on, well, if it was broken, they would write a [chuckles], like, a question or something. So, how do you get those metrics and that data around their interactions? That's really interesting. So, I wonder if we can go back and talk about the early days of Axiom and how you got started, which we already mentioned a little bit. Was there anything that you found in the early discovery process that was surprising and made you pivot strategy? SEIF: A couple of things. Basically, people don't really care about the tech as much as they care [inaudible 12:51] and the packaging, so that's something that we had to learn. And number two, continuous feedback. Continuous feedback changed the way we worked completely, right? After that, we had a Slack channel, then we opened a Discord channel. And this continuous feedback coming in just helps with iterating, helps us with prioritizing, et cetera. And that changed the way we actually developed product. VICTORIA: You use Slack and Discord? SEIF: No. No Slack anymore. We had a community Slack. We had a community [inaudible 13:19] Slack. Now, there's no community Slack; we only have a community Discord. Sorry, internally, we use Slack, but there's a community Discord for the community. JOE: But how do you keep that staffed? Is it, like, everybody is in the Discord during working hours? Is it somebody's job to watch out for community questions? SEIF: I think everybody gets involved now, and you can see it. If you go on our Discord, you will just see it: everyone just gets involved. I think people are just passionate about what they're doing. At least most people are involved on Discord, right? Because there are, like, the Discord help sections, and people are just asking questions and other people answering.
And now, we've reached a point where people in the community start answering the questions for other people in the community. So, that's how we see it's starting to become a healthy community, et cetera. That is one of my favorite things: when I see somebody from the community answering somebody else, that's a highlight for me. Actually, we hired somebody from that community because they were so active. JOE: Yeah, I think one of the biggest signs that a product is healthy is when there's a healthy ecosystem building up around it. SEIF: Yeah, and Discord reminds me of the old days of open source, like IRC, just with memes now. Because all of us come from the old IRC days, being on Discord and chatting around, et cetera, just gives us this momentum back, whereas Slack always felt a bit too businessy to me. JOE: Slack is like IRC with emoji. Discord is IRC with memes. SEIF: I would say Slack reminds me somehow of MSN Messenger, right? JOE: I feel like there's a huge slam on MSN Messenger here. SEIF: [laughs] What do you guys use internally, Slack or? I think you're using Slack, right? Or Teams. Don't tell me you're using Teams. JOE: No, we're using Slack. SEIF: Okay, good, because I'll shit-talk here when I start talking about Teams. I remember that one thing Google did once, and that failed miserably. JOE: Google still has, like, seven active chat products. SEIF: I think every department, or every group of engineers, just uses one of them internally. I'm not sure; I never got to that point. But hey, who am I to judge? VICTORIA: I just feel like I end up using all of them, and then I'm just rotating between different tabs all day long. You maybe talked me into using Discord. I feel like I've been resisting it, but you got me with the memes. SEIF: Yeah, it's definitely worth it. It's more entertaining. More noise, but more entertaining. You feel it's alive, whereas Slack...also, because history is forever, you always go back, and you're like, oh my God, what the hell is this? VICTORIA: Yeah, I have, like, all of them. I'll do anything. SEIF: They should be using Axiom in the background. Just send data to Axiom; we can keep your chat history. VICTORIA: Yeah, maybe. I'm so curious, because you mentioned how you realized that it didn't matter how cool the tech was if the product packaging wasn't also appealing to people. And you seem really excited about what you've built. So, tell us a little bit more about how you went about promoting this thing you built. Or was it the continuous feedback really early on? How did that all come together? SEIF: The continuous feedback helped us with performance, but actually getting people to sign up and pay money, that started early on. But with Vercel, it kind of skyrocketed, right? And that's mostly because we went with the whole zero-config approach, where it's literally two clicks, and all of a sudden, Vercel is sending your data to Axiom, and that's it. We will create [inaudible 16:33]. And we worked very closely with Vercel to make this happen, which was awesome. Hats off to them; they were fantastic. Just two or three clicks away, and all of a sudden, we've created an Axiom organization for you, and a dataset for you, and the data from Vercel is being forwarded to it.
I think that packaging was so simple that it made people try it out quickly. And then the experience of actually using Axiom was sticky, so they continued using it. And then the price was so low, because we give 500 gigs for free, right? You send us 500 gigs a month of logs for free, and we don't care. And you can start off with one terabyte for 25 bucks. So, people just started signing up. Before that, it was five terabytes a month for $99, and then we changed the plan. But yeah, it was cheap enough that people just started sending us more and more and more data. We changed the way people think, from "what am I going to send to Axiom," or "what am I going to send to my logs provider or log storage," to "how much more can I send?" And I think that's what we wanted to reach: we wanted people to think, how much more can I send? JOE: You mentioned latency and cost. I'm curious about...the other big challenge we've seen with observability platforms, including logs, is the cardinality of labels. Was there anything you had to sacrifice upfront in terms of cardinality to manage either cost or volume? SEIF: No, not really, because we designed it from scratch to be able to deal with high cardinality, right? There are open-source ways of doing this. If you look at a column store, every dimension is its own column. You can limit the number of columns you're creating, but you should never limit the number of different values a column can hold. So, if you have something like tags, say a hostname: hostname should be a column, but the number of different hostnames you have, we never limit that. So, the cardinality of a value is something that is unlimited for us, and we don't really see it in cost; it doesn't really hit us on cost. It reflects a bit on compression, if you get into the technical details, because high cardinality means a lot of different data, so compression is harder, since it's not repetitive. But if you look at, oh, I want to send a lot of different types of fields, not values but fields, so you have hostname, and latency, and whatnot, et cetera, that's where the limitation starts, because then you're getting into wider and wider dimensions. But even that, we can deal with thousands at this point. And we've realized most people will not need more than three or four. It's like a Postgres table: you don't need more than 3,000 to 4,000 columns; otherwise, you know, you're doing a lot. JOE: I think it's actually pretty compelling in terms of cost, though. That's one of the things we've had to be most careful about in terms of containing cost for metrics and logs: a lot of providers will either charge you based on the number of unique metric combinations, or the performance suffers greatly. We've used a lot of Prometheus-based solutions, and so, when we're working with developers, even though they don't need more than a few dozen metric combinations most of the time, it's hard for people to think of what they need upfront. It's much easier, after you deploy it, to be able to query your data and slice it retroactively based on what you're seeing. SEIF: That's the detail.
When you say you're using Prometheus: a lot of the metrics tools out there, just like Prometheus, are using the Gorilla data structure, and that data structure was never designed to deal with high-cardinality labels. So, basically, to put it in a simple way, every combination of tags you send for metrics becomes its own file on disk. That's, like, the very simple way of explaining this. And then, when you're trying to search through everything and you have a lot of these combinations, you actually have to get all these files and merge them back together, you know, and they're chunked, et cetera. So, it's a problem. Generally, most metrics products out there, even VictoriaMetrics, et cetera, are using either the Prometheus TSDB data structure, which is based on Gorilla, or something like it. Influx was doing the same thing; they pivoted to using more and more the kind of structure we use, and that Honeycomb uses, right? So, we might not be as fast on the metrics side as those highly optimized stores, but when it comes to high [inaudible 20:49], once we start dealing with high cardinality, we will be faster than those solutions. And that's on a very technical level. JOE: That's pretty cool. I realize we're getting pretty technical here. Maybe it's worth defining cardinality for the audience. SEIF: Defining cardinality to the...I mean, we just did that, right? JOE: What do you think, Victoria? Do you know what cardinality is now? [laughs] VICTORIA: All right. Now I'm like, do I know? I think I know what it means. Cardinality is, like, let's say you have a piece of data, like an event or a transaction... SEIF: It's the distinct count on a property that gives you the cardinality of that property. VICTORIA: Right. It's like how many pieces of information you have about that one event, basically, yeah. JOE: But with some traditional metrics stores, it's easy to make mistakes. For example, you could have unbounded cardinality by including response time as one of the labels -- SEIF: Tags. JOE: And then it's just going to -- SEIF: Oh, no, no. Let me give you a better one. I put in a timestamp at some point in my life. JOE: Yeah, I feel like everybody has done that one. [laughter] SEIF: I put a system timestamp in at some point in my life. There was the actual timestamp, and there was a system timestamp that I would put in, because I couldn't control the timestamp, and the only timestamp I had was the system timestamp. So I would always add the actual timestamp of when that event actually happened into a metric, and yeah, that did not scale.
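The arithmetic behind that failure mode is worth spelling out. The label counts below are invented, but the multiplication is the point: the worst-case series count is the product of the label cardinalities, so one unbounded label, like a timestamp, blows it up.

```python
# Toy arithmetic for the "every tag combination is its own series" problem.
# Numbers are invented; the multiplication is the point.

from math import prod

label_cardinalities = {
    "service": 20,    # 20 services
    "host":    200,   # 200 hosts
    "status":  5,     # 5 status codes
}

series = prod(label_cardinalities.values())
print(f"worst-case series (files on disk): {series:,}")  # 20,000

# Now add a label that should never be a label: a millisecond timestamp.
# Its cardinality grows without bound, so the series count does too.
label_cardinalities["timestamp_ms"] = 86_400_000  # one day's worth
print(f"with a timestamp label: {prod(label_cardinalities.values()):,}")
```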
MID-ROLL AD: Are you an entrepreneur or start-up founder looking to gain confidence in the way forward for your idea? At thoughtbot, we know you're tight on time and investment, which is why we've created targeted 1-hour remote workshops to help you develop a concrete plan for your product's next steps. Over four interactive sessions, we work with you on research, product design sprint, critical path, and presentation prep so that you and your team are better equipped with the skills and knowledge for success. Find out how we can help you move the needle at tbot.io/entrepreneurs. VICTORIA: Yeah. I wonder if you could maybe share a story about when it's gone wrong, and you've suddenly been charged a lot of money [laughs] just to get information about what's happening in the system. Any personal experiences with observability that kind of informed what you did with Axiom? SEIF: Oof, I have a very bad one, like, a very, very bad one. I used to work for a company where we had to deploy Elasticsearch on Windows Servers, in US East 1. So, just that combination of Elasticsearch back in 2013, 2014, together with Azure and Windows Server, was not a good idea. You see where this is going, right? JOE: I see where it's going. SEIF: Eventually, we got all these problems, because we used Elasticsearch and Kibana as our observability platform to measure everything around the product we were building. And funny enough, it cost us more than actually maintaining the infrastructure of the product. But not just that, it also kept me up longer, because most of the downtimes I would get were not because of the product going down; it was because my Elasticsearch cluster started going down, and there were reasons for that. Back then, Microsoft Azure thought it was okay for any VM to lose connection with the rest of the VMs for 30 seconds per day. And then, all of a sudden, you have Elasticsearch with a split-brain problem. And there was a phase where I was getting alerted so much that my partner threatened to leave me. So I bought what I think was a shock bracelet or a shock collar that worked via Bluetooth, and I connected it to my phone for any notification. I bought that off Alibaba, by the way. And I would charge it at night, put it on my wrist, and go to sleep. And then, when an alert happened, it would fully discharge the battery on me, every time. JOE: Okay, I have to admit, I did not see where that was going. SEIF: Yeah, I did that for a while; it definitely did not save my relationship either. But eventually, that was the point where we started looking into other observability tools like Datadog, et cetera. And that's where the actual journey began, where we moved away from Elasticsearch and Kibana to look for something that we don't have to maintain ourselves, et cetera. So, it's not about the costs as much; it was just pain. VICTORIA: Yeah, pain is a real pain point, actual physical [chuckles] and emotional pain point [laughter]. What motivates you to keep going with Axiom, to keep the wind in your sails and keep working on it? SEIF: There's a couple of things. I love working with my team. Honestly, I just wake up, and I compliment my team. I just love working with them. They're a lot of fun to work with, and they challenge me, and I challenge them back. I upset them a lot, and they can't upset me, but I upset them. But I love working with that team. And the other thing is having this constant feedback from customers: it just makes you want to do more and, you know, close sales, et cetera. It's interesting how I'm a very technical person, and yet I'm more interested in sales, because sales mean your product works, the technical parts and everything else. Because if it's not working technically, you can't build a product on top of it. And if you're not selling it, then what's the point? You only sell when the product is good, more or less, unless you're Oracle. VICTORIA: I had someone ask me about Oracle recently, actually. They're like, "Are you considering going back to it?" And I'm maybe a little allergic to it from having a federal consulting background [laughs]. But maybe they'll come back around. I don't know. We'll see. SEIF: Did you sell your soul back then?
VICTORIA: You know, I feel like I just grew up in a place where that's what everyone did. SEIF: It was Oracle, IBM, or HP back in the day. VICTORIA: Yeah. Well, basically, when you're working on applications that were built in the '80s, Oracle was this hot, new database technology [laughs] that they'd just gotten five years ago. So, yeah, interesting. SEIF: Although, from a database perspective, they did a lot of the innovating; a lot of the first innovations came from Oracle. From a technical perspective, they're ridiculous. I'm not sure how good they are from a product perspective. But I know their sales team is so big, so huge, they don't care about the product anymore. They can still sell. VICTORIA: I think, you know, everything in tech is cyclical. So, if they have the right strategy and they're making some interesting changes over there, there's always a chance [laughs]. For certain use cases, I mean. I think that's the interesting point about working in technology: every company is a tech company, and so there are just a lot of different types of people, personas, and use cases for different types of products. So, I wonder, you kind of mentioned earlier that everyone is interested in Axiom, but are you narrowing the market? How are you trying to focus your messaging and your sales for Axiom? SEIF: I'm trying to focus on developers. We're really trying to focus on developers, because the experience around observability is crap, and it's stupid expensive. Sorry for being straightforward, right? And that's what we're trying to change. We're targeting developers mainly; we want developers to like us. And we'll find all these different types of developers who are using it, and that's the interesting thing. And because of them, we start adding more and more features. Like, we added tracing, and now that enables billions of events pushed through for, again, almost no money: $25 a month for a terabyte of data. And we're doing this with metrics next. And that's just to address the developers who have been giving us feedback, and the market demand. I'll sum it up again: the experience is crap, and it's stupid expensive. I think that's the [inaudible 28:07] of observability; that's how I would sum it up. VICTORIA: If you could go back in time and talk to yourself when you were still a developer, now that you're CTO, what advice would you give yourself? JOE: Besides avoiding shock collars. VICTORIA: [laughs] Yes. SEIF: Get people's feedback quickly, so you know you're on the right track. I think that's very, very, very important. Don't just work in the dark, and don't go too long into stealth mode, because, eventually, people catch up. Also, ship when you're 80% ready, because 100% is too late. I think it's the same thing here. JOE: Ship often and early. SEIF: Yeah, even if it's not fully ready, it's still feedback. VICTORIA: Ship often and early and talk to people [laughs]. And do you feel like, as a developer, you had the skills you needed to get the most out of that feedback and out of those conversations you were having with people around your product? SEIF: I still don't think I'm good enough. You're just constantly learning, right? I've just accepted that I'm part of a team and I have my contributions, but as an individual, I still don't think I know enough.
I think there's more I need to learn at this point. VICTORIA: I wonder, what questions do you have for me or Joe? SEIF: How did you start your podcast, and why the name? VICTORIA: Oh, man, I hope I can answer. So, the podcast was started...I think we're actually about to be at our 500th episode. So, I've only been a host for the last year. Maybe Joe even knows more than I do. But what I recall is that one person at thoughtbot thought it would be a great idea to start a podcast, and then they did it. And it seems like the whole company is obsessed with robots. I'm not really sure where that came from. There used to be a tiny robot in the office, is what I remember. And people started using that as, like, the mascot. And then, yeah, that's it, that's the whole thing. SEIF: Was the robot doing anything useful or just being cute? JOE: It was just cute, and it's hard to make a robot cute. SEIF: Was it a real robot, or was it like a -- JOE: No, there was, at one point, a toy robot. I actually forget the origin of the name, but the name Giant Robots comes from our blog. So, we named the podcast the same as the blog: Giant Robots Smashing Into Other Giant Robots. SEIF: Yes, it's called Transformers. VICTORIA: Yeah, I like it. It's, I mean, now I feel like -- SEIF: [laughs] VICTORIA: We got to get more, like, robot dogs involved [laughs] in the podcast. SEIF: Like, I wanted to add one thing when we talked about, you know, what gets me going. And I want to mention that I have a six-month-old son now. He definitely adds a lot of motivation for me to wake up in the morning and work. But he also makes me wake up regardless of whether I want to or not. VICTORIA: Yeah, you said you had invented an alarm clock that never turns off. Never snoozes [laughs]. SEIF: Yes, absolutely. VICTORIA: I have the same thing, but it's my dog. But he does snooze, actually. He'll just, like, get tired and go back to sleep [laughs]. SEIF: Oh, I have a question. Do dogs have a Tamagotchi phase? Because, like, my son, the first three months was like a Tamagotchi. It was easy to read him. VICTORIA: Oh yeah, uh-huh. SEIF: Noisy but easy. VICTORIA: Yes, yes. SEIF: Now, it's just like, yeah, I don't know, like, the last month he has opinions, at six months. I think it's because I raised him in Europe. I should take him back to the Middle East [laughs]. No opinions. VICTORIA: No, dogs totally have, like, a communication style, you know. I pretty much know what he, I mean, I can read his mind, obviously [laughs]. SEIF: Sure, but that's when they grow a bit. But what about when they were very...when the dog was very young? VICTORIA: Yeah, they, I mean, they also learn, like, your stuff, too. So, they, like, learn how to get you to do stuff or, like, I know she'll feed me if I'm sitting here [laughs]. SEIF: And how much is one dog year, seven years? VICTORIA: Seven years. SEIF: Seven years? VICTORIA: Yeah, seven years? SEIF: Yeah. So, basically, in one year, like, three months, he's already...in one month, he's, you know, seven months old. He's like, yeah. VICTORIA: Yeah. In a year, they're, like, teenagers. And then, in two years, they're, like, full adults. SEIF: Yeah. So, the first month is basically going through the first six months of a human being. So yeah, you pass...the first two days or three days are the Tamagotchi phase that I'm talking about.
VICTORIA: [chuckles] I read this book, and it was, like, to understand dogs, it's like, they're just like humans that are trying to, like, maximize the number of positive experiences that they have. So, like, if you think about that framing around all your interactions, like, maybe you're trying to get your son to do something, you can be like, okay, how do I, like, I don't know, train him that good things happen when he does the things I want him to do? [laughs] That's kind of maybe manipulative but effective. So, you're not learning baby sign language? You're just, like, going off facial expressions? SEIF: I started. I know what Mama looks like. I know what Dada looks like. I know what 'more' looks like, slowly. And he already does this thing that I know that when he's uncomfortable, he starts opening and closing his hands. And when he's completely uncomfortable and basically needs to go to sleep, he starts pulling his own hair. VICTORIA: [laughs] I do the same thing [laughs]. SEIF: You pull your own hair when you go to sleep? I don't have that. I don't have hair. VICTORIA: I think I do start, like, touching my head though, yeah [inaudible 33:04]. SEIF: Azure took the last bit of hair I had! Went away with Azure, Elasticsearch, and the shock collar. VICTORIA: [laughs] SEIF: I have none of them left. Absolutely nothing. I should sue Elasticsearch for this shit. VICTORIA: [laughs] Let me know how that goes. Maybe there's more people who could join your lawsuit, you know, with a class action. SEIF: [laughs] Yeah. Well, one thing I wanted to also just highlight is, right now, one of the things that also makes the company move forward is we realized that in a single domain, we proved ourselves very valuable to specific companies, right? So, that was a big, big milestone for us. And now we're trying to move into a handful of domains and see which one of those works out the best for us. Does that make sense? VICTORIA: Yeah. And I'm curious: what are the biggest challenges or hurdles that you associate with that? SEIF: At this point, you don't want just feedback. You want constructive criticism. Like, you want to work with people who will criticize the applic...and you iterate with them based on this criticism, right? They're not just unhappy with you; you're trying to create design partners. So, for us, it was very important to have these small design partners who can work with us to actually prove ourselves as valuable in a single domain. Right now, we need to find a way to scale this across several domains. And how do you do that without sacrificing? Like, how do you open into other domains without sacrificing the original domain you came from? So, there's a lot of things [inaudible 34:28]. And we are in the middle of this. Honestly, I Forrest Gumped my way through half of this, right? Like, I didn't know what I was doing. I had ideas. I think it's more of luck at this point. And I had luck. No, we did work. We did work a lot. We did sleepless nights and everything. But I think, in the last three years, we became more mature and started thinking more about product. And as I said, like, our CEO, Neil, and Dominic, our head of product, are putting everything behind being a product-led organization, not just a tech-led organization. VICTORIA: That's super interesting. I love to hear that that's the way you're thinking about it. JOE: I was just curious what other domains you're looking at pushing into, if you can say. SEIF: So, we are going to start moving into ETL a bit more.
We're trying to see how we can fit in specific ML scenarios. I can't say more about the others, though. JOE: Do you think you'll take the same approaches in terms of value proposition, like, low cost, good-enough latency? SEIF: Yes, that's definitely one thing. But there's also...so, these are the values we're bringing to the customer. But also, now, our internal values are different. Now it's more of move with urgency and high velocity, as we said before, right? Think big, work small. In terms of the values we're going to take to the customers, it's the same ones. And maybe we'll add some more, but it's still going to be low-cost and large-scale. And, internally, we're just becoming more, excuse my French, agile. I hate that word so much. Same with Scrum. VICTORIA: It's painful, but everyone knows what you're talking about [laughs], you know, like -- SEIF: See, I have opinions here about Scrum. I think Scrum should be only used in terms of iceScrum [inaudible 36:04], or something like that. VICTORIA: Oh no [laughter]. Well, it's a rugby term, right? Like, that's where it should probably stay. SEIF: I did not know it's a rugby term. VICTORIA: Yeah, so it should stay there, but -- SEIF: Yes [laughs]. VICTORIA: Yeah, I think it's interesting. Yeah, I like the being flexible. I like the just, like, continuous feedback and how you all have set up to, like, talk with your customers. Because you mentioned earlier that, like, you might open source some of your projects. And I'm just curious, like, what goes into that decision for you when you're going to do that? Like, what makes you think this project would be good for open source, or when do you think, actually, we need to, like, keep it? SEIF: So, we open source libraries, right? We actually do that already. And some other big organizations use our libraries; even our competitors use our libraries. That we do. The whole product itself, or at least a big part of the product, like the database, I'm not sure we're going to open source that, at least not anytime soon. And if we open source it, it's going to be at a point where the value-add it brings is nothing compared to how good our product is, right? So, if we can replace whatever's at the back...the storage engine we have in the back with something else and the product doesn't get affected, that's when we open source it. VICTORIA: That's interesting. That makes sense to me. But yeah, thank you for clarifying that. I just wanted to make sure to circle back. Since you have this big history in open source, yeah, I'm curious if you see... SEIF: Burning me out? VICTORIA: Burning you out, yeah [laughter]. Oh, that's a good question. Yeah, like, because, you know, we're about to be in October here. Do you have any advice or strategies as a maintainer for not getting burned out during the next couple of weeks besides, like, hiding in a cave without internet access [laughs]? SEIF: Stay away from Reddit and Hacker News. That's my goal for October now because I'm always afraid of getting too attached to an idea, or too motivated, or excited by an idea, that I drift away from what I am actually supposed to be doing. VICTORIA: Last question is, is there anything else you would like to promote? SEIF: Yeah, check out our website; it's at axiom.co. Check it out. Sign up. And comment on Discord and talk to me. I don't bite; I'm sometimes grumpy, but that's just because of lack of sleep in the morning. But, you know, around midday, I'm good.
And if you're ever in Berlin and you want to hang out, I'm more than willing to hang out. VICTORIA: Whoo, that's awesome. Yeah, Berlin is great. I was there a couple of years ago. No plans to go back anytime soon, but maybe I'll keep that in mind. You can subscribe to the show and find notes along with a complete transcript for this episode at giantrobots.fm. If you have questions or comments, email us at hosts@giantrobots.fm. And you can find me on Twitter @victori_ousg. This podcast is brought to you by thoughtbot and produced and edited by Mandy Moore. Thanks for listening. See you next time. Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at tbot.io/referral. Or you can email us at referrals@thoughtbot.com with any questions. Special Guests: Joe Ferris and Seif Lotfy.
Welcome to the newest episode of The Cloud Pod podcast! Justin, Ryan, Jonathan, and Matthew are your hosts this week. Join us as we discuss all things cloud, AI, the upcoming Google AI Conference, AWS Console, and Duet AI for Google Cloud. Titles we almost went with this week:
Welcome to the newest episode of The Cloud Pod podcast! Justin, Ryan, Jonathan, Matthew, and Peter are your hosts this week as we discuss all things cloud and AI. Titles we almost went with this week: The Cloud Pod is better than Bob's Used Books The Cloud Pod sets up AWS notifications for all The Cloud Pod is non-differential about privacy in BigQuery The Cloud Pod finds Windows Bob The Cloud Pod starts preparing for its Azure Emergency today A big thanks to this week's sponsor: Foghorn Consulting, which provides top-notch cloud and DevOps engineers to the world's most innovative companies. Initiatives stalled because you have trouble hiring? Foghorn can be burning down your DevOps and Cloud backlogs as soon as next week.
Welcome to the newest episode of The Cloud Pod podcast! Justin, Ryan, and Matthew are your hosts this week as we discuss all the latest news and announcements in the world of the cloud and AI. Do people really love Matt's Azure know-how? Can Google make Bard fit into literally everything they make? What's the latest with Azure AI and their space collaborations? Let's find out! Titles we almost went with this week: Clouds in Space, Fictional Realms of Oracles, Oh My. The Cloud Pod streams Lambda to the cloud A big thanks to this week's sponsor: Foghorn Consulting, which provides top-notch cloud and DevOps engineers to the world's most innovative companies. Initiatives stalled because you have trouble hiring? Foghorn can be burning down your DevOps and Cloud backlogs as soon as next week.
Pastor Doug teaches us from Acts 22:22-23:11 that when we stand for Jesus, he stands with us. To learn more about Parkview Church, please visit parkviewchurch.org.
Why Does Allah Test Us? | East London Mosque | Shaykh Dr. Yasir Qadhi
This week we discuss Cloud Earnings, ChatGPT Prompts and the OpenTelemetry controversy. Plus, thoughts on refrigerating eggs… Watch the YouTube Live Recording of Episode 400 (https://youtube.com/live/3lko4YjndKY?feature=share) Runner-up Titles You don't want to toy with food poisoning Do you put all your eggs in one basket? The answer to this and every question is ChatGPT This is just Bing Bullshit as a Service It doesn't matter if it's right, it's fine Bad dog The answer to every question is ChatGPT Linux under the desktop Rundown Why Does the U.S. Refrigerate Eggs When Much of the World Doesn't? (https://www.organicvalley.coop/blog/why-does-us-refrigerate-eggs/) Earnings and Outlook Cloud Giants Update (https://twitter.com/jaminball/status/1621260249016434691?s=46&t=E1TVgOcjzgZuJPndlhI9NA) Red Hat OpenShift making money (https://twitter.com/adamhjk/status/1618795275665162247?s=56&t=lB7BRczZa4_zVz6n4aAjIg) Gartner: Overall IT Spend Has Slowed. But Software? That's Still Growing. (https://www.saastr.com/gartner-overall-it-spend-has-slowed-but-software-still-growing/) Cloud Earnings (https://www.cnbc.com/2023/02/02/amazon-aws-earnings-q4-2022.html) The Big Tech Rebound Is Underway (https://www.bigtechnology.com/p/the-big-tech-rebound-is-underway) Cloud leaders Amazon, Google and Microsoft show the once-booming market is cooling down (https://www.cnbc.com/2023/02/04/amazon-google-microsoft-show-slowing-growth-in-cloud-infrastructure.html) The Four Horsemen of the Tech Recession (https://stratechery.com/2023/the-four-horsemen-of-the-tech-recession/) Microsoft offers lackluster guidance, says new business growth slowed in December (https://www.cnbc.com/2023/01/24/microsoft-msft-earnings-q2-2023.html) FY23 Q2 - Press Releases - Investor Relations - Microsoft (https://www.microsoft.com/en-us/investor/earnings/fy-2023-q2/press-release-webcast) The On-Premises Empire Strikes Back At AWS (https://www.nextplatform.com/2023/02/06/the-on-premises-empire-strikes-back-at-aws/) A.I. OpenAI has hired an army of contractors to make basic coding obsolete (https://www.semafor.com/article/01/27/2023/openai-has-hired-an-army-of-contractors-to-make-basic-coding-obsolete) Google has developed a music-making AI bot (https://mashable.com/article/google-ai-bot-music) Why does ChatGPT constantly lie? (https://noahpinion.substack.com/p/why-does-chatgpt-constantly-lie) Who will compete with ChatGPT? Meet the contenders (https://venturebeat.com/ai/who-will-compete-with-chatgpt-meet-the-contenders-the-ai-beat/) OpenAI API (https://platform.openai.com/ai-text-classifier) Infrastructure-as-Code Generator (https://github.com/gofireflyio/aiac) Code-generating platform Magic challenges GitHub's Copilot with $23M in VC backing (https://techcrunch.com/2023/02/06/magic-dev-code-generating-startup-raises-23m/) Microsoft to Invest $10 Billion in OpenAI, the Creator of ChatGPT (https://www.nytimes.com/2023/01/23/business/microsoft-chatgpt-artificial-intelligence.html) Google Calls In Help From Larry Page and Sergey Brin for A.I. 
Fight (https://www.nytimes.com/2023/01/20/technology/google-chatgpt-artificial-intelligence.html) Claims Datadog asked developer to kill open source data tool (https://www.theregister.com/2023/02/02/datadog_opentelemetry_tool_dorman/) Everybody Gets Fired Eventually (https://www.thecloudcast.net/2023/02/everybody-gets-fired-eventually.html) (The Cloudcast Podcast) This FTX Slide has been nominated as Slide of the Year (https://assets.bwbx.io/images/users/iqjWHBFdfxIU/iD_V.CWTliQc/v0/-1x-1.png) Relevant to your Interests Slicing Cash Flows for Better Ratings (https://www.bloomberg.com/opinion/articles/2023-01-18/slicing-cash-flows-for-better-ratings#xj4y7vzkg) Twitter Manager: Daily Revenue Has Dropped 40%, 500 Top Advertisers Have Left (https://www.theinformation.com/articles/twitter-manager-daily-revenue-has-dropped-40-500-top-advertisers-have-left?utm_source=newsletter&utm_medium=email&utm_campaign=newsletter_axioslogin&stream=top) $NOW CEO Bill McDermott on 2023 IT Spend (https://twitter.com/upholdings/status/1616103812372267008?s=46&t=WKIJg71CxhnkC9pPlPnHPg) Amazon to Wind Down Charity-Donation Program AmazonSmile (https://www.wsj.com/articles/amazon-to-wind-down-charity-donation-program-amazonsmile-11674144274) HPE and Oracle Solaris suit ends with hushed settlement (https://www.theregister.com/2023/01/19/hpe_and_oracle_lawsuit_ends/) Cloud growth slowing as customers get a dose of cost reality (https://www.theregister.com/2023/01/19/cloud_growth_slowdown_as_customers/) How We Learned to Be Lonely (https://www.theatlantic.com/family/archive/2023/01/loneliness-solitude-pandemic-habit/672631/) Kevin Kelly: The Case for Optimism (https://www.warpnews.org/premium-content/kevin-kelly-the-case-for-optimism/) Activist investor Elliott sets its sights on Salesforce (https://www.axios.com/2023/01/23/activist-elliott-salesforce-benioff) AWS expanding site of infamously flaky US-EAST-1 region (https://www.theregister.com/2023/01/23/aws_expanding_infamous_useast1_region/) Amazon-Stripe partnership accelerates ecommerce and streamlines online payments (https://stripe.com/en-es/newsroom/news/amazon-and-stripe) Undo — Chartr: Data Storytelling (https://read.chartr.co/newsletters/2023/1/23/undo) VMware 2023 Predictions: Platform Engineering Improves Developer Experience, Tech Layoffs Solve Enterprise Talent Gaps (https://vmblog.com/archive/2023/01/24/vmware-2023-predictions-platform-engineering-improves-developer-experience-tech-layoffs-solve-enterprise-talent-gaps.aspx) LastPass owner GoTo says hackers stole customers' backups (https://techcrunch.com/2023/01/24/goto-customer-backups-stolen-lastpass/) Subject: Focusing on our short- and long-term opportunity - The Official Microsoft Blog (https://blogs.microsoft.com/blog/2023/01/18/subject-focusing-on-our-short-and-long-term-opportunity/) Slack's second chance (https://open.substack.com/pub/mostlycloudy/p/slacks-second-chance?r=2d4o&utm_medium=ios&utm_campaign=post) Internal Developer Portal: What It Is and Why You Need One (https://thenewstack.io/internal-developer-portal-what-it-is-and-why-you-need-one/) Microsoft Outlook and Teams down for thousands around world (https://www.bbc.com/news/technology-64397643) U.S. 
Accuses Google of Abusing Monopoly in Ad Technology (https://www.nytimes.com/2023/01/24/technology/google-ads-lawsuit.html?smid=nytcore-ios-share&referringSource=articleShare) Microsoft set to face EU antitrust probe over video calls (https://www.politico.eu/article/microsoft-european-union-antitrust-video-calls-software-giant/?utm_source=newsletter&utm_medium=email&utm_campaign=newsletter_axioslogin&stream=top) Replacing a SQL analyst with 26 recursive GPT prompts | Patterns (https://www.patterns.app/blog/2023/01/18/crunchbot-sql-analyst-gpt/) Defying logic, Apple announces 2nd-gen HomePod for $299 (https://www.macworld.com/article/1476747/homepod-2nd-gen-audio-siri-feaures-sensors.html) ADS-B Exchange Sells Up, Contributors Unhappy (https://hackaday.com/2023/01/26/ads-b-exchange-sells-up-contributors-unhappy/) Confluent : Message to Confluent Employees from Jay Kreps - Form 8-K (https://www.marketscreener.com/quote/stock/CONFLUENT-INC-124047168/news/Confluent-Message-to-Confluent-Employees-from-Jay-Kreps-Form-8-K-42820152/) FBI shuts down ransomware gang that targeted schools and hospitals (https://www.washingtonpost.com/national-security/2023/01/26/hive-ransomware-fbi-doj) Reduce Kubernetes spend with these 10 Kubecost alternatives | TechTarget (https://www.techtarget.com/searchitoperations/tip/Reduce-Kubernetes-spend-with-these-10-Kubecost-alternatives) The 'Enshittification' of TikTok (https://www.wired.com/story/tiktok-platforms-cory-doctorow/) Why Corporate America Still Runs on Ancient Software That Breaks - Odd Lots (https://omny.fm/shows/odd-lots/why-corporate-america-still-runs-on-ancient-softwa) Why are so many tech companies laying people off right now? (https://www.theverge.com/2023/1/26/23571659/tech-layoffs-facebook-google-amazon) Salesforce Announces Appointment of Three New Independent Directors (https://investor.salesforce.com/press-releases/press-release-details/2023/Salesforce-Announces-Appointment-of-Three-New-Independent-Directors/default.aspx?utm_source=newsletter&utm_medium=email&utm_campaign=newsletter_axioslogin&stream=top) 1Password announces multiple improvements coming soon to its iOS app (https://9to5mac.com/2023/01/30/1password-announces-multiple-improvements-coming-soon-to-its-ios-app/) Identity management platform Saviynt secures $205M in debt, appoints new CEO (https://techcrunch.com/2023/01/31/identity-management-platform-saviynt-secures-205m-in-debt-appoints-new-ceo/) Artifact (https://artifact.news/) GitHub says hackers cloned code-signing certificates in breached repository (https://arstechnica.com/information-technology/2023/01/github-says-hackers-cloned-code-signing-certificates-in-breached-repository/) Introducing Hermes, An Open Source Document Management System (https://www.hashicorp.com/blog/introducing-hermes-an-open-source-document-management-system) Introducing Helios, HashiCorp's New Design System (https://www.hashicorp.com/blog/introducing-helios-hashicorp-s-new-design-system) Kubernetes is great, but it's been a 7 year distraction (https://newsletter.cote.io/p/kubernetes-is-great-but-its-been?utm_source=post-email-title&publication_id=50&post_id=100430955&isFreemail=true&utm_medium=email) Former Ubiquiti dev pleads guilty to trying to extort his employer (https://www.bleepingcomputer.com/news/security/former-ubiquiti-dev-pleads-guilty-to-trying-to-extort-his-employer/) Charted: Hardest hit in tech layoffs (https://www.axios.com/newsletters/axios-login-040e9788-23b3-4353-912c-6c9750e3e82f.html?chunk=2&utm_term=emshare#story2) Silicon Valley
needs to stop laying off workers and start firing CEOs (https://www.businessinsider.com/fire-blame-ceo-tech-employee-layoffs-google-facebook-salesforce-amazon-2023-2?r=US&IR=T) Visa vs. AMEX (https://twitter.com/anshgupta64/status/1619538351127937027) Musk's Twitter Has Just 180,000 U.S. Subscribers, Two Months After Launch (https://www.theinformation.com/articles/musks-twitter-has-just-180-000-u-s-subscribers-two-months-after-launch) Coté ponders IBM what-if in AI (https://newsletter.cote.io/p/catatonic-leadership?utm_source=substack&publication_id=50&post_id=97661373&utm_medium=email&utm_content=share&triggerShare=true&isFreemail=true) Appliance makers sad that 50% of customers won't connect smart appliances (https://arstechnica.com/gadgets/2023/01/half-of-smart-appliances-remain-disconnected-from-internet-makers-lament/) Broadcom's VMware battle plan is to challenge hyperscalers (https://www.theregister.com/2023/02/01/broadcom_vmware_update/) PagerDuty Layoffs Affect 7 Percent Of Workforce (https://www.crn.com/news/cloud/pagerduty-layoffs-affect-7-percent-of-workforce?utm_source=newsletter&utm_medium=email&utm_campaign=newsletter_axioslogin&stream=top) Meta Pressures Average-Rated Employees to Up Their Game (https://www.theinformation.com/articles/meta-pressures-average-rated-employees-to-up-their-game) Spotify to Shed 6% of Its Work Force in Latest Round of Tech Layoffs (https://www.nytimes.com/2023/01/23/business/spotify-layoffs.html) The Job Market for Remote Workers Is Shrinking (https://www.wsj.com/articles/the-job-market-for-remote-workers-is-shrinking-11674526943) Big Tech Is Really Bad at Firing People (https://www.wired.com/story/google-meta-big-tech-is-bad-at-firing/) Zoom layoffs impact 15% of staff (https://techcrunch.com/2023/02/07/zoom-layoffs-impact-15-of-staff/) The Newer Geography of Jobs (https://arpitrage.substack.com/p/the-newer-geography-of-jobs?utm_source=direct&utm_campaign=post&utm_medium=web) Nonsense Missing radioactive capsule found in Australia (https://www.bbc.com/news/world-australia-64481317) Boeing delivers its final 747 jet today, ending a run of more than 50 years (https://www.npr.org/2022/12/08/1141578966/boeing-747-last-jet) King Charles will not appear on new Australia $5 note (https://www.bbc.com/news/business-64493849) Donkey Kong cheating case rocked by photos of illicit joystick modification (https://arstechnica.com/gaming/2023/02/did-billy-mitchell-use-this-illicit-joystick-to-set-a-donkey-kong-high-score/) Sponsor The New Stack — Subscribe to The New Stack Makers Podcast (https://thenewstack.io/podcasts/). Conferences Southern California Linux Expo (https://www.socallinuxexpo.org/scale/20x), Los Angeles, March 9-12, 2023, featuring Matt (https://www.socallinuxexpo.org/scale/20x/presentations/kubernetes-cloud-cost-monitoring-opencost-optimization-strategies) & Coté (https://www.socallinuxexpo.org/scale/20x/presentations/lessons-learned-7-years-running-developer-platforms)! Use discount code DEVOP, and get 50% off with the code SPEAK. Coté and Matt are arranging a live recording. PyTexas 2023, Austin, TX, April 1-2, 2023 (https://www.pytexas.org) DevOpsDays Birmingham, AL 2023 (https://devopsdays.org/events/2023-birmingham-al/welcome/), April 20-21, 2023 DevOpsDays Austin 2023 (https://devopsdays.org/events/2023-austin/welcome/), May 4-5 SDT news & hype Join us in Slack (http://www.softwaredefinedtalk.com/slack). Get a SDT Sticker!
Send your postal address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you free laptop stickers! Follow us on Twitch (https://www.twitch.tv/sdtpodcast), Twitter (https://twitter.com/softwaredeftalk), Instagram (https://www.instagram.com/softwaredefinedtalk/), Mastodon (https://hachyderm.io/@softwaredefinedtalk), LinkedIn (https://www.linkedin.com/company/software-defined-talk/) and YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured). Use the code SDT to get $20 off Coté's book, Digital WTF (https://leanpub.com/digitalwtf/c/sdt), so $5 total. Become a sponsor of Software Defined Talk (https://www.softwaredefinedtalk.com/ads)! Recommendations Brandon: Husky 5-Tier Industrial Duty Steel Freestanding Garage Storage Shelving Unit (https://www.homedepot.com/p/Husky-5-Tier-Industrial-Duty-Steel-Freestanding-Garage-Storage-Shelving-Unit-in-Black-90-in-W-x-90-in-H-x-24-in-D-N2W902490W5B/319132842) and Hand Mirror (https://handmirror.app) Matt: StarFive VisionFive 2 (https://www.kickstarter.com/projects/starfive/visionfive-2), RISC-V has arrived! Photo Credits Header (https://unsplash.com/photos/dTgyj9okQ_w) CoverArt (https://labs.openai.com/s/1twM82RtWf5pjWk9fWJ7g0qS)
Google calls on its founders / Paid ChatGPT / Fanless cooling / Nvidia Broadcast eye contact / Green comet in 10 days / Hydrogen network in Spain. Sponsor: Your sleep's best friends are back, because Morfeo.com is running big sales. So this week you'll have the best chance to save €200-300 or even more. Remember that shipping is free and within 24 hours, and you get a 100-day no-commitment trial. I won't stop until every mixx.io reader has one. Google calls on its founders / Paid ChatGPT / Fanless cooling / Nvidia Broadcast eye contact / Green comet in 10 days / Hydrogen network in Spain / Satellite finds penguins by their poop
This episode is sponsored by Fiberplane, your platform for collaborative debugging notebooks! Episode Resources: Try Fiberplane here, Fiberplane website, Fiberplane Docs, NP-Hard Ventures. About Micha Hernandez van Leuffen: Micha Hernandez van Leuffen is the founder and CEO of Fiberplane. He previously founded Wercker, a container-native CI/CD platform that was acquired by Oracle. Micha has dedicated his career to improving the workflows of developers. Read the whole episode (Transcript) [If you want, you can help make the transcript better, and improve the podcast's accessibility via Github. I'm happy to lend a hand to help you get started with pull requests, and open source work.] [00:00:00] Michaela: Hello and welcome to the Software Engineering Unlocked Podcast. I'm your host, Dr. McKayla, and today I have the pleasure to talk to Micha Hernandez van Leuffen. He is the founder and CEO of Fiberplane. He previously was the founder of Wercker, a container-native CI/CD platform that was acquired by Oracle. Micha has dedicated his career to improving the workflow of developers, so he and I have a lot to talk about today. I'm really, really happy that he's here today, and he's also sponsoring today's episode. Welcome to the show. I'm happy that you're here, Micha. [00:00:36] Micha: Thank you for having me. Excited to be on the show. [00:00:38] Michaela: Yeah, I'm really, really excited. So, Micha, I wanted to start really from the beginning. So you are the CEO of Fiberplane, and you were the founder of Wercker, which you already sold. So, can you tell me a little bit about how you actually started on this entrepreneurial journey of yours and what brought you to the developer experience area? [00:01:03] Micha: Yeah, sure thing. So I have a background in computer science. So, I'm originally from Amsterdam, but I did my thesis at USF. And the topic was autonomous resource provisioning using software containers. This was all before Docker was a thing, you know, the container format that we now know and love. And I sort of got excited by that field of, of, so, containers and decided to start a company around it. That company was Wercker, a container-native CI/CD platform. So we helped developers build, test, and deploy their applications to the cloud. We went, I would say, so we went through various iterations of the platform. You know, eventually, you know, we started off with LXC as a container format and then eventually ended up, you know, having to re-platform onto Docker and Kubernetes. But, you know, it was quite a, quite a journey. So that company eventually got acquired by Oracle to bolster their cloud-native strategy. And then, you know, spent a couple years in the Bay Area as a VP of software development focusing on their cloud-native efforts. Tried to do a little bit of open source there as well, and then, you know, moved back to Europe. And so sort of started thinking about what's next. Did some angel investing. We're still doing some angel investing as well, actually, in the sort of same arena. So developer tools, infrastructure, building blocks for tomorrow. So I run a, a small pre-seed/seed fund with two other friends of mine. But then also started, you know, thinking about what to build next. And you know, we can get into that, but sort of from our experience of running Wercker, this sort of large distributed system, sort of Fiberplane was, was born. [00:02:26] Michaela: Cool. Yeah. And so how, how was the acquisition for you?
I, from the time, I'm, you said you were studying at the university, but then did you, right out of university, start Wercker, or maybe already while you were, yeah, more or less, studying? [00:02:40] Micha: Yeah, yeah, more or less just out of university. So it was around 2012, 2013. And then, you know, expanded the team. Of course we got an office in San Francisco and, and London. And then in 2017 we got acquired by Oracle. [00:02:56] Michaela: Oh, very cool. Wow. Cool. So, and you were the, you were the founder of that and also probably CTO, CEO. At, at the beginning, were you a one-person shop, or was it, I have this idea and I get some funding and I already, you know, have a team when I'm starting out, or was it more a bootstrapped way? How, how was that? [00:03:16] Micha: Yeah, yeah. In both cases, both Fiberplane and, and, and Wercker, we got some funding early on. Then eventually got a CTO. For Wercker it was one of the co-founders of, of OpenStack. So also, you know, very early in the, in that sort of, mm-hmm, container and, and cloud infrastructure journey. And then for Fiberplane, yeah, there, there's no CTO. I'm, I'm both CEO and CTO, I guess, at the same time. [00:03:38] Michaela: Yeah. Cool, cool. Can you tell me a little bit about Fiberplane? What is Fiberplane? You know, what does, what does it have to do with containers and with developer experience? What, what kind of a product is it? [00:03:51] Micha: Yeah, sure thing. So, so, I guess coming back to the Wercker days, right? So we, we, you know, we're running this distributed system, CI/CD, so we were also running users' arbitrary code. You know, any, any sort of job could happen on the platform, on top of Kubernetes, inside of containers. So one of the things that, you know, stuck with me was it was very hard to always sort of debug the system, like, figure out what's really going on when we had some kind of issue. You know, we were going back and forth between metrics, logs, traces, trying to figure out what is the root cause of an issue. So sort of that, that was sort of one thing. So we were thinking a lot about, you know, surely there must be a better way to, to, to help you on this, on this journey. The other thing that I started thinking about a lot was sort of just challenging the assumption of the dashboard, mm-hmm. So if you think about it, like, a lot of the monitoring and observability tools are modeled after the dashboard, like, sort of a cockpit-like view of your infrastructure. But I'd say that those are great for the known knowns. So a dashboard is great. You set it up in advance, you know exactly what's gonna go wrong. These are the things to monitor, these are the things, you know, to keep tabs on. But then reality hits, and, you know, the thing that you're looking at on the dashboard is not necessarily the thing that's going wrong, right? So I started thinking a lot about, you know, what, what is a better form factor to support that sort of more investigative, explorative debugging of your infrastructure. And not to say that dashboards don't have their place, right? It's, like, still that sort of cockpit view of your infrastructure. I think that's a, a good thing to have. But for debugging, you might want a sort of more explorative form factor that also gives you actionable intelligence.
I think the other thing that you see a lot with dashboards: everybody's monitoring everything, and now you get a lot of signal and a lot of inputs, but not necessarily the actionable intelligence to figure out what's going on. So that's sort of the other piece. And then the third, like, I would say, is collaboration, the sort of thing that stuck with me. It was also, like, we've come to enjoy tools like Notion, you know, Google Docs obviously. You know, in the design space we got Figma, where collaboration is built in from the get-go. And I found that it was kind of odd how in developer tools, and then sort of specifically DevOps, we don't really have sort of this collaboration built in, right? If you think about it, you know, the status quo of, of you and I debugging an issue is we get on, you know, we get on a call. You share your screen, you open some dashboard, and we start talking over it or something, right? And so it's, and it's, you know, I guess sort of COVID accelerated this thinking a bit, but, you know, with everybody going remote, you know, how can you make that experience more collaborative? [00:06:22] Michaela: Mm-hmm. So it's in the incident space, it's in the monitoring space, and you want to bring more collaboration. So how does it work? What's your solution? [00:06:32] Micha: Yeah, yeah, yeah, exactly. Now I've explained sort of the inception. Yeah. But yeah, but what is it? What is it? Right. So it's, it's, it's a notebook form factor. So very much inspired by data science, right? Like, like Jupyter. Yeah, like Jupyter Notebooks. Think of, think of that form factor. Mm-hmm. We don't use Jupyter or anything like that; we've written everything from scratch. But it's a sort of, yeah, a notebook form factor, and, you know, built in with collaboration. So you can at-mention people like you would on Slack. You can leave, you know, comments or discussions, and all, and all that. But where it gets interesting, we've got these things called providers, which are effectively plugins. So they're WebAssembly bundles, which we can sort of dive into, into that as well. But they're providers that connect to your infrastructure, right? So we have, for instance, a provider for Elasticsearch for your logs. We have a provider for Prometheus for your metrics. And it allows you to connect to these observability systems and kind of pull them together into one form factor, the notebook, and then, you know, start collaborating around that. Mm-hmm. So, you know, imagine if Notion and Datadog had a baby. Yeah. That's kind of what you get. Yeah.
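The conversation only says that providers are WebAssembly bundles, so the sketch below is a generic illustration of that idea rather than Fiberplane's actual provider ABI: a host program loads a WebAssembly module in Python with the wasmtime runtime and calls an exported function, the same host-loads-plugin shape a provider would use. The tiny module and its "add" export are invented for the example.

from wasmtime import Engine, Store, Module, Instance

# A toy plugin: a WebAssembly module (in WAT text form) exporting one function.
WAT = """
(module
  (func (export "add") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
"""

engine = Engine()
store = Store(engine)
module = Module(engine, WAT)            # compile the plugin bundle
instance = Instance(store, module, [])  # instantiate it with no imports

# Look up and call the exported function, the way a host app would call
# into a provider to fetch or transform data.
add = instance.exports(store)["add"]
print(add(store, 2, 3))  # -> 5

A real provider would presumably export functions that take a query and a time range and return structured records, but the loading-and-calling mechanics are the same.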
[00:07:41] Michaela: That's cool. So I can imagine that. Let's say I'm on call, and hopefully I'm not alone on call. You are also on call, right? Yeah, and so we would open a Fiberplane notebook. [00:07:52] Micha: Hopefully we're in the same time zone and we don't need to, like, wake up in the middle of the night. Yeah. [00:07:57] Michaela: Hopefully. Yes. And then we want to understand how the system is behaving. And so we are pulling in observers. These are data sources, yeah, more or less, right? And then we can do some transformation with those data, data sources, or... [00:08:12] Micha: Yeah, yeah. That, yeah, exactly. That, that might be the case. The other thing that we integrate with is, for instance, PagerDuty. So an alert goes off, indeed we are on call, but an alert goes off, and we have this PagerDuty integration, and subsequently a notebook is created for us already. Maybe, maybe even with, you know, some, some charts and logs that are already related to the service that might be down. Okay. So depending on the alert, obviously, you know, you're as good as how you've instrumented your alerts. But say we've written some good alerts; we now have a notebook ready to go, based off a template. So that's another thing that we have as well, this template mechanism. And now, you know, we're ready to, to, to go in, get into things, and start debugging. So we might have a checklist, you know, you look at the metrics, I'll look at the logs, sort of this action plan. We pull in that data, we start a discussion around it. Mm-hmm. Hopefully we come, we come to the, to the, you know, the root cause of, of our issue. [00:09:11] Michaela: Okay. And so this discussion and this pulling in of data, this happens all in the notebook. Can you explain to me a little bit more, and also our listeners, exactly: when we are on this, you know, on this call now, having a Fiberplane notebook in front of us, what do we see, right? How does that, how does the tool look? [00:09:28] Micha: It's, it's very similar to, I would say, like, a Notion page or a Google Docs page. Mm-hmm. So we've got, like, different, different headings. The other thing that we have is, so you might have a title for a notebook, right? You know, the billing, the billing API is down. The other thing that we have is sort of this, this time range. So maybe, usually when there's an issue, you know, we've seen this behavior over the last three hours, so we can sort of have that time range locked into place. So we only want to see our data for the last three hours. And that means that any chart that we plot or any log that we pull in will adhere to that global timeframe. So that's what we see. Mm-hmm. We have support for labels, so, you know, obviously big fans of Kubernetes and, and Prometheus. So we, you know, labels are a first-class primitive on the platform. So you're able to sort of populate the notebook with the labels that might be related to our service, right? So it's US East 1, which is our region. It might, you know, say the service is billing. It might be, you know, environment is production. And the status of our incident is, mm-hmm, ongoing, stuff like that. So we have, we've got, go ahead. [00:10:34] Michaela: Cool. Yeah. And, and so is it then from top to bottom we are writing, and we are investigating, and we are writing down the questions that we have and the investigation? [00:10:44] Micha: Yeah. Yeah, exactly. [00:10:45] Michaela: We might have, is it, yeah, is it... [00:10:48] Micha: Yes, it's sort of, exactly. And I think in the most ideal use case, right, in the most ideal scenario, you're kind of, like, writing your postmortem as you go along. [00:10:59] Michaela: That's what I, I was thinking, exactly that. Right. And then maybe next time I'm on call again and I get a PagerDuty alert and something is down, it's again billing. Can I search in the Fiberplane notebooks to find, you know, what we did last time, and then... [00:11:13] Micha: Exactly. So you'll, you'll search, jump to the conclusion [laughs]. Yeah, exactly. Hopefully, hopefully, if you, if you experience, you know, the same issue multiple times, at some point, we'll, we'll, you know, do a little commit on GitHub and we, we fix our issue. But yeah, indeed, so you can search on the notebooks and see if you've, you know, run into similar issues. So that's, you know, it's great for building up this, this system of record, right? This knowledge base, mm-hmm, of infrastructure issues and, and incidents. And it's also great for onboarding, right? If a new person joins, like, this is our process, these are some examples that we've run into, you know, have a look. And now you've got a sense for, you know, how we, how we handle things and some of the issues that we've investigated.
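To make the "global timeframe" idea above concrete: a minimal sketch of what a metrics provider might do under the hood, using Prometheus's public query_range HTTP API. The Prometheus address and the metric names are assumptions for illustration; Fiberplane's actual provider protocol isn't shown here.

import time
import requests

PROM_URL = "http://localhost:9090"  # assumed Prometheus address

# The notebook's global time range: set once, inherited by every query,
# the way every chart in a notebook adheres to one window.
END = time.time()
START = END - 3 * 3600  # "the last three hours"

def query_range(promql):
    # /api/v1/query_range is Prometheus's standard range-query endpoint.
    resp = requests.get(
        f"{PROM_URL}/api/v1/query_range",
        params={"query": promql, "start": START, "end": END, "step": "60s"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["data"]["result"]

# Two "cells" sharing the same window, like two charts in one notebook.
errors = query_range('rate(http_requests_total{service="billing",code="500"}[5m])')
p99 = query_range('histogram_quantile(0.99, rate(http_request_duration_seconds_bucket{service="billing"}[5m]))')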
Yeah. Cool. One more thing on the, on the product. So, you know, I sort of explained the, the notebook form factor. We've got these providers, right, that pull in data from different, different data sources, like Elasticsearch or, or Prometheus. The other thing that we have is a command line interface, which is called fp. Mm-hmm. And apart from, you know, being able to create notebooks from your terminal, and, you know, even invite people from the terminal, all this sort of usual stuff that you would, you know, expect from interacting with an API, with, with a product like this, there are two other things that we do. So one is a command called fp run. And it allows you to, if you are typing a command like kubectl logs for a specific pod, you can pipe the output of that command to a notebook. And why that is useful: of course, you know, when we're debugging this issue, you and I, and you start typing things in your terminal, I have no idea what you just did. And this is a way sort of to capture that. So you're piping these, these outputs from the stuff that you're typing in your terminal into the notebook. And the cool thing is, you know, on your laptop you just, you know, sort of see text, right? Monospace output. But for certain outputs, such as the kubectl logs command, we actually know the structure of the data, and we're actually capable of formatting that in the notebook in a structured manner, so that you can start filtering on, on the logs, and, you know, select certain columns, and sort of even highlight certain log lines for posterity, to say, hey, these are the culprits, these are the things that you need to take into, into consideration next. So we have this sort of command line interface companion. And the other thing that it does: you're actually capable of running a long, like, sort of the same use case as I just mentioned, but like a long-running recording. Like, you actually record your entire shell session as you're debugging this thing, and all the output gets piped into the notebook.
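The fp CLI is the real interface here; purely to illustrate the shape of the capture-and-forward workflow Micha describes, a hypothetical version of it in Python might look like the sketch below. The notebook endpoint is invented for the example and is not Fiberplane's actual API.

import subprocess
import requests

NOTEBOOK_CELLS_URL = "https://api.example.com/notebooks/abc123/cells"  # hypothetical

def run_and_forward(cmd):
    # Run the command locally and capture its output, the way `fp run`
    # wraps a command you would otherwise type straight into a terminal.
    result = subprocess.run(cmd, capture_output=True, text=True)
    # Forward what was run and what came back, so teammates can see it
    # instead of it living only in one person's scrollback.
    requests.post(
        NOTEBOOK_CELLS_URL,
        json={"command": " ".join(cmd), "output": result.stdout},
        timeout=10,
    )
    return result.returncode

run_and_forward(["kubectl", "logs", "deploy/billing-api", "--tail=100"])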
[00:13:46] Michaela: Cool. Cool. And so I have two questions for Fiberplane. One: is the software engineer the right person to interact with, you know, Fiberplane, or is it the site reliability engineer that the tool is really, you know, designed for? [00:14:03] Micha: Yeah, it's, it's, that's an excellent question. So I think, one, site reliability engineers you kind of see in larger organizations, right, where you start splitting up your teams. I will say, I think at the end of the day, right, if you're an engineer, you've built the service, now you need to maintain it, now you need to operate it. Like, it's, it's your baby, right? You need to, you probably know best how that system behaves, better than anybody else. So indeed, I would say that, yeah, the target group is, you know, developers. Mm-hmm. [00:14:40] Michaela: And so the other question that I had around Fiberplane is also: when we are on this call and we are writing in this notebook, what does the whole scenario look like? Are we still on a call, like, do we have Zoom or, you know, Google Meet open, or are we really in the, in the Fiberplane document, just writing? Or are we sitting next to each other? You know, what, what's the traditional, is there a traditional scenario, or is this all possible with Fiberplane? How would you recommend using it? [00:15:09] Micha: Yeah, yeah. Yeah, that's a great question, right? I think back in the day, it would be that, you know, we maybe sit in the same office and I scoot over and we start looking at a, at a screen, right? And start typing together. Mm-hmm. The reality is, of course, we're all doing remote work now, and we might not be in the same room. So I do think people will still use a Zoom call or a Google Meet, you know, as a companion to talk over stuff. I think, you know, people will still communicate in Slack and sort of start chatting back and forth. But I think what we hope to achieve with Fiberplane is, like, the pasting of screenshots, right? Where you take a screenshot of some kind of chart in your dashboard and you put it in Slack, and, you know, somebody yells, oh, that's not the, that's not the thing that you should be looking at, you should, you know...like, all that sort of Slack glue, you know, it's our, our goal to do away with that. [00:15:59] Michaela: Yeah. And, and the Slack glue is also very problematic for the search. At least I'm never able to find it again. Right. It's like it's in the dark, super in [00:16:07] Micha: the dark area. Yeah. Super ephemeral. Yeah. Yeah. You can't, can't go back in time easily. And, and, you know, how did we solve this last time? So again, like, building up that system of record, I think. [00:16:17] Michaela: Yeah. Very cool. And so how long are you now working on Fiberplane already? [00:16:23] Micha: So we've been working on it for about two years now. Which is a, is a, is a long time. I think, as a sort of, you know, one of the things that we've, I guess, sort of discovered along the way is that we're kind of, like, building two startups at the same time, right? We're doing a Notion, or like a, a collaborative rich-text editing experience, which is kind of like a startup on its own. Mm-hmm. And we're building sort of this infrastructure product. So it's, you know, it's taken quite some time and, and energy to, to get the product to where it is now. [00:16:54] Michaela: Yeah. And do you have already users? Is it like, can people that listen today, can they hop on Fiberplane already, or...? [00:17:02] Micha: It's, it's been in private beta, mm-hmm, but I think by the time this gets aired, it will be in public beta, and people can sign up and take it for a spin. And, you know, we would love to get feedback on, on our roadmap, right? And, okay, people can suggest what other types of providers we need to support, what other types of integrations; we, you know, would love to have that conversation. [00:17:23] Michaela: Cool. Yeah. So is there, there is the provider side. Is there something else that you want feedback on, that you are exploring, maybe? [00:17:30] Micha: Yeah. Yeah. So we've got the providers; that's one thing. We've got sort of our templating stack. Mm-hmm. So curious to sort of see how people sort of start codifying their knowledge, right? What's, what, what kind of processes people have to debug their infrastructure and sort of run their incidents or write their postmortems. So curious to see what people come up with there. Other types of integrations, right?
So we have, as I said, sort of PagerDuty. What other type of, sort of, alert-to-notebook or other types of external systems do we need to plug in with? I would love to get some feedback on that as well. [00:18:04] Michaela: Yeah, I think I had Paige Bailey over on the podcast. She's from GitHub, and she was, they were releasing something with Copilot, and, you know, for data scientists, some, some spaces here. And she also said, like, well, we really need input from the users, right? So try it out, you know, tell us how it's working. I think it's so valuable, right, to see, not only, like, you have your vision and obviously it's going one way, but then if you have your users, sometimes they take your product and they use it in a very different, you know, way than you anticipated, which can be very informative. Right? I dunno. You have done two startups already. Have you seen that? And how do you react to it? Do you instrument the data a little bit? How do you realize that people are using your product in a different way? [00:18:50] Micha: Yeah. So, so obviously we have metrics and analytics on sort of usage patterns of the, of the product. But I think, I think that data is excellent, right? But also qualitative data is, mm-hmm, especially at this stage, probably even better, right? Where you can get somebody on a call and, you know, tell us about your use case, tell us about the problem that you're trying to solve here, and how can we be helpful, like, what types of integrations should we support? I think sort of the difference between Wercker, I would say, and, and Fiberplane is that, you know, Wercker was a pretty confined piece of surface area, right? CI/CD: the whole goal is, you either have a, you know, a green check mark next to your build or a red check mark next to your build. Like, it either, you know, failed or passed. And we need to sort of do that fast for you, get, get that result quick. Mm-hmm. And with Fiberplane, it's a more, I think the interesting thing here is, like, it's a more explorative and a sort of rich design space, right? It's this notebook, which already, you know, you can start typing in text and images and headings and checklists and whatnot, right? It's a very open form factor and design space. And then, of course, with the integrations, it can even, you know, be richer. So I'm very curious, to your point, right, what direction people will pull the product into. Cause you can take it into all sorts of use cases and scenarios. [00:20:05] Michaela: Yeah, exactly. And I think as a founder also, or as the design team, product team, it's, it's also a little bit of a balancing act, right? So how far, you know, are we going with what the users are doing with our product, and where are we setting some boundaries so that they can't do everything, right? So there's also often the talk about opinionated products, right? That you can actually do one thing and one thing only, and we have an opinion on, you know, how you're supposed to use our product. And, you know, if we see people deviate from that, we try to put an end to it. And then there's the other way, where you say, well, you know, if you take Fiberplane and you do X with it and we haven't thought about this, maybe, you know, we are okay with it. Or maybe we even support that path, right? [00:20:48] Micha: Yeah, I think, I think we're more on, indeed, on the latter, right?
I think what we've sort of, we talk about this a lot internally: everything is a building block. You know, we've got the notebook, you've got these different cell types, you've got providers, you've got templates. Mm-hmm. You've got the command line interface. So, like, for us, like, everything is a building block, and we, we actually want to retain that flexibility and not be too prescriptive. Cause maybe, if you think about sort of the, the incident debugging or, or investigating your infrastructure, like, you might have a certain process, I might have a completely different process, and we need to be able to facilitate, you know, these different workflows. So, you know, thus far sort of our, our thinking around the product has been everything is a building block. And it should be this sort of flexible form factor that people can pull into, into different scenarios and use. [00:21:36] Michaela: I mean, we have infrastructure as code, right? And we have, like, security as code. Maybe we have debugging as code. Maybe, you know, this is what's coming next. Can, can you envision that, that it's going in this direction? Because while we have building blocks, maybe right now it's not, you know, a programming language for debugging, but it could go a little bit into this abstraction, right? No-code coding for debugging. [00:22:02] Micha: Yeah, we've actually, we've, we've had some of, of that sort of discussion internally as well. If you think about the templates, right, to, to some extent that is, you know, we use Jsonnet as a, as a sort of language, but we sort of codified them in a certain way, and you could argue that the templates are, you know, sort of a programming language for at least, you know, that debugging process, right? Yeah, exactly. Right. Yeah. And, and you can take that even further and make it kind of like statically typed and make it adhere to, you know, certain rules and maybe even have control flow. So I think that, that there's, there's a piece there. And then maybe, you know, obviously we have, you know, some YAML configuration on how you set up your providers, right? Like, how to connect to your infrastructure. So there's some, you know, observability as code, mm-hmm, in that realm. Yeah. Yeah. I think that'll be an interesting part of the journey, right? Like, to figure out, can we, and some even... [00:22:55] Michaela: Yeah, in some parts it should be, well, don't repeat yourself, right? Like, for example, pulling in these providers, configuring that, you know, I get the right data. This would actually be something that I'm, you know, pulling in again. And probably that's what your templates do, right? So you say billing, oh, and then check, check, check, check, check. I have, you know, all my signals here, and they're configured in a way that it's useful. And then for this investigation, hopefully, it's a one-off type thing, right? So I'm investigating, and as we, as we talked about before, once I realize what's going on, hopefully in my postmortem I'm going to, you know, make sure that this is not happening again. So this code probably is not going to be reused that often. Maybe some, you know, some ideas from it, but hopefully we won't reproduce the same exact thing again.
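Fiberplane's templates are written in Jsonnet, as Micha says; just to make the "debugging as code" idea above concrete, the same shape can be sketched in Python as a function from incident parameters to a prefilled notebook. Every field name below is illustrative, not Fiberplane's actual schema.

def incident_notebook(service, environment, hours=3):
    # A template is a function from parameters to a notebook: call it from
    # a PagerDuty hook and the action plan is waiting before you log on.
    return {
        "title": f"{service} is down",
        "time_range": {"last_hours": hours},
        "labels": {"service": service, "environment": environment, "status": "ongoing"},
        "cells": [
            {"type": "heading", "text": "Action plan"},
            {"type": "checklist", "items": ["Check metrics", "Check logs", "Check recent deploys"]},
            {"type": "provider", "source": "prometheus",
             "query": f'rate(http_requests_total{{service="{service}",code="500"}}[5m])'},
            {"type": "provider", "source": "elasticsearch",
             "query": f"service:{service} AND level:error"},
        ],
    }

notebook = incident_notebook("billing", "production")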
[00:23:44] Micha: Yeah. Cool. Yeah, that's, that's a super great point. And I think coming back to sort of the early part of the conversation around dashboards, right? I think thus far, what we've sort of experienced as, you know, engineers ourselves, like, I think, I think we probably had sort of a phase around information gathering. Like, all these dashboards are great for information gathering, but now with Kubernetes and containers and microservices, like, the, the, the number of services that we're running and the complexity has increased. So I think, I think there's sort of an opportunity for more exactly what you're describing. So it's more about action, right? Mm-hmm. What are we doing? We want to have the information, we want actionable intelligence that informs us what to do. [00:24:20] Michaela: Yeah, yeah, exactly. Because now I'm looking at this dashboard and I'm seeing the signals. But then everything else is outside of, you know, this realm, right? So what actions do I take? Do I go, go to the console? Do I restart that service? You know, or, you know, whatever I'm doing. And, and it's also vanishing, right? So I'm doing it, but then who can see what I did, right? Yeah, exactly. And so now we are capturing this, which is very nice, and then we can learn from it, right? Postmortems as well. Yeah. So I looked a little bit through your blog and, and, and your Twitter, and you were also talking about blameless postmortems. So how do you think about psychological safety? How should people in an organization look at on-call and incident management to really make sure that we are ending the blame game? Right. You probably have some thoughts about that as well, because you're working in this area. [00:25:19] Micha: Yeah. I, I think it's important to, you know, not put any blame on any person. Right. It, it is, and I guess sort of, you know, that's also why we're building this product, it is a collaborative process to debug an issue or resolve an incident. Like, and what you want to achieve is to put the entire team in the best possible position to solve the issue at hand, and, and, you know, a support structure around it. So, you know, coming back to the product, like, being able to, to open discussions, point people in the, in the, in the right direction. [00:25:52] Michaela: So maybe also, if it's easier to find a problem, to root-cause it, and, you know, incidents become no issue or at least a lesser issue, so maybe the blame game is not that important. Can, can we say it that way? [00:26:08] Micha: I think so. Yeah. Yeah, yeah. If, if, you know, if the process becomes repeatable and we codify that and we collaborate on it and we build up that, again, that system of record and knowledge base, I think that, you know, puts us in a safer position to, to solve the next one. [00:26:25] Michaela: That's true. Yeah. Another thing that I was thinking of when I looked through, you know, Fiberplane and what it does is chaos engineering, and I thought, like, chaos engineering is where you try to prevent not only the knowns but also the unknowns, right? So really think about, you know, what, what could go wrong and then, you know, make a fallback so that your system is reliable. Or, you know, if this database goes down, then not the whole system goes down, but only a part of it, and so on. Do you think that chaos engineers can act as a source or, you know, use those notebooks that you're creating as input for knowing, you know, what we should actually look at? And, yeah. [00:27:02] Micha: Well, I think it, well, one thing, I think it'd be a great provider, yeah,
[00:27:37] Michaela: Yeah. So, Micha, one thing I also saw is that some of Fiberplane is open source. What's your vision for open sourcing? Which parts are being open sourced, and can people help with building Fiberplane?

[00:27:51] Micha: Yeah, great question. Right now what we've open sourced is a project called fp-bindgen. It's an SDK bindings generator for creating full-stack WebAssembly plugins — it's what we use to build our own Elasticsearch and Prometheus plugins. It's on GitHub, and we've already gotten quite some feedback on it, but we'd love some more. Going forward we'll be open sourcing our templating stack; the proxy, which you install inside your cluster and which sets up the secure connections between the providers, your infrastructure, and the Fiberplane managed service; and the command line interface that I mentioned will also be open source. So expect to hear more from us on the open source front.

[00:28:36] Michaela: Yeah, cool. I think that's so important, especially for developer tooling, that people can really get it into their hands and then help shape it — make the best product for the environments they have. I think that's such a successful strategy.

[00:28:50] Micha: Yeah, exactly. As I said, we would love to get feedback on the providers and the plugin model, but maybe even, once we open source the provider stack, it would be great if people came up with crazy ideas, right? You can think of any type of provider that could surface data inside of the notebook — it doesn't need to be observability or monitoring data.
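fp-bindgen itself generates Rust/WebAssembly bindings; the hypothetical Python sketch below only illustrates the contract a provider plugin fulfills — take a query, talk to one backend, hand rows back to the notebook. Every name in it is invented, not the fp-bindgen API.

```python
# The provider idea, sketched in Python. The real fp-bindgen project
# generates Rust/WebAssembly bindings; this only illustrates the contract:
# one query in, rows of data out. All names are hypothetical.

from typing import Protocol

class Provider(Protocol):
    def invoke(self, query: str) -> list[dict]:
        """Run one query against a backend and return rows for the notebook."""
        ...

class PrometheusProvider:
    def __init__(self, base_url: str) -> None:
        self.base_url = base_url

    def invoke(self, query: str) -> list[dict]:
        # A real plugin would issue an HTTP request to
        # f"{self.base_url}/api/v1/query" and parse the JSON response;
        # stubbed out here to stay self-contained.
        return [{"metric": "up", "value": 1.0, "query": query}]

def render(provider: Provider, query: str) -> None:
    # Stand-in for the notebook rendering the provider's result cells.
    for row in provider.invoke(query):
        print(row)

render(PrometheusProvider("http://prometheus:9090"), "up")
```

The same shape works for non-observability data — anything that can answer a query could back a provider, which is the "crazy ideas" point above.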
[00:29:14] Michaela: Cool — I'm super excited about what will come out of that. So, I want to come back a little bit to your founding story, because I know a lot of people are interested in developer tools and in startup founding as well. You did it twice already, right? Maybe several more times in your life, I don't know, but right now we know of two instances. And for Fiberplane you already got funding — several million dollars. So how do you do it out of Europe? That's also one of my questions, because I think it's a little bit of a different game here in Europe than it is in Silicon Valley — it doesn't look like opportunities are around every corner. I studied in the Netherlands, so I know the Netherlands is actually a good place for tech startups, also a little bit out of the universities, where you get some help and funding and things like that. But still, I would assume it's harder than in Silicon Valley. So how did you make it work? How did you get funding? You also said that Wercker had some funding at the beginning.

[00:30:26] Micha: Yeah, it's a good question. Well, how did we do it the second time around? To be honest, because it was the second time, it was a bit easier. It's obviously never easy, but it was definitely easier. I do think that in Europe, if I compare the Wercker days to where we are now, the funding climate and the thinking around startups have improved a lot. There's more funding out there, there are more VCs. More importantly, though, the European unicorns have now exited or gone public, so we have more operators inside of Europe with experience founding a startup, who are able to start doing angel investing, or who have worked at multiple startups. We simply have more operating experience now — versus, honestly, bankers who help you out or invest in you. The funds that funded us were Crane Venture Partners, a seed fund out of London focused on developer tools and infrastructure — I would highly recommend talking to them if you're thinking about building a developer tool company and you need funding. Of course, my own fund is also focused on developer tools, so shameless plug there for NP-Hard Ventures; you can just Google that and find me. Then we have Northzone, which is a multi-stage fund out of, well, actually quite different geographies, and Notion Capital out of London as well. We've also got several micro-VCs: on the West Coast, Alana Anderson, who is investing in a lot of infrastructure and enterprise startups with Base Case Capital, and Max Cloud from System.One in Berlin is another one. So yeah, we have a good crew, with different experience and different stages of funding as well.

[00:32:19] Michaela: Yeah, this was my next question for you. It's probably not only about the money — you said experience, right? It's also about the knowledge that people have, how to do things, and probably the people they know, so they can connect you with the right people and you have the right network.

[00:32:36] Micha: Yeah, I think the most important thing is introductions, but it's also this: if you think about the funds that actually do developer tools, in their portfolios they've seen startups trying over and over to tackle some kind of go-to-market issue, or trying to build an open source company. So they have some pattern matching and some knowledge about what to do and what not to do. Of course it's all advice, but it's good to have some people in your corner who have at least seen these types of companies being built, over and over again. And then other VCs have more experience in, say, how to build up or scale a sales organization, or how to run a SaaS company. So: different experience from different funds.

[00:33:20] Michaela: And now you've listed quite a lot of different investors. Do you reach out to each one of them individually, or do you have a whole group meeting where they're all in there and you ask them for advice? How does it work?
[00:33:33] Micha: No, it's sort of one-on-one chats, right? Either over chat, or we meet up for coffee or breakfast. We try to do that on a regular cadence. And then of course, when something exciting happens, such as our launch, we try to group them together and get them all on the same page around the same time. Or, of course, if an issue arises — which could also be the case — then it's all hands on deck and everybody in the same room, or Zoom.

[00:34:01] Michaela: And what about your biggest struggle on your entrepreneurial journey, maybe now with Fiberplane, or maybe with Wercker? When you started Wercker, did you ever think that somebody was going to buy it, and that it was going to be huge?

[00:34:16] Micha: Yeah, I think the ambition was always there, and that drive to just make better developer tools. I think that's been true for both companies.

[00:34:30] Michaela: Yeah. And what did you struggle with?

[00:34:32] Micha: For Fiberplane now it's not necessarily a struggle, it's just the reality of this mission of a flexible form factor: the fact that we're doing sort of two startups at the same time has been an interesting thing to build. You're building this rich, collaborative text editor and trying to build an infrastructure-oriented company at the same time, and I think that's been an interesting experience in building out the team, the technology, and the product.

[00:35:01] Michaela: Yeah. So maybe you can tell me a little bit more: if people want to hop over to Fiberplane now and try it out, how does that work? Is there a sign-up? Is there a waiting list? You said that when this airs there will probably be a public beta, but still: do I have to reach out to you for a demo, or do I just fill in my credentials and I'm off to go?

[00:35:25] Micha: You can just sign up with Google and then you're off to the races. And of course, if you want a demo and some more help or onboarding, we're happy to get on a call and walk you through it. But yeah — try fiberplane.com.

[00:35:40] Michaela: Is there also a video or something that we can look at?

[00:35:44] Micha: Yes, on the website there's a video.

[00:35:50] Michaela: Okay, I will link that so people can go there and it will explain everything to them. What about pricing? Do you already have some ideas around pricing?

[00:36:01] Micha: We've got some ideas on how to charge, but right now for us it's important to get to product-market fit, and as such to get feedback from the companies and teams using the product. So we'll introduce pricing at a later stage. For now it's free to use — you just give us your time and your feedback, and we're grateful.

[00:36:20] Michaela: Yeah. And what about my data? Is it safe with you? Do you have visibility into my data, or do I send it over to you?

[00:36:29] Micha: So, the way the providers — the plugins — work is that they actually get activated through a proxy. We install a proxy inside of your cluster, and the proxy sets up a secure bidirectional tunnel from your infrastructure to the Fiberplane managed service. Then, for a specific query, we do store the data related to that query: the result gets stored in the notebook. And we'll probably come up with more enterprisey ideas around how to self-host that part as well. But again, we'd love to get some feedback on how that should work.
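A minimal sketch of the data flow described here, with every name hypothetical: the proxy dials out from your cluster, serves queries that arrive over that tunnel, and only the results of a query travel back to be stored in the notebook.

```python
# Hypothetical data-flow sketch of the proxy model. Key property: the proxy
# opens an OUTBOUND tunnel from your cluster, queries arrive over it, and
# only per-query results leave the cluster to be stored in the notebook.

class FakeTunnel:
    """Stands in for the secure bidirectional connection a real proxy opens."""
    def __init__(self, requests: list[dict]) -> None:
        self.requests = requests   # queries pushed down from the managed service
        self.sent: list[dict] = [] # results pushed back up
    def __iter__(self):
        return iter(self.requests)
    def send(self, message: dict) -> None:
        self.sent.append(message)

def invoke_stub(query: str) -> list[dict]:
    # Placeholder for a provider call against in-cluster infrastructure.
    return [{"query": query, "value": 1.0}]

def proxy_loop(tunnel: FakeTunnel) -> None:
    """Runs inside the cluster: serve queries from the tunnel, return only results."""
    for request in tunnel:
        rows = invoke_stub(request["query"])
        tunnel.send({"id": request["id"], "rows": rows})

tunnel = FakeTunnel([{"id": 1, "query": "up"}])
proxy_loop(tunnel)
print(tunnel.sent)  # only these result rows would be persisted into the notebook
```

The design choice this illustrates: because the connection is dialed outward, no inbound firewall holes into the cluster are needed, and raw infrastructure access stays behind the tunnel.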
[00:37:07] Michaela: Right, how that works. Yeah, okay, cool. So that sounds really good. You could answer all of my questions — but maybe my listeners have questions, and then they can send them to you. You'd be quite happy about that, right?

[00:37:22] Micha: A hundred percent. I'm @mies on Twitter — m, i, e, s — and the company is @fiberplane on Twitter, at fiberplane.com. Sign up, take it for a spin, shoot us a message.

[00:37:33] Michaela: Yeah, it sounds super interesting. I hope that a lot of my listeners will do that, and I will link everything we talked about in my show notes — your Twitter handle and everything — so that people can reach you. I hope you get a lot of questions, and that people give it a spin and send you their use cases. I wish you all the best with your product. Thank you so much for being on my show today, Micha.

[00:37:59] Micha: Thank you for having me.

[00:38:01] Michaela: Yeah, it was really great. Bye bye.

[00:38:04] Micha: Bye.

[00:38:06] Michaela: This was another episode of the Software Engineering Unlocked podcast. If you enjoyed the episode, please help me spread the word about the podcast: send the episode to a friend via email, Twitter, LinkedIn — well, whatever messaging system you use — or give it a positive review on your favorite podcasting platform, such as Spotify or iTunes. This would really mean a lot to me. Thank you for listening, don't forget to subscribe, and I will talk to you in two weeks.
Good Day and welcome to IAQ Radio+ episode 672. This week we welcome Tom Peter and Trent Darden for a show we are calling Joining Forces. Trent and Tom are part of the management team at First Onsite. Each has unique skills and experience that our audience will benefit from hearing about. Trent Darden is SVP of Operations, US East at First Onsite Property Restoration. He was formerly the COO and spent 20 years working at Rolyn Companies of Rockville, MD. Trent started with Rolyn in 2000 as a Project Manager, where he worked exclusively on large-loss insurance claims, handling all related estimating and general project management. In 2007, he was named Vice President of Rolyn's Estimating and Consulting Department. As Rolyn's Chief Operating Officer, Trent worked with all of their offices to ensure delivery of the quality service their clients know and expect. A Virginia native, Mr. Darden has lived in the Tidewater area for over forty years. He began working in construction immediately after high school and continued throughout his college career at Tidewater Community College and Old Dominion University, where he pursued a degree in business. Tom Peter, MS, CIH is Senior Vice President – Regulatory Business Practice at First Onsite Property Restoration. He was formerly the CEO of Insurance Restoration Specialists of East Brunswick, NJ. Tom has supervised just about every type of hazardous waste, indoor environmental quality, mold remediation, and water damage restoration project there is, all while in the shoes of a Certified Industrial Hygienist. His vast experience in the field and his education make him the go-to guy in the restoration world for everyday and emerging issues in the restoration industry. Tom is also a CIH working as a contractor, something unique in the disaster restoration industry. He sees himself as someone with a scientific and technical background who delivers realistic and practical solutions.
Teaching, Jul 17 2022
Hosts Nate Wilcox & Ryan Harkness talk about Storm Raves and their various downtown rivals, club kids and others involved in the first wave of rave on the US East coast. Have a question or a suggestion for a topic or person for Nate to interview? Email letitrollpodcast@gmail.com. Follow us on Twitter. Follow us on Facebook. Let It Roll is proud to be part of Pantheon Podcasts.
In this episode of Winning the Challenger Sale, hosted by Jen, Johnathan Bald, Area Vice President of Large Enterprise Sales, NetWitness – Canada & US East at RSA Security, discusses the fundamentals of qualifying, disqualifying, and prioritizing opportunities. Johnathan explains how unique market fit and the maturity of an organization influence the likelihood of qualifying or disqualifying a prospect, and the factors that influence internal opportunity prioritization.
Welcome to the 100,000 downloads milestone episode!
Teaching, Feb 27 2022
Teaching, Feb 20 2022
Teaching, Feb 13 2022
Teaching, Feb 6 2022
Teaching, Jan 16 2022
Teaching, Jan 9 2022
Teaching, Dec 26 2021
Teaching, Dec 19 2021
Breaking news today: AWS had a major outage that impacted hundreds of companies. AWS prides itself on having the highest availability; however, today (12/7/21) we saw a widespread outage across all of US-East-1. --- Send in a voice message: https://anchor.fm/level99/message
Teaching, Nov 28 2021
This week's episode of the Camerosity Podcast is one for the ages. Michael Wescott Loder, Nikon historian and author of "The Nikon Camera in America 1946-1953", joins us to discuss a huge number of topics regarding Nikon's history, along with the Zeiss-Ikon Tenax II and a huge number of other cameras that he is interested in. We go over everything from Nippon Kogaku's role in the post-war Japanese camera industry, why they were successful and Canon wasn't, Nikon rangefinder prototypes, and a whole lot of other interesting topics. But wait, there's more... Also joining in on the discussion is none other than the president of the Nikon Historical Society and fellow author, Robert Rotoloni, who jumps in to add his thoughts and ask some questions of his own. I'm not done yet... Returning from episode 8, author of "Making Kodak Film", Robert Shanebrook surprises us with a visit to participate in the discussion and ask some questions. Rounding out our guests are Mark Faulkner, Nick Lyle, and Miles Libak! Next week, we will return with Episode 14, so look out for the episode announcement this upcoming Sunday, November 28th!
This Week's Episode:
Wes Bought a Nikon S2 to Photograph in Caves / How Wes Got Started Researching Nikon
Tenax II / Akarette / Akarelle / AGFA Ambi Silette / Lordomat
Why Was Nikon So Successful? Why Didn't Canon or Pentax Have the Same Level of Success?
Why Did Zeiss-Ikon Make the Tenax II? / Which Came First, the Tenax or Robot?
Hubert Nerwin, Zeiss-Ikon, the West German Camera Industry, and the Kodak Instamatic
Why Didn't West Germany Have Any Focal Plane SLRs Like the Contax SLR?
Zeiss-Ikon Stuttgart Had to Rebuild After the War / East Germany Was Winning for a While
Nikon Saw the Best Success on the US East and West Coasts, Not the Midwest
Hubert Nerwin and Operation Paperclip
Many Camera Companies Went Out of Business Between 1958-1962 / Differences Between Germany and Japan
Nippon Kogaku Developed the Nikon F at the Same Time as the Nikon SP
German SLRs Were Hampered by Their Own Reliance on the Compur Leaf Shutter
Nikon and Canon Had Different Approaches Toward Making Rangefinders
The Nikon S2 Originally Had Knobs for Wind and Rewind / Only Black Dial S2s Can Be Motorized
Canon Made 33 Different Rangefinders Between 1945 and 1958
Why Was the Nikon F So Successful? / Why the Canonflex Failed
Nikon Model One Bellows
Did Nippon Kogaku and Eastman Kodak Ever Collaborate on Anything?
Tuberculosis Almost Killed Medium Format in Japan
David Douglas Duncan's Role in Nikon's History / Jun Miki
Nikkor 50mm f/1.4 and f/1.5 Lenses
Nippon Kogaku Would Service Any Camera or Lens for Any Professional Photographer During the Korean War
The Nikon F Was So Successful, It Killed the Nikon Rangefinder / The Nikon SPX and SP2 Were Built But Never Released
There Were More Canon 7 Rangefinders Made Than All Nikon Rangefinders Combined
Nippon Kogaku Also Worked on a Medium Format SLR and a 16mm Subminiature; None Were Ever Released
What Would the Japanese Camera Industry Look Like if Nippon Kogaku Never Existed?
Japanese Optics Companies Helped Each Other Out
Can Nikon Survive Today?
Smartphones Are Killing the Camera Industry
Color Film Is Out of Stock Everywhere (Except for Theo)
When Nikon Made the Millennium SP and S3 Rangefinders, They Used 80-Year-Old Retired Guys to Train People How to Make Them
Robert Rotoloni Did the Foreword in Wes Loder's Book
Show Notes:
If you would like to offer feedback or contact me with questions or ideas for future episodes, please email me at mike@mikeeckman.com
The Official Camerosity Facebook Page - https://www.facebook.com/CamerosityPodcast
Camerosity Instagram - https://www.instagram.com/camerosity_podcast/
Wes Loder "The Nikon Camera in America 1946-1953" - https://www.amazon.com/Nikon-Camera-America-1946-1953/dp/0786432217
"The Tenax II: Zeiss Ikon's Precision, Fast-action Camera" - https://www.amazon.com/Tenax-II-Precision-Fast-action-Camera/dp/138987821X
Wes's Blog - http://wesloderandnikon.blogspot.com/
Robert Rotoloni - http://www.nikonhistoricalsociety.com/p/rotoloni-book.html
Mark Faulkner - https://thegashaus.com/
Theo Panagopoulos - https://www.photothinking.com/
Anthony Rue - https://www.instagram.com/kino_pravda/
Paul Rybolt - https://www.ebay.com/usr/paulkris
Teaching, Nov 21 2021
Teaching, Nov 14 2021
On this episode of The AIE Podcast... Congradudolences to Gusty! New World emerges - this week! Saturday night is alright for MEGA *Insert Final Fantasy XIV meme here* And, we have Maellung and Dux who are here to talk to us about AIE in LOTRO! All that and more coming up right now...
Podcast Audio / Raw Video: http://youtu.be/9Ia4fngEVOw
Open: Welcome to episode #374 of the podcast celebrating you, the Alea Iacta Est gaming community — the die has been podcast. This is Mkallah. To my left is Mewkow (catch phrase here). And to my right is Tetsemi (catch phrase here). This week we are joined by special guests Maellung and Dux, who are here to talk to us about AIE in LOTRO. Welcome! Ok, we'll be digging into LOTRO shortly, but first, let's cover this week's news...
AIE News
Community: New WoW GM - With Syreyne stepping away from WoW and the WoW officers still needing leadership help there, the WoW Director team has asked Gusty to step in to help in that capacity along with existing GMs Tetsemi and Ashayo. Gusty's leadership in WoW has been great and she has more plans on continuing to nurture that community. Please join in congratulating Gusty-Koo!
Mandatory Fun Nights (where the fun is mandatory but the attendance is not):
Sunday - STO 8:30 pm Eastern
Monday - GW2 9:30 pm Eastern
Tuesday - SWTOR 9:00 pm Eastern
Tuesday - FFXIV (Casual Raiding) 9:30 pm Eastern
Thursday - FFXIV (Progression Raiding) 9:30 pm Eastern
Friday - ESO 9:00 pm Eastern
Friday - FFXIV (Late Night Fun Night) 11:00 pm Eastern
Saturday - LotRO 8:30 pm Eastern
Saturday - FFXIV (Maps) 9:30 pm Eastern
Saturday - Noob Raid (WoW) 11:00 pm Eastern
Streaming and Guild Podcast News
New Overlords - http://www.newoverlords.com/
SWTOR Escape Pod Cast 397: Take 5 (More Levels) http://www.newoverlords.com/swtor-escape-pod-cast-397-take-5-more-levels/
The Public Test Server is updated with L80 plus Sorc and Assassin styles.
Working Class Nerds - https://workingclassnerdscom.wordpress.com/
Episode 123: LIVE with Kogass_!!! https://www.buzzsprout.com/143519/9251311-episode-123-live-with-kogass_.mp3
Marcus and Nick go LIVE on Twitch with Kogass_!!! Kogass is a longtime friend of the show, a real-life friend, and a SWTOR Twitch streamer! The three of them talk about Phase Four of the Marvel Cinematic Universe, and of course all of the gaming shenanigans that they're up to. Enjoy! You can find more of Kogass at: https://www.twitch.tv/kogass_
Fleet Action Report - https://www.youtube.com/channel/UCqEKYJPKmiiYuj7eiHy6kOg
Fleet Action Report Ep 63 Equal and Opposites https://www.youtube.com/watch?v=_tzIUSehfI0
Nikodas and Grebog are looking through the mirror in this episode of Fleet Action Report. We focus on what you need to know about the new event, focusing on the task force operations, and, without going into any spoilers, a brief chat on the new episode and season.
Boards and Swords - https://boardsandswords.com
C'MON NFTs, Warhammer Escalation League, How are we doing? - Boards & Swords #181 https://boardsandswords.com/blog/bs181
Chris is up to his eyeballs in painting miniatures - is that someone on the inside of his ear?? How did that get there?? Board games did well on Kickstarter again, and NFTs are coming to board games?? Plus Chris & Philip talk about the past year of the show and where they see it going over the next year.
NOMADS
Getting ready for the New World release on 09/28. As with all best laid plans, this may not survive first contact with the enemy.
However, until such time as we need to pivot, AIE New World will be launching in US East on the Pyrallis server as Syndicate. You will need to join the Syndicate faction to join the company, so make that decision carefully, since faction changes are limited. We'll attempt to secure the company name of Alea Iacta Est but have a few suffixes ready if that's not possible, and we will post here as soon as we lock it in. Welcome to AIE in New World. The next AIE board game night will be October 7th @ 7:30PM Centra...
During a routine addition of some servers to the Kinesis front-end cluster in US-East-1 in November 2020, AWS ran into an OS limit on the maximum number of threads. That resulted in a multi-hour outage that affected a number of other AWS services, including ECS, EKS, Cognito, and CloudWatch. We probably won't do […]
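The failure mode is easy to reproduce in miniature. In a design where each front-end server keeps one OS thread per peer, the thread count grows linearly with fleet size until thread creation fails. The Python sketch below (the fleet size and handler are made up for illustration) shows what hitting that wall looks like.

```python
# Illustrative sketch, not AWS code: why "one OS thread per peer" breaks at
# scale. Each front-end server holds a long-lived thread for every other
# server in the fleet, so adding servers pushes every host toward the OS
# thread limit. Lower fleet_size before running this on a small machine.

import threading
import time

def peer_connection(peer_id: int) -> None:
    # Stand-in for a long-lived connection handler to one peer.
    time.sleep(3600)

fleet_size = 10_000  # hypothetical fleet size after adding new servers
threads: list[threading.Thread] = []
try:
    for peer in range(fleet_size):
        t = threading.Thread(target=peer_connection, args=(peer,), daemon=True)
        t.start()
        threads.append(t)
except RuntimeError as err:
    # Python raises RuntimeError when the underlying thread creation fails,
    # e.g. once the process hits the OS thread/task limit
    # (see `ulimit -u` and kernel.threads-max on Linux).
    print(f"could not create thread #{len(threads) + 1}: {err}")
else:
    print(f"all {len(threads)} peer threads created; real fleets also pay memory per thread")
```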