Alessio will be at AWS re:Invent next week and hosting a casual coffee meetup on Wednesday, RSVP here! And subscribe to our calendar for our Singapore, NeurIPS, and all upcoming meetups!

We are still taking questions for our next big recap episode! Submit questions and messages on Speakpipe here for a chance to appear on the show!

If you've been following the AI agents space, you have heard of Lindy AI; while founder Flo Crivello is hesitant to call it "blowing up," when folks like Andrew Wilkinson start obsessing over your product, you're definitely onto something.

In our latest episode, Flo walked us through Lindy's evolution from late 2022 to now, revealing some design choices about agent platform design that go against conventional wisdom in the space.

The Great Reset: From Text Fields to Rails

Remember late 2022? Everyone was "LLM-pilled," believing that if you just gave a language model enough context and tools, it could do anything. Lindy 1.0 followed this pattern:

* Big prompt field ✅
* Bunch of tools ✅
* Prayer to the LLM gods ✅

Fast forward to today, and Lindy 2.0 looks radically different. As Flo put it (~17:00 in the episode): "The more you can put your agent on rails, one, the more reliable it's going to be, obviously, but two, it's also going to be easier to use for the user."

Instead of a giant, intimidating text field, users now build workflows visually:

* Trigger (e.g., "Zendesk ticket received")
* Required actions (e.g., "Check knowledge base")
* Response generation

This isn't just a UI change - it's a fundamental rethinking of how to make AI agents reliable. As Swyx noted during our discussion: "Put Shoggoth in a box and make it a very small, minimal viable box. Everything else should be traditional if-this-then-that software."

The Surprising Truth About Model Limitations

Here's something that might shock folks building in the space: with Claude 3.5 Sonnet, the model is no longer the bottleneck.
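The trigger → required action → response pattern described above is, at its core, ordinary deterministic code with the model confined to one narrow step. Here is a minimal sketch of that idea; the function names and toy knowledge base are invented for illustration and are not Lindy's actual API:

```python
# Toy sketch of an "agent on rails": control flow is plain deterministic
# software, and the model is boxed into a single narrow step.
# All names and the knowledge base here are illustrative only.

def check_knowledge_base(ticket: str) -> str:
    """Required action: always runs, 100% of the time, no model involved."""
    kb = {"refund": "Refunds are processed within 5 business days."}
    hits = [v for k, v in kb.items() if k in ticket.lower()]
    return " ".join(hits) or "No matching article."

def draft_reply(ticket: str, context: str) -> str:
    """The one step where an LLM would be invoked; stubbed out here."""
    return f"Re: {ticket!r} -- based on our docs: {context}"

def run_workflow(ticket: str) -> str:
    # Trigger: "Zendesk ticket received" (simulated by calling this function)
    context = check_knowledge_base(ticket)  # required action, never skipped
    return draft_reply(ticket, context)     # response generation

print(run_workflow("Where is my refund?"))
```

The point of the structure is that the knowledge-base lookup cannot be skipped by a wayward model: it sits in ordinary code, not in a prompt the LLM may or may not honor.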
Flo's exact words (~31:00): "It is actually shocking the extent to which the model is no longer the limit. It was the limit a year ago. It was too expensive. The context window was too small."

Some context: Lindy started when context windows were 4K tokens. Today, their system prompt alone is larger than that. But what's really interesting is what this means for platform builders:

* Raw capabilities aren't the constraint anymore
* Integration quality matters more than model performance
* User experience and workflow design are the new bottlenecks

The Search Engine Parallel: Why Horizontal Platforms Might Win

One of the spiciest takes from our conversation was Flo's thesis on horizontal vs. vertical agent platforms. He draws a fascinating parallel to search engines (~56:00):

"I find it surprising the extent to which a horizontal search engine has won... You go through Google to search Reddit. You go through Google to search Wikipedia... search in each vertical has more in common with search than it does with each vertical."

His argument: agent platforms might follow the same pattern because:

* Agents across verticals share more commonalities than differences
* There's value in having agents that can work together under one roof
* The R&D cost of getting agents right is better amortized across use cases

This might explain why we're seeing early vertical AI companies starting to expand horizontally.
The core agent capabilities - reliability, context management, tool integration - are universal needs.

What This Means for Builders

If you're building in the AI agents space, here are the key takeaways:

* Constrain First: Rather than maximizing capabilities, focus on reliable execution within narrow bounds
* Integration Quality Matters: With model capabilities plateauing, your competitive advantage lies in how well you integrate with existing tools
* Memory Management is Key: Flo revealed they actively prune agent memories - even with larger context windows, not all memories are useful
* Design for Discovery: Lindy's visual workflow builder shows how important interface design is for adoption

The Meta Layer

There's a broader lesson here about AI product development. Just as Lindy evolved from "give the LLM everything" to "constrain intelligently," we might see similar evolution across the AI tooling space. The winners might not be those with the most powerful models, but those who best understand how to package AI capabilities in ways that solve real problems reliably.

Full Video Podcast

Flo's talk at AI Engineer Summit

Chapters

* 00:00:00 Introductions
* 00:04:05 AI engineering and deterministic software
* 00:08:36 Lindy demo
* 00:13:21 Memory management in AI agents
* 00:18:48 Hierarchy and collaboration between Lindys
* 00:21:19 Vertical vs. horizontal AI tools
* 00:24:03 Community and user engagement strategies
* 00:26:16 Rickrolling incident with Lindy
* 00:28:12 Evals and quality control in AI systems
* 00:31:52 Model capabilities and their impact on Lindy
* 00:39:27 Competition and market positioning
* 00:42:40 Relationship between Factorio and business strategy
* 00:44:05 Remote work vs. in-person collaboration
* 00:49:03 Europe vs. US tech
* 00:58:59 Testing the Overton window and free speech
* 01:04:20 Balancing AI safety concerns with business innovation

Show Notes

* Lindy.ai
* Rickrolling
* Flo on X
* TeamFlow
* Andrew Wilkinson
* Dust
* Poolside.ai
* SB1047
* Gathertown
* Sid Sijbrandij
* Matt Mullenweg
* Factorio
* Seeing Like a State

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.

Swyx [00:00:12]: Hey, and today we're joined in the studio by Florent Crivello. Welcome.

Flo [00:00:15]: Hey, yeah, thanks for having me.

Swyx [00:00:17]: Also known as Altimor. I always wanted to ask, what is Altimor?

Flo [00:00:21]: It was the name of my character when I was playing Dungeons & Dragons. Always. I was like 11 years old.

Swyx [00:00:26]: What were your classes?

Flo [00:00:27]: I was an elf. I was a magician elf.

Swyx [00:00:30]: Well, you're still spinning magic. Right now, you're a solo founder and CEO of Lindy.ai. What is Lindy?

Flo [00:00:36]: Yeah, we are a no-code platform letting you build your own AI agents easily. So you can think of it this way: we are to LangChain as Airtable is to MySQL. You can just spin up AI agents super easily by clicking around, no code required. You don't have to be an engineer, and you can automate business workflows that you simply could not automate before, in a few minutes.

Swyx [00:00:55]: You've been in our orbit a few times. I think you spoke at our Latent Space anniversary. You spoke at my summit, the first summit, which was a really good keynote. And most recently, like we actually already scheduled this podcast before this happened. But Andrew Wilkinson was like, I'm obsessed by Lindy. He's just created a whole bunch of agents. So basically, why are you blowing up?

Flo [00:01:16]: Well, thank you. I think we are having a little bit of a moment. I think it's a bit premature to say we're blowing up.
But why are things going well? We revamped the product majorly. We called it Lindy 2.0. I would say we started working on that six months ago. We've actually not really announced it yet. It's just, I guess, I guess that's what we're doing now. And so we've basically been cooking for the last six months, like really rebuilding the product from scratch. I think, Alessio, actually, the last time you tried the product, it was still Lindy 1.0. Oh, yeah. If you log in now, the platform looks very different. There's like a ton more features. And I think one realization that we made, and I think a lot of folks in the agent space made the same realization, is that there is such a thing as too much of a good thing. I think many people, when they started working on agents, they were very LLM-pilled and ChatGPT-pilled, right? They got ahead of themselves in a way, and us included, and they thought that agents, and LLMs, were actually more advanced than they actually were. And so the first version of Lindy was just a giant prompt and a bunch of tools. And then the realization we had was: hey, actually, the more you can put your agent on rails, one, the more reliable it's going to be, obviously, but two, it's also going to be easier to use for the user. Because as a user, instead of just getting this big, giant, intimidating text field where you type words in and have no idea if you're typing the right word or not, here you can really click and select step by step, and tell your agent what to do, and give as narrow or as wide a guardrail as you want for your agent. We started working on that. We called it Lindy on Rails about six months ago, and we started putting it into the hands of users over the last, I would say, two months or so, and I think things really started going pretty well at that point.
The agent is way more reliable, way easier to set up, and we're already seeing a ton of new use cases pop up.

Swyx [00:03:00]: Yeah, just a quick follow-up on that. You launched the first Lindy in November last year, and you were already talking about having a DSL, right? I remember having this discussion with you, and you were like, it's just much more reliable. Is this still the DSL under the hood? Is this a UI-level change, or is it a bigger rewrite?

Flo [00:03:17]: No, it is a much bigger rewrite. I'll give you a concrete example. Suppose you want to have an agent that observes your Zendesk tickets, and it's like, hey, every time you receive a Zendesk ticket, I want you to check my knowledge base, so it's like a RAG module and whatnot, and then answer the ticket. The way it used to work with Lindy before was, you would type the prompt asking it to do that: you check my knowledge base, and so on and so forth. The problem with doing that is that it can always go wrong. You're praying the LLM gods that they will actually invoke your knowledge base, but I don't want to ask it. I want it to always, 100% of the time, consult the knowledge base after it receives a Zendesk ticket. And so with Lindy, you can actually have the trigger, which is Zendesk ticket received, have the knowledge base consult, which is always there, and then have the agent. So you can really set up your agent any way you want like that.

Swyx [00:04:05]: This is something I think about for AI engineering as well, which is the big labs want you to hand over everything in the prompts, and only code in English, and then the smaller brains, the GPU poors, always want to write more code to make things more deterministic and reliable and controllable. One way I put it is put Shoggoth in a box and make it a very small, the minimal viable box. Everything else should be traditional if-this-then-that software.

Flo [00:04:29]: I love that characterization, put the Shoggoth in the box.
Yeah, we talk about using as much AI as necessary and as little as possible.

Alessio [00:04:37]: And what was the choosing between kind of like this drag-and-drop, low-code, whatever, super code-driven, maybe like the LangChains, AutoGPTs of the world, and maybe the flip side of it, which you don't really do, which is just text-to-agent, like build the workflow for me. What have you learned actually putting this in front of users and figuring out how much do they actually want to edit versus, you know, kind of like Ruby on Rails instead of Lindy on Rails, convention over configuration?

Flo [00:05:06]: I actually used to dislike when people said, oh, text is not a great interface. I was like, ah, this is such a mid take, I think text is awesome. And I've actually come around, I actually sort of agree now that text is really not great. I think it works for people like you and me, because we sort of have a mental model: okay, when I type a prompt into this text box, this is what it's going to do, it's going to map it to this kind of data structure under the hood and so forth. I guess it's a little bit blackpilling towards humans. You jump on these calls with humans and you're like, here's a text box, this is going to set up an agent for you, do it. And then they type words like, I want you to help me put order in my inbox. Oh, actually, this is a good one. This is actually a good one. What's a bad one? I would say 60 or 70% of the prompts that people type don't mean anything. Me as a human, as AGI, I don't understand what they mean. I don't know what they mean. It is actually, I think whenever you can have a GUI, it is better than to have just a pure text interface.

Alessio [00:05:58]: And then how do you decide how much to expose? So even with the tools, you have Slack, you have Google Calendar, you have Gmail. Should people by default just turn over access to everything and then you help them figure out what to use?
I think that's the question. When I tried to set up Slack, it was like, hey, give me access to all channels and everything, which for the average person probably makes sense because you don't want to re-prompt them every time you add new channels. But at the same time, for maybe the more sophisticated enterprise use cases, people are like, hey, I want to really limit what you have access to. How do you kind of thread that balance?

Flo [00:06:35]: The general philosophy is we ask for the least amount of permissions needed at any given moment. I don't think Slack, I could be mistaken, but I don't think Slack lets you request permissions for just one channel. But for example, for Google, obviously there are hundreds of scopes that you could require for Google. There's a lot of scopes. And sometimes it's actually painful to set up your Lindy because you're going to have to ask Google and add scopes five or six times. We've had sessions like this. But that's what we do because, for example, the Lindy email drafter, she's going to ask you for your authorization once for, I need to be able to read your email so I can draft a reply, and then another time for, I need to be able to write a draft for them. We just try to do it very incrementally like that.

Alessio [00:07:15]: Do you think OAuth is just overall going to change? I think maybe before it was like, hey, we need to set up OAuth that humans only want to kind of do once. So we try to jam-pack things all at once versus what if you could on-demand get different permissions every time from different parts? Do you ever think about designing things knowing that maybe AI will use it instead of humans will use it?

Flo [00:07:37]: Yeah, for sure. One pattern we've started to see is people provisioning accounts for their AI agents. And so, in particular, Google Workspace accounts. So, for example, Lindy can be used as a scheduling assistant. So you can just CC her to your emails when you're trying to find time with someone.
And just like a human assistant, she's going to go back and forth and offer availabilities and so forth. Very often, people don't want the other party to know that it's an AI. So it's actually funny. They introduce delays. They ask the agent to wait before replying, so it's not too obvious that it's an AI. And they provision an account on Google Suite, which costs them like $10 a month or something like that. So we're seeing that pattern more and more. I think that does the job for now. I'm not optimistic on us actually patching OAuth. Because I agree with you, ultimately, we would want to patch OAuth, because the new-account thing is kind of a crutch. It's really a hack. You would want to patch OAuth to have more granular access control and really be able to put your Shoggoth in the box. I'm not optimistic on us doing that before AGI, I think. That's a very close timeline.

Swyx [00:08:36]: I'm mindful of talking about a thing without showing it. And we already have the setup to show it. Why don't we jump into a screen share? For listeners, you can jump on the YouTube and like and subscribe. But also, let's have a look at how you show off Lindy.

Flo [00:08:51]: Yeah, absolutely. I'll give an example of a very simple Lindy and then I'll graduate to a much more complicated one. A super simple Lindy that I have is, I unfortunately bought some investment properties in the south of France. It was a really, really bad idea. And I put them on a Holydew, which is like the French Airbnb, if you will. And so I receive these emails from time to time telling me like, oh, hey, you made 200 bucks. Someone booked your place. When I receive these emails, I want to log this reservation in a spreadsheet. Doing this without an AI agent, or without AI in general, is a pain in the butt, because you must write an HTML parser for this email. And so it's just hard. You may not be able to do it, and it's going to break the moment the email changes.
By contrast, the way it works with Lindy, it's really simple. It's two steps. It's like, okay, I receive an email. If it is a reservation confirmation, I have this filter here. Then I append a row to this spreadsheet. And so this is where you can see the AI part: the way this action is configured here, you see these purple fields on the right. Each of these fields is a prompt. And so I can say, okay, you extract from the email the day the reservation begins on. You extract the amount of the reservation. You extract the number of travelers of the reservation. And now you can see, when I look at the task history of this Lindy, it's really simple. It's like, okay, you do this and boom, appending this row to this spreadsheet. And this is the information extracted. So effectively, this node here, this append-row node, is a mini agent. It can see everything that just happened. It has context over the task and it's appending the row. And then it's going to send a reply to the thread. That's a very simple example of an agent.

Swyx [00:10:34]: A quick follow-up question on this one while we're still on this page. Is that one call? Is that a structured output call? Yeah. Okay, nice.

Flo [00:10:41]: Yeah. And you can see here, for every node, you can configure which model you want to power the node. Here I use Claude. For this, I use GPT-4 Turbo. Much more complex example: my meeting recorder. It looks very complex because I've added to it over time, but at a high level, it's really simple. It's like, when a meeting begins, you record the meeting. And after the meeting, you send me a summary and you send me coaching notes. So my Lindy is constantly coaching me. And so you can see here in the prompt of the coaching notes, I've told it, hey, you know, was I unnecessarily confrontational at any point? I'm French, so I have to watch out for that. Or not confrontational enough. Should I have double-clicked on any issue, right?
So I can really give it exactly the kind of coaching that I'm expecting. And then the interesting thing here is, like, you can see the agent here, after it sent me these coaching notes, moves on. And it does a bunch of other stuff. So it goes on Slack. It disseminates the notes on Slack. It does a bunch of other stuff. But it's actually able to backtrack and resume the automation at the coaching notes email if I responded to that email. So I'll give a super concrete example. This is an actual coaching feedback that I received from Lindy. She was like, hey, this was a sales call I had with a customer. And she was like, I found your explanation of Lindy too technical. And I was able to follow up and just ask a follow-up question in the thread here. And I was like, what did you find too technical about my explanation? And Lindy restored the context. And so she basically picked up the automation back up here in the tree. And she has all of the context of everything that happened, including the meeting in which I was. So she was like, oh, you used the words deterministic and context window and agent state. And that concept exists at every level for every channel and every action that Lindy takes. So another example here is, I mentioned she also disseminates the notes on Slack. So this was a meeting where I was not present, right? So this was a teammate. His Lindy meeting recorder posts the meeting notes in this customer discovery channel on Slack. So you can see, okay, this is the onboarding call we had. This was the use case. Look at the questions. How do I make Lindy slower? How do I add delays to make Lindy slower? And I was able, in the Slack thread, to ask follow-up questions like, oh, what did we answer to these questions? And it's really handy because I know I can have this sort of interactive Q&A with these meetings. It means that very often now, I don't go to meetings anymore. I just send my Lindy.
And instead of going to like a 60-minute meeting, I have like a five-minute chat with my Lindy afterwards. And she just replied. She was like, well, this is what we replied to this customer. And I can just be like, okay, good job, Jack. Like, no notes about your answers. So that's the kind of use cases people have with Lindy. It's a lot of like, there's a lot of sales automations, customer support automations, and a lot of this, which is basically personal assistance automations, like meeting scheduling and so forth.

Alessio [00:13:21]: Yeah, and I think the question that people might have is memory. So as you get coaching, how does it track whether or not you're improving? You know, if these are like mistakes you made in the past, like, how do you think about that?

Flo [00:13:31]: Yeah, we have a memory module. So I'll show you my meeting scheduler Lindy, which has a lot of memories because by now I've used her for so long. And so every time I talk to her, she saves a memory. If I tell her, you screwed up, please don't do this. So you can see here, oh, it's got a double memory here. This is the meeting link I have, or this is the address of the office. If I tell someone to meet me at home, this is the address of my place. This is the code. I guess we'll have to edit that out. This is not the code of my place. No dogs. Yeah, so Lindy can just manage her own memory and decide when she's remembering things between executions.

Swyx [00:14:11]: Okay. I mean, I'm just going to take the opportunity to ask you, since you are the creator of this thing, how come there's so few memories, right? Like, if you've been using this for two years, there should be thousands of thousands of things.

Flo [00:14:22]: That is a good question. Agents still get confused if they have too many memories, to my point earlier about that.
So I just got out of a call with a member of the Llama team at Meta, and we were chatting about Lindy, and we were going into the system prompt that we send to Lindy, and all of that stuff. And he was amazed, and he was like, it's a miracle that it's working, guys. He was like, this kind of system prompt, this does not exist, either pre-training or post-training. These models were never trained to do this kind of stuff. It's a miracle that they can be agents at all. And so what I do, I actually prune the memories. You know, it's actually something I've gotten into the habit of doing from back when we had GPT-3.5 powering Lindy agents. I suspect it's probably not as necessary in the Claude 3.5 Sonnet days, but I prune the memories.

Swyx [00:15:05]: Yeah, okay. The reason is because I have another assistant that also is recording and trying to come up with facts about me. It comes up with a lot of trivial, useless facts that I... So I spend most of my time pruning. Actually, it's not super useful. I'd much rather have high-quality facts that it accepts. Or maybe I was even thinking, were you ever tempted to add a wake word to only memorize this when I say memorize this? And otherwise, don't even bother.

Flo [00:15:30]: I have a Lindy that does this. So this is my inbox processor Lindy. It's kind of beefy because there's a lot of different emails. But somewhere in here,

Swyx [00:15:38]: there is a rule where I'm like,

Flo [00:15:39]: aha, I can email my inbox processor Lindy. It's really handy. So she has her own email address. And so when I process my email inbox, I sometimes forward an email to her. And it's a newsletter, or it's like a cold outreach from a recruiter that I don't care about, or anything like that. And I can give her a rule. And I can be like, hey, this email, I want you to archive it, moving forward. Or I want you to alert me on Slack when I have this kind of email. It's really important.
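The memory pruning Flo describes above can be thought of as a keep-the-useful-facts policy: retain what the user explicitly pinned plus the facts the agent actually retrieves, and drop the trivia. A toy sketch with an invented scoring heuristic and invented field names, not Lindy's actual memory module:

```python
# Toy memory-pruning policy in the spirit of what Flo describes:
# keep a small set of high-value facts instead of letting trivia pile up.
# The Memory fields and the usage-count heuristic are invented for illustration.
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    uses: int             # how often the agent actually retrieved this fact
    pinned: bool = False  # user explicitly said "remember this"

def prune(memories: list[Memory], keep: int) -> list[Memory]:
    """Keep all pinned memories, then the most-used ones, up to `keep` total."""
    pinned = [m for m in memories if m.pinned]
    rest = sorted((m for m in memories if not m.pinned),
                  key=lambda m: m.uses, reverse=True)
    return pinned + rest[:max(0, keep - len(pinned))]

mems = [
    Memory("Meeting link: meet.example/flo", uses=42),
    Memory("User once mentioned the weather", uses=0),
    Memory("No dogs at the office", uses=3, pinned=True),
]
print([m.text for m in prune(mems, keep=2)])
```

A wake-word scheme like the one Swyx suggests would simply set `pinned=True` at save time, so pruning never touches those facts.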
And so you can see here, the prompt is: if I give you a rule about a kind of email, like archive emails from X, save it as a new memory. And I give it to the memory-saving skill. And yeah.

Swyx [00:16:13]: One thing that just occurred to me, so I'm a big fan of virtual mailboxes. I recommend that everybody have a virtual mailbox. You could set up a physical mail receive thing for Lindy. And so then Lindy can process your physical mail.

Flo [00:16:26]: That's actually a good idea. I actually already have something like that. I use Earth Class Mail. Yeah. So yeah, most likely, I can process my physical mail.

Swyx [00:16:35]: And then the other product idea I have, looking at this thing, is people want to brag about the complexity of their Lindys. So this would be like a 65-point Lindy, right?

Flo [00:16:43]: What's a 65-point?

Swyx [00:16:44]: Complexity counting. Like how many nodes, how many things, how many conditions, right? Yeah.

Flo [00:16:49]: This is not the most complex one. I have another one. This designer recruiter here is kind of beefy as well. Right, right, right.

Swyx [00:16:56]: So I'm just saying, let people brag. Let people be super users.

Flo [00:16:59]: Oh, right. Give them a score. Give them a score.

Swyx [00:17:01]: Then they'll just be like, okay, how high can you make this score?

Flo [00:17:04]: Yeah, that's a good point. And I think that's, again, the beauty of this on-rails phenomenon. It's like, think of the equivalent, the prompt equivalent of this Lindy here, for example, that we're looking at. It'd be monstrous. And the odds that it gets it right are so low. But here, because we're really holding the agent's hand step by step by step, it's actually super reliable.

Swyx [00:17:22]: Yeah. And is it all structured output-based? Yeah. As far as possible? Basically. Like, there's no non-structured output?

Flo [00:17:27]: There is. So, for example, here, this AI agent step, right, or this send message step, sometimes it gets to...
That's just plain text.

Swyx [00:17:35]: That's right.

Flo [00:17:36]: Yeah. So I'll give you an example. Maybe it's TMI. I'm having blood pressure issues these days. And so this Lindy here, I give it my blood pressure readings, and it updates a log that I have of my blood pressure that it sends to my doctor.

Swyx [00:17:49]: Oh, so every Lindy comes with a to-do list?

Flo [00:17:52]: Yeah. Every Lindy has its own task history. Huh. Yeah. And so you can see here, this is my main Lindy, my personal assistant, and I've told it, where is this? There is a point where I'm like, if I am giving you a health-related fact, right here, I'm giving you health information, so then you update this log that I have in this Google Doc, and then you send me a message. And you can see, I've actually not configured this send message node. I haven't told it what to send me a message for. Right? And you can see, it's actually lecturing me. It's like, I'm giving it my blood pressure readings. It's like, hey, it's a bit high. Here are some lifestyle changes you may want to consider.

Alessio [00:18:27]: I think maybe this is the most confusing or new thing for people. So even I use Lindy and I didn't even know you could have multiple workflows in one Lindy. I think the mental model is kind of like the Zapier workflows. It starts and it ends. It doesn't choose between. How do you think about what's a Lindy versus what's a sub-function of a Lindy? Like, what's the hierarchy?

Flo [00:18:48]: Yeah. Frankly, I think the line is a little arbitrary. It's kind of like when you code: when do you start to create a new class versus when do you overload your current class? I think of it in terms of jobs to be done, and I think of it in terms of who the Lindy is serving. This Lindy is serving me personally. It's really my day-to-day Lindy. I give it a bunch of stuff, like very easy tasks. And so this is just the Lindy I go to.
Sometimes when a task is really more specialized, so for example, I have this summarizer Lindy or this designer recruiter Lindy, these tasks are really beefy. I wouldn't want to add this to my main Lindy, so I just created a separate Lindy for it. Or when it's a Lindy that serves another constituency, like our customer support Lindy, I don't want to add that to my personal assistant Lindy. These are two very different Lindys.

Alessio [00:19:31]: And you can call a Lindy from within another Lindy. That's right. You can kind of chain them together.

Flo [00:19:36]: Lindys can work together, absolutely.

Swyx [00:19:38]: A couple more things for the video portion. I noticed you have a podcast follower. We have to ask about that. What is that?

Flo [00:19:46]: So this one wakes me up every... So, wakes herself up every week. And she sends me... So she woke up yesterday, actually. And she searches for Lenny's podcast. And she looks for like the latest episode on YouTube. And once she finds it, she transcribes the video and then she sends me the summary by email. I don't listen to podcasts as much anymore. I just read these summaries.

Alessio [00:20:09]: Yeah. We should make a Latent Space Lindy. Marketplace.

Swyx [00:20:12]: Yeah. And then you have a whole bunch of connectors. I saw the list briefly. Any interesting one? Complicated one that you're proud of? Anything that you want to just share? Connector stories.

Flo [00:20:23]: So many of our workflows are about meeting scheduling. So we had to build some very opinionated tools around meeting scheduling. So for example, one that is surprisingly hard is this find-available-times action. You would not believe... This is like a thousand lines of code or something. It's just a very beefy action. And you can pass it a bunch of parameters about: how long is the meeting? When does it start? When does it end? What are the weekdays in which I meet? How many time slots do you return?
What's the buffer between my meetings? It's just a very, very, very complex action. I really like our GitHub action. So we have a Lindy PR reviewer. And it's really handy because anytime any bug happens... So the Lindy reads our guidelines on Google Docs. By now, the guidelines are like 40 pages long or something. And so every time any new kind of bug happens, we just go to the guideline and we add the lines. Like, hey, this has happened before. Please watch out for this category of bugs. And it's saving us so much time every day.Alessio [00:21:19]: There's companies doing PR reviews. Where does a Lindy start? When does a company start? Or maybe how do you think about the complexity of these tasks when it's going to be worth having kind of like a vertical standalone company versus just like, hey, a Lindy is going to do a good job 99% of the time?Flo [00:21:34]: That's a good question. We think about this one all the time. I can't say that we've really come up with a very crisp articulation of when do you want to use a vertical tool versus when do you want to use a horizontal tool. I think of it as very similar to the internet. I find it surprising the extent to which a horizontal search engine has won. But I think that Google, right? But I think the even more surprising fact is that the horizontal search engine has won in almost every vertical, right? You go through Google to search Reddit. You go through Google to search Wikipedia. I think maybe the biggest exception is e-commerce. Like you go to Amazon to search e-commerce, but otherwise you go through Google. And I think that the reason for that is because search in each vertical has more in common with search than it does with each vertical. And search is so expensive to get right. Like Google is a big company that it makes a lot of sense to aggregate all of these different use cases and to spread your R&D budget across all of these different use cases. 
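The find-available-times action Flo describes is essentially interval subtraction over a calendar, with a pile of knobs: meeting duration, date range, allowed weekdays, buffers between meetings, and how many slots to return. A heavily simplified sketch of that logic; the real Lindy action is reportedly around a thousand lines, and every name and parameter here is hypothetical:

```python
from datetime import datetime, timedelta

def find_available_times(busy, day_start, day_end, duration_min=30,
                         buffer_min=10, weekdays=(0, 1, 2, 3, 4), max_slots=5):
    """Return up to max_slots open (start, end) slots between day_start and day_end.

    busy: list of (start, end) datetimes already booked, sorted by start.
    A buffer is kept before and after every existing meeting, and slots are
    only offered on the allowed weekdays (0 = Monday).
    """
    slots, cursor = [], day_start
    buffer = timedelta(minutes=buffer_min)
    need = timedelta(minutes=duration_min)
    for b_start, b_end in busy:
        gap_end = b_start - buffer  # free time ends a buffer before the meeting
        while cursor + need <= gap_end and len(slots) < max_slots:
            if cursor.weekday() in weekdays:
                slots.append((cursor, cursor + need))
            cursor += need
        cursor = max(cursor, b_end + buffer)  # resume a buffer after the meeting
    while cursor + need <= day_end and len(slots) < max_slots:
        if cursor.weekday() in weekdays:
            slots.append((cursor, cursor + need))
        cursor += need
    return slots
```

Even this toy version hints at why the real thing is beefy: recurring busy blocks, time zones, working hours per weekday, and multi-attendee intersection all multiply the edge cases.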
I have a thesis, which is, it's a really cool thesis for Lindy, is that the same thing is true for agents. I think that by and large, in a lot of verticals, agents in each vertical have more in common with agents than they do with each vertical. I also think there are benefits in having a single agent platform because that way your agents can work together. They're all like under one roof. That way you only learn one platform and so you can create agents for everything that you want. And you don't have to like pay for like a bunch of different platforms and so forth. So I think ultimately, it is actually going to shake out in a way that is similar to search in that search is everywhere on the internet. Every website has a search box, right? So there's going to be a lot of vertical agents for everything. I think AI is going to completely penetrate every category of software. But then I also think there are going to be a few very, very, very big horizontal agents that serve a lot of functions for people.Swyx [00:23:14]: That is actually one of the questions that we had about the agent stuff. So I guess we can transition away from the screen and I'll just ask the follow-up, which is, that is a hot topic. You're basically saying that the current VC obsession of the day, which is vertical AI enabled SaaS, is mostly not going to work out. And then there are going to be some super giant horizontal SaaS.Flo [00:23:34]: Oh, no, I'm not saying it's either or. Like SaaS today, vertical SaaS is huge and there's also a lot of horizontal platforms. If you look at like Airtable or Notion, basically the entire no-code space is very horizontal. I mean, Loom and Zoom and Slack, there's a lot of very horizontal tools out there. Okay.Swyx [00:23:49]: I was just trying to get a reaction out of you for hot takes. Trying to get a hot take.Flo [00:23:54]: No, I also think it is natural for the vertical solutions to emerge first because it's just easier to build. 
It's just much, much, much harder to build something horizontal. Cool.Swyx [00:24:03]: Some more Lindy-specific questions. So we covered most of the top use cases and you have an academy. That was nice to see. I also see some other people doing it for you for free. So like Ben Spites is doing it and then there's some other guy who's also doing like lessons. Yeah. Which is kind of nice, right? Yeah, absolutely. You don't have to do any of that.Flo [00:24:20]: Oh, we've been seeing it more and more on like LinkedIn and Twitter, like people posting their Lindys and so forth.Swyx [00:24:24]: I think that's the flywheel that you built the platform where creators see value in allying themselves to you. And so then, you know, your incentive is to make them successful so that they can make other people successful and then it just drives more and more engagement. Like it's earned media. Like you don't have to do anything.Flo [00:24:39]: Yeah, yeah. I mean, community is everything.Swyx [00:24:41]: Are you doing anything special there? Any big wins?Flo [00:24:44]: We have a Slack community that's pretty active. I can't say we've invested much more than that so far.Swyx [00:24:49]: I would say from having, so I have some involvement in the no-code community. I would say that Webflow going very hard after no-code as a category got them a lot more allies than just the people using Webflow. So it helps you to grow the community beyond just Lindy. And I don't know what this is called. Maybe it's just no-code again. Maybe you want to call it something different. But there's definitely an appetite for this and you are one of a broad category, right? Like just before you, we had Dust and, you know, they're also kind of going after a similar market. Zapier obviously is not going to try to also compete with you. Yeah. There's no question there. It's just like a reaction about community. Like I think a lot about community. Latent Space is growing the community of AI engineers.
And I think you have a slightly different audience of, I don't know what.Flo [00:25:33]: Yeah. I think the no-code tinkerers is the community. Yeah. It is going to be the same sort of community as what Webflow, Zapier, Airtable, Notion to some extent.Swyx [00:25:43]: Yeah. The framing can be different if you were, so I think tinkerers has this connotation of not serious or like small. And if you framed it to like no-code EA, we're exclusively only for CEOs with a certain budget, then you just have, you tap into a different budget.Flo [00:25:58]: That's true. The problem with EA is like, the CEO has no willingness to actually tinker and play with the platform.Swyx [00:26:05]: Maybe Andrew's doing that. Like a lot of your biggest advocates are CEOs, right?Flo [00:26:09]: A solopreneur, you know, small business owners, I think Andrew is an exception. Yeah. Yeah, yeah, he is.Swyx [00:26:14]: He's an exception in many ways. Yep.Alessio [00:26:16]: Just before we wrap on the use cases, is Rick rolling your customers? Like an officially supported use case or maybe tell that story?Flo [00:26:24]: It's one of the main jobs to be done, really. Yeah, we woke up recently, so we have a Lindy obviously doing our customer support and we do check after the Lindy. And so we caught this email exchange where someone was asking Lindy for video tutorials. And at the time, actually, we did not have video tutorials. We do now on the Lindy Academy. And Lindy responded to the email. It's like, oh, absolutely, here's a link. And we were like, what? Like, what kind of link did you send? And so we clicked on the link and it was a Rickroll. We actually reacted fast enough that the customer had not yet opened the email. And so we reacted immediately. Like, oh, hey, actually, sorry, this is the right link. And so the customer never reacted to the first link. And so, yeah, I tweeted about that. It went surprisingly viral. And I checked afterwards in the logs.
We did like a database query and we found, I think, like three or four other instances of it having happened before.Swyx [00:27:12]: That's surprisingly low.Flo [00:27:13]: It is low. And we fixed it across the board by just adding a line to the system prompt that's like, hey, don't Rickroll people, please don't Rickroll.Swyx [00:27:21]: Yeah, yeah, yeah. I mean, so, you know, you can explain it retroactively, right? Like, that YouTube slug has been pasted in so many different corpuses that obviously it learned to hallucinate that.Alessio [00:27:31]: And it pretended to be so many things. That's the thing.Swyx [00:27:34]: I wouldn't be surprised if that takes one token. Like, there's this one slug in the tokenizer and it's just one token.Flo [00:27:41]: That's the idea of a YouTube video.Swyx [00:27:43]: Because it's used so much, right? And you have to basically get it exactly correct. It's probably not. That's a long speech.Flo [00:27:52]: It would have been so good.Alessio [00:27:55]: So this is just a jump maybe into evals from here. How could you possibly come up for an eval that says, make sure my AI does not Rickroll my customer? I feel like when people are writing evals, that's not something that they come up with. So how do you think about evals when it's such like an open-ended problem space?Flo [00:28:12]: Yeah, it is tough. We built quite a bit of infrastructure for us to create evals in one click from any conversation history. So we can point to a conversation and we can be like, in one click we can turn it into effectively a unit test. It's like, this is a good conversation. This is how you're supposed to handle things like this. Or if it's a negative example, then we modify a little bit the conversation after generating the eval. So it's very easy for us to spin up this kind of eval.Alessio [00:28:36]: Do you use an off-the-shelf tool, like Braintrust, who we've had on the podcast? Or did you just build your own?Flo [00:28:41]: We unfortunately built our own.
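The one-click eval flow Flo describes can be sketched as: snapshot the conversation, optionally patch the final turn (for negative examples), then replay it like a unit test. This is a hypothetical sketch, not Lindy's actual internals, and all names are made up:

```python
def conversation_to_eval(conversation, label="positive", edits=None):
    """Freeze a conversation into an eval case, i.e. a unit test for the agent.

    conversation: list of {"role": ..., "content": ...} messages.
    For a negative example, `edits` maps message indexes to corrected text,
    so the stored expectation is the fixed behavior, not the bad one.
    """
    messages = [dict(m) for m in conversation]
    for i, fixed in (edits or {}).items():
        messages[i]["content"] = fixed
    return {
        "input": messages[:-1],               # everything up to the last turn
        "expected": messages[-1]["content"],  # how the agent should reply
        "label": label,
    }

def run_eval(case, agent):
    """Replay the input through the agent and grade the reply."""
    reply = agent(case["input"])
    return {"passed": reply == case["expected"], "got": reply}
```

A real harness would grade with fuzzier criteria than string equality (an LLM judge, or checks like "contains no YouTube links"), but the shape is the same: conversation in, pass/fail out.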
We're most likely going to switch to Braintrust. Well, when we built it, there was nothing. Like there was no eval tool, frankly. I mean, we started this project at the end of 2022. It was like, it was very, very, very early. I wouldn't recommend it to build your own eval tool. There's better solutions out there and our eval tool breaks all the time and it's a nightmare to maintain. And that's not something we want to be spending our time on.Swyx [00:29:04]: I was going to ask that basically because I think my first conversations with you about Lindy was that you had a strong opinion that everyone should build their own tools. And you were very proud of your evals. You're kind of showing off to me like how many evals you were running, right?Flo [00:29:16]: Yeah, I think that was before all of these tools came around. I think the ecosystem has matured a fair bit.Swyx [00:29:21]: What is one thing that Braintrust has nailed that you always struggled to do?Flo [00:29:25]: We're not using them yet, so I couldn't tell. But from what I've gathered from the conversations I've had, like they're doing what we do with our eval tool, but better.Swyx [00:29:33]: And like they do it, but also like 60 other companies do it, right? So I don't know how to shop apart from brand. Word of mouth.Flo [00:29:41]: Same here.Swyx [00:29:42]: Yeah, like evals or Lindys, there's two kinds of evals, right? Like in some way, you don't have to eval your system as much because you've constrained the language model so much. And you can rely on OpenAI to guarantee that the structured outputs are going to be good, right? We had Michelle sit where you sit and she explained exactly how they do constrained grammar sampling and all that good stuff. So actually, I think it's more important for your customers to eval their Lindys than you evaling your Lindy platform because you just built the platform. You don't actually need to eval that much.Flo [00:30:14]: Yeah.
In an ideal world, our customers don't need to care about this. And I think the bar is not like, look, it needs to be at 100%. I think the bar is it needs to be better than a human. And for most use cases we serve today, it is better than a human, especially if you put it on rails.Swyx [00:30:30]: Is there a limiting factor of Lindy at the business? Like, is it adding new connectors? Is it adding new node types? Like how do you prioritize what is the most impactful to your company?Flo [00:30:41]: Yeah. The raw capabilities for sure are a big limit. It is actually shocking the extent to which the model is no longer the limit. It was the limit a year ago. It was too expensive. The context window was too small. It's kind of insane that we started building this when the context windows were like 4,000 tokens. Like today, our system prompt is more than 4,000 tokens. So yeah, the model is actually very much not a limit anymore. It almost gives me pause because I'm like, I want the model to be a limit. And so no, the integrations are one, the core capabilities are one. So for example, we are investing in a system that's basically, I call it, like, a hack. Forgive me these names, it's like the poor man's RLHF. So you can turn on a toggle on any step of your Lindy workflow to be like, ask me for confirmation before you actually execute this step. So it's like, hey, I receive an email, you send a reply, ask me for confirmation before actually sending it. And so today you see the email that's about to get sent and you can either approve, deny, or change it and then approve. And we are making it so that when you make a change, we are then saving this change that you're making or embedding it in the vector database. And then we are retrieving these examples for future tasks and injecting them into the context window. So that's the kind of capability that makes a huge difference for users. That's the bottleneck today.
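That confirm-then-remember loop, the "poor man's RLHF," might look roughly like this: store the user's edit, embed it, and retrieve the nearest past corrections into the next prompt. Everything here is a hypothetical sketch; the toy bag-of-words embedding and in-memory list stand in for a real embedding model and vector database:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words embedding; a real system would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class CorrectionStore:
    """Remembers (draft, user-approved version) pairs for future prompts."""
    def __init__(self):
        self.examples = []  # (embedding, draft, corrected)

    def record(self, draft, corrected):
        if draft != corrected:  # only actual edits carry signal
            self.examples.append((embed(draft), draft, corrected))

    def retrieve(self, task, k=2):
        q = embed(task)
        scored = sorted(self.examples, key=lambda e: cosine(e[0], q), reverse=True)
        return [(d, c) for _, d, c in scored[:k]]

def build_prompt(store, task, draft):
    """Inject past corrections into the context window for the next task."""
    examples = "\n".join(f"Draft: {d}\nApproved: {c}"
                         for d, c in store.retrieve(task))
    return f"Past corrections:\n{examples}\n\nTask: {task}\nDraft: {draft}"
```

The point of the design is that approvals are free training data: every time the user edits before approving, the system gets a new few-shot example for similar future tasks, with no fine-tuning run needed.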
It's really like good old engineering and product work.Swyx [00:31:52]: I assume you're hiring. We'll do a call for hiring at the end.Alessio [00:31:54]: Any other comments on the model side? When did you start feeling like the model was not a bottleneck anymore? Was it 4o? Was it 3.5? 3.5.Flo [00:32:04]: 3.5 Sonnet, definitely. I think 4o is overhyped, frankly. We don't use 4o. I don't think it's good for agentic behavior. Yeah, 3.5 Sonnet is when I started feeling that. And then prompt caching with 3.5 Sonnet cut the cost again. Just cut it in half. Yeah.Swyx [00:32:21]: Your prompts are... Some of the problems with agentic uses is that your prompts are kind of dynamic, right? Like for caching to work, you need the front prefix portion to be stable.Flo [00:32:32]: Yes, but we have this append-only ledger paradigm. So every node keeps appending to that ledger and every following node inherits all the context built up by all the previous nodes. And so we can just decide, like, hey, every X thousand nodes, we trigger prompt caching again.Swyx [00:32:47]: Oh, so you do it like programmatically, not all the time.Flo [00:32:50]: No, sorry. Anthropic manages that for us. But basically, it's like, because we keep appending to the prompt, the prompt caching works pretty well.Alessio [00:32:57]: We have this small podcaster tool that I built for the podcast and I rewrote all of our prompts because I noticed, you know, I was inputting stuff early on. I wonder how much more money OpenAI and Anthropic are making just because people don't rewrite their prompts to be like static at the top and like dynamic at the bottom.Flo [00:33:13]: I think that's the remarkable thing about what we're having right now. It's insane that these companies are routinely cutting their costs by two, four, five. Like, they basically just apply constraints. They want people to take advantage of these innovations.
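The append-only ledger is exactly the shape provider-side prompt caching wants: because nodes only ever append, each call's prompt is a superset of the previous one, so the cached prefix stays valid. A minimal sketch of the pattern (hypothetical structure; in practice the provider keys the cache on a stable prefix):

```python
class Ledger:
    """Append-only context: a stable prefix keeps the prompt cache warm."""
    def __init__(self, system_prompt):
        # Static part first; everything dynamic gets appended after it.
        self.entries = [system_prompt]

    def append(self, node_name, output):
        # Nodes may only append; rewriting earlier entries would invalidate
        # the cached prefix on every subsequent call.
        self.entries.append(f"[{node_name}] {output}")

    def render(self):
        return "\n".join(self.entries)

ledger = Ledger("You are a scheduling assistant.")
ledger.append("email_trigger", "New email from Bob about Q3 planning.")
ledger.append("calendar_lookup", "Free Tuesday 2-4pm.")
prompt_v1 = ledger.render()
ledger.append("draft_reply", "Proposed Tuesday 3pm to Bob.")
prompt_v2 = ledger.render()
# prompt_v2 starts with prompt_v1 verbatim, so the provider can reuse the cache.
assert prompt_v2.startswith(prompt_v1)
```

This is also Alessio's point about static-at-the-top, dynamic-at-the-bottom: the same prompt content, reordered so the mutable part comes last, is dramatically cheaper once caching kicks in.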
Very good.Swyx [00:33:25]: Do you have any other competitive commentary? Commentary? Dust, WordWare, Gumloop, Zapier? If not, we can move on.Flo [00:33:31]: No comment.Alessio [00:33:32]: I think the market is,Flo [00:33:33]: look, I mean, AGI is coming. All right, that's what I'm talking about.Swyx [00:33:38]: I think you're helping. Like, you're paving the road to AGI.Flo [00:33:41]: I'm playing my small role. I'm adding my small brick to this giant, giant, giant castle. Yeah, look, when it's here, we are going to, this entire category of software is going to create, it's going to sound like an exaggeration, but it is a fact it is going to create trillions of dollars of value in a few years, right? It's going to, for the first time, we're actually having software directly replace human labor. I see it every day in sales calls. It's like, Lindy is today replacing, like, we talk to even small teams. It's like, oh, like, stop, this is a 12-person team here. I guess we'll set up this Lindy for one or two days, and then we'll have to decide what to do with this 12-person team. And so, yeah. To me, there's this immense uncapped market opportunity. It's just such a huge ocean, and there's like three sharks in the ocean. I'm focused on the ocean more than on the sharks.Swyx [00:34:25]: So we're moving on to hot topics, like, kind of broadening out from Lindy, but obviously informed by Lindy. What are the high-order bits of good agent design?Flo [00:34:31]: The model, the model, the model, the model. I think people fail to truly, and me included, they fail to truly internalize the bitter lesson. So for the listeners out there who don't know about it, it's basically like, you just scale the model. Like, GPUs go brr, it's all that matters. I think it also holds for the cognitive architecture.
I used to be very cognitive architecture-pilled, and it was like, ah, there was like a critic, and there was like a generator, and all this, and then it's just like, GPUs go brr, like, just like let the model do its job. I think we're seeing it a little bit right now with O1. I'm seeing some tweets that say that the new 3.5 Sonnet is as good as O1, but with none of all the crazy...Swyx [00:35:09]: It beats O1 on some measures. On some reasoning tasks. On AIME, it's still a lot lower. Like, it's like 14 on AIME versus O1, it's like 83.Flo [00:35:17]: Got it. Right. But even O1 is still the model. Yeah.Swyx [00:35:22]: Like, there's no cognitive architecture on top of it.Flo [00:35:23]: You can just wait for O1 to get better.Alessio [00:35:25]: And so, as a founder, how do you think about that, right? Because now, knowing this, wouldn't you just wait to start Lindy? You know, you start Lindy, it's like 4K context, the models are not that good. It's like, but you're still kind of like going along and building and just like waiting for the models to get better. How do you today decide, again, what to build next, knowing that, hey, the models are going to get better, so maybe we just shouldn't focus on improving our prompt design and all that stuff and just build the connectors instead or whatever? Yeah.Flo [00:35:51]: I mean, that's exactly what we do. Like, all day, we always ask ourselves, oh, when we have a feature idea or a feature request, we ask ourselves, like, is this the kind of thing that just gets better while we sleep because models get better? I'm reminded, again, when we started this in 2022, we spent a lot of time because we had to around context pruning because 4,000 tokens is really nothing. You really can't do anything with 4,000 tokens. All that work was throwaway work. Like, now it's like it was for nothing, right?
Now we just assume that infinite context windows are going to be here in a year or something, a year and a half, and infinitely cheap as well, and dynamic compute is going to be here. Like, we just assume all of these things are going to happen, and so we really focus, our job to be done in the industry is to provide the input and output to the model. I really compare it all the time to the PC and the CPU, right? Apple is busy all day. They're not like a CPU wrapper. They have a lot to build, but they don't, well, now actually they do build the CPU as well, but leaving that aside, they're busy building a laptop. It's just a lot of work to build these things. It's interesting because, like,Swyx [00:36:45]: for example, another person that we're close to, Mihaly from Repl.it, he often says that the biggest jump for him was having a multi-agent approach, like the critique thing that you just said that you don't need, and I wonder when, in what situations you do need that and what situations you don't. Obviously, the simple answer is for coding, it helps, and you're not coding, except for, are you still generating code? In Lindy? Yeah.Flo [00:37:09]: No, we do. Oh, right. No, no, no, the cognitive architecture changed. We don't, yeah.Swyx [00:37:13]: Yeah, okay. For you, you're one shot, and you chain tools together, and that's it. And if the user really wantsFlo [00:37:18]: to have this kind of critique thing, you can also edit the prompt, you're welcome to. I have some of my Lindys, I've told them, like, hey, be careful, think step by step about what you're about to do, but that gives you a little bump for some use cases, but, yeah.Alessio [00:37:30]: What about unexpected model releases? So, Anthropic released computer use today. Yeah. I don't know if many people were expecting computer use to come out today.
Do these things make you rethink how to design, like, your roadmap and things like that, or are you just like, hey, look, whatever, that's just, like, a small thing in their, like, AGI pursuit, that, like, maybe they're not even going to support, and, like, it's still better for us to build our own integrations into systems and things like that. Because maybe people will say, hey, look, why am I building all these API integrationsFlo [00:38:02]: when I can just do computer use and never go to the product? Yeah. No, I mean, we did take into account computer use. We were talking about this a year ago or something, like, we've been talking about it as part of our roadmap. It's been clear to us that it was coming, My philosophy about it is anything that can be done with an API must be done by an API or should be done by an API for a very long time. I think it is dangerous to be overly cavalier about improvements of model capabilities. I'm reminded of iOS versus Android. Android was built on the JVM. There was a garbage collector, and I can only assume that the conversation that went down in the engineering meeting room was, oh, who cares about the garbage collector? Anyway, Moore's law is here, and so that's all going to go to zero eventually. Sure, but in the meantime, you are operating on a 400 MHz CPU. It was like the first CPU on the iPhone 1, and it's really slow, and the garbage collector is introducing a tremendous overhead on top of that, especially a memory overhead. For the longest time, and it's really only been recently that Android caught up to iOS in terms of how smooth the interactions were, but for the longest time, Android phones were significantly slowerSwyx [00:39:07]: and laggierFlo [00:39:08]: and just not feeling as good as iOS devices. 
Look, when you're talking about orders of magnitude of difference in terms of performance and reliability, which is what we are talking about when we're talking about API use versus computer use, then you can't ignore that, right? And so I think we're going to be in an API use world for a while.Swyx [00:39:27]: O1 doesn't have API use today. It will have it at some point, and it's on the roadmap. There is a future in which OpenAI goes much harder after your business, your market, than it is today. Like, ChatGPT, it's its own business. All they need to do is add tools to the ChatGPT, and now they're suddenly competing with you. And by the way, they have a GPT store where a bunch of people have already configured their tools to fit with them. Is that a concern?Flo [00:39:56]: I think even the GPT store, in a way, like the way they architect it, for example, their plug-in system is actually great for us because we can also use the plug-ins. It's very open. Now, again, I think it's going to be such a huge market. I think there's going to be a lot of different jobs to be done. I know they have a huge enterprise offering and stuff, but today, ChatGPT is a consumer app. And so, the sort of flow detail I showed you, this sort of workflow, this sort of use cases that we're going after, which is like, we're doing a lot of lead generation and lead outreach and all of that stuff. Same with meeting recording: Lindy right now joins your Zoom meetings and takes notes, all of that stuff.Swyx [00:40:34]: I don't see that so farFlo [00:40:35]: on the OpenAI roadmap.Swyx [00:40:36]: Yeah, but they do have an enterprise team that we talk to You're hiring GMs?Flo [00:40:42]: We did.Swyx [00:40:43]: It's a fascinating way to build a business, right? Like, what should you, as CEO, be in charge of? And what should you basically hireFlo [00:40:52]: a mini CEO to do? Yeah, that's a good question. I think that's also something we're figuring out.
The GM thing was inspired from my days at Uber, where we hired one GM per city or per major geo area. We had like all GMs, regional GMs and so forth. And yeah, Lindy is so horizontal that we thought it made sense to hire GMs to own each vertical and the go-to market of the vertical and the customization of the Lindy templates for these verticals and so forth. What should I own as a CEO? I mean, the canonical reply here is always going to be, you know, you own the fundraising, you own the culture, you own the... What's the rest of the canonical reply? The culture, the fundraising.Swyx [00:41:29]: I don't know,Flo [00:41:30]: products. Even that, eventually, you do have to hand out. Yes, the vision, the culture, and the foundation. Well, you've done your job as a CEO. In practice, obviously, yeah, I mean, all day, I do a lot of product work still and I want to keep doing product work for as long as possible.Swyx [00:41:48]: Obviously, like you're recording and managing the team. Yeah.Flo [00:41:52]: That one feels like the most automatable part of the job, the recruiting stuff.Swyx [00:41:56]: Well, yeah. You saw myFlo [00:41:59]: design your recruiter here. Relationship between Factorio and building Lindy. We actually very often talk about how the business of the future is like a game of Factorio. Yeah. So, the interface is like Slack and you've got like 5,000 Lindys in the sidebar and your job is to somehow manage your 5,000 Lindys. And it's going to be very similar to company building because you're going to look for like the highest leverage way to understand what's going on in your AI company and understand what levers you have to make an impact in that company. So, I think it's going to be very similar to like a human company except it's going to go infinitely faster.
Today, in a human company, you could have a meeting with your team and you're like, oh, I'm going to build a facility and, you know, now it's like, okay,Swyx [00:42:40]: boom, I'm going to spin up 50 designers. Yeah. Like, actually, it's more important that you can clone an existing designer that you know works because the hiring process, you cannot clone someone because every new person you bring in is going to have their own tweaksFlo [00:42:54]: and you don't want that. Yeah.Swyx [00:42:56]: That's true. You want an army of mindless dronesFlo [00:42:59]: that all work the same way.Swyx [00:43:00]: The reason I bring this, bring Factorio up as well is one, Factorio: Space Age just came out. Apparently, a whole bunch of people stopped working. I tried out Factorio. I never really got that much into it. But the other thing was, you had a tweet recently about how the sort of intentional top-down design was not as effective as just build. Yeah. Just ship.Flo [00:43:21]: I think people read a little bit too much into that tweet. It went weirdly viral. I was like, I did not intend it as a giant statement online.Swyx [00:43:28]: I mean, you notice you have a pattern with this, right? Like, you've done this for eight years now.Flo [00:43:33]: You should know. I legit was just hearing an interesting story about the Factorio game I had. And everybody was like, oh my God, so deep. I guess this explains everything about life and companies. There is something to be said, certainly, about focusing on the constraint. And I think it is Patrick Collison who said, people underestimate the extent to which moonshots are just one pragmatic step taken after the other. And I think as long as you have some inductive bias about, like, some loose idea about where you want to go, I think it makes sense to follow a sort of greedy search along that path. I think planning and organizing is important. And having order is important.Swyx [00:44:05]: I'm wrestling with that.
There's two ways I encountered it recently. One with Lindy. When I tried out one of your automation templates and one of them was quite big and I just didn't understand it, right? So, like, it was not as useful to me as a small one that I can just plug in and see all of. And then the other one was me using Cursor. I was very excited about O1 and I just up front stuffed everything I wanted to do into my prompt and expected O1 to do everything. And it got itself into a huge jumbled mess and it was stuck. It was really... There was no amount... I wasted, like, two hours on just, like, trying to get out of that hole. So I threw away the code base, started small, switched to Claude Sonnet and built up something working and just added to it over time and it just worked. And to me, that was the Factorio sentiment, right? Maybe I'm one of those fanboys that's just, like, obsessing over the depth of something that you just randomly tweeted out. But I think it's true for company building, for Lindy building, for coding.Flo [00:45:02]: I don't know. I think it's fair and I think, like, you and I talked about there's the Tuft & Metal principle and there's this other... Yes, I love that. There's the... I forgot the name of this other blog post but it's basically about this book Seeing Like a State that talks about the need for legibility and people who optimize the system for its legibility and anytime you make a system... So legible is basically more understandable. Anytime you make a system more understandable from the top down, it performs less well from the bottom up. And it's fine but you should at least make this trade-off with your eyes wide open. You should know, I am sacrificing performance for understandability, for legibility. And in this case, for you, it makes sense. It's like you are actually optimizing for legibility. You do want to understand your code base but in some other cases it may not make sense.
Sometimes it's better to leave the system alone and let it be its glorious, chaotic, organic self and just trust that it's going to perform well even though you don't understand it completely.Swyx [00:45:55]: It does remind me of a common managerial issue or dilemma which you experienced in the small scale of Lindy where, you know, do you want to organize your company by functional sections or by products or, you know, whatever the opposite of functional is. And you tried it one way and it was more legible to you as CEO but actually it stopped working at the small level. Yeah.Flo [00:46:17]: I mean, one very small example, again, at a small scale is we used to have everything on Notion. And for me, as founder, it was awesome because everything was there. The roadmap was there. The tasks were there. The postmortems were there. And so, the postmortem was linkedSwyx [00:46:31]: to its task.Flo [00:46:32]: It was optimized for you. Exactly. And so, I had this, like, one pane of glass and everything was on Notion. And then the team, one day,Swyx [00:46:39]: came to me with pitchforksFlo [00:46:40]: and they really wanted to implement Linear. And I had to bite my fist so hard. I was like, fine, do it. Implement Linear. Because I was like, at the end of the day, the team needs to be able to self-organize and pick their own tools.Alessio [00:46:51]: Yeah. But it did make the company slightly less legible for me. Another big change you had was going away from remote work. Every other month, the discussion comes up again. What was that discussion like? How did your feelings change? Was there kind of like a threshold of employees and team size where you felt like, okay, maybe that worked. Now it doesn't work anymore. And how are you thinking about the futureFlo [00:47:12]: as you scale the team? Yeah. So, for context, I used to have a business called TeamFlow. The business was about building a virtual office for remote teams.
And so, being remote was not merely something we did. It was, I was banging the remote drum super hard and helping companies to go remote. And so, frankly, in a way, it's a bit embarrassing for me to do a 180 like that. But I guess, when the facts changed, I changed my mind. What happened? Well, I think at first, like everyone else, we went remote by necessity. It was like COVID and you've got to go remote. And on paper, the gains of remote are enormous. In particular, from a founder's standpoint, being able to hire from anywhere is huge. Saving on rent is huge. Saving on commute is huge for everyone and so forth. But then, look, we're all here. It's like, it is really making it much harder to work together. And I spent three years of my youth trying to build a solution for this. And my conclusion is, at least we couldn't figure it out and no one else could. Zoom didn't figure it out. We had like a bunch of competitors. Like, Gathertown was one of the bigger ones. We had dozens and dozens of competitors. No one figured it out. I don't know that software can actually solve this problem. The reality of it is, everyone just wants to get off the darn Zoom call. And it's not a good feeling to be in your home office if you're even going to have a home office all day. It's harder to build culture. It's harder to get in sync. I think software is peculiar because it's like an iceberg. It's like the vast majority of it is submerged underwater. And so, the quality of the software that you ship is a function of the alignment of your mental models about what is below that waterline. Can you actually get in sync about what it is exactly fundamentally that we're building? What is the soul of our product? And it is so much harder to get in sync about that when you're remote. And then you waste time in a thousand ways because people are offline and you can't get a hold of them or you can't share your screen. It's just like you feel like you're walking in molasses all day. 
And eventually, I was like, okay, this is it. We're not going to do this anymore.
Swyx [00:49:03]: Yeah. I think that is the current builder San Francisco consensus here. Yeah. But I still have a big... One of my big heroes as a CEO is Sid Sijbrandij from GitLab.
Flo [00:49:14]: Mm-hmm.
Swyx [00:49:15]: Matt Mullenweg
Flo [00:49:16]: used to be a hero.
Swyx [00:49:17]: But these people run thousand-person remote businesses. The main idea is that at some company
Exploring Rust for Embedded Systems with Philip Markgraf

In this episode of the Agile Embedded Podcast, hosts Jeff Gable and Luca Ingianni are joined by Philip Markgraf, an experienced software developer and technical leader, to discuss the use of Rust in embedded systems. Philip shares his background in C/C++ development, his journey with Rust, and the advantages he discovered while using it in a large development project. The conversation touches on memory safety, efficient resource management, the benefits of Rust's type system, and the supportive Rust community. They also explore the practical considerations for adopting Rust, including its tooling, ecosystem, and applicability to Agile development. The episode concludes with Philip offering resources for learning Rust and connecting with its community.

00:00 Introduction and Guest Welcome
00:26 Philip's Journey with Rust
01:01 The Evolution of Programming Languages
02:27 Evaluating Programming Languages for Embedded Systems
06:13 Adopting Rust for a Green Energy Project
08:57 Benefits of Using Rust
11:24 Rust's Memory Management and Borrow Checker
15:50 Comparing Rust and C/C++
19:32 Industry Trends and Future of Rust
22:30 Rust in Cloud Computing and Embedded Systems
23:11 Vendor-Supplied Driver Support and ARM Processors
24:09 Open Source Hardware Abstraction Libraries
25:52 Advantages of Rust's Memory Model
29:32 Test-Driven Development in Rust
30:35 Refactoring and Tooling in Rust
31:14 Simplicity and Coding Standards in Rust
32:14 Error Messages and Linting Tools
33:32 Sustainable Pace and Developer Satisfaction
36:15 Adoption and Transition to Rust
39:37 Hiring Rust Developers
42:23 Conclusion and Resources

Resources:
Phil's LinkedIn
The Rust Language
Rust chat rooms (at the Awesome Embedded Rust Resources List)
The Ferrocene functional-safety qualified Rust compiler

You can find Jeff at https://jeffgable.com.
You can find Luca at https://luca.engineer.
Want to join the Agile Embedded Slack? Click here
Video Episode: https://youtu.be/7et_7YkwAHs In today’s episode, we dive into the alarming rise of malware delivery through fake job applications targeting HR professionals, specifically focusing on the More_eggs backdoor. We also discuss critical gaming performance issues in Windows 11 24H2 and the vulnerabilities in DrayTek routers that expose over 700,000 devices to potential hacking. Lastly, we address the urgent exploitation of a remote code execution flaw in Zimbra email servers, emphasizing the need for immediate updates to safeguard against evolving threats. Links to articles: 1. https://thehackernews.com/2024/10/fake-job-applications-deliver-dangerous.html 2. https://www.bleepingcomputer.com/news/microsoft/microsoft-warns-of-windows-11-24h2-gaming-performance-issues/ 3. https://thehackernews.com/2024/10/alert-over-700000-draytek-routers.html 4. https://www.bleepingcomputer.com/news/security/critical-zimbra-rce-flaw-exploited-to-backdoor-servers-using-emails/ Timestamps 00:00 – Introduction 01:14 – Zimbra RCE Vulnerability 02:17 – 700k DrayTek Routers Vulnerable 04:36 – Recruiters Targeted with Malware 06:14 – Microsoft blocks updates for gamers 1. What are today’s top cybersecurity news stories? 2. How is More_eggs malware targeting HR professionals? 3. What vulnerabilities exist in DrayTek routers? 4. Why did Microsoft block Windows 11 24H2 upgrades? 5. What is the impact of the Zimbra RCE flaw? 6. How do fake job applications spread malware? 7. What security measures can protect against More_eggs malware? 8. What are the latest gaming issues with Windows 11? 9. How can DrayTek router vulnerabilities be mitigated? 10. What are the latest tactics used by cybercriminals in email attacks? 
More_eggs, Golden Chickens, spear-phishing, credential theft, Microsoft, Windows 11, Asphalt 8, Intel Alder Lake+, DrayTek, vulnerabilities, exploits, cyber attackers, Zimbra, RCE, vulnerability, exploitation

# Intro

HR professionals are under siege as a spear-phishing campaign disguised as fake job applications delivers the lethal More_eggs malware, leading to potentially devastating credential theft. Powered by the notorious Golden Chickens group, this malware-as-a-service targets recruiters with chilling precision. **How are recruitment officers unknowingly downloading malicious files, and what methods are threat actors using to bypass security measures?**

“Microsoft is blocking Windows 11 24H2 upgrades on some systems due to critical gaming performance issues like Asphalt 8 crashes and Easy Anti-Cheat blue screens. The company is scrambling to resolve these problems that uniquely impact devices with Intel Alder Lake+ processors.” How can gamers with affected systems work around these issues until Microsoft releases a fix?

Over 700,000 DrayTek routers are currently vulnerable to 14 newly discovered security flaws, with some critical exploits that could be used to take full control of the devices and infiltrate enterprise networks. Despite patches being released, many routers remain exposed, creating a lucrative target for cyber attackers. How can these vulnerabilities impact businesses that rely on DrayTek routers for network security?

Hackers are leveraging a critical Zimbra RCE vulnerability to backdoor servers through specially crafted emails that execute malicious commands, revealing widespread exploitation just days after a proof-of-concept was published. Notable security experts warn of attackers embedding harmful code in the email’s CC field, which the Zimbra server inadvertently executes. How are attackers camouflaging their malicious emails to slip through security measures unnoticed?

# Stories

Welcome back to our podcast.
Today, we’re talking about a new cyber threat targeting HR professionals. Researchers at Trend Micro have uncovered a spear-phishing campaign where fake job applications deliver a JavaScript backdoor called More_eggs to recruiters. This malware, sold as malware-as-a-service by a group known as Golden Chickens, can steal credentials for online banking, email accounts, and IT admin accounts. What’s unique this time is that attackers are using spear-phishing emails to build trust, as observed in a case targeting a talent search lead in engineering. The attack sequence involves downloading a ZIP file from a deceptive URL, leading to the execution of the More_eggs backdoor. This malware probes the host system, connects to a command-and-control server, and can download additional malicious payloads. Trend Micro’s findings highlight the persistent and evolving nature of these attacks, which are difficult to attribute because multiple threat actors can use the same toolkits. The latest insights also connect these activities to known cybercrime groups like FIN6. Stay vigilant, especially if you work in HR or recruitment. 1. **Spear-Phishing**: – **Definition**: A targeted phishing attack aiming at specific individuals or companies, typically using information about the victim to make fraudulent messages more convincing. – **Importance**: This method is specifically dangerous because it can trick even tech-savvy users by exploiting personalized details, leading to significant security breaches like credential theft. 2. **More_eggs**: – **Definition**: A JavaScript backdoor malware sold as a malware-as-a-service (MaaS) with capabilities to siphon credentials and provide unauthorized access to infected systems. – **Importance**: Due to its ability to latently steal sensitive information and its widespread use by various e-crime groups, More_eggs represents a significant threat to corporate cybersecurity. 3. 
**Malware-as-a-Service (MaaS)**: – **Definition**: A business model where malicious software is developed and sold to cybercriminals who can then use it to conduct attacks. – **Importance**: This model lowers the barrier of entry for cybercriminals, allowing even those with limited technical skills to launch sophisticated attacks using pre-made malware. 4. **Golden Chickens**: – **Definition**: A cybercriminal group (also known as Venom Spider) attributed with developing and distributing the More_eggs malware. – **Importance**: Understanding threat actors like Golden Chickens can help cybersecurity professionals anticipate and defend against specific threat tactics. 5. **Command-and-Control (C2) Server**: – **Definition**: A server used by threat actors to maintain communications with compromised systems within a target network to execute commands and control malware. – **Importance**: Disrupting C2 servers is crucial because it can cut off the attacker's control over their malware, mitigating the threat. 6. **LNK File**: – **Definition**: A shortcut file in Windows that points to another file or executable. – **Importance**: Misuse of LNK files in phishing campaigns can lead to automated execution of malicious payloads, making them an effective vector for malware distribution. 7. **PowerShell**: – **Definition**: A task automation framework from Microsoft consisting of a command-line shell and scripting language. – **Importance**: PowerShell is often used by attackers to execute and conceal malicious scripts due to its powerful capabilities and integration with Windows. 8. **Tactics, Techniques, and Procedures (TTPs)**: – **Definition**: The behavior patterns or methodologies used by cyber threat actors to achieve their goals. – **Importance**: Identifying TTPs helps security professionals understand, detect, and mitigate specific attack strategies used by threat actors. 9. 
**Obfuscation**: – **Definition**: The process of deliberately making code or data difficult to understand or interpret. – **Importance**: Obfuscation is commonly used by malware developers to conceal malicious activities and bypass security mechanisms. 10. **Cryptocurrency Miner**: – **Definition**: Software used to perform the computational work required to validate and add transactions to a blockchain ledger in exchange for cryptocurrency rewards. – **Importance**: Unauthorized cryptocurrency mining (cryptojacking) can misuse system resources for financial gain, leading to performance degradation and security vulnerabilities. — On today’s tech update: Microsoft has blocked upgrades to Windows 11 version 24H2 on certain systems due to gaming performance issues. Players of Asphalt 8 may encounter game crashes, while some systems running Easy Anti-Cheat might experience blue screens. These problems mainly affect devices with Intel Alder Lake+ processors. Until Microsoft resolves these issues, impacted users are advised not to manually upgrade using tools like the Media Creation Tool. Microsoft is working on fixes and will include them in upcoming updates. 1. **Windows 11 24H2**: A version of Microsoft’s Windows 11 operating system, released in the second half (H2) of 2024. It is significant because it represents Microsoft’s ongoing update cycle aimed at improving system performance and user experience, though it also highlights the challenges of software compatibility and stability. 2. **Asphalt 8 (Airborne)**: A popular racing video game often used for showcasing graphical and processing capabilities of devices. Its relevance lies in exposing potential software and hardware compatibility issues when new operating systems are released. 3. **Easy Anti-Cheat**: A software tool designed to detect and prevent cheating in multiplayer games. 
It is crucial for maintaining fair play and integrity in online gaming environments but can pose compatibility challenges with system updates. 4. **Blue Screen of Death (BSoD)**: An error screen displayed on Windows computers following a system crash. It is important as it signals serious software or hardware issues that could affect system stability and data integrity. 5. **Intel Alder Lake+ processors**: A generation of Intel’s microprocessors known for their hybrid architecture design. Understanding these chips is important for recognizing which systems might be more susceptible to the reported compatibility issues. 6. **vPro platform**: A set of Intel technologies aimed at enhancing business security and manageability. It’s critical to cybersecurity professionals because it allows for hardware-level encryption and more robust security management, but compatibility with OS updates can be problematic. 7. **MEMORY_MANAGEMENT error**: A specific type of error indicating system memory management problems, often leading to system crashes. It is crucial for cybersecurity and IT professionals as it affects the stability and reliability of a system. 8. **Compatibility holds (Safeguard IDs)**: Mechanisms employed by Microsoft to prevent system upgrades when known issues are detected. These are essential for protecting users from potential system failures and ensuring a stable computing environment. 9. **Media Creation Tool**: A Microsoft utility used for installing or upgrading Windows OS. It's important for IT professionals as it provides a means to manually deploy Windows updates, though it highlights the risks of bypassing automatic update safeguards. 10. **KB5043145 (Preview Update)**: A specific Windows update known to cause issues such as reboot loops and connection failures. Understanding these updates is crucial for maintaining system stability and ensuring that deployed systems are free from vulnerabilities and bugs. 
— In a recent cybersecurity alert, over 700,000 DrayTek routers have been identified as vulnerable to hacking due to 14 newly discovered security flaws. These vulnerabilities, found in both residential and enterprise routers, include two rated critical, with one receiving the maximum CVSS score of 10.0. This critical flaw involves a buffer overflow in the Web UI, potentially allowing remote code execution. Another significant vulnerability is OS command injection via communication binaries. The report highlights the widespread exposure of these routers’ web interfaces online, creating a tempting target for attackers, particularly in the U.S. DrayTek has released patches to address these vulnerabilities, urging users to apply updates, disable unnecessary remote access, and utilize security measures like ACLs and two-factor authentication. This development coincides with international cybersecurity agencies offering guidance to secure critical infrastructure, emphasizing the importance of safety, protecting valuable OT data, secure supply chains, and the role of people in cybersecurity. 1. **Vulnerability**: A weakness in a system or software that can be exploited by hackers. – **Importance**: Identifying vulnerabilities is crucial in cyber security because it helps protect systems from attacks. 2. **Router**: A device that routes data from one network to another, directing traffic on the internet. – **Importance**: Routers are essential for internet connectivity and their security is vital to prevent unauthorized access to networks. 3. **Buffer Overflow**: A coding error where a program writes more data to a buffer than it can hold, potentially leading to system crashes or unauthorized code execution. – **Importance**: Buffer overflows are common vulnerabilities that can be exploited to gain control of a system. 4. **Remote Code Execution (RCE)**: A type of vulnerability that allows an attacker to execute code on a remote system without authorization. 
– **Importance**: RCE vulnerabilities are highly critical as they enable attackers to take over affected systems. 5. **Cross-site Scripting (XSS)**: A web security vulnerability that allows attackers to inject malicious scripts into content from otherwise trusted websites. – **Importance**: XSS can be used to steal information, deface websites, and spread malware. 6. **Adversary-in-the-Middle (AitM) Attack**: An attack where the attacker secretly intercepts and possibly alters the communication between two parties who believe they are directly communicating with each other. – **Importance**: AitM attacks can lead to data theft, man-in-the-middle proxy attacks, and unauthorized access to sensitive information. 7. **Denial-of-Service (DoS)**: An attack intended to shut down a machine or network, making it inaccessible to its intended users. – **Importance**: DoS attacks disrupt the availability of services and can cause significant downtime and financial loss. 8. **Access Control List (ACL)**: A list of permissions attached to an object that specifies which users or system processes can access the object and what operations they can perform. – **Importance**: ACLs are crucial for implementing security policies to control access to resources. 9. **Two-Factor Authentication (2FA)**: A security process in which the user provides two different authentication factors to verify themselves. – **Importance**: 2FA improves security by adding an additional layer of verification, making it harder for attackers to gain unauthorized access. 10. **Operational Technology (OT)**: Hardware and software that detects or causes changes through direct monitoring and control of physical devices, processes, and events in an enterprise. – **Importance**: OT security is critical for the functioning and safety of critical infrastructure systems, such as those in manufacturing, power generation, and transportation. 
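The buffer-overflow entry in the glossary above describes the root cause behind the critical CVSS 10.0 DrayTek flaw: a program writing more data into a fixed-size buffer than it can hold. A minimal Rust sketch (illustrative only, unrelated to DrayTek's actual firmware, which is written in C) shows how a bounds-checked copy turns that memory-corruption moment into an ordinary, recoverable error:

```rust
// Illustrative only -- not DrayTek code. A bounds-checked copy rejects
// oversized input instead of silently overwriting adjacent memory the
// way an unchecked C strcpy into a fixed-size buffer would.
fn copy_into_buffer(src: &[u8]) -> Result<[u8; 8], &'static str> {
    let mut buf = [0u8; 8];
    if src.len() > buf.len() {
        // In unchecked code this is the moment corruption happens;
        // here it becomes an error the caller can handle.
        return Err("input exceeds buffer size");
    }
    buf[..src.len()].copy_from_slice(src);
    Ok(buf)
}

fn main() {
    assert!(copy_into_buffer(b"GET /").is_ok());
    assert!(copy_into_buffer(b"AAAAAAAAAAAAAAAAAAAAAAAA").is_err());
    println!("oversized input rejected, no overflow possible");
}
```

The same discipline is what exploit mitigations and memory-safe languages enforce mechanically; in a web UI that parses attacker-supplied requests, the missing length check is the entire vulnerability.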
— Today, we’re discussing a critical remote code execution (RCE) vulnerability in Zimbra email servers, tracked as CVE-2024-45519, which hackers are actively exploiting. This flaw allows attackers to trigger malicious commands simply by sending specially crafted emails, which are processed by Zimbra’s post journal service. First flagged by Ivan Kwiatkowski of HarfangLab and confirmed by Proofpoint, the exploit involves spoofed emails with commands hidden in the “CC” field. Once processed, these emails deliver a webshell to the server, giving attackers full access for data theft or further network infiltration. A proof-of-concept exploit was released by Project Discovery on September 27, prompting immediate malicious activity. Administrators are urged to apply security updates released in Zimbra’s latest versions—9.0.0 Patch 41 and later—or disable the vulnerable postjournal service and ensure secure network configurations to mitigate the threat. Stay vigilant and update your Zimbra servers immediately to protect against this critical vulnerability. 1. **Remote Code Execution (RCE)** – **Definition**: A type of security vulnerability that enables attackers to run arbitrary code on a targeted server or computer. – **Importance**: This flaw can be exploited to gain full control over the affected machine, leading to data theft, unauthorized access, and further network penetration. 2. **Zimbra** – **Definition**: An open-source email, calendaring, and collaboration platform. – **Importance**: Popular among organizations for its integrated communication tools, making it a significant target for cyberattacks due to the sensitive data it handles. 3. **SMTP (Simple Mail Transfer Protocol)** – **Definition**: A protocol used to send and route emails across networks. – **Importance**: Integral to email services, its exploitation can deliver malicious content to servers and users, forming a vector for cyber-attacks. 4. 
**Postjournal Service** – **Definition**: A service within Zimbra used to parse incoming emails over SMTP. – **Importance**: Its vulnerability can be leveraged to execute arbitrary commands, making it a crucial attack point for hackers. 5. **Proof-of-Concept (PoC)** – **Definition**: A demonstration exploit showing that a vulnerability can be successfully taken advantage of. – **Importance**: PoC exploits serve as proof that theoretical vulnerabilities are practical and dangerous, necessitating urgent security responses. 6. **Base64 Encoding** – **Definition**: A method of encoding binary data into an ASCII string format. – **Importance**: Often used to encode commands within emails or other data streams to evade basic security detections. 7. **Webshell** – **Definition**: A type of malicious script that provides attackers with remote access to a compromised server. – **Importance**: Webshells afford attackers sustained control over a server, allowing for ongoing data theft, disruptions, and further exploits. 8. **CVE (Common Vulnerabilities and Exposures)** – **Definition**: A list of publicly known cybersecurity vulnerabilities and exposures, identified by unique CVE IDs. – **Importance**: Helps standardize and track security issues, facilitating communication and management of vulnerabilities across the cybersecurity community. 9. **Patch** – **Definition**: An update to software aimed at fixing security vulnerabilities or bugs. – **Importance**: Patching vulnerabilities is critical for protecting systems from attacks exploiting known security flaws. 10. **Execvp Function** – **Definition**: A function in Unix-like operating systems that executes a program directly with an argument vector, without invoking a shell. – **Importance**: By replacing shell-invoking functions like ‘popen,’ ‘execvp’ keeps shell metacharacters in attacker-controlled input from being interpreted as commands, thus enhancing system security. —
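The ‘popen’ versus ‘execvp’ distinction in the glossary above is the heart of the Zimbra fix: argv-style execution never hands attacker text to a shell. A minimal Rust sketch (the function name `log_subject` is hypothetical; Rust's `std::process::Command` builds an argument vector directly, much like `execvp`) illustrates why injected shell metacharacters stay inert:

```rust
use std::process::Command;

// Hypothetical handler for attacker-controlled text, e.g. a CC-field value.
// An argv-style call (execvp-like) passes the text as one inert argument;
// no shell parses it, so `;` and `$(...)` cannot spawn a second command.
// The vulnerable popen-style pattern would be the shell equivalent:
//     Command::new("sh").arg("-c").arg(format!("echo {}", user_input))
// where the same input WOULD be interpreted by the shell.
fn log_subject(user_input: &str) -> String {
    let out = Command::new("echo")
        .arg(user_input)
        .output()
        .expect("failed to run echo");
    String::from_utf8_lossy(&out.stdout).trim().to_string()
}

fn main() {
    // The injection attempt survives only as literal text.
    let echoed = log_subject("report.pdf; rm -rf /tmp/victim");
    assert_eq!(echoed, "report.pdf; rm -rf /tmp/victim");
    println!("input stayed inert: {echoed}");
}
```

This is a sketch of the general technique, not Zimbra's actual patch; the principle is simply that command and arguments travel as separate strings all the way to the kernel.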
In this episode, the hosts discuss the implications of AI in cybersecurity, focusing on a recent article about hackers planting false memories in ChatGPT to exfiltrate user data. They explore the mechanics of prompt injection attacks, the importance of user awareness regarding AI memory management, and the potential security risks associated with AI technologies. The conversation emphasizes the need for guardrails to prevent misinformation and protect sensitive data, while also considering the future of AI in cybersecurity. Please LISTEN
Rust meets Linux in a clash of coding cultures. Why some developers are resisting, and where things go from here.

Sponsored By:
Core Contributor Membership: Take $1 a month off your membership for a lifetime!
Tailscale: Tailscale is a programmable networking software that is private and secure by default - get it free on up to 100 devices!
1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps.

Support LINUX Unplugged
Links:
There are probably only a handful of open-source projects that had a bigger impact on the world than Mozilla Thunderbird. The email client has been around for over two decades and has been a staple for many users (me included). Dealing with a legacy codebase that serves millions of users is no easy feat. The team at MZLA, a subsidiary of Mozilla, has been working hard to modernize the core of Thunderbird by writing new parts in Rust. In this episode, I talk to Brendan Abolivier, a software engineer at MZLA, about the challenges of working on a legacy codebase, the new Rust-based Exchange protocol support, which is the first new protocol in Thunderbird in over a decade, and the future of Thunderbird.
This time there's hardly any politics, but a bit of journalism criticism; then it's about winding down, memory management, and a trade-fair preview.
Rust changed the discussion around memory management - this week's guest hopes to push that discussion even further.
This week we're joined by Evan Ovadia, creator of the Vale programming language and collector of memory management techniques from far and wide. He takes us through his most important ones, including linear types, generational references and regions, to see what Evan hopes the future of memory management will look like.
If you've been interested in Rust's borrow checker and want more (or want different!) then Evan has some big ideas for you to sink your teeth into.
–
Vale: https://vale.dev/
The Vale Discord: https://discord.com/invite/SNB8yGH
Evan's Blog: https://verdagon.dev/home
Evan's 7DRL Entry: https://verdagon.dev/blog/higher-raii-7drl
7DRL: https://7drl.com/
https://verdagon.dev/grimoire/grimoire
What Colour Is Your Function?: https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/
42, the language: https://forty2.is/
Verona Language: https://www.microsoft.com/en-us/research/project/project-verona/
Austral language: https://austral-lang.org/
Surely You're Joking, Mr Feynman! (book): https://www.goodreads.com/book/show/35167685-surely-you-re-joking-mr-feynman
Evan on Twitter: https://twitter.com/verdagon
Find Evan in the Vale Discord: https://discord.com/invite/SNB8yGH
Kris on Mastodon: http://mastodon.social/@krisajenkins
Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/
Kris on Twitter: https://twitter.com/krisajenkins
–
#software #programming #podcast #valelang
In today's episode, we bring back Teej DeVries, the first guest ever on our podcast! Today we are discussing Teej's new course on Boot.dev on Memory Management. In this talk, we discuss the importance of memory, why Go is a C-programmer minded language, garbage collectors, among other technical topics. We also talk about why understanding the fundamentals is crucial in helping you increase your learning ability, how different it is hiring juniors and seniors, and why being curious gives you the advantage over everyone else.
Learn back-end development - https://boot.dev
Listen on your favorite podcast player: https://www.backendbanter.fm
Teej's Twitter: https://twitter.com/teej_dv
Teej's Channel: https://www.youtube.com/c/tjdevries
(00:00) - Introduction
(00:57) - Teej will have a course on Boot.dev!
(01:35) - Why Memory Management is so important
(05:17) - Go is a C-programmer minded language
(07:00) - 25% off on boot.dev!
(07:22) - How far in the curriculum will Teej's course be?
(09:13) - Should you learn Rust or C first?
(12:43) - Dropping out of college
(13:49) - You should know WHY you're doing something
(15:29) - Self motivated learning
(18:52) - Internal Boot.dev tooling for this course
(21:59) - OCaml's garbage collector
(23:55) - Functional language, performance and immutability constraints
(30:24) - Roc programming language
(32:42) - Wasm (WebAssembly) vs Machine Code
(36:07) - C's Standard Library vs Go's Standard Library
(37:01) - Installing dependencies
(41:09) - C as an educational tool
(43:27) - You have to think when using C
(45:42) - Enterprise machines are weaker compared to local machines
(47:43) - Why this course is before the Job Search chapter
(49:44) - Being curious gives you the advantage
(51:16) - Every program uses memory, so we should have at least some level of understanding about it
(54:28) - Just being able to speak like an engineer goes a long way
(57:14) - There are still a ton of jobs that involve embedded systems, not just WebDev
(01:00:13) - Be eager to learn
(01:01:51) - Hiring Seniors vs Hiring Juniors
(01:02:50) - You learn better if you understand fundamentals
(01:04:10) - Analogy to Dota 2
(01:08:54) - Where to find Teej
In this video I talk about Apache Pulsar with Matteo Merli, CTO at StreamNative. This episode will give you good insight into how Apache Pulsar works and, more importantly, how it differs from the most popular pub/sub and streaming platform, Apache Kafka. Things like: What enables the possibility of 1 million topics? Why is rebalancing not required? How does the decoupled storage and compute architecture work? How does it use the concept of subscriptions to avoid retaining data unnecessarily? And much more...

Chapters:
00:00 Introduction and Guest Introduction
00:08 Understanding Apache Pulsar and its Origin
01:22 The Problem Apache Pulsar was Designed to Solve
02:35 The Evolution of Apache Pulsar
05:15 Understanding Basic Concepts of Apache Pulsar
09:27 Deep Dive into Apache Pulsar's Architecture
21:16 Understanding the Flow of Data in Apache Pulsar
28:54 Understanding Subscriptions in Apache Pulsar
31:57 Understanding End-to-End Latency and Subscription Creation
32:32 Broker's Role and Handling Metadata
33:05 Memory Management and Consumer Handling
34:07 Message Processing and Flow Control
34:32 Message Storage and Retrieval
36:00 Comparing Pulsar with Kafka
43:52 Understanding Multi-Tenancy in Pulsar
49:17 Exploring Tiered Storage and Future Developments

Important links:
StreamNative: https://streamnative.io/
Apache Pulsar: https://pulsar.apache.org/
Matteo Merli: https://twitter.com/merlimat

===============================================================================
For a discount on the courses below:
Appsync: https://appsyncmasterclass.com/?affiliateId=41c07a65-24c8-4499-af3c-b853a3495003
Testing serverless: https://testserverlessapps.com/?affiliateId=41c07a65-24c8-4499-af3c-b853a3495003
Production-Ready Serverless: https://productionreadyserverless.com/?affiliateId=41c07a65-24c8-4499-af3c-b853a3495003
Use the button "Add Discount" and enter the "geeknarrator" discount code to get 20% off.
===============================================================================
Follow me on LinkedIn and Twitter: https://www.linkedin.com/in/kaivalyaapte/ and https://twitter.com/thegeeknarrator
If you like this episode, please hit the like button and share it with your network. Also please subscribe if you haven't yet.
Database internals series: https://youtu.be/yV_Zp0Mi3xs
Popular playlists:
Realtime streaming systems: https://www.youtube.com/playlist?list=PLL7QpTxsA4se-mAKKoVOs3VcaP71X_LA-
Software Engineering: https://www.youtube.com/playlist?list=PLL7QpTxsA4sf6By03bot5BhKoMgxDUU17
Distributed systems and databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4sfLDUnjBJXJGFhhz94jDd_d
Modern databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4scSeZAsCUXijtnfW5ARlrsN
Stay Curious! Keep Learning!
In this episode, we are joined by Steven, the CTO of PubNub, a company that has developed an edge net messaging network with over a billion connected devices. Steven explains that while message buses like Kafka or RabbitMQ are suitable for smaller scales, PubNub focuses on the challenges of connecting mobile devices and laptops at a web scale. They aim to provide instant signal delivery at a massive scale, prioritizing low latency for a seamless user experience. To achieve this, PubNub has architected their system to be globally distributed, running on AWS with Kubernetes clusters spread across all of Amazon's zones. They utilize GeoDNS to ensure users connect to the closest region for the lowest latency possible. Steven goes on to discuss the challenges they faced in building their system, particularly in terms of memory management and cleanup. They had to deal with issues such as segmentation faults and memory leaks, which caused runtime problems, outages, and potential data loss. PubNub had to invest in additional memory to compensate for these leaks and spend time finding and fixing the problems. While C was efficient, it came with significant engineering costs. As a solution, PubNub started adopting Rust, which helped alleviate some of these challenges. When they replaced a service with Rust, they observed a 5x improvement in memory and performance. Steven also talks about choosing programming languages for their platform and the difficulties in finding and retaining C experts. They didn't consider Java due to its perceived academic nature, and Go didn't make the list of options at the time. However, they now have services in production written in Go, though rewriting part of their PubSub bus in Go performed poorly compared to their existing C system. Despite this, they are favoring Rust as their language of choice for new services, citing its popularity and impressive results. 
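The leak-and-segfault story above is, at bottom, manual memory management going wrong at scale: one missed free() per message, multiplied by millions of messages. A minimal Rust sketch (hypothetical, not PubNub's code) shows the ownership guarantee that removes that failure class: Drop runs exactly once when the owner goes out of scope, so a forgotten cleanup cannot accumulate:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical sketch -- not PubNub's code. A counter tracks how many
// per-message buffers are alive at any moment.
static LIVE_BUFFERS: AtomicUsize = AtomicUsize::new(0);

struct MessageBuffer {
    _data: Vec<u8>, // stands in for a per-message allocation
}

impl MessageBuffer {
    fn new(size: usize) -> Self {
        LIVE_BUFFERS.fetch_add(1, Ordering::SeqCst);
        MessageBuffer { _data: vec![0; size] }
    }
}

impl Drop for MessageBuffer {
    fn drop(&mut self) {
        // Runs automatically on every exit path, including early returns.
        LIVE_BUFFERS.fetch_sub(1, Ordering::SeqCst);
    }
}

fn handle_message() {
    let _buf = MessageBuffer::new(1024);
    // ... decode, route, publish ...
} // _buf is dropped here; its memory is reclaimed deterministically

fn main() {
    for _ in 0..10_000 {
        handle_message();
    }
    // Unlike a leaky C service, nothing accumulates across requests.
    assert_eq!(LIVE_BUFFERS.load(Ordering::SeqCst), 0);
    println!("no buffers leaked after 10000 simulated messages");
}
```

The compiler enforces this pattern rather than relying on reviewer discipline, which is consistent with the memory improvements the episode attributes to the Rust rewrite.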
The conversation delves into performance considerations with Python and the use of PyPy as a just-in-time compiler for optimization. While PyPy improved performance, it also required a lot of memory, which could be expensive. On the other hand, Rust provided a significant boost in both memory and performance, making it a favorable choice for PubNub. They also discuss provisioning, taking into account budget and aiming to be as close to what they need as possible. Kubernetes and autoscaling with HPAs (Horizontal Pod Autoscalers) are used to dynamically adjust resources based on usage. Integrating new services into PubNub's infrastructure involves both API-based communication and event-driven approaches. They use frameworks like Axiom for API-based communication and leverage Kafka with Protobuf for event sourcing. JSON is also utilized in some cases. Stephen explains that they chose Protobuf for high-traffic topics and where stability is crucial. While the primary API for customers is JSON-based, PubNub recognizes the superior performance of Protobuf and utilizes it for certain cases, especially for shrinking down values that JSON would otherwise transmit as long character strings, such as booleans. They also discuss the advantages of compression enabled with Protobuf. The team reflects on the philosophy behind exploring Rust's potential for profit and its use in infrastructure and devices like IoT. Rust's optimization for smaller binaries is highlighted, and PubNub sees it as their top choice for reliability and performance. They mention developing a Rust SDK for customers using IoT devices. The open-source nature of Rust and its ability to integrate into projects and develop open standards are also praised. While acknowledging downsides like potential instabilities and longer compilation time, they remain impressed with Rust's capabilities. The conversation covers stability and safety in Rust, with the speaker expressing confidence in the compiler's ability to handle alpha software and packages.
Relying on native primitives for concurrency in Rust adds to the speaker's confidence in the compiler's safety. The Rust ecosystem is seen as providing adequate coverage, although packages like libRDKafka, which are pre-1.0, can be challenging to set up or deploy. The speaker emphasizes simplicity in code and avoiding excessive abstractions, although they acknowledge the benefits of features like generics and traits in Rust. They suggest resources like a book by David McCloyd that focuses on learning Rust without overwhelming complexity. Expanding on knowledge sharing within the team, Stephen discusses how Rust advocates within the team have encouraged its use and the possibilities it holds for AI infrastructure platforms. They believe Rust could improve performance and reduce latency, particularly for CPU tasks in AI. They mention the adoption of Rust in the data science field, such as its use in the Parquet data format. The importance of tooling improvements, setting strict standards, and eliminating unsafe code is highlighted. The speaker expresses the desire for a linter that enforces a simplified version of Rust to enhance code readability, maintainability, and testability. They discuss the balance between functional and object-oriented programming in Rust, suggesting object-oriented programming for larger-scale code structure and functional paradigms within functions. Onboarding Rust engineers is also addressed, considering whether to prioritize candidates with prior Rust experience or train individuals skilled in another language on the job. Recognizing the shortage of Rust engineers, Stephen encourages those interested in Rust to pursue a career at PubNub, pointing to resources like their website and LinkedIn page for tutorials and videos. They emphasize the importance of latency in their edge messaging technology and invite users to try it out.
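The class of C bugs Stephen describes (leaks, segmentation faults, use-after-free) is exactly what Rust's ownership model rules out at compile time. A minimal sketch of the idea, purely illustrative and not PubNub's code:

```rust
// Sketch: ownership ties a heap allocation's lifetime to a single owner,
// so memory is freed deterministically when the owner goes out of scope.
fn consume(s: String) -> usize {
    s.len()
} // `s` is dropped (freed) here; no manual free(), so no leak

fn main() {
    let payload = String::from("message"); // heap allocation, owned here
    let moved = consume(payload);          // ownership moves into `consume`
    // `payload` can no longer be used after the move; the compiler rejects
    // any such use, preventing the use-after-free bugs possible in C.
    assert_eq!(moved, 7);
    println!("payload length: {moved}");
}
```

Because deallocation points are derived from scopes rather than programmer discipline, an entire category of the runtime outages described above becomes a compile error instead.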
This week's Electromaker Show is now available on YouTube and everywhere you get your podcasts! Welcome to the Electromaker Show episode 125! Which Xiao board is best for your project? Find out (and maybe win 1 of 4) in today's episode! We also see a great getting started guide on the Milk-V Duo, meet our Electromaker of the Month, and much more! Tune in for the latest maker, tech, DIY, IoT, embedded, and crowdfunding news stories from the week. Watch the show! We publish a new show every week. Subscribe here: https://www.youtube.com/channel/UCiMO2NHYWNiVTzyGsPYn4DA?sub_confirmation=1 We stock the latest products from Adafruit, Seeed Studio, Pimoroni, Sparkfun, and many more! Browse our shop: https://www.electromaker.io/shop Join us on Discord! https://discord.com/invite/w8d7mkCkxj Follow us on Twitter: https://twitter.com/ElectromakerIO Like us on Facebook: https://www.facebook.com/electromaker.io/ Follow us on Instagram: https://www.instagram.com/electromaker_io/ Featured in this show: Seeed XIAO Microcontroller Boards: The Ultimate Guide for Makers + Giveaway! Home Assistant Green brings a Rockchip SoC and Simplicity Raspberry PI OS now uses Wayland Electromakers of the Month - September LoraWan sheep tracking episode Linux without Memory Management on a RISC V MCU Which PCB fab house is Best? Milk-V Duo in the Wild with Apalrd! Milk-V roundup from a previous show
Apple's use of AI in its new devices, including a new chip that includes improved data crunching capabilities and a four-core "Neural Engine" that can process machine learning tasks up to twice as quickly. Elon Musk's proposal for a federal department of AI after his Capitol Hill summit, citing the potential harm of unchecked AI development. Cutting-edge research on efficient memory management, mesa-optimization algorithms, and extremely parameter-efficient MoE. The proposed solutions to challenges in serving large language models efficiently, including PagedAttention and vLLM. Contact: sergi@earkind.com Timestamps: 00:34 Introduction 01:41 AI quietly reshapes Apple iPhones, Watches 03:06 Elon Musk calls for federal department of AI after Capitol Hill summit 04:34 Jason Wei tweets 06:15 Fake sponsor 08:06 Pushing Mixture of Experts to the Limit: Extremely Parameter Efficient MoE for Instruction Tuning 09:33 Uncovering mesa-optimization algorithms in Transformers 11:06 Efficient Memory Management for Large Language Model Serving with PagedAttention 12:53 Outro
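PagedAttention's central idea is borrowed from OS virtual memory: the KV cache is split into fixed-size blocks, and a per-sequence block table maps logical token positions to physical blocks, avoiding the fragmentation of contiguous preallocation. A toy sketch of that bookkeeping (the block size and structure here are illustrative inventions, not vLLM's actual layout):

```rust
// Toy paged-cache bookkeeping: logical token slots map to fixed-size
// physical blocks via a block table, like pages via a page table.
const BLOCK_SIZE: usize = 4;

struct PagedCache {
    blocks: Vec<[f32; BLOCK_SIZE]>, // physical blocks, allocated on demand
    block_table: Vec<usize>,        // logical block index -> physical block index
    len: usize,                     // number of tokens stored
}

impl PagedCache {
    fn new() -> Self {
        Self { blocks: Vec::new(), block_table: Vec::new(), len: 0 }
    }
    fn append(&mut self, v: f32) {
        if self.len % BLOCK_SIZE == 0 {
            // allocate a new physical block only when needed, like a page fault
            self.block_table.push(self.blocks.len());
            self.blocks.push([0.0; BLOCK_SIZE]);
        }
        let phys = self.block_table[self.len / BLOCK_SIZE];
        self.blocks[phys][self.len % BLOCK_SIZE] = v;
        self.len += 1;
    }
    fn get(&self, i: usize) -> f32 {
        self.blocks[self.block_table[i / BLOCK_SIZE]][i % BLOCK_SIZE]
    }
}

fn main() {
    let mut cache = PagedCache::new();
    for t in 0..10 {
        cache.append(t as f32);
    }
    assert_eq!(cache.blocks.len(), 3); // ceil(10/4) blocks, no over-reservation
    assert_eq!(cache.get(9), 9.0);
    println!("blocks allocated: {}", cache.blocks.len());
}
```

The payoff in serving is that memory is reserved one block at a time as a sequence grows, instead of reserving a worst-case contiguous slab per request.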
Some highlights from Linus' recent fireside chat, Qt gets a new leader and a Linux botnet we should probably take seriously.
I discuss the idea of statically typed region-based memory management, proposed by Tofte and Talpin. The idea is to allow programmers to declare explicitly the region from which to satisfy individual allocation requests. Regions are created in a statically scoped way, so that after execution leaves the body of the region-creation construct, the entire region may be deallocated. Type inference is used to make sure that no dangling pointers are dereferenced (after the associated region is deallocated). This journal paper about the idea is not easy reading, but has a lot of good explanations in and around all the technicalities. Even better, I found, is this paper about region-based memory management in Cyclone. There are lots of intuitive explanations of the ideas, and not so much gory technicality.
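Cyclone's regions are a direct ancestor of Rust's lifetimes, so the discipline can be sketched in Rust: a region is created in a statically scoped way, everything allocated in it is freed together when the scope ends, and the type system keeps region-internal references from escaping. The `Region` type below is a toy stand-in for illustration, not Tofte and Talpin's calculus:

```rust
// Sketch: a region as an arena whose allocations all die together.
struct Region {
    data: Vec<String>, // backing storage for everything allocated in the region
}

impl Region {
    fn alloc(&mut self, s: &str) -> usize {
        self.data.push(s.to_string()); // allocate "into" the region
        self.data.len() - 1            // region-relative handle
    }
}

// The higher-ranked bound `for<'r>` forces the result type R to be
// independent of the region's lifetime: references into the region cannot
// escape, which is the role played by type inference in the original system.
fn with_region<R>(body: impl for<'r> FnOnce(&'r mut Region) -> R) -> R {
    let mut region = Region { data: Vec::new() };
    body(&mut region)
} // the entire region (every allocation in it) is deallocated here at once

fn main() {
    let n = with_region(|r| {
        let a = r.alloc("x");
        let b = r.alloc("y");
        b - a + 1 // only region-independent values may leave the scope
    });
    assert_eq!(n, 2);
    println!("allocated {n} values; all freed together when the region ended");
}
```

Returning a `&str` borrowed from the region out of the closure would be rejected by the borrow checker, which is precisely the "no dangling pointers after deallocation" guarantee the paper proves.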
In this episode, I start a new chapter (we are up to Chapter 16, here), about verifying safe manual management of memory. I have personally gotten pretty interested in this topic, having seen through some simple experiments with Haskell how much time can go into garbage collection for seemingly simple benchmarks. I also talk about why verifying memory-usage properties of programs is challenging in proof assistants.
Stephen Dolan works on Jane Street's Tools and Compilers team where he focuses on the OCaml compiler. In this episode, Stephen and Ron take a trip down memory lane, discussing how to manage computer memory efficiently and safely. They consider trade-offs between reference counting and garbage collection, the surprising gains achieved by prefetching, and how new language features like local allocation and unboxed types could give OCaml users more control over their memory. You can find the transcript for this episode on our website. Some links to topics that came up in the discussion: Stephen's command-line JSON processor, jq; Stephen's Cambridge dissertation, “Algebraic Subtyping”, and a prototype implementation of mlsub, a language based on those ideas; a post from Stephen on how to benchmark different memory allocators; a Jane Street tech talk on “Unboxed Types for OCaml”, and an RFC in the OCaml RFC repo; a paper from Stephen and KC Sivaramakrishnan called “Bounding Data Races in Space and Time”, which is all about a new and better memory model for Multicore OCaml; another paper describing the design of OCaml's multicore GC; and the Rust RFC for higher-ranked trait bounds.
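The reference-counting side of that trade-off is easy to see in miniature. Here is a sketch in Rust (chosen for illustration; the episode itself is about OCaml), whose `Rc` type does non-atomic reference counting:

```rust
use std::rc::Rc;

fn main() {
    // One heap allocation, shared via counted handles instead of a tracing GC.
    let a = Rc::new(vec![1, 2, 3]);
    assert_eq!(Rc::strong_count(&a), 1);

    let b = Rc::clone(&a); // cloning bumps the count, not the data
    assert_eq!(Rc::strong_count(&a), 2);

    drop(b); // dropping a handle decrements the count...
    assert_eq!(Rc::strong_count(&a), 1);
    // ...and the vector is freed the instant the last handle drops:
    // deterministic reclamation, but every clone/drop pays a counting cost
    // that a tracing collector avoids, and reference cycles are never freed.
    println!("final count: {}", Rc::strong_count(&a));
}
```

This is the crux of the discussion: counting gives prompt, predictable frees; tracing amortizes the bookkeeping and handles cycles, at the cost of pauses and less deterministic reclamation.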
Bryan Boreham (Grafana Labs) and Jordan Lewis (Cockroach Labs) join Mat and Jon to talk about memory management in Go. We learn about the heap, the stack, and the garbage collector. There are also some absolute gems of wisdom scattered throughout this episode, don't miss it.
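The stack-versus-heap distinction Bryan and Jordan walk through is language-agnostic; here is a sketch in Rust (the language used for the other examples in this collection), where heap placement is explicit via `Box` rather than decided automatically by Go's escape analysis:

```rust
fn main() {
    // Stack: a fixed-size value living in the current frame, freed on return.
    let on_stack: [u64; 4] = [1, 2, 3, 4];

    // Heap: Box places the value behind a pointer. In Go the compiler's
    // escape analysis makes this stack-or-heap choice for you; here it is
    // an explicit decision in the source.
    let on_heap: Box<[u64; 4]> = Box::new([5, 6, 7, 8]);

    let sum: u64 = on_stack.iter().sum::<u64>() + on_heap.iter().sum::<u64>();
    assert_eq!(sum, 36);
    // The Box is freed right here when it goes out of scope; in Go, the
    // garbage collector would reclaim the escaped value some time later.
    println!("sum = {sum}");
}
```

The practical takeaway from the episode is the same in both worlds: values that stay on the stack are essentially free to allocate and release, while heap values are what the collector has to chase.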
Single Page Applications (SPA) have many advantages, including increased interactivity, responsiveness, and user experience. However, a SPA often requires sending large chunks of JavaScript code to the client. This code must be downloaded and parsed by the client, not to mention the execution time required. There are many strategies to achieve a highly performant single-page application. One of the most commonly used strategies is to lazy-load as much of the application as possible. It's likely that the client does _not_ need the entire application's code upfront; rather, we can deliver a minimal amount of code to the client to bootstrap the application. We can then either prefetch and preload the remaining chunks or wait until the user interacts with a specific feature or module of our application, and then fetch and load the JavaScript that is necessary to render and execute. As Angular developers, you are likely familiar with the built-in router's ability to lazy load child modules via routing. This is a big win for all of us. But, what if you could dynamically load modules and components at runtime? The Angular Show panelists (Aaron, Jennifer, and Brian) sat down with Jay Cooper Bell to learn more about this solution and the approach that he has used. Jay is the CTO and co-founder of Trellis, a fundraising platform for non-profit organizations, and a frequent contributor and speaker in the Angular community. Jay, and the team at Trellis, created rx-dynamic-component, an open-source library that enables Angular developers to dynamically load and mount a component at runtime using RxJS Observables.
Jay teaches us about what he learned building the library along with the advantages of lazy-loading components using an Observable's next notification as the observer. Don't forget to subscribe so you can continue to hang out with the Angular Show panelists as we learn from the Angular community! Show Notes: rx-dynamic-component source code: https://github.com/trellisorg/platform/tree/master/packages/rx-dynamic-component rx-dynamic-component demo: https://github.com/trellisorg/platform/tree/master/apps/rx-dynamic-component-demo https://blog.angular.io/ivys-internal-data-structures-f410509c7480 Video about Memory Management: https://www.youtube.com/watch?v=Zs-d6_FPfMY&t=1s Article: https://www.nytimes.com/interactive/2021/05/24/us/tulsa-race-massacre.html Connect with us: Jay Bell - @JayCooperBell, Brian F Love - @brian_love, Jennifer Wadella - @likeOMGitsFEDAY, Aaron Frost - @aaronfrost
Deploy Friday: hot topics for cloud technologists and developers
We continue our Language Spotlight series with Java, an object-oriented programming language that has shaped the course of the internet in enormous ways. Today we speak with two Java champions, Monica Beckwith and Geertjan Wielenga, to discuss Java's impact so far as well as its course for the future. What is a Java champion? Since we have two Java champions as guests today, let's start off with defining the term. Monica and Geertjan describe it as a collection of expert knowledge, a sign that you have contributed to and are committed to the Java community, and, last but not least, a group of friends. Geertjan adds, “It's a badge of honor as well.” With their extensive Java experience, Monica and Geertjan are well-positioned to define what has made Java one of the top languages in the world throughout its 25-year history. Java's defining features and attributes: Java is a mature language supported by a huge and vibrant community, with extensive information and documentation available to all levels of learners. Our two guests also emphasize that Java has evolved with the times, and that this adaptability is a defining characteristic of the language. They list others, like portability, garbage collection, memory management, static typing, and functional programming. When not to use Java: while Java is a great choice for many instances, our guests acknowledge that there are cases where it's best to use another language. Geertjan says Java isn't great for front-end work and smaller applications: “... in particular, if you're creating a shopping cart, or you're creating some hotel booking system, or some other relatively lightweight project, Java is probably the wrong language.” Our guests' final words on Java: Java is a mature, approachable, adaptable language that's been extremely influential to the world of development. Geertjan sums it up. “Java is a vibrant community of enthusiastic and friendly people all over the world.
If you want to learn a language to get started in programming, Java is definitely a good choice to make.” Try Java on Platform.sh: learn more about us, get started with a free trial, or get in touch if you have a question. Platform.sh on social media: Twitter @platformsh, Twitter (France) @platformsh_fr, LinkedIn: Platform.sh, LinkedIn (France): Platform.sh, Facebook: Platform.sh. Watch, listen, and subscribe to the Platform.sh Deploy Friday podcast on YouTube, Apple Podcasts, and Buzzsprout. Platform.sh is a robust, reliable hosting platform that gives development teams the tools to build and scale applications efficiently. Whether you run one or one thousand websites, you can focus on creating features and functionality with your favorite tech stack.
Memory Tricks and Techniques | Dos and Don'ts for effective Memory Management
On this episode of DevTalk I speak to Konrad Kokosa about .NET Memory Management. Links: DotNetos, the book Pro .NET Memory Management, the .NET GC Internals mini-series, and the .NET Memory Management Quiz.
It's another customer story - and on this podcast hear about the amazing app that pairs with snowmobiles, motorcycles, off-road, and other powersports vehicles. Find out how the developers at Polaris created an app that allows you to find trails, and track your position along the trail - so you can ride safely. And make sure your friends do too. And it even generates 3D fly-thrus of your ride so you can share them later! All of that and more on this Xamarin Podcast! Show Notes Ride Command (https://snowmobiles.polaris.com/en-us/ride-command/) Memory Management and Profiling - The Xamarin Show (https://channel9.msdn.com/Shows/XamarinShow/Best-Practices-Memory-Management--Profiling--The-Xamarin-Show?WT.id=22472-mobile-masoucou) CollectionView Drag & Drop (https://devblogs.microsoft.com/xamarin/collectionview-drag-and-drop?WT.id=22472-mobile-masoucou) Follow Us: * Matt: Twitter (https://twitter.com/codemillmatt), Blog (https://codemilltech.com), GitHub (https://github.com/codemillmatt) Special Guest: Alexey Strakh.
Managing your translation memories well is crucial for long-term collaborations with clients. Plus, it really improves your productivity and ensures consistency throughout your work. Click HERE to watch some short videos about how CAT tools can help you. For a few weeks here at Translation Confessional, I'm highlighting some other aspects of CAT tools, because I don't want to overload anyone with too much technical information in a single episode. I'm also brainstorming an online session about it with some video demos, so stay tuned for upcoming newsflash episodes here. *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* CHECK OUT THIS EPISODE'S SPONSORS: Swordfish ― a cross-platform CAT tool you can use in Windows, macOS, and Linux Caveday ― use promo code TRANSLATION (all caps) for a FREE 3-hour session *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* By the way, if you're interested in checking out "Tools and Technology in Translation," here are some links: » Book » Online Class » YouTube Channel » Podcast » Webinars » Facebook Page » Twitter » Website » Email *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Stay tuned for weekly episodes and subscribe to Translation Confessional through your favorite podcast app. To learn more about my background as a translator and translation instructor, visit my professional website at RafaLombardino.com Send me an email with feedback, ideas, and requests to RLombardino@WordAwareness.com --- Send in a voice message: https://anchor.fm/translation-confessional/message
Friends join us to discuss Cabin, a proposal that encourages more Linux apps and fewer distros. Plus, we debate the value that the Ubuntu community brings to Canonical, and share a pick for audiobook fans. Chapters: 0:00 Pre-Show 0:48 Intro 0:54 SPONSOR: A Cloud Guru 2:25 Future of Ubuntu Community 6:51 Ubuntu Community: Popey Responds 9:31 Ubuntu Community: Stuart Langridge Responds 16:26 Ubuntu Community: Mark Shuttleworth Responds 17:30 BTRFS Workflow Developments 19:09 Linux Kernel 5.9 Performance Regression 24:48 SPONSOR: Linode 27:34 Cabin 29:48 Cabin: More Apps, Fewer Distros 33:41 Cabin: Building Small Apps 36:40 Cabin: What is a Cabin App? 44:34 SPONSOR: A Cloud Guru 45:20 Feedback: Fedora 33 Bug-A-Thon 47:53 Goin' Indy Update 49:40 Submit Your Linux Prepper Ideas 50:11 Feedback: Dev IDEs 54:15 Feedback: Nextcloud 58:20 Picks: Cozy 1:00:25 Outro 1:01:38 Post-Show Special Guests: Alan Pope, Drew DeVore, and Stuart Langridge.
This week on the Virtually Speaking Podcast we welcome Frank Denneman to dive deep into dynamic memory management and resource allocation. The Virtually Speaking Podcast is a weekly technical podcast dedicated to discussing VMware topics related to storage and availability. Each week Pete Flecha and John Nicholson bring in various subject matter experts from VMware and within the industry to discuss their respective areas of expertise. If you’re new to the Virtually Speaking Podcast check out all episodes on vSpeakingPodcast.com.
Mike and Wes dive into Bosque, Microsoft’s new research language, and debate if it represents the future of programming languages, or if we should all just be using F#. Plus some Qt license clarity, a handy new Rust feature, and your feedback.
Swift News is all about curating this week's latest news involving iOS Development and Swift. In this week's episode I discuss what's coming in Swift 5.1, Memory Management, how to modularize your app using frameworks, and building your own networking framework. Subscribe to stay up to date with the latest Swift News every Monday! Video Version: https://youtu.be/lb9l5OthbCI Link to my book - How I Became an iOS Developer: https://gumroad.com/l/sean-allen-origin Books, hoodies and goodies: https://seanallen.co/store If you're enjoying this podcast, I have another one called Swift Over Coffee w/ Paul Hudson of Hacking with Swift: https://itunes.apple.com/us/podcast/swift-over-coffee/id1435076502?mt=2 Twitter: https://www.twitter.com/seanallen_dev Instagram: @seanallen_dev Patreon: https://www.patreon.com/seanallen YouTube Channel: https://www.youtube.com/seanallen Portfolio: https://seanallen.co Book and learning recommendations (Affiliate Links): Ray Wenderlich Books: https://store.raywenderlich.com/a/20866/link/1 Ray Wenderlich Video Tutorials: https://store.raywenderlich.com/a/20866/link/24 Paul Hudson's Hacking With Swift: https://gumroad.com/a/762098803 Learn Advanced Swift Here: https://gumroad.com/a/656585843 My Developer & YouTube Setup: https://www.amazon.com/shop/seanallen --- Support this podcast: https://anchor.fm/seanallen/support
We debate Rust’s role as a replacement for C, and share our take on the future of gaming with Google's Stadia. Plus Objective-C's return to grace, Mike’s big bet on .NET, and more!
For Episode 3, Dariul curates a downtempo 60-minute mix of late-night sounds inspired by Zeds Dead’s Catching Z’s series. Kick back, relax, and get lost in the mix. Available on: Spotify, SoundCloud, and Apple Podcasts TRACKLIST 1) MOD SUN – Did I Ever Wake Up 2) LORN – Memory Management 3) Childish Gambino – […] The post Electric Hawk Radio | Episode 3 | Zeds Dead Catching Z’s Inspired appeared first on Electric Hawk.
Kathryn interviews Andrew E. Budson MD, author of “Seven Steps to Managing Your Memory: What's Normal, What's Not, and What to Do About It”. How can we differentiate between normal and abnormal memory changes, in loved ones and ourselves? From misplacing our keys to recalling a new acquaintance's name, memory dominates everyday life. Dr. Budson is a lecturer in neurology at Harvard Medical School and a professor at Boston University School of Medicine. Kathryn also interviews lawyer, doctor, and former congressional candidate Marilyn Singleton MD, JD. Dr. Singleton combines her experience as a board-certified anesthesiologist with her education in constitutional and administrative law to analyze the current health care environment, exposing the lobbyist interests of pharmaceutical companies and insurance providers. She graduated from Stanford University and UCSF Medical School, completing her anesthesia residency at Harvard's Beth Israel Hospital.
See the full show notes for this episode on the website at pythonbytes.fm/48.
We welcome Mike Ash to the show to discuss the implementation details of weak references in Swift and how they've changed.
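The core behavior of a weak reference, observing an object without keeping it alive, looks the same in miniature across languages. Here is a sketch in Rust's `Rc`/`Weak` (standing in for Swift's zeroing `weak`, since the episode's Swift runtime internals aren't reproduced here):

```rust
use std::rc::{Rc, Weak};

fn main() {
    let strong = Rc::new(String::from("target"));
    // Downgrading creates a weak handle without bumping the strong count.
    let weak: Weak<String> = Rc::downgrade(&strong);
    assert_eq!(Rc::strong_count(&strong), 1);

    // While the target is alive, upgrading the weak handle succeeds.
    assert!(weak.upgrade().is_some());

    drop(strong); // last strong reference gone: the String is deallocated
    // The weak handle observes the death instead of dangling, the same
    // guarantee Swift provides by zeroing weak references.
    assert!(weak.upgrade().is_none());
    println!("weak reference safely observed deallocation");
}
```

The implementation challenge Mike discusses is precisely how the runtime tracks these weak handles efficiently so they can be invalidated when the last strong reference goes away.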
This week on BSD Now, Adrian Chadd on bringing up 802.11ac in FreeBSD, a pfSense and OpenVPN tutorial, and we talk about an interesting ZFS storage pool checkpoint project. This episode was brought to you by our sponsors. Headlines: Bringing up 802.11ac on FreeBSD (http://adrianchadd.blogspot.com/2017/04/bringing-up-80211ac-on-freebsd.html) Adrian Chadd has a new blog post about his work to bring 802.11ac support to FreeBSD. 802.11ac allows for speeds up to 500 Mbps and total bandwidth into multiple gigabits. The FreeBSD net80211 stack has reasonably good 802.11n support, but no 802.11ac support. I decided a while ago to start adding basic 802.11ac support. It was a good exercise in figuring out what the minimum set of required features are and another excuse to go find some of the broken corner cases in net80211 that needed addressing. 802.11ac introduces a few new concepts that the stack needs to understand. I decided to use the QCA 802.11ac parts because (a) I know the firmware and general chip stuff from the first generation 11ac parts well, and (b) I know that it does a bunch of stuff (like rate control, packet scheduling, etc) so I don't have to do it. If I chose, say, the Intel 11ac parts then I'd have to implement a lot more of the fiddly stuff to get good behaviour. Step one - adding VHT channels. I decided in the shorter term to cheat and just add VHT channels to the already very large ieee80211_channel map. The linux way of there being a channel context rather than hundreds of static channels to choose from is better in the long run, but I wanted to get things up and running. So, that's what I did first - I added VHT flags for 20, 40, 80, 80+80 and 160MHz operating modes and I did the bare work required to populate the channel lists with VHT channels as well. Then I needed to glue it into an 11ac driver.
My ath10k port was far enough along to attempt this, so I added enough glue to say "I support VHT" to the ic_caps field and propagated it to the driver for monitor mode configuration. And yes, after a bit of dancing, I managed to get a VHT channel to show up in ath10k in monitor mode and could capture 80MHz wide packets. Success! By far the most fiddly was getting channel promotion to work. net80211 supports the concept of dumb NICs (like atheros 11abgn parts) very well, where you can have multiple virtual interfaces but the "driver" view of the right configuration is what's programmed into the hardware. For firmware NICs which do this themselves (like basically everything sold today) this isn't exactly all that helpful. So, for now, it's limited to a single VAP, and the VAP configuration is partially derived from the global state and partially derived from the negotiated state. It's annoying, but it is adding to the list of things I will have to fix later. The QCA chips/firmware do 802.11 crypto offload. They actually pretend that there's no key - you don't include the IV, you don't include padding, or anything. You send commands to set the crypto keys and then you send unencrypted 802.11 frames (or 802.3 frames if you want to do ethernet only.) This means that I had to teach net80211 a few things: + frames decrypted by the hardware needed to have an "I'm decrypted" bit set, because the 802.11 header field saying "I'm decrypted!" is cleared + frames encrypted don't have the "I'm encrypted" bit set + frames encrypted/decrypted have no padding, so I needed to teach the input path and crypto paths to not validate those if the hardware said "we offload it all." Now comes the hard bit of fixing the shortcomings before I can commit the driver. There are .. lots. The first one is the global state. The ath10k firmware allows what they call 'vdevs' (virtual devices) - for example, multiple SSID/BSSID support is implemented with multiple vdevs.
STA+WDS is implemented with vdevs. STA+P2P is implemented with vdevs. So, technically speaking I should go and find all of the global state that should really be per-vdev and make it per-vdev. This is tricky though, because a lot of the state isn't kept per-VAP even though it should be. Anyway, so far so good. I need to do some of the above and land it in FreeBSD-HEAD so I can finish off the ath10k port and commit what I have to FreeBSD. There's a lot of stuff coming - including all of the wave-2 stuff (like multiuser MIMO / MU-MIMO) which I just plainly haven't talked about yet. Viva la FreeBSD wireless! pfSense and OpenVPN Routing (http://www.terrafoundry.net/blog/2017/04/12/pfsense-openvpn/) This article tries to be a simple guide on how to enable your home (or small office) https://www.pfsense.org/ (pfSense) setup to route some traffic via the vanilla Internet, and some via a VPN site that you've setup in a remote location. Reasons to Setup a VPN: Control Security Privacy Fun VPNs do not instantly guarantee privacy, they're a layer, as with any other measure you might invoke. In this example I used a server that's directly under my name. Sure, it was a country with strict privacy laws, but that doesn't mean that the outgoing IP address wouldn't be logged somewhere down the line. There's also no reason you have to use your own OpenVPN install, there are many, many personal providers out there, who can offer the same functionality, and a degree of anonymity. (If you and a hundred other people are all coming from one IP, it becomes extremely difficult to differentiate, some VPN providers even claim a ‘logless' setup.) VPNs can be slow. The reason I have a split-setup in this article, is because there are devices that I want to connect to the internet quickly, and that I'm never doing sensitive things on, like banking. I don't mind if my Reddit-browsing and IRC messages are a bit slower, but my Nintendo Switch and PS4 should have a nippy connection. 
Services like Netflix can and do block VPN traffic in some cases. This is more of an issue for wider VPN providers (I suspect, but have no proof, that they just blanket block known VPN IP addresses.) If your VPN is in another country, search results and tracking can be skewed. This is arguably a good thing, who wants to be tracked? But it can also lead to frustration if your DuckDuckGo results are tailored to the middle of Paris, rather than your flat in Birmingham. The tutorial walks through the basic setup: labeling the interfaces, configuring DHCP, and creating a VPN. Now that we have our OpenVPN connection set up, we'll double check that we've got our interfaces assigned. With any luck (after we've assigned our OPENVPN connection correctly), you should now see your new Virtual Interface on the pfSense Dashboard. We're charging full steam towards the sections that start to lose people. Don't be disheartened if you've had a few issues up to now, there is no “right” way to set up a VPN installation, and it may be that you have to tweak a few things and dive into a few man-pages before you're set up. NAT is tricky, and frankly it only exists because we stretched out IPv4 for much longer than we should have. That being said it's a necessary evil in this day and age, so let's set up our connection to work with it. We need NAT here because we're going to masquerade our machines on the LAN interface to show as coming from the OpenVPN client IP address, to the OpenVPN server. Head over to Firewall -> NAT -> Outbound.
The first thing we need to do in this section is to change the Outbound NAT Mode to something we can work with, in this case “Hybrid.” Configure the LAN interface to be NAT'd to the OpenVPN address, and the INSECURE interface to use your regular ISP connection. Configure the firewall to allow traffic from the LAN network to reach the INSECURE network. Then add a second rule allowing traffic from the LAN network to any address, and set the gateway to the OPENVPN connection. And there you have it, traffic from the LAN is routed via the VPN, and traffic from the INSECURE network uses the naked internet connection *** Switching to OpenBSD (https://mndrix.blogspot.co.uk/2017/05/switching-to-openbsd.html) After 12 years, I switched from macOS to OpenBSD. It's clean, focused, stable, consistent and lets me get my work done without any hassle. When I first became interested in computers, I thought operating systems were fascinating. For years I would reinstall an operating system every other weekend just to try a different configuration: MS-DOS 3.3, Windows 3.0, Linux 1.0 (countless hours recompiling kernels). In high school, I settled down and ran OS/2 for 5 years until I graduated college. I switched to Linux after college and used it exclusively for 5 years. I got tired of configuring Linux, so I switched to OS X for the next 12 years, where things just worked. But Snow Leopard was 7 years ago. These days, OS X is like running a denial of service attack against myself. macOS has a dozen apps I don't use but can't remove. Updating them requires a restart. Frequent updates to the browser require a restart. A minor XCode update requires me to download a 4.3 GB file. My monitors frequently turn off and require a restart to fix. A system's availability is a function (http://techthoughts.typepad.com/managing_computers/2007/11/availability-mt.html) of mean time between failure and mean time to repair. For macOS, both numbers are heading in the wrong direction for me.
I don't hold any hard feelings about it, but it's time for me to get off this OS and back to productive work. I found OpenBSD very refreshing, so I created a bootable thumb drive and within an hour had it up and running on a two-year-old laptop. I've been using it for my daily work for the past two weeks and it's been great. Simple, boring and productive. Just the way I like it. The documentation is fantastic. I've been using Unix for years and have learned quite a bit just by reading their man pages. OS releases come like clockwork every 6 months and are supported for 12. Security and other updates seem relatively rare between releases (roughly one small patch per week during 6.0). With syspatch in 6.1, installing them should be really easy too. ZFS Storage Pool Checkpoint Project (https://sdimitro.github.io/post/zpool-checkpoint) During the OpenZFS summit last year (2016), Dan Kimmel and I quickly hacked together the zpool checkpoint command in ZFS, which allows reverting an entire pool to a previous state. Since it was just for a hackathon, our design was bare bones and our implementation far from complete. Around a month later, we had a new and almost complete design within Delphix, and I was able to start the implementation on my own. I completed the implementation last month, and we're now running regression tests, so I decided to write this blog post explaining what a storage pool checkpoint is, why we need it within Delphix, and how to use it. The Delphix product is basically a VM running DelphixOS (a derivative of illumos) with our application stack on top of it. During an upgrade, the VM reboots into the new OS bits and then runs some scripts that update the environment (directories, snapshots, open connections, etc.) for the new version of our app stack. Software being software, failures can happen at different points during the upgrade process. 
When an upgrade script that makes changes to ZFS fails, we have a corresponding rollback script that attempts to bring ZFS and our app stack back to their previous state. This is very tricky, as we need to undo every single modification applied to ZFS (including dataset creation and renaming, or enabling new zpool features). The idea of Storage Pool Checkpoint (aka zpool checkpoint) deals with exactly that. It can be thought of as a “pool-wide snapshot” (or a variation of extreme rewind that doesn't corrupt your data). It remembers the entire state of the pool at the point that it was taken, and the user can revert back to it later or discard it. Its generic use case is an administrator who is about to perform a set of destructive actions to ZFS as part of a critical procedure. She takes a checkpoint of the pool before performing the actions, then rewinds back to it if one of them fails or puts the pool into an unexpected state. Otherwise, she discards it. With the assumption that no one else is making modifications to ZFS, she basically wraps all these actions into a “high-level transaction”. I definitely see value in this for the appliance use case. Some usage examples follow, along with some caveats. One of the restrictions is that you cannot attach, detach, or remove a device while a checkpoint exists. The zpool add operation is still possible; however, if you roll back to the checkpoint, the device will no longer be part of the pool. Rather than a shortcoming, this seems like a nice feature: a way to help users avoid the most common foot-shooting (which I witnessed in person at Linux Fest), adding a new log or cache device but missing a keyword, so that it is added as a storage vdev rather than an aux vdev. This operation could simply be undone if a checkpoint had been taken before the device was added. 
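The intended workflow can be sketched as a command sequence. The pool name here is hypothetical, and the syntax follows the blog post's description; the exact commands may change before the feature lands in a release.

```sh
# Take a checkpoint before the risky procedure:
zpool checkpoint tank
# ... destructive actions: destroy/rename datasets, enable pool features ...
# On failure, rewind: export the pool and re-import it at the checkpoint.
zpool export tank
zpool import --rewind-to-checkpoint tank
# On success, discard the checkpoint (and the frozen state it holds):
zpool checkpoint -d tank
```

Note that while a checkpoint exists, the pool cannot free the blocks it references, so a long-lived checkpoint will consume space on a busy pool.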
*** News Roundup Review of TrueOS (https://distrowatch.com/weekly.php?issue=20170501#trueos) TrueOS, which was formerly named PC-BSD, is a FreeBSD-based operating system. TrueOS is a rolling release platform that is based on FreeBSD's "CURRENT" branch, providing TrueOS with the latest drivers and features from FreeBSD. Apart from the name change, TrueOS has deviated from the old PC-BSD project in a number of ways. The system installer is now more streamlined (and I will touch on that later) and TrueOS is a rolling release platform while PC-BSD defaulted to point releases. Another change is that PC-BSD used to allow the user to customize which software was installed at boot time, including the desktop environment. The TrueOS project now selects a minimal amount of software for the user and defaults to using the Lumina desktop environment. From the conclusions: What I took away from my time with TrueOS is that the project is different in a lot of ways from PC-BSD. Much more than just the name has changed. The system is now more focused on cutting-edge software and features in FreeBSD's development branch. The install process has been streamlined and the user begins with a set of default software rather than selecting desired packages during the initial setup. The configuration tools, particularly the Control Panel and AppCafe, have changed a lot in the past year. The designs have a more flat, minimal look. It used to be that PC-BSD did not have a default desktop exactly, but there tended to be a focus on KDE. With TrueOS the project's in-house desktop, Lumina, serves as the default environment and I think it holds up fairly well. In all, I think TrueOS offers a convenient way to experiment with new FreeBSD technologies and ZFS. I also think people who want to run FreeBSD on a desktop computer may want to look at TrueOS as it sets up a graphical environment automatically. 
However, people who want a stable desktop platform with lots of applications available out of the box may not find what they want with this project. A simple guide to install Ubuntu on FreeBSD with bhyve (https://www.davd.eu/install-ubuntu-on-freebsd-with-bhyve/) David Prandzioch writes in his blog: For some reasons I needed a Linux installation on my NAS. bhyve is a lightweight virtualization solution for FreeBSD that makes that easy and efficient. However, the CLI of bhyve is somewhat bulky and bare, making it hard to use, especially for the first time. This is what vm-bhyve solves - it provides a simple CLI for working with virtual machines. More details follow about what steps are needed to set up vm-bhyve on FreeBSD. Also check out his other tutorials on his blog: https://www.davd.eu/freebsd/ (https://www.davd.eu/freebsd/) *** Graphical Overview of the Architecture of FreeBSD (https://dspinellis.github.io/unix-architecture/arch.pdf) This diagram tries to show the different components that make up the FreeBSD Operating System. It breaks down the various utilities, libraries, and components into some categories and sub-categories: User Commands: Development (cc, ld, nm, as, etc) File Management (ls, cp, cmp, mkdir) Multiuser Commands (login, chown, su, who) Number Processing (bc, dc, units, expr) Text Processing (cut, grep, sort, uniq, wc) User Messaging (mail, mesg, write, talk) Little Languages (sed, awk, m4) Network Clients (ftp, scp, fetch) Document Preparation (*roff, eqn, tbl, refer) Administrator and System Commands Filesystem Management (fsck, newfs, gpart, mount, umount) Networking (ifconfig, route, arp) User Management (adduser, pw, vipw, sa, quota*) Statistics (iostat, vmstat, pstat, gstat, top) Network Servers (sshd, ftpd, ntpd, routed, rpc.*) Scheduling (cron, periodic, rc.*, atrun) Libraries (C Standard, Operating System, Peripheral Access, System File Access, Data Handling, Security, Internationalization, Threads) System Call Interface (File I/O, 
Mountable Filesystems, File ACLs, File Permissions, Processes, Process Tracing, IPC, Memory Mapping, Shared Memory, Kernel Events, Memory Locking, Capsicum, Auditing, Jails) Bootstrapping (Loaders, Configuration, Kernel Modules) Kernel Utility Functions Privilege Management (acl, mac, priv) Multitasking (kproc, kthread, taskqueue, swi, ithread) Memory Management (vmem, uma, pbuf, sbuf, mbuf, mbchain, malloc/free) Generic (nvlist, osd, socket, mbuf_tags, bitset) Virtualization (cpuset, crypto, device, devclass, driver) Synchronization (lock, sx, sema, mutex, condvar_*, atomic_*, signal) Operations (sysctl, dtrace, watchdog, stack, alq, ktr, panic) I/O Subsystem Special Devices (line discipline, tty, raw character, raw disk) Filesystems (UFS, FFS, NFS, CD9660, Ext2, UDF, ZFS, devfs, procfs) Sockets Network Protocols (TCP, UDP, ICMP, IPSec, IP4, IP6) Netgraph (50+ modules) Drivers and Abstractions Character Devices CAM (ATA, SATA, SAS, SPI) Network Interface Drivers (802.11, ifae, 100+, ifxl, NDIS) GEOM Storage (stripe, mirror, raid3, raid5, concat) Encryption / Compression (eli, bde, shsec, uzip) Filesystem (label, journal, cache, mbr, bsd) Virtualization (md, nop, gate, virstor) Process Control Subsystems Scheduler Memory Management Inter-process Communication Debugging Support *** Official OpenBSD 6.1 CD - There's only One! (http://undeadly.org/cgi?action=article&sid=20170503203426&mode=expanded) Ebay auction Link (http://www.ebay.com/itm/The-only-Official-OpenBSD-6-1-CD-set-to-be-made-For-auction-for-the-project-/252910718452) Now it turns out that in fact, exactly one CD set was made, and it can be yours if you are the successful bidder in the auction that ends on May 13, 2017 (about 3 days from when this episode was recorded). The CD set is handmade and signed by Theo de Raadt. Fun Fact: The winning bidder will have an OpenBSD CD set that even Theo doesn't have. 
*** Beastie Bits Hardware Wanted by OpenBSD developers (https://www.openbsd.org/want.html) Donate hardware to FreeBSD developers (https://www.freebsd.org/donations/index.html#components) Announcing NetBSD and the Google Summer of Code Projects 2017 (https://blog.netbsd.org/tnf/entry/announcing_netbsd_and_the_google) Announcing FreeBSD GSoC 2017 Projects (https://wiki.freebsd.org/SummerOfCode2017Projects) LibreSSL 2.5.4 Released (https://ftp.openbsd.org/pub/OpenBSD/LibreSSL/libressl-2.5.4-relnotes.txt) CharmBUG Meeting - Tor Browser Bundle Hack-a-thon (https://www.meetup.com/CharmBUG/events/238218840/) pkgsrcCon 2017 CFT (https://mail-index.netbsd.org/netbsd-advocacy/2017/05/01/msg000735.html) Experimental Price Cuts (https://blather.michaelwlucas.com/archives/2931) Linux Fest North West 2017: Three Generations of FreeNAS: The World's most popular storage OS turns 12 (https://www.youtube.com/watch?v=x6VznQz3VEY) *** Feedback/Questions Don - Reproducible builds & gcc/clang (http://dpaste.com/2AXX75X#wrap) architect - C development on BSD (http://dpaste.com/0FJ854X#wrap) David - Linux ABI (http://dpaste.com/2CCK2WF#wrap) Tom - ZFS (http://dpaste.com/2Z25FKJ#wrap) RAIDZ Stripe Width Myth, Busted (https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz) Ivan - Jails (http://dpaste.com/1Z173WA#wrap) ***
There’s still time! Check out and get your JS Remote Conf tickets! JavaScript Jabber Episode #184: Web Performance with Nik Molnar (Part 1) 02:04 - Nik Molnar Introduction Twitter GitHub Blog Glimpse [Pluralsight] WebPageTest Deep Dive 02:58 - RAIL (Response, Animation, Idle, Load) 06:03 - How do you know what is being kicked off? How do you avoid it? 08:15 - Frame Rates frames-per-second.appspot.com CSS Triggers 16:05 - Scrolling requestAnimationFrame 19:09 - The Web Animation API 21:40 - Animation Accessibility, Usability, and Speed haveibeenpwned.com Ilya Grigorik: Speed, Performance, and Human Perception @ Fluent 2014 27:14 - HTTP and Optimization Yesterday's perf best-practices are today's HTTP/2 anti-patterns by Ilya Grigorik Ruby Rogues Episode #135: HTTP 2.0 with Ilya Grigorik Hypertext Transfer Protocol Version 2 (HTTP/2) Can I use... Server Push 35:25 - ES6 and Performance ES6 Feature Performance six-speed 40:46 - Understanding the Scale Grace Hopper: Nanoseconds Grace Hopper on Letterman 43:30 RAIL (Response, Animation, Idle, Load) Cont’d 46:15 - Navigator.sendBeacon() 47:51 - Memory Management and Garbage Collection Memory Management Masterclass with Addy Osmani Addy Osmani: JavaScript Memory Management Masterclass Under the Hood of .NET Memory Management by Chris Farrell and Nick Harrison (Nik) Memory vs Performance Problems Rick Hudson: Go GC: Solving the Latency Problem @ GopherCon 2015 Picks Hardcore History Podcast (Jamison) Static vs. 
Dynamic Languages: A Literature Review (Jamison) TJ Fuller Tumblr (Jamison) Pickle Cat (Jamison) WatchMeCode (Aimee) Don’t jump around while learning in JavaScript (Aimee) P!nk - Bohemian Rhapsody (Joe) Rich Hickey: Design, Composition and Performance (Joe) Undisclosed Podcast (AJ) History of Gaming Historian - 100K Subscriber Special (AJ) 15 Minute Podcast Listener chat with Charles Wood (Chuck) JS Remote Conf (Chuck) All Remote Confs (Chuck) Clash of Clans (Chuck) Star Wars Commander (Chuck) Coin (Chuck) The Airhook (Chuck) GoldieBlox (Chuck)
This week on BSD Now, it's getting close to Christmas. This episode was brought to you by iX Systems. Mission Complete (https://www.ixsystems.com/missioncomplete/) Submit your story of how you accomplished a mission with FreeBSD, FreeNAS, or iXsystems hardware, and you could win monthly prizes and have your story featured in the FreeBSD Journal! *** Headlines n2k15 hackathon reports (http://undeadly.org/cgi?action=article&sid=20151208172029) tedu@ worked on rebound, malloc hardening, and removing legacy code: “I don't usually get too involved with the network stack, but sometimes you find yourself at a network hackathon and have to go with the flow. With many developers working in the same area, it can be hard to find an appropriate project, but fortunately there are a few dusty corners in networking land that can be swept up without too much disturbance to others.” “IPv6 is the future of networking. IPv6 has also been the future of networking for 20 years. As a result, a number of features have been proposed, implemented, then obsoleted, but the corresponding code never quite gets deleted. The IPsec stack has followed a somewhat similar trajectory.” “I read through various networking headers in search of features that would normally be exposed to userland, but were instead guarded by #ifdef _KERNEL. This identified a number of options for setsockopt() that had been officially retired from the API, but the kernel code retained to provide ABI compatibility during a transition period. That transition occurred more than a decade ago. Binary programs from that era no longer run for many other reasons, and so we can delete support. 
It's only a small improvement, but it gradually reduces the amount of code that needs to be reviewed when making larger, more important changes.” Ifconfig txpower got similar treatment, as no modern WiFi driver supports it. Support for Ethernet Trailers, RFC 893 (https://tools.ietf.org/html/rfc893), enabled zero-copy networking on a VAX with 512-byte hardware pages; the feature was removed even before OpenBSD was founded, but the ifconfig option was still in place. Alexandr Nedvedicky (sashan@) worked on MP-Safe PF (http://undeadly.org/cgi?action=article&sid=20151207143819): “I'd like to thank Reyk for hackroom and showing us a Christmas market. It was also my pleasure to meet Mr. Henning in person. Speaking of Henning, let's switch to PF hacking.” “mpi@ came with a patch (sent to priv. list only currently), which adds a new lock for PF. It's called the PF big lock. The big PF lock essentially establishes a safe playground for PF hackers. The lock currently covers the whole pf_test() function. The pf_test() function parts will be gradually unlocked as the work will progress. To make the PF big lock safe a few more details must be sorted out. The first of them is to avoid recursive calls to pf_test(). The pf_test() could get entered recursively when a packet hits a block rule with a return-* action. This is no longer the case as the ip_send() functions got introduced (the committed change has been discussed privately). Packets sent on behalf of the kernel are dispatched using the softnet task queue now. We still have to sort out the pf_route() functions. The other thing we need to sort out with respect to the PF big lock is reference counting for the state key, which gets attached to the mbuf. A patch has been sent to hackers, waiting for an OK too. The plan is to commit reference counting sometime next year after CVS will be unlocked. There is one more patch at tech@ waiting for an OK. 
It brings OpenBSD and Solaris PF closer to each other by one tiny little step.” *** ACM Queue: Challenges of Memory Management on Modern NUMA Systems (http://queue.acm.org/detail.cfm?id=2852078) “Modern server-class systems are typically built as several multicore chips put together in a single system. Each chip has a local DRAM (dynamic random-access memory) module; together they are referred to as a node. Nodes are connected via a high-speed interconnect, and the system is fully coherent. This means that, transparently to the programmer, a core can issue requests to its node's local memory as well as to the memories of other nodes. The key distinction is that remote requests will take longer, because they are subject to longer wire delays and may have to jump several hops as they traverse the interconnect. The latency of memory-access times is hence non-uniform, because it depends on where the request originates and where it is destined to go. Such systems are referred to as NUMA (non-uniform memory access).” So, depending on which core a program is running on, it will have different throughput and latency to specific banks of memory. Therefore, it is usually optimal to try to allocate memory from the bank of RAM connected to the CPU that the program is running on, and to keep that program running on that same CPU, rather than moving it around. There are a number of different NUMA strategies, including: Fixed, memory is always allocated from a specific bank of memory First Touch, which means that memory is allocated from the bank connected to the CPU that the application is running on when it requests the memory, which can increase performance if the application remains on that same CPU, and the load is balanced optimally Round Robin or Interleave, where memory is allocated evenly, each allocation coming from the next bank of memory so that all banks are used. 
This method can provide more uniform performance, because it ensures that all memory accesses have the same chance to be local vs remote. If even performance is required, this method can be better than something more focused on locality, but that might fail and result in remote access. AutoNUMA: a kernel task routinely iterates through the allocated memory of each process and tallies the number of memory pages on each node for that process. It also clears the present bit on the pages, which will force the CPU to stop and enter the page-fault handler when the page is next accessed. In the page-fault handler it records which node and thread is trying to access the page before setting the present bit and allowing execution to continue. Pages that are accessed from remote nodes are put into a queue to be migrated to that node. After a page has already been migrated once, though, future migrations require two recorded accesses from a remote node, which is designed to prevent excessive migrations (known as page bouncing). The paper also introduces a new strategy: Carrefour is a memory-placement algorithm for NUMA systems that focuses on traffic management: placing memory so as to minimize congestion on interconnect links or memory controllers. It tries to strike a balance between locality and ensuring that the interconnect between a specific pair of CPUs does not become congested, which can make remote accesses even slower. Carrefour uses three primary techniques: Memory collocation: moving memory to a different node so that accesses will likely be local. Replication: copying memory to several nodes so that threads from each node can access it locally (useful for read-only and read-mostly data). Interleaving: moving memory such that it is distributed evenly among all nodes. FreeBSD is slowly gaining NUMA capabilities, and currently supports: fixed, round-robin, first-touch. 
Additionally, it also supports fixed-rr and first-touch-rr, where if the memory allocation fails, because the fixed domain or first-touch domain is full, it falls back to round-robin. For more information, see numa(4) and numa_setaffinity(2) on 11-CURRENT *** Is that Linux? No, it is PC-BSD (http://fossforce.com/2015/12/linux-no-pc-bsd/) Larry Cafiero continues to make some news about his switch to PC-BSD from Linux. This time, in a blog post titled “Is that Linux? No, it's PC-BSD”, he describes an experience out and about where he was asked what was running on his laptop, and was unable, for the first time in 9 years, to answer “it's Linux.” The blog then goes on to mention his experience up to now running PC-BSD, and how the learning curve was fairly easy coming from a Linux background. He mentions that he has noticed an uptick in performance on the system; no specific benchmarks, but this: “Linux was fast enough on this machine. But in street racing parlance, with PC-BSD I'm burning rubber in all four gears.” The only major nits he mentions are having trouble getting a font to switch in Firefox, and not knowing how to enable GRUB quiet mode. 
(I'll have to add a knob back for that) *** Dual booting OS X and OpenBSD with full disk encryption (https://gist.github.com/jcs/5573685) New GPT and UEFI support allow OpenBSD to co-exist with Mac OS X without the need for Boot Camp Assistant or Hybrid MBRs. This tutorial walks the reader through the steps of installing OpenBSD side-by-side with Mac OS X. First the HFS+ partition is shrunk to make room for a new OpenBSD partition. Then the OpenBSD installer is run, and the available free space is set up as an encrypted softraid. The OpenBSD installer will add itself to the EFI partition. Rename the boot loader installed by OpenBSD and replace it with rEFInd, so you will get a boot menu allowing you to select between OpenBSD and OS X. *** Interview - Paul Goyette - pgoyette@netbsd.org (mailto:pgoyette@netbsd.org) NetBSD Testing and Modularity *** iXsystems iXsystems Wins Press and Industry Analyst Accolades in Best in Biz Awards 2015 (http://www.virtual-strategy.com/2015/12/08/ixsystems-wins-press-and-industry-analyst-accolades-best-biz-awards-2015) *** News Roundup HOWTO: L2TP/IPSec with OpenBSD (https://www.geeklan.co.uk/?p=2019) *BSD contributor Sevan Janiyan provides an update on setting up a road-warrior VPN. This first article walks through setting up the OpenBSD server side, and followup articles will cover configuring various client systems to connect to it. The previous tutorial on this configuration is from 2012; things have improved greatly since then, and it is much easier to set up now. The tutorial includes PF rules, npppd configuration, and how to enable isakmpd and ipsec. L2TP/IPSec is chosen because most operating systems, including Windows, OS X, iOS, and Android, include a native L2TP client, rather than requiring some additional software to be installed. *** DragonFly 4.4 Released (http://www.dragonflybsd.org/release44/) DragonFly BSD has made its 4.4 release official this week! 
A lot of big changes; some of the highlights: Radeon / i915 DRM support for up to Linux kernel 3.18. Proper collation support for named locales, shared back to FreeBSD 11-CURRENT. Regex support using TRE: “As a consequence of the locale upgrades, the original regex library had to be forced into POSIX (single-byte) mode always. The support for multi-byte characters just wasn't there.” … “TRE is faster, more capable, and supports multibyte characters, so it's a nice addition to this release.” Other noteworthy changes: the iwm(4) driver, CPU power-saving improvements, and the import of ipfw from FreeBSD (named ipfw3). An interesting tidbit is the switch to the Gold linker (http://bsd.slashdot.org/story/15/12/04/2351241/dragonflybsd-44-switches-to-the-gold-linker-by-default) *** Guide to install Ajenti on Nginx with SSL on FreeBSD 10.2 (http://linoxide.com/linux-how-to/install-ajenti-nginx-ssl-freebsd-10-2/) Looking for a webmin-like interface to control your FreeBSD box? Enter Ajenti, and today we have a walkthrough posted on how to get it set up on a FreeBSD 10.2 system. The walkthrough is mostly straightforward: you'll need a FreeBSD box with root, and will need to install several packages / ports initially. Because there is no native package (yet), it guides you through using Python's pip installer to fetch and get Ajenti running. The author links to some pre-built rc.d scripts and other helpful config files on GitHub, which will further assist in the process of making it run on FreeBSD. Ajenti by itself may not be the best to serve publicly, so it also provides instructions on how to protect the connection by serving it through nginx / SSL, a must-have if you plan on using this over insecure networks. *** BSDCan 2016 CFP is up! 
(http://www.bsdcan.org/2016/papers.php) BSDCan is the biggest North American BSD conference, and my personal favourite. The call for papers is now out, and I would like to see more first-time submitters this year. If you do anything interesting with or on a BSD, please write a proposal. Are the machines you run BSD on bigger or smaller than what most people have? Tell us about it. Are you running a big farm that does something interesting? Is your university research using BSD? Do you have an idea for a great new subsystem or utility? Have you suffered through some horrible ordeal? Make sure the rest of us know the best way out when it happens to us. Did you build a radar that runs NetBSD? A telescope controlled by FreeBSD? Have you run an ISP at the North Pole using Jails? Do you run a usergroup and have tips to share? Have you combined the features and tools of a BSD in a new and interesting way? Don't have a talk to give? Teach a tutorial! The conference will arrange your air travel and hotel, and you'll get to spend a few great days with the best community on earth. Michael W. Lucas's post about the 2015 proposals and rejections (http://blather.michaelwlucas.com/archives/2325) *** Beastie Bits OpenBSD's lightweight web server now in FreeBSD's ports tree (http://www.freshports.org/www/obhttpd/) Stephen Bourne's NYCBUG talk is online (https://www.youtube.com/watch?v=FI_bZhV7wpI) Looking for owner to FreeBSDWiki (http://freebsdwiki.net/index.php/Main_Page) HOWTO: OpenBSD Mail Server (http://frozen-geek.net/openbsd-email-server-1/) A new magic getopt library (http://www.daemonology.net/blog/2015-12-06-magic-getopt.html) PXE boot OpenBSD from OpenWRT (http://uggedal.com/journal/pxe-boot-openbsd-from-openwrt/) Supporting the OpenBSD project (http://permalink.gmane.org/gmane.os.openbsd.misc/227054) Feedback/Questions Zachary - FreeBSD Jails (http://slexy.org/view/s20pbRLRRz) Robert - Iocage help! 
(http://slexy.org/view/s2jGy34fy2) Kjell - Server Management (http://slexy.org/view/s20Ht8JfpL) Brian - NAS Setup (http://slexy.org/view/s2GYtvd7hU) Mike - Radius Followup (http://slexy.org/view/s21EVs6aUg) Laszlo - Best Stocking Ever (http://slexy.org/view/s205zZiJCv) ***
Check out JS Remote Conf! Buy a ticket! Submit a CFP! 03:07 - Burke Holland Introduction Twitter GitHub Blog 04:01 - TJ Van Toll Introduction Twitter GitHub Blog 04:33 - Telerik Telerik Platform 04:57 - NativeScript JavaScriptCore JavaScript Jabber #128: JavaScriptCore with Cesare Rocchi React Native 07:41 - The Views 10:07 - Customizability, Styling, and Standardization 16:19 - React Native vs NativeScript 18:37 - APIs CocoaPods 21:17 - How NativeScript Works 23:04 - Edgecases? Message Passing Marshalling (Mapping) 26:12 - Memory Management 27:06 - UITableView 29:59 - NativeScript and Angular AngularConnect Talks on YouTube Sebastian Witalec: Building native mobile apps with Angular 2 0 and NativeScript 33:22 - Adding NativeScript to Existing Projects 33:51 - Building for Wearables and AppleTV Burke Holland: Apple Watch and the Cross-Platform Crisis 35:59 - Building Universal Applications 37:14 - Creating NativeScript Kendo UI 39:42 - Use Cases nativescript.org/app-samples-with-code 41:01 - Are there specific things NativeScript isn’t good for? npmjs.com search: nativescript 42:54 - Testing and Debugging 48:35 - Data Storage Picks Caddy (AJ) OC ReMix #505: Top Gear 'Track 1 (Final Nitro Mix)' by Rayza (AJ) Jamie Talbot: What are Bloom filters? A tale of code, dinner, and a favour with unexpected consequences (Aimee) Mike Gehard (@mikegehard) (Aimee) Joe Eames: Becoming Betazoid: How to Listen and Empathize with Others in the Workplace @ AngularConnect 2015 (Dave) Exercise (Chuck) Sleep (Chuck) electron (Aaron) The Synchronicity War Series by Dietmar Wehr (Aaron) PAUSE (Burke) Outlander (TJ)
Richard chats with Clint Huffman about memory management in Windows. But first, a quick conversation about the state of affairs these days, including Clint's work on the Windows Performance Field Guide, due to be published in early 2014. Clint runs down the various elements that matter about memory, the effects of running 32-bit apps on a 64-bit OS, and more. Along the way, he mentions lots of great resources, including the PFE Performance Guide, the Windows Performance Toolkit, and a knowledge base article on How to Use Poolmon. Check 'em out!
The panelists discuss memory management.
This week Ben and Merrick interview Michael Dominic and ask him all the hard questions. Dominic's new job http://edufii.com/ Self-taught vs. formally educated programmers The state of CS education in this country, and what's wrong with it ARC (Automatic Reference Counting) in Objective-C Objective-C blocks vs. delegates Source code samples at interviews Ramiro Gómez – Venn [...]
In this episode of The Treehouse Show, Jim Hoskins (@jimrhoskins) and Jason Seifer (@jseifer) talk about Memory management, wireframing, and JavaScript frameworks.
Better late than never: the new revision is here! There was a lot to discuss, but this time no extra show notes. Show notes 00:00:24.000 Google Takeout Google now lets you export the data you have stored across its various products (Buzz posts, photos, profile info) in a usable format. Why can't other services (Facebook) offer the same? 00:08:35.000 WebCL […]
Paul Hegarty continues his lecture on the Foundation framework and then covers Objective-C memory management and introspection. (September 30, 2010)
RIApodcast - we discuss the latest news and topics in Rich Internet Applications and Technologies
Cathi on memory-erasing experiments in mice; Nora on multicolr from Idee.
Carl and Richard have a discussion with a vegetable. Yeah, that's right, we interviewed a leek named Ricky Leeks. Beyond all the endless puns is a great conversation about memory management in .NET. Ricky also lets us know about a free e-book on .NET memory management you can download from the links in the show. Check it out! Support this podcast at — https://redcircle.com/net-rocks/donations