Slide into our VMs
All speakers are announced at AIE EU, schedule coming soon. Join us there, or in Miami with the renowned organizers of React Miami! The Singapore CFP is also open!

We've called this out a few times over in AINews, but the overwhelming consensus in the Valley is that "the IDE is dead". In November it was just a gut feeling, but now we actually have data: even at the canonical "VSCode fork" company, people are officially using more agents than tab autocomplete (the first wave of AI coding).

Cursor has had cloud agents for a few months now, and this specific launch is around computer use, which has come a long way since we first talked with Anthropic about it in 2024, and which Jonas productized as Autotab.

We also take the opportunity to do a live demo, talk about slash commands and subagents, and the future of continual learning and personalized coding models, something that Sam previously worked on at New Computer. (The fact that both of these folks are top-tier CEOs of their own startups who have now joined the insane talent density gathering at Cursor should also not be overlooked.)

Full episode on YouTube — please like and subscribe!

Timestamps
00:00 Agentic Code Experiments
00:53 Why Cloud Agents Matter
02:08 Testing First Pillar
03:36 Video Reviews Second Pillar
04:29 Remote Control Third Pillar
06:17 Meta Demos and Bug Repro
13:36 Slash Commands and MCPs
18:19 From Tab to Team Workflow
31:41 Minimal Web UI Philosophy
32:40 Why No File Editor
34:38 Full Stack Cursor Debate
36:34 Model Choice and Auto Routing
38:34 Parallel Agents and Best Of N
41:41 Subagents and Context Management
44:48 Grind Mode and Throughput Future
01:00:24 Cloud Agent Onboarding and Memory

Transcript
EP 77 - CURSOR - Audio version

[00:00:00] Agentic Code Experiments
Samantha: This is another experiment that we ran last year and didn't decide to ship at the time — we may come back to it: an LLM judge, but one that was also agentic and could write code.
So it wasn't just picking, but also taking the learnings from the two models — or N models — it was looking at and writing a new diff. And what we found was that there were strengths to using models from different model providers as the base level of this process. Basically, you could get an almost synergistic output that was better than having a very unified bottom model tier.
Jonas: We think that over the coming months, the big unlock is not going to be one person with a model getting more done — like the water flowing faster. We'll be making the pipe much wider, and so parallelizing more, whether that's swarms of agents or parallel agents. Both of those contribute to getting much more done in the same amount of time.

Why Cloud Agents Matter
swyx: This week, one of the biggest launches Cursor's ever done is cloud agents. I think you had [00:01:00] cloud agents before, but this was like: you give Cursor a computer, right? So basically they bought Autotab and repackaged it. Is that what's going on?
Jonas: That's a big part of it, yeah. Cloud agents already ran in their own computers, but they were sort of sight-reading code, and those computers were typically blank VMs that were not set up with the dev environment for whatever repo the agent's working on. One of the things we talk about is: put yourself in the model's shoes. If you were seeing tokens stream by, and all you could do was sight-read code and spit out tokens and hope that you had done the right thing —
swyx: No chance.
Jonas: I'd be so bad. Obviously you need to run the code. I think that's probably not that contrarian of a take, but no one had done it yet.
And so giving the model the tools to onboard itself, then use full computer use — pixels in, coordinates out — and have a cloud computer with different apps in it, is the big unlock we've seen internally. Usage of this has gone from "oh, we use it for little copy changes" [00:02:00] to "no, we're really driving new features with this new type of agentic workflow."
swyx: Alright, let's see it. Cool.

Live Demo Tour
Jonas: So this is what it looks like on cursor.com/agents. This is one I kicked off a while ago. On the left-hand side is the chat — a very classic agentic thing. The big new thing here is that the agent will test its changes. You can see here it worked for half an hour. That's because it not only took time to write the tokens of code, it also took time to test them end to end. So it started dev servers and iterated when needed. That's one part of it: the model works for longer and doesn't come back with an "I tried some things" PR, but an "I tested it" PR that's ready for your review. One of the intuition pumps we use there: if a human gave you a PR, asked you to review it, and they hadn't tested it, you'd also be annoyed — you'd be like, only ask me for a review once it's actually ready. So that's what we've done here.

Testing Defaults and Controls
swyx: [00:03:00] A simple question I wanted to get out front: some PRs are way smaller, like just a copy change. Does it always do the video?
Jonas: Sometimes.
swyx: Okay. So what's the judgment?
Jonas: The model does it. We do some default prompting on what types of changes to test. There's a slash command people can use, /no-test, and if you use it the model will not test.
swyx: But the default is test.
Jonas: The default is to be calibrated. So we tell it: don't test very simple copy changes, but test more complex things.
And then users can also write their AGENTS.md and specify things like: if you're editing this subpart of my monorepo, never test it, 'cause that won't work, or whatever.

Videos and Remote Control
Jonas: So pillar one is the model actually testing. Pillar two is the model coming back with a video of what it did. We've found that in this new world, where agents can write much more code end to end, reviewing the code is one of the new bottlenecks that crops up. Reviewing a video is not a substitute for reviewing code, but it is an entry point that's much, much easier to start with than glancing at [00:04:00] some giant diff. So typically you kick one off, it's done, you come back, and the first thing you do is watch this video. So this is a video of it — in this case I wanted a tooltip over this button, and it went and showed me what that looks like. Here it actually used a gallery; sometimes it will build Storybook-type galleries where you can see that component in action. So that's pillar two: demo videos of what it built.
And then pillar number three: I have full remote-control access to this VM. I can go in here, I can hover things, I can type — I have full control. Same thing for the terminal: full access. That's also really useful, because sometimes the video is all you need to see. And oftentimes, by the way, the video's not perfect — the video shows you whether this is worth merging immediately or, more often, worth iterating with to get it to the stage where I'm ready to merge. I can go through other examples where the first video [00:05:00] wasn't perfect, but it gave me confidence we were on the right track, and two or three follow-ups later it was good to go. And then I also have full access here, because some things you just want to play around with.
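As an aside from us: the per-repo testing overrides Jonas mentions could plausibly be expressed in an AGENTS.md like the following. The paths, headings, and rule wording here are our own hypothetical illustration, not Cursor's documented syntax:

```markdown
# AGENTS.md — hypothetical testing hints

## Testing
- By default, test changes end to end: start the dev server and
  verify the change in the browser, then record a demo video.
- Skip testing for trivial copy changes (equivalent to /no-test).
- If you are editing packages/legacy-billing/, never run tests —
  that harness does not work in the cloud VM.
```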
You want to get a feel for it, and there's no substitute for a live preview. The VNC-style VM remote access gives you that.
swyx: Amazing. Sorry — what is VNC?
Jonas: Just remote desktop.
swyx: Sam, any other details you always want to call out?
Samantha: Yeah, for me the videos have been super helpful. Especially because a common problem for me with agents, and cloud agents beforehand, was under-specification in my requests. Our plan mode — going really back and forth and getting a detailed implementation spec — is a way to reduce the risk of under-specification, but then, similar to how human communication breaks down over time, you have this risk where, when I go to the trouble of pulling down and running this branch locally, I'm going to see: I said this should be a toggle and you gave me a checkbox — why didn't you get that detail? Having the video up front [00:06:00] makes that alignment — you're talking about a shared artifact with the agent — very clear, which has been super helpful for me.
Jonas: I can quickly run through some other examples.

Meta Agents and More Demos
Jonas: So this is a very front-end-heavy one.
swyx: One question I was gonna ask: is this only for front end?
Jonas: Exactly — one question you might have is, is this only for front end? So this is another example, where the thing I wanted it to implement was a better error message for saving secrets. The cloud agents support adding secrets; that's part of what they need to access certain systems, and part of onboarding is giving that access.
swyx: This is a cloud agent working on cloud agents.
Jonas: Yes. So this is the fun thing —
Samantha: It can get super meta.
Jonas: It can get super meta. It can start its own cloud agents, it can talk to its own cloud agents. Sometimes it's hard to wrap your mind around that.
We have disabled its cloud agents starting more cloud agents — we currently disallow that.
swyx: Someday you might.
Jonas: Someday we might. So this was actually mostly a backend change, in terms of the error handling, where if the secret is far too large —
swyx: Oh, this is actually really cool. Wow. That's the DevTools.
Jonas: That's the DevTools. So if the secret is far too large — we don't [00:07:00] allow secrets above a certain size, we have a size limit on them — the error message there was really bad. It was just some generic "failed to save" message. So I was like, hey, we want a better error message. The first cool thing it did here, with zero prompting on how to test this: instead of typing out a character 5,000 times to hit the limit, it opens DevTools, writes JS to paste 5,000 characters of the letter A into the input, closes the DevTools, hits save, and gets the new error message. It looks like the video actually cut off, but here you can see the screenshot of the error message. So that's a front-end-plus-backend, end-to-end feature.
swyx: Yeah. And you just need a full VM, a full computer, to run everything. Okay.
Jonas: Yeah. We've had versions of this — this is one of the Autotab lessons, where we started that in 2022 — [00:08:00] no, in 2023. At the time it was browser use, the DOM, all these different things. And I think we ended up very AGI-pilled, in the sense that: just give the model pixels, give it a box — a brain in a box is what you want — and remove limitations around context and capabilities such that the bottleneck is the intelligence. And given how smart models are today, that's a very far-out bottleneck.
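To make the "brain in a box" framing concrete, here is a minimal sketch of a pixels-in, coordinates-out agent loop. Everything in it — `capture_screen`, `model_step`, the action names — is a stand-in we made up for illustration, not Cursor's or Anthropic's actual API:

```python
# Minimal sketch of a "pixels in, coordinates out" computer-use loop.
# All names here (capture_screen, model_step, Action) are hypothetical.
from dataclasses import dataclass


@dataclass
class Action:
    kind: str          # "click", "type", or "done"
    x: int = 0
    y: int = 0
    text: str = ""


def capture_screen() -> bytes:
    # In a real harness this would be a screenshot of the cloud VM;
    # here it is just a placeholder payload.
    return b"<png bytes>"


def model_step(screenshot: bytes, goal: str, history: list[Action]) -> Action:
    # Stand-in policy: click a button, type the goal, then stop.
    # A real model would look at the pixels and emit coordinates.
    script = [Action("click", 120, 48), Action("type", text=goal), Action("done")]
    return script[len(history)]


def run_agent(goal: str, max_steps: int = 10) -> list[Action]:
    history: list[Action] = []
    for _ in range(max_steps):
        action = model_step(capture_screen(), goal, history)
        history.append(action)
        if action.kind == "done":
            break
        # A real loop would dispatch the action into the VM here.
    return history


trace = run_agent("add a tooltip to the save button")
print([a.kind for a in trace])  # → ['click', 'type', 'done']
```

The point of the shape: the model sees only pixels and emits only actions, so any app in the VM — browser, DevTools, file picker — is reachable without app-specific integration.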
And so giving it its full VM, and having it be onboarded with its dev environment set up like a human's would be, has just been a really big step change in capability for us internally.
swyx: Yeah. I would say, let's call it a year ago, the models weren't even good enough to do any of this stuff.
Samantha: Even six months ago, yeah.
swyx: What people have told me is that round about Sonnet 4.5 is when this started being good enough to just automate fully by pixel.
Jonas: Yeah, I think it's always a question of when it's good enough. I think we found in particular with Opus 4.5 and 4.6, and Codex 5.3, that those were additional step [00:09:00] changes in the autonomy-grade capabilities of the model — to just go off, figure out the details, and come back when it's done.
swyx: I want to appreciate a couple of details. One: TanStack Router, I see it. I'm a big fan. Do you know Tanner?
Jonas: No.
swyx: Just random lore — he's a buddy of mine. And then the other thing — if you switch back to the video, I want to shout out this thing. Probably Sam did it, I don't know.
Jonas: The chapters.
swyx: What is this called? Yeah, this is called chapters — it's like a Vimeo thing, I don't know. But it's so nice, the design details. And obviously a company called Cursor has to have a beautiful cursor —
Samantha: And it is —
swyx: — the cursor. You see it, branded? It's the Cursor cursor, yeah. Okay, cool. And then I complained to Evan — I was like, okay, but you guys branded everything but the wallpaper. And he was like, no, that's a Cursor wallpaper. I was like, what?
Samantha: Yeah, Rio picked the wallpaper, I think. The video — that's probably Alexi, and a few others on the team did the chapters on the video. Matthew Frederico — there's been a lot of teamwork on this.
It's a huge effort.
swyx: I just like design details.
Samantha: Yeah.
swyx: And then when you download it, it adds a little Cursor... kind of TikTok clip. [00:10:00] So it's to make it really obvious it's from Cursor.
Jonas: We did the TikTok-style branding at the end. This was actually in our launch video — Alexi demoed the cloud agent that built that feature. Which was funny, because that was an instance of one consequence of having these videos: we use best-of-N, where you run different models head to head on the same prompt, a lot more. One of the complications with doing that before was that you'd run four models and they would come back with giant diffs — 700 lines of code, times four. What are you gonna do, review all of that? Horrible. But if they come back with four 20-second videos — yeah, I'll watch four 20-second videos. And even if none of them is perfect, you can figure out which one you want to iterate with to get it over the line. So that's been really fun.

Bug Repro Workflow
Jonas: Here's another example we found really cool, which we've since turned into a slash command as well: /repro. [00:11:00] For bugs in particular, the model having full access to its own VM means it can first reproduce the bug, make a video of the bug reproducing, fix the bug, then make a video of the bug being fixed — the same workflow, with the bug obviously not reproducing. That has been the single category that has gone from "these types of bugs are really hard to reproduce and take tons of time locally — and even if you try a cloud agent on it, are you confident it actually fixed it?" to: when this happens, you'll merge it in 90 seconds or something like that. So this is an example — let me see if this is the broken one or the fixed one. Okay, this is the fixed one.
So we had a bug on cursor.com/agents where if you attached images, removed some of them, and then still submitted your prompt, the removed ones would actually still get attached to the prompt. And here you can see Cursor is using its full desktop, by the way. [00:12:00] This is one of the cases where, if you just do browser-use-type stuff, you'll have a bad time, 'cause now it needs to upload files — it just uses the native file picker to do that. So you can see here: it's uploading files, it's going to submit a prompt, and then it will go and open up — so this is the meta thing: Cursor agent prompting Cursor agent inside its own environment. And you can see the bug here: there are five images attached to the prompt, whereas when it submitted, it only had one image.
swyx: I see. Yeah. But you've got to enable that if you're gonna use Cursor agent inside Cursor agent.
Jonas: Exactly. And here, this is the after video, where it does the same thing: attaches images, removes some of them, hits send. And you can see, once this is submitted, only one of the images is left in the attachments.
swyx: Beautiful.
Jonas: Okay, so: easy merge.
swyx: So when does it choose to do this? Because this is an extra step.
Jonas: Yes. I think I've not done a great job yet of calibrating the model on when to reproduce these things. Sometimes it will do it of its own accord. We've been conservative, [00:13:00] trying to have it only do it when it's quite sure, because it does add some amount of time to how long it takes to work on something. But we've also added things like the /repro command, where you can just say "fix this bug /repro", and then it knows it should first make you a video of it actually finding and making sure it can reproduce the bug.
swyx: Yeah. One ML topic this ties into is reward hacking, where the model writes tests that it updates to just pass.
So: first write the test, show me it fails, then make the test pass — the classic red-green —
Jonas: Yep.
swyx: — like a TDD thing.
Jonas: TDD, exactly.
swyx: Very cool. Was that the last demo? Is there anything I missed on the demos, or points you'd add?
Samantha: I think that covers it well. Yeah.
swyx: Cool. Before we stop the screen share, can you give me just a tour of the slash commands? What are the good ones?
Samantha: Yeah, we want to increase discoverability around this too — I think that'll be a future thing we work on. But there's definitely a lot of good stuff now.
Jonas: We have a lot of internal ones that I think won't be that interesting. Here's an internal one I made — I don't know if anyone else at Cursor uses this one: fix-bb.
Samantha: I've never heard of it.
Jonas: Yeah. [00:14:00] Fix Bug Bot. So this is a thing we want to integrate more tightly. —
swyx: So you made it for yourself.
Jonas: I made this for myself. It's actually available to everyone on the team, but no one knows about it. There will be Bug Bot comments, and Bug Bot has a lot of cool things — we actually just launched Bug Bot Auto-Fix, where you can click a button, or change a setting, and it will automatically fix its own findings, and that works great in a bunch of cases. But there are some cases where having the context of the original agent that created the PR is really helpful for fixing the bugs, because it might be: oh, the bug here is a regression, and actually you meant to do something more like that. So, with the original prompt and all the context of the agent that worked on it, here I can just do fix-bb and it does that. /no-test is another one we've had. /repro is in here — we mentioned that one.
Samantha: One of my favorites is cloud-agent-diagnosis. This is one that makes heavy use of the Datadog MCP.
I think Nick and David on our team wrote it. [00:15:00] Basically, if there is a problem with a cloud agent, it spins up a bunch of subagents —
swyx: From a single instance?
Samantha: Yeah. It takes the agent ID as an argument and spins up a bunch of subagents, using the Datadog MCP, to explore the logs and find all the problems that could have happened with that agent. You can do quick stuff quickly with the Datadog UI, but this takes the debugging time down to a single agent call, as opposed to trawling through logs yourself.
Jonas: You should also talk about the stuff we've done with transcripts.
Samantha: Yes. We've also done some things internally — there'll be some version of this we ship publicly soon — where you can spin up an agent and give it access to another agent's transcript, to debug something that happened.
swyx: So it acts as an external debugger.
Samantha: Or it continues the conversation — almost like forking it.
swyx: A transcript includes all the chain of thought — for the 11 minutes here, 45 minutes there?
Samantha: Yeah, exactly. So it acts as a secondary agent that debugs the first. We've started to push more on that.
swyx: And they're all the same [00:16:00] code? Just different prompts, but the same —
Samantha: Yeah — the same cloud agent infrastructure and the same harness. There's some extra infrastructure that goes into piping in an external transcript when we include it as an attachment, but for things like cloud-agent-diagnosis, it's mostly just using the Datadog MCP.
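The fan-out shape of that diagnosis command can be sketched in a few lines. The log data, the filter keywords, and the `subagent` function are invented for illustration — in the real flow each subagent would query logs through the Datadog MCP, not a dict:

```python
# Sketch of the fan-out pattern behind a "cloud agent diagnosis" command:
# take an agent ID, spin up one subagent per log source, merge findings.
# LOGS and the keyword filter are made-up stand-ins for Datadog queries.
from concurrent.futures import ThreadPoolExecutor

LOGS = {
    "api": ["agent 42 started", "agent 42 secret too large"],
    "vm": ["agent 42 booted", "agent 42 oom-killed"],
}


def subagent(source: str, agent_id: str) -> list[str]:
    # Each subagent scans one slice of the logs for suspicious lines.
    suspicious = ("too large", "oom", "error")
    return [line for line in LOGS[source]
            if agent_id in line and any(word in line for word in suspicious)]


def diagnose(agent_id: str) -> list[str]:
    # Fan out across sources in parallel, then flatten the findings.
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda source: subagent(source, agent_id), LOGS)
    return [finding for batch in results for finding in batch]


print(diagnose("42"))  # → ['agent 42 secret too large', 'agent 42 oom-killed']
```

The win Samantha describes is exactly this collapse: N parallel searches behind one call, instead of trawling each source by hand.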
Samantha: We also launched MCP support along with this cloud agent launch — support for cloud agent MCPs.
swyx: Oh, that got drowned out.
Jonas: We'll be doing a bigger marketing moment for it next week. But you can now use MCPs.
swyx: People will listen to this as well.
Samantha: They'll be ahead of the curve. And I actually don't know if the Datadog MCP is publicly available yet — we may just be beta testing it — but it's been one of my favorites to use.
swyx: I think that one's interesting for Datadog, 'cause Datadog wants to own that side. Interesting with Bits — I don't know if you've tried Bits?
Samantha: I haven't tried Bits.
Jonas: That's their cloud agent product.
swyx: Yeah. They want to be: "we own your logs, give us some part of the [00:17:00] self-healing software that everyone wants." But obviously Cursor has a strong opinion on coding agents, and you're taking share away from that, which obviously you're going to do. Not every company's like Cursor, but it's interesting: if you're Datadog, what do you do here? Do you expose your logs over MCP and let other people build on it, or do you try to own it, because it's extra business for you? It's an interesting one.
Samantha: It's a good question. All I know is that I love the Datadog MCP.
Jonas: And yeah, it's gonna be no surprise that people will demand it, right?
Samantha: Yeah.
swyx: It's like any system-of-record company like this: how much do you give away? Cool. I think that's that for the cloud agents tour. And — when did Cursor launch cloud agents?
Jonas: June last year.
swyx: June last year. So it's been developing slowly. And Michael did a post himself where he showed this chart of agents overtaking tab.
And I'm like, wow, this is the biggest transition in code —
Jonas: Yeah.
swyx: — like, in the last [00:18:00] —
Jonas: Yeah. I think that kind of got drowned out.
swyx: Not at all — I think it's been highlighted by our friend Andrej Karpathy today.
Jonas: Okay.
swyx: Talk more about it. What does it mean? I just got given the Cursor tab key.
Jonas: Yes.
Samantha: That's cool.
swyx: I know — it's gonna be, like, put in a museum.
Jonas: It is.
Samantha: I have to say, I haven't used tab in a little bit myself.
Jonas: Yeah. I think what it looks like to code with AI — to generally create software, even if you want to go higher level — is changing very rapidly. Not a hot take. But from our vantage point at Cursor, I think one thing that's probably underappreciated from the outside is that we are extremely self-aware about that fact. Cursor got its start in phase one, era one, of tab and autocomplete, and that was really useful in its time. But a lot of people have stopped looking at text files and editing code — we call it "hand coding" now, when you type out the actual letters.
swyx: Oh, that's cute.
Jonas: Yeah. You're so boomer. So boomer. [00:19:00] And so that has been a slowly accelerating — and in the last few months, rapidly accelerating — shift. And we think that's going to happen again with the next thing. Some of the pains around tab: it's great, but I actually just want to give more to the agent. I don't want to do one tab at a time — I want to just give it a task, have it go off and do a larger unit of work, and I can
I want to just give it a task and it goes off and does a larger unit of work and I can.Lean back a little bit more and operate at that higher level of abstraction that's going to happen again, where it goes from agents handing you back diffs and you're like in the weeds and giving it, 32nd to three minute tasks, to, you're giving it, three minute to 30 minute to three hour tasks and you're getting back videos and trying out previews rather than immediately looking at diffs every single time.swyx: Yeah. Anything to add?Samantha: One other shift that I've noticed as our cloud agents have really taken off internally has been a shift from primarily individually driven development to almost this collaborative nature of development for us, slack is actually almost like a development on [00:20:00] Id basically.So Iswyx: like maybe don't even build a custom ui, like maybe that's like a debugging thing, but actually it's that.Samantha: I feel like, yeah, there's still so much to left to explore there, but basically for us, like Slack is where a lot of development happens. Like we will have these issue channels or just like this product discussion channels where people are always at cursing and that kicks off a cloud agent.And for us at least, we have team follow-ups enabled. So if Jonas kicks off at Cursor in a thread, I can follow up with it and add more context. And so it turns into almost like a discussion service where people can like collaborate on ui. Oftentimes I will kick off an investigation and then sometimes I even ask it to get blame and then tag people who should be brought in. ‘cause it can tag people in Slack and then other people will comeswyx: in, can tag other people who are not involved in conversation. Yes. Can just do at Jonas if say, was talking to,Samantha: yeah.swyx: That's cool. You should, you guys should make a big good deal outta that.Samantha: I know. 
I feel like there's a lot more to do with our Slack surface area to show people externally. But basically, it [00:21:00] can bring other people in, and then other people can also contribute to that thread, and you can end up with a PR — again, with the artifacts visible — and people can go, okay, cool, we can merge this. So for us, the IDE is almost moving into Slack, in some ways.
swyx: I have the same experience, but it's not developers — it's me, a designer, salespeople. So: me on technical marketing and vision, the designer on design, and then salespeople on "here's the legal source of what we agreed on." And they all just collaborate and correct the agents.
Jonas: I think what we've found in these threads is that the work that's left — what the humans are discussing — is the nugget of what's actually interesting and relevant. It's not the boring details of "where does this if-statement go?" It's: do we wanna ship this? Is this the right UX? Is this the right form factor? How do we make this more obvious to the user? Those really interesting, higher-order questions are so easy to collaborate on when you leave the implementation to the cloud agent.
Samantha: Totally. And no more discussion of "am I gonna do this? are you [00:22:00] gonna do this?" — Cursor's doing it. You just have to decide whether you like it.
swyx: You guys have probably figured this out already, but you need like a mute button. Like: Cursor, we're going to take this offline — but still online. We need to talk among the humans first, so it should stop responding to everything.
Jonas: Yeah. This is a design decision: currently Cursor won't chime in unless you explicitly @-mention it.
Samantha: So it's not always listening.
Jonas: But it can see all the intermediate messages.
swyx: Have you done the recursive thing — can Cursor add another Cursor, or spawn another Cursor?
Jonas: We've done some versions of this.
swyx: Because it can add humans.
Jonas: Yes. One of the other things we've been working on — an implication of generating the code being so easy — is that getting it to production is still harder than it should be. Broadly, you solve one bottleneck and three new ones pop up. One of the new bottlenecks is getting into production, and we have a joke internally where you'll be talking about some feature and someone says, "I have a PR for that." It's so easy [00:23:00] to get to "I have a PR for that," but it's still relatively hard to get from "I have a PR for that" to "I'm confident and ready to merge this." So over the coming weeks and months, that's a thing we think a lot about: how do we scale up compute on that pipeline of getting things from a first draft an agent did?
swyx: Isn't that what merge queues — isn't that what Graphite's for?
Jonas: Graphite is a big part of that. The cloud agent testing —
swyx: Is it fully integrated, or still different companies working on it?
Jonas: I think we'll have more to share there in the future, but the goal is a great end-to-end experience, where Cursor doesn't just help you generate code tokens — it helps you create software end to end. Review is a big part of that, and especially as models have gotten much better at writing and generating code, we've felt that bottleneck crop up relatively more.
swyx: Sorry, this is completely unplanned, but I have people arguing, one, that you need AI to review AI, and then there's another school of thought where it's: no, [00:24:00] reviews are dead — just show me the video.
Samantha: Yeah.
I feel, again, for me, the video is often alignment, and then I often still wanna go through a code review process.
swyx: Like still look at the files and everything.
Samantha: Yeah. There's a spectrum, of course. The video, if it's really well done and it fully tests everything, you can feel pretty confident, but it's still helpful to look at the code. I pay a lot of attention to Bugbot. Bugbot has been really highly adopted internally. We tell people: don't leave Bugbot comments unaddressed, because we have such high confidence in it. So people always address their Bugbot comments.
Jonas: Once you've had two cases where you merged something, went back later, and there was a bug in it, and you were like, ah, Bugbot had found that, I should have listened to Bugbot — once that happens two or three times, you learn to wait for Bugbot.
Samantha: Yeah. So I think for us there's that code-level review, where it's looking at the actual code, and then there's the feature-level review, where you're looking at the features. There's a whole number of different areas. There'll probably eventually be things like performance-level review, security [00:25:00] review — more different aspects of how a feature might affect your codebase, where you want to potentially leverage an agent to help.
Jonas: And some of those, like Bugbot, will be synchronous, and you'll typically want to wait on them before you merge. But another thing we're starting to see is that, with cloud agents, you scale up this parallelism and how much code you generate. Ten-person startups start to need the dev-ex and pipelines that a 10,000-person company used to need.
And that looks like a lot of the things that 10,000-person companies invented in order to get that volume of software to production safely. So that's things like: release frequently or release slowly, have different stages where you release, have checkpoints, automated ways of detecting regressions. And I think we're gonna need stacked diffs, merge queues—
swyx: Exactly. A lot of those things are going to be important going forward. I think the majority of people still don't know what stacked diffs are. I have many friends at Facebook and I'm pretty friendly with Graphite; I've just [00:26:00] never needed it, because I don't work on a team that large. And it's the democratization of: here's what we've already worked out at very large scale, and here's how it benefits you too. To me, one of the beautiful things about GitHub is that it's actually useful to me as an individual solo developer, even though it's actually collaboration software.
Jonas: Yep.
swyx: And I don't think a lot of dev tools have figured that out yet — that transition from large down to small.
Jonas: Yeah. Cursor is probably the inverse story.
swyx: This is small up to large.
Jonas: Yeah. Historically, part of why Cursor grew so quickly was that anyone on the team could pick it up — and in fact people would pick it up on the weekend for their side project and then bring it into work, because they loved using it so much.
swyx: Yeah.
Jonas: And a thing that we've started working on a lot more — not us specifically, but as a company, other folks at Cursor — is making it really great for teams: making it so the tenth person that starts using Cursor on a team is immediately set up. We launched Marketplace recently, so other people can [00:27:00] configure plugins — MCPs and skills. So skills and MCPs, other people can configure those, so that my Cursor is ready to go and set up.
Sam loves the Datadog MCP, and the Slack MCP you've also been using a lot.
Samantha: It's also pre-launch, but I feel like it's so good.
Jonas: Yeah, my Cursor should be configured that way — if Sam feels strongly, that's just amazing and required.
swyx: Is it automatically shared, or do you have to go and...?
Jonas: It depends on the MCP. Some are obviously auth'd per user, so Sam can't auth my Cursor with my Slack MCP. But some are team auth, and those can be set up by admins.
swyx: Yeah, that's cool. We had Aman on the pod when Cursor was five people, and everyone was like, okay, what's the thing? And usually it's something like teams and org and enterprise — and it's actually working. But at that stage, when you're five, when you're just a VS Code fork, it's like: how do you get there? Will people pay for this? People do pay for it.
Jonas: Yeah. And for cloud agents, we expect [00:28:00] similar kinds of PLG dynamics, where off the bat we've seen a lot of adoption with smaller teams, where the codebases are not quite as complex to set up. If you need some insane Docker layer-caching thing for builds not to take two hours, that's going to take a little bit longer for us to support. Whereas if you have a frontend and a backend, it's one click — agents can install everything they need themselves.
swyx: This is a good chance for me to ask some technical, check-the-box questions. Can I choose the size of the VM?
Jonas: Not yet. We are planning on adding that.
swyx: Obviously you want like L, XL, XXL, whatever, right? Like the Amazon sort of menu.
Jonas: Yes, exactly. We'll add that.
swyx: In some ways you basically have to become like an EC2 — you rent a box.
Jonas: You rent a box, yes. We talk a lot about "brain in a box." Cursor, we want to be a brain in a box.
swyx: But is the mental model different?
Is it more serverless? Is it more persistent? Is it something else?
Samantha: We want it to be a bit persistent. The desktop should be [00:29:00] something you can return to, even after some days. Maybe you go back and it's still thinking about a feature for some period of time.
swyx: The full, like, suspend-the-memory-and-bring-it-back, and then keep going.
Samantha: Exactly.
swyx: That's an interesting one, because what I actually want — from a Manus or an OpenClaw, whatever — is to log in with my credentials to the thing, but not actually store them in some secret store, because this is my most sensitive stuff, this is my email, whatever. And just have it persist to the image — I don't know how it works under the hood — but rehydrate and then just keep going from there. I don't think a lot of infra works that way. A lot of it's stateless, where you save it to a Docker image, and then it's only whatever you can describe in a Dockerfile — that's the only thing you can clone multiple times in parallel.
Jonas: Yeah. We have a bunch of different ways of setting them up. There's a Dockerfile-based approach, but the main default way is actually snapshotting—
swyx: Like a Linux VM.
Jonas: Like a VM, right. You run a bunch of install commands, and then you snapshot, more or less, the file system. And that gets you set up for everything [00:30:00] — you bring a new VM up from that template, basically.
swyx: Yeah.
Jonas: And that's a bit distinct from what Sam was talking about with the hibernating and rehydrating, where that is a full memory snapshot as well.
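The two snapshot styles Jonas distinguishes — a filesystem/template snapshot versus a full memory snapshot — can be sketched as a toy model. This is not Cursor's implementation; the `VM` class and both snapshot functions are invented purely to illustrate the difference in what survives a restore:

```python
# Toy model of the two snapshot styles (hypothetical, for illustration only):
# a disk/template snapshot restores files but starts fresh otherwise, while a
# full memory snapshot also restores in-flight state — the "hibernate" case
# where the browser tab is still open when you come back.

import copy

class VM:
    def __init__(self, files=None):
        self.files = dict(files or {})   # persisted filesystem
        self.memory = {}                 # in-flight state (open pages, etc.)

def disk_snapshot(vm):
    return copy.deepcopy(vm.files)

def restore_from_disk(snap):
    return VM(files=snap)                # memory starts empty

def memory_snapshot(vm):
    return copy.deepcopy((vm.files, vm.memory))

def restore_from_memory(snap):
    files, memory = copy.deepcopy(snap)
    vm = VM(files=files)
    vm.memory = memory                   # the page is "still there"
    return vm
```

Under this model, cloning many agents from one template is the disk path, while Sam's "return to it after some days" case is the memory path.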
So there, if I had the browser open to a specific page and we bring that back, that page will still be there.
swyx: Was there any discussion internally, in building this stuff, about how every time you shoot a video, you show a little bit of the desktop and the browser? It's not necessary if you know you're just demoing a frontend application. Why not just show the browser?
Samantha: We do have some panning and zooming. It can decide, when it's actually recording and cutting the video, to highlight different things. We've played around with different ways of segmenting it, and there have been some different revs on it for sure.
Jonas: Yeah. One of the interesting things is that the version you see now on cursor.com is actually like half of what we had at peak — we decided to unship quite a few things. Two of the interesting ones to talk about: one is directly an answer to your [00:31:00] question, where we had a native browser that you would have locally. It was basically an iframe that, via port forwarding, could load the URL — could talk to localhost in the VM.
swyx: So in your machine's browser?
Jonas: In your local browser, yeah. You would go to localhost:4000, and that would get forwarded to localhost:4000 in the VM via port forward.
swyx: Like an ngrok.
Jonas: Like an ngrok, exactly. We unshipped that because we felt that the remote desktop was sufficiently low-latency and more general-purpose. So we built Cursor web, but we also built Cursor desktop, and it's really useful to have the full spectrum of things.
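The localhost-to-VM forwarding Jonas describes can be sketched as a tiny TCP relay: traffic to a local port is piped to a target port standing in for the VM. This is a generic sketch, not Cursor's implementation; the function names and ports are invented, and a real tunnel like ngrok adds auth and NAT traversal on top:

```python
# Minimal port-forward sketch (illustrative only): connections to
# 127.0.0.1:listen_port are relayed byte-for-byte to target_host:target_port.

import socket
import threading

def pipe(src, dst):
    try:
        while True:
            data = src.recv(4096)
            if not data:           # peer closed the connection
                break
            dst.sendall(data)
    except OSError:
        pass                       # other side torn down mid-transfer
    finally:
        dst.close()

def forward(listen_port, target_host, target_port, ready):
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)
    ready.set()                    # signal that we're accepting
    client, _ = srv.accept()       # single connection, for simplicity
    upstream = socket.create_connection((target_host, target_port))
    # relay both directions until either side hangs up
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    pipe(upstream, client)
    srv.close()
```

In the setup described on the pod, the "upstream" would be the dev server inside the VM, so the local iframe at localhost:4000 transparently hits the VM's localhost:4000.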
And even for Cursor web, as you saw in one of the examples, the agent was uploading files — I couldn't upload files and open the file viewer if I only had access to the browser. And we've thought a lot about — this might seem funny coming from Cursor, where we started as this VS Code fork and I think inherited a lot of amazing things, but also a lot [00:32:00] of legacy UI from VS Code.
Minimal Web UI Surfaces
Jonas: With the web UI, we wanted to be very intentional about keeping it very minimal and exposing the right set of primitives — app surfaces, we call them — that are shared features of that cloud environment, which you and the agent both use. The agent uses the desktop and controls it; I can use the desktop and control it. The agent runs terminal commands; I can run terminal commands. That's our philosophy around it. The other thing that's maybe interesting to talk about that we unshipped — and both of these we may reship, if we decide at some point in the future that we've changed our minds on the trade-offs, or gotten it to a point where—
swyx: Put it out there. Let users tell you they want it.
Jonas: Exactly. Alright, fine.
Why No File Editor
Jonas: So one of the other things is actually a files app. At one point during the process of testing this internally, next to git, desktop, and terminal — on the right-hand side of the tabs you saw earlier — we also had a files app where you could see and edit files. And we actually felt that, in some [00:33:00] ways, by restricting and limiting what you could do there, people would naturally leave more to the agent and fall into this new pattern of delegating, which we thought was really valuable. So there's currently no way in Cursor web to edit these files.
swyx: Yeah. Except you open up the PR and go into GitHub and do the thing.
Jonas: Yeah.
swyx: Which is annoying.
Jonas: Just tell the agent.
swyx: I have criticized OpenAI for this.
Because OpenAI's Codex app doesn't have a file editor — it has a file viewer, but not a file editor.
Jonas: Do you use the file viewer a lot?
swyx: No. I understand, but sometimes I want it. The one way to do it — they have an "open in Cursor" button, or open in Antigravity, or open in whatever. I was part of the early testers group, and people pointed that out and said, this is a design smell: you actually want a VS Code fork that has all these things, but also a file editor. And they were like, no, just trust us.
Jonas: Yeah. I think we as Cursor will want, as a product, to offer the [00:34:00] whole spectrum. You want to be able to work at really high levels of abstraction, and double-click down and see the lowest level. That's important. But I also think you won't be doing that in Slack. There are surfaces and ways of interacting where, in some cases, limiting the UX capabilities makes for a cleaner experience — more simple — and drives people into these new patterns. Even locally, as we joked when we kicked off, people don't really hand-edit files anymore. And so we want to build for where that's going, not where it's been.
swyx: A lot of cool stuff. Okay, I have a couple more.
Full Stack Hosting Debate
swyx: So, observations about the design elements of these things. One of the things I'm always thinking about is: Cursor and its peers start from the dev tools and work their way towards cloud agents. Other people, like the Lovables and Bolts of the world, start with "here's the vibe-code, full cloud thing." They were cloud agents before anyone else was cloud agents, and they'll give you the full deploy platform, so they own the whole loop — they own all the infrastructure, they have the logs, they have the live site, [00:35:00] whatever — and you can do that cycle. Cursor doesn't own that cycle, even today.
You don't have the Vercel, you don't have whatever deploy infrastructure you're gonna have — which gives you powers, because anyone can use it, and any enterprise can bring whatever infra, I don't care. But it also limits how much you can actually fully debug end-to-end. I guess I'm just putting it out there: is there a future where there's full-stack Cursor? Like cursorapps.com, where I host my Cursor site — which is basically a Vercel clone, right? I don't know.
Jonas: I think that's an interesting question to be asking, and the logic you laid out for how you would get there is logic I largely agree with.
swyx: Yeah.
Jonas: Right now we're really focused on what we see as the next big bottleneck, and because things like the Datadog MCP exist, I don't think the best way we can help our customers ship more software is by building a hosting solution right now.
swyx: By the way, these are things I've actually discussed with some of the companies I just named.
Jonas: Yeah, for sure. Right now the big bottleneck is just getting the code out there. And also, [00:36:00] unlike a Lovable and a Bolt, we focus much more on existing software, and the zero-to-one greenfield is just a very different problem. Imagine going to a Shopify and convincing them to deploy on your deployment solution. That's very different, and I think it will take much longer to see how that works — it may never happen — relative to, oh, it's a zero-to-one app.
swyx: I'll say it's tempting, because like 50% of your apps are Vercel, Supabase, Tailwind, React — it's the stack, it's what everyone does. So it's kind of interesting.
Jonas: Yeah.
Model Choice and Auto Routing
swyx: The other thing is the model selector dying. Right now in cloud agents it's stuck down bottom-left. Sure, it's Codex High today, but do I care if it suddenly switched to Opus?
Probably not.
Samantha: We definitely wanna give people a choice across models, because the meta changes very frequently. I was a big Opus 4.5 maximalist, and when Codex 5.3 came out, I hard-switched — that's all I use now.
swyx: Yeah, agreed. But basically, when I use it in Slack, [00:37:00] Cursor does a very good job of exposing, for people who go looking: here's the model we're using, here's how you switch if you want. But otherwise it's abstracted away, which is beautiful, because then you actually — you should decide.
Jonas: Yeah, I think we want to be doing more with defaults, where we can suggest things to people. A thing we have in the editor, the desktop app, is Auto, which will route your request. I think we'll want to do something like that for cloud agents as well; we haven't done it yet. We have people like Sam, who are very savvy and know exactly what model they want, and we also have people who want us to pick the best model for them — because we have amazing people like Sam, we are the experts. We have both the traffic and the internal taste and experience to know what we think is best.
swyx: Yeah. I have this ongoing thesis of agent lab versus model lab, and to me, Cursor and similar companies are examples of an agent lab building a new playbook, different from a model lab, which is very GPU-heavy — though Cursor obviously has a research [00:38:00] team. And my thesis is that every agent lab is going to have a router, because you're going to be asked: what's best? I don't keep up every day. I'm not a Sam, so I'm using you as the arbiter of taste. Put me on Auto. Is it free? It's not free.
Jonas: Auto's not free, but there are different pricing tiers.
swyx: Put me on Auto.
You decide for me, based on all the other people — you know better than me. And I think every agent lab should basically end up doing this, because it actually gives you extra power: people stop caring about, or having loyalty to, one lab.
Jonas: Yeah.
Best Of N and Model Councils
Jonas: Two other maybe interesting things, that I don't know how much they're on your radar: one is the best-of-N thing we mentioned, where running different models head-to-head is actually quite interesting, because—
swyx: Which exists in Cursor.
Jonas: That exists in the Cursor IDE and on web. The problem is: where do you run them? I can share my screen if that's interesting.
swyx: Yeah, interesting. Obviously parallel agents, very popular.
Jonas: Yes, exactly. Parallel agents.
swyx: In your mind, are they the same thing, best-of-N and parallel agents? I don't want to [00:39:00] put words in your mouth.
Jonas: Best-of-N is a subset of parallel agents, where they're running on the same prompt. That would be my answer. So this is what that looks like: here in this dropdown picker, I can just select multiple models.
swyx: Yeah.
Jonas: And now if I do a prompt — I'm going to do something silly — I am running these five models.
swyx: Okay, this is the fake clone demo, of course — the 2.0. Yeah.
Jonas: Yes, exactly. So in Cursor 2.0 you can do desktop or cloud. This is cloud specifically, where the benefit over worktrees is that they have their own VMs, can run commands, and won't try to kill ports that the other one is using — which are some of the pains.
swyx: These are all called worktrees?
Jonas: No, these are all cloud agents with their own VMs. When you do it locally, sometimes people use worktrees, and that's been the main way people have set up parallelism so far.
swyx: I've gotta say, that's so confusing for folks. No one knows what worktrees are.
Jonas: Exactly.
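The best-of-N pattern Jonas demos — the same prompt fanned out to several models at once, with one winner picked — can be sketched in a few lines. The model calls and scorer here are stubs I've invented; in the real product each candidate runs in its own VM, as he notes:

```python
# Best-of-N as a subset of parallel agents: same prompt, N models, pick one.
# `run` stands in for a model call and `score` for whatever judging you use;
# both are hypothetical interfaces for illustration.

from concurrent.futures import ThreadPoolExecutor

def best_of_n(prompt, models, run, score):
    """run(model, prompt) -> candidate; score(candidate) -> float.
    Returns the (model, candidate) pair with the highest score."""
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        candidates = list(pool.map(lambda m: (m, run(m, prompt)), models))
    return max(candidates, key=lambda mc: score(mc[1]))
```

The thread pool is the "where do you run them?" question in miniature: locally it's worktrees or threads; in Cursor's cloud it's one VM per candidate, so parallel runs can't clobber each other's ports.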
I think we're phasing out worktrees.
swyx: Really?
Jonas: Yeah.
swyx: Okay.
Samantha: One other thing I would say, though, on the multi-model choice: [00:40:00] this is another experiment that we ran last year and decided not to ship at the time, but may come back to, and there was an interesting learning that's relevant for these different model providers. It was something that would run a bunch of best-of-Ns, but then synthesize — basically run a synthesizer layer of models on top. Those were other agents acting like an LLM judge, but one that was also agentic and could write code. So it wasn't just picking; it was also taking the learnings from the models it was looking at and writing a new diff. And what we found, at the time at least, was that there were strengths to using models from different providers at the base level of this process. You could get an almost synergistic output that was better than having a very unified bottom model tier. It was really interesting, because potentially, even in a future where one model is ahead of the others for a little bit, there could be some benefit to having multiple top-tier models involved in a [00:41:00] model swarm, or whatever agent swarm you're doing — they each have strengths and weaknesses.
Jonas: Andrej called this the council, right?
Samantha: Yeah, exactly. Oh, that's another internal command we have, that Ian wrote: /council.
swyx: Yes. This idea is in various forms everywhere. And for me, the productization of it — what you guys have done is very flexible, but if I were to add another layer on here, it would be too much.
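Samantha's synthesizer layer differs from plain best-of-N in that the judge writes a new answer rather than just picking one. Here's a minimal sketch of that shape; the interfaces and the naive line-merging synthesizer are invented stand-ins for an agentic judge that could actually write a new diff:

```python
# "Council" sketch: N agents draft answers to the same prompt, then a
# synthesizer combines them into one output instead of merely selecting.
# Both the agent and synthesizer interfaces are hypothetical.

def council(prompt, agents, synthesize):
    """agents: list of callables prompt -> draft;
    synthesize: callable (prompt, drafts) -> final answer."""
    drafts = [agent(prompt) for agent in agents]
    return synthesize(prompt, drafts)

def merge_drafts(prompt, drafts):
    # naive synthesizer: keep every unique line, preserving first-seen order;
    # a real one would be an agent that reasons over the drafts and writes code
    seen, out = set(), []
    for draft in drafts:
        for line in draft.splitlines():
            if line not in seen:
                seen.add(line)
                out.append(line)
    return "\n".join(out)
```

The "synergistic output" observation maps to passing `agents` backed by different providers, so each draft contributes strengths the others lack.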
Samantha: Ideally it's something the user can just choose, and it all happens under the hood, in a way where you just get the benefit of that process at the end — better output, basically — but don't have to get lost in the complexity of judging along the way.
Jonas: Okay.
Subagents for Context
Jonas: Another thing on many agents, on different parallel agents, that's interesting: an idea that's been around for a while, but has started working recently, is subagents. This is one other way to get agents with different prompts, different goals, different models, [00:42:00] different vintages, to work together — collaborate and delegate.
swyx: Yeah. I'm always looking for "this is the year of the blah," right? I think one of the blahs is subagents. But I haven't used them in Cursor. Are they fully formed? Honestly, I'd like an intro, because: do I form them from new every time? Do I have fixed subagents? How are they different from slash commands? There are all these really basic questions that no one stops to answer for people, because everyone's too busy launching.
Samantha: Honestly, you can see them in Cursor now if you just say "spin up like 50 subagents."
swyx: So Cursor defines what subagents are.
Samantha: Yeah. I shouldn't speak for the whole subagents team — this is a different team that's been working on this — but our thesis, or the thing we saw internally, is that they're great for context management on long-running threads, or if you're trying to just throw more compute at something. We've mostly used almost a generic task interface, where the main agent can define [00:43:00] what goes into the subagent.
So if I say "explore my codebase," it might decide to spin up an explore subagent, or it might decide to spin up five explore subagents.
swyx: But I don't get to set what those subagents are, right? It's all defined by the model.
Samantha: I actually would have to refresh myself on the subagent interface.
Jonas: There are some built-in ones — the explore subagent is pre-built — but you can also instruct the model to use other subagents, and then it will. One other example of a built-in subagent: I actually just kicked one off in Cursor, and I can show you what that looks like.
swyx: Yes. Because I tried to do this in pure prompt space.
Jonas: So this is the desktop app.
swyx: And that's all you need to do, right?
Jonas: That's all you need to do. I said "use a subagent to explore," and I can even click in and see what the subagent is working on here. It ran some find command, and this is Composer under the hood. Even though my main model is Opus, it does smart routing — in this instance, exploring requires reading a ton of things, so a faster model is really useful to get an [00:44:00] answer quickly. That's what subagents look like, and I think we want to do a lot more to expose hooks and ways for people to configure these. Another example of a built-in subagent is the computer-use subagent in the cloud agents, where we found that those trajectories can be long, involve a lot of images obviously, and execute some testing or verification task, so we want to use models that are particularly good at that. So that's one reason to use subagents. And the other reason is that we want context to be summarized — reduced down — at the subagent level.
That's a really neat boundary at which to compress that rollout and testing into a final message the subagent writes, which then gets passed to the parent — rather than having to do some global compaction or something like that.
swyx: Awesome. Cool. While we're in the subagents conversation — I can't do a Cursor conversation and not talk about the Wilson stuff. What is that? He built a browser. He built an OS. He [00:45:00] experimented with a lot of different architectures and basically ended up reinventing the software-engineering org chart. This is all cool, but what's your take? Are there any behind-the-scenes stories about that whole adventure?
Samantha: Some of those experiments have found their way into a feature that's available in cloud agents now: the long-running agent mode. Internally we call it grind mode, and I think there's some hint of grind mode accessible in the picker today, because you can choose "grind until done." That was really the result of experiments Wilson started in this vein. I think the Ralph Wiggum loop was floating around at the time, but it was something he also independently found and was experimenting with, and that's what led to this product surface.
swyx: And it's just the simple idea of: have criteria for completion, and do not stop until you complete?
Samantha: There's a bit more complexity in our implementation. You have to start out by aligning — there's a planning stage where it will work with you, and it will not start grind execution mode until it's decided that the [00:46:00] plan is amenable to both of you, basically.
swyx: "I refuse to work until you make me happy."
Jonas: We found that's really important, because people would give a very underspecified prompt and then expect it to come back with magic. If it's gonna go off and work for three minutes, that's one thing.
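The context-compression rationale for subagents a moment ago has a simple shape worth making concrete: the subagent's long trajectory never enters the parent's context, only a short summary does. All names here are invented for illustration, not Cursor's interface:

```python
# Sketch of subagent context compression: the subagent runs a token-heavy
# rollout (tool calls, file reads), then only a bounded summary crosses the
# boundary back into the parent agent's context window.

def run_subagent(task, tools, summarize, budget=50):
    trajectory = [tool(task) for tool in tools]   # long, token-heavy rollout
    return summarize(trajectory)[:budget]         # only this reaches the parent

def parent_turn(context, task, tools, summarize):
    summary = run_subagent(task, tools, summarize)
    return context + [("subagent", summary)]      # trajectory itself is dropped
```

The subagent boundary is the "neat boundary" in Jonas's phrasing: compression happens per delegated task, so the parent never needs a global compaction pass over one giant transcript.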
When it's gonna go off and work for three days, you probably should spend a few hours up front making sure you have communicated what you actually want.
swyx: Yeah. And just to really drive home the point — we really mean three days?
Jonas: Oh yeah. We've had three-day runs, no human intervention whatsoever.
Samantha: I don't know what the record is, but there have been some long grinds.
Jonas: And so the thing that is available in Cursor, the long-running agent — if you wanna think about it very abstractly, that is like one worker node. Whereas what built the browser is a society of workers and planners and different agents collaborating. We started building the browser with one worker node — at the time, that was just the agent — and it became more than one worker node when we realized that the throughput of the system was not where it needed to be [00:47:00] to get something as large in scale as the browser done.
swyx: Yeah.
Jonas: And so this has also become a really big mental model for us with cloud agents: there are the classic engineering latency–throughput trade-offs. The code is water flowing through a pipe. We think that over the coming months, the big unlock is not going to be one person with one model getting more done — the water flowing faster — it's making the pipe much wider. Whether that's swarms of agents or parallel agents, both contribute to getting much more done in the same amount of time, even though any one task doesn't necessarily need to get done that quickly. And throughput is this really big thing, where once you see a system of a hundred concurrent agents outputting thousands of tokens a second, you can't go back. You see a glimpse of the future — where obviously there are many caveats, like no one is using this browser IRL.
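The "grind until done" loop, as described — align on a plan first, then loop against completion criteria — can be written in miniature. This is a toy rendering of the idea, not Cursor's grind mode; `plan_ok`, `agent_step`, and `done` are hypothetical stand-ins for the planning stage, a model call, and verification:

```python
# Ralph-Wiggum-style loop in miniature: refuse to start on an underspecified
# plan, then iterate until the completion check passes or a budget runs out.

def grind(plan_ok, agent_step, done, state, max_steps=1000):
    if not plan_ok(state):
        # the "I refuse to work until you make me happy" gate: a multi-day
        # run on a vague prompt wastes enormous compute, so align first
        raise ValueError("refusing to grind on an underspecified plan")
    for _ in range(max_steps):
        if done(state):
            return state
        state = agent_step(state)
    raise TimeoutError("step budget exhausted without meeting criteria")
```

The step budget is the one guardrail a real three-day run still wants: without an explicit `done` and a cap, "do not stop until complete" degenerates into an unbounded loop.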
There's a bunch of things not quite right yet, but we are going to get to systems that produce real production [00:48:00] code at that scale much sooner than people think. And it forces you to think: what even happens to production systems? We've broken our GitHub Actions recently because we have so many agents producing and pushing code that CI/CD is just overloaded. Suddenly it's like we effectively grew headcount 10x — Cursor is growing very quickly anyway, but you grow headcount 10x when people run 10x as many agents. And so a lot of these systems will need to adapt.
swyx: It also reminds me — the three of us live in the app layer, but if you talk to the researchers doing RL infrastructure, it's the same thing: all these parallel rollouts, scheduling them, making sure as much throughput as possible goes through them.
Jonas: We were talking briefly before we started recording — you were mentioning memory chips and some of the shortages there. The other thing that's just hard to wrap your head around is the scale of the system that was building the browser, the concurrency there. If Sam and I both have a system like that running for us, [00:49:00] shipping our software, the amount of inference we're going to need per developer is just really mind-boggling. Sometimes when I think about that, I think that even the most optimistic projections for what we're going to need in terms of build-out are underestimating the extent to which these swarm systems can churn at scale to produce code that is valuable to the economy.
swyx: Yeah — you can cut this if it's sensitive, but do you have estimates of how much your token consumption is?
Jonas: Like per developer?
swyx: Yeah, or yourself. I don't need a company average. I'm just curious.
Samantha: I feel like, for a while, I wasn't an admin on the usage dashboard, so I wasn't able to actually see, but—
swyx: Mine has gone up.
Samantha: Oh yeah. In terms of how much work I'm doing — I have no worries about developers losing their jobs, at least in the near term. But that's a more broad discussion.
swyx: Yeah, you went there — I wasn't going there. I was just asking how much more you are using.
Samantha: There's so much stuff to be built. I have more ambitions than I did before, personally. I can't speak to the broader thing, but for me, I'm busier than ever before, [00:50:00] I'm using more tokens, and I am also doing more things.
Jonas: Yeah. I don't have the stats for myself, but broadly, a thing that we've seen, and that we expect to continue, is Jevons paradox.
swyx: You can't do a podcast without saying it.
Jonas: Exactly. We've done it, now we can wrap — we said the words. Phase one, tab autocomplete: people paid like 20 bucks a month, and that was great. Phase two, where you were iterating with these local models: today people pay hundreds of dollars a month. As we think about these highly parallel agents running off for long times in their own VM systems, we are already at the point where people will be spending thousands of dollars a month per human, and I think potentially tens of thousands and beyond. It's not that we are greedy to capture more money; what happens is that individuals get that much more leverage. And if one person can do as much as ten people—
That tool that allows 'em to do that is going to be tremendously valuable [00:51:00] and worth investing in and taking the best thing that exists. swyx: One more question on just Cursor in general, and then open-ended for you guys to plug whatever you wanna plug. How is Cursor hiring these days? Samantha: What do you mean by how? swyx: So obviously LeetCode is dead. Oh, Samantha: okay. swyx: Everyone says work trial. Different people have different levels of adoption of agents. Some people can really adopt, can be much more productive. But other people, you just need to give them a little bit of time. And sometimes they've never lived in a token-rich place like Cursor. And once you live in a token-rich place, you just work differently. But you need to have done that. And a lot of people, anyway, it was just open-ended. Like, how has agentic engineering, agentic coding changed your opinions on hiring? Is there any, like, broad insight? Yeah. Jonas: Basically I'm asking this for other people, right? Yeah, totally. Totally. To hear Sam's opinion, we haven't talked about this, the two of us. I think that we don't see necessarily being great at the latest thing with AI coding as a prerequisite. I do think it's a sign that people are keeping up and [00:52:00] curious and willing to upskill themselves in what's happening, because, as we were talking about, in the last three months the game has completely changed. It's like what I do all day is very different. swyx: Like it's my job and I can't, Jonas: Yeah, totally. I do think that still, as Sam was saying, the fundamentals remain important in the current age, and being able to go and double-click down.
And models today do still have weaknesses, where if you let them run for too long without cleaning up and refactoring, the code will get sloppy and there'll be bad abstractions. And so you still do need humans that have built systems before, know good patterns when they see them, and know where to steer things. Samantha: I would agree with that. I would say, again, Cursor also operates very quickly, and leveraging agentic engineering is probably one reason why that's possible in this current moment. I think in the past it was just like people coding quickly, and now there's people who use agents to move faster as well. So part of our process will always look for, we'll select for, kind of that ability to make good decisions quickly and move well in this environment. And so I think being able to [00:53:00] figure out how to use agents to help you do that is an important part of it too. swyx: Yeah. Okay. The fork in the road: either predictions for the end of the year, if you have any, or plugs. Jonas: Predictions are not going to go well. Samantha: I know, it's hard. swyx: They're so hard. Get it wrong. It's okay. Just, yeah. Jonas: One other plug that may be interesting, that I feel like we touched on but haven't talked a ton about, is a thing that these new interfaces and this parallelism enable: the ability to hop back and forth between threads really quickly. And so a thing that we have, swyx: you wanna show something or, Jonas: yeah, I can show something. A thing that we have felt with local agents is this pain around context switching. And you have one agent that went off and did some work and another agent that did something else. And so here, by having, I just have three tabs open, let's say, but I can very quickly hop in here. This is an example I showed earlier, but the actual workflow here I think is really different in a way that may not be obvious, where, I start t
Please visit answersincme.com/DZU860 to participate, download slides and supporting materials, complete the post test, and get a certificate. Presented by Michelle Jacobson, MD, MHSc, FRCSC, MSCP; Nadia Harbeck, MD, PhD; and Renate Haidinger. In this activity, experts in breast cancer and menopause discuss the burden of vasomotor symptoms (VMS) due to breast cancer treatment and the emerging role of neurokinin (NK) receptor antagonists in alleviating these symptoms in practice, with insights from a patient advocate. Upon completion of this activity, participants should be better able to: Recognize VMS as a consequence of breast cancer treatment; Outline the clinical rationale for novel therapeutic approaches to manage VMS associated with breast cancer treatment; Evaluate the efficacy and safety of NK receptor antagonists for breast cancer treatment–associated VMS; and Implement patient-centered clinical approaches to elevate the quality of life of patients experiencing breast cancer–associated VMS.
#340: The smartest ops people are often the most likely to resist new technology -- and they're not wrong. If you don't change anything, nothing breaks, and nobody blames you. That's a completely rational choice. It's also the one that guarantees you fall behind. Bare metal to VMs, VMs to cloud, cloud to Kubernetes -- every time, the teams that played it safe ended up scrambling to catch up two years later. The safe bet isn't safe. It just feels that way. It gets worse when you look at where the tools come from. Kubernetes? Built by developers. Terraform? Developers. Containers? Developers. The tools ops teams depend on were made by a different tribe. So the pushback isn't really about whether the tech is ready or whether the risk is too high. It's about identity. 'Not my people' is a harder objection to overcome than 'not ready yet,' because no amount of documentation or proof-of-concepts answers it. And about proof -- everyone wants it before they'll move. But the proof already exists. It's the tool someone on your team has been running in shadow IT for a year without any official support. If it survived that long on its own, that's stronger evidence than any pilot program. That's your roadmap. And the way in is small chunks, not grand plans. Move one service. Learn something. Adjust. Repeat. AI in ops follows the exact same pattern. A tool that gets you 50% of the way there for free means you can focus your expertise on the other 50%. That's a win. But the people waiting for AI to be perfect before they'll touch it? They're making the same mistake as the teams that waited for perfect proof before migrating to the cloud. Different decade, same trap. YouTube channel: https://youtube.com/devopsparadox Review the podcast on Apple Podcasts: https://www.devopsparadox.com/review-podcast/ Slack: https://www.devopsparadox.com/slack/ Connect with us at: https://www.devopsparadox.com/contact/
The headlines are moving fast, but this Market Sense special report will tackle the escalating conflict in Iran and what this new geopolitical risk could mean for the markets and the global economy. Our Fidelity leaders discuss the impact on oil prices, and how higher energy costs could affect consumers. We'll also discuss why safe-haven assets like gold are climbing and how uncertainty is driving volatility across stocks and bonds. Bookmark the Fidelity Viewpoints Market Insights webpage for the latest on market headlines and volatility. Read the full transcript View the slides Watch the video replay
Bryan Cantrill is the co-founder and CTO of Oxide Computer Company. We discuss why the biggest cloud providers don't use off-the-shelf hardware, how scaling data centers at Samsung's scale exposed problems with hard drive firmware, how the values of Node.js are in conflict with robust systems, choosing Rust, and the benefits of Oxide Computer's rack-scale approach. This is an extended version of an interview posted on Software Engineering Radio. Related links Oxide Computer Oxide and Friends Illumos Platform as a Reflection of Values RFD 26 bhyve CockroachDB Heterogeneous Computing with Raja Koduri Transcript You can help correct transcripts on GitHub. Intro [00:00:00] Jeremy: Today I am talking to Bryan Cantrill. He's the co-founder and CTO of Oxide Computer Company, and he was previously the CTO of Joyent, and he also co-authored the DTrace tracing framework while he was at Sun Microsystems. [00:00:14] Jeremy: Bryan, welcome to Software Engineering Radio. [00:00:17] Bryan: Uh, awesome. Thanks for having me. It's great to be here. [00:00:20] Jeremy: You're the CTO of a company that makes computers. But I think before we get into that, a lot of people who build software, now that the actual computer is abstracted away, they're using AWS or they're using some kind of cloud service. So I thought we could start by talking about data centers. [00:00:41] Jeremy: 'cause you were previously working at Joyent, and I believe you got bought by Samsung, and you've previously talked about how you had to figure out, how do I run things at Samsung's scale. So how, how was your experience with that? What were the challenges there? Samsung scale and migrating off the cloud [00:01:01] Bryan: Yeah, I mean, so at Joyent, and so Joyent was a cloud computing pioneer. Uh, we competed with the likes of AWS and then later GCP and Azure. Uh, and we, I mean, we were operating at a scale, right?
We had a bunch of machines, a bunch of DCs, but ultimately, you know, we were a VC-backed company and a small company by the standards of, certainly by Samsung standards. [00:01:25] Bryan: And so when Samsung bought the company, I mean, the reason, by the way, that Samsung bought Joyent is Samsung's cloud bill was, uh, let's just say it was extremely large. They were spending an enormous amount of money every year on, on the public cloud. And they realized that in order to secure their fate economically, they had to be running on their own infrastructure. [00:01:51] Bryan: It did not make sense. And there was not really a product that Samsung could go buy that would give them that on-prem cloud. Uh, in that regard, the state of the market was really no different. And so they went looking for a company, uh, and bought, bought Joyent. And it was when we were on the inside of Samsung [00:02:11] Bryan: that we learned about Samsung scale. And Samsung loves to talk about Samsung scale. And I gotta tell you, it is more than just chest thumping. Like, Samsung scale really is, I mean, just the sheer, the number of devices, the number of customers, just this absolute size. They really wanted to take us out to levels of scale, certainly, that we had not seen. [00:02:31] Bryan: The reason for buying Joyent was to be able to stand up on their own infrastructure, so we did go buy a bunch of hardware. Problems with server hardware at scale [00:02:40] Bryan: And I remember just thinking, God, I hope Dell is somehow magically better. I hope the problems that we have seen in the small, we just, you know, I just remember hoping, and hope is, of course, a terrible strategy, and it was a terrible strategy here too.
Uh, and the problems that we saw at the large were, when you scale out, the problems that you see kind of once or twice you now see all the time, and they become absolutely debilitating. [00:03:12] Bryan: And we saw a whole series of really debilitating problems. I mean, in many ways, like, comically debilitating, uh, in terms of showing just how bad the state of the art is. And we had, I mean, it should be said, we had great software and great software expertise, um, and we were controlling our own system software. [00:03:35] Bryan: But even controlling your own system software, your own host OS, your own control plane, which is what we had at Joyent, ultimately you're pretty limited. I mean, you've got the problems that you can obviously solve, the ones that are in your own software, but then there are the problems that are beneath you, the problems that are in the hardware platform, the problems that are in the componentry beneath you, the problems that are in the firmware. IO latency due to hard drive firmware [00:04:00] Bryan: Those problems become unresolvable, and they are deeply, deeply frustrating. Um, and we just saw a bunch of 'em. Again, they were comical in retrospect, and I'll give you a couple of concrete examples just to give you an idea of what you're looking at. One of our data centers had really pathological IO latency. [00:04:23] Bryan: We had a very, uh, database-heavy workload. And this was kind of right at the period where you were still deploying on rotating media, on hard drives. An all-flash buy did not make economic sense when we did this in 2016. It'd be interesting to know, like, when was the kind of the last time that actual hard drives made sense? [00:04:50] Bryan: 'cause I feel this was close to it.
So we had a bunch of pathological IO problems, but we had one data center in which the outliers were actually quite a bit worse, and there was so much going on in that system, it took us a long time to figure out why. Because when you're seeing worse IO, I mean, you naturally wanna understand, like, what's the workload doing? [00:05:14] Bryan: You're trying to take a first-principles approach. What's the workload doing? So this is a very intensive database workload to support the object storage system that we had built, called Manta. And the metadata tier, uh, we were using Postgres for that. And that was just getting absolutely slaughtered. [00:05:34] Bryan: Um, and ultimately very IO bound, with these kind of pathological IO latencies. Uh, and as we were trying to, like, peel away the layers to figure out what was going on, I finally had this thing. So it's like, okay, we are seeing at the device layer, at the disk layer, we are seeing pathological outliers in this data center that we're not seeing anywhere else. [00:06:00] Bryan: And that does not make any sense. And the thought occurred to me, I'm like, well, maybe we do have, like, a different rev of firmware on our HGST drives. HGST, now part of WD, Western Digital, were the drives that we had everywhere. And, um, so maybe we had a different, maybe I had a firmware bug. [00:06:20] Bryan: This would not be the first time in my life at all that I would have a drive firmware issue. Uh, and I went to go pull the firmware rev, and I'm like, Toshiba makes hard drives? I mean, I had no idea that Toshiba even made hard drives, let alone that they were in our data center. [00:06:38] Bryan: I'm like, what is this?
And as it turns out, and this is, you know, part of the challenge when you don't have an integrated system, which, not to pick on them, but Dell doesn't: Dell would routinely just make substitutes, and they make substitutes where, you know, it's kind of like you're going to, I don't know, Instacart or whatever, and they're out of the thing that you want. [00:07:03] Bryan: So, you know, someone makes a substitute, and like, sometimes that's okay, but it's really not okay in a data center. And you really want to develop and validate an end-to-end integrated system. And in this case, like, Toshiba does make hard drives, but they were, uh, basically not competitive, and they were not competitive in part for the reasons that we were discovering. [00:07:29] Bryan: They had really serious firmware issues. These were drives that would just simply stop acknowledging any reads for on the order of 2,700 milliseconds. Long time, 2.7 seconds. Um. And it was a drive firmware issue, but it highlighted a much deeper issue, which was the simple lack of control that we had over our own destiny. [00:07:53] Bryan: Um, and it's an example among many where Dell is making a decision that lowers the cost of what they are providing you marginally, but it is then giving you a system that they shouldn't have any confidence in, because it's not one that they've actually designed, and they leave it to the customer, the end user, to make these discoveries. [00:08:18] Bryan: And these things happen up and down the stack. And not just to pick on Dell, because it's true for HPE, it's true for Supermicro, uh, it's true for your switch vendors.
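The debugging story above boils down to comparing per-device tail latency across a fleet until the outlier population stands out. Below is a toy sketch of that kind of comparison; the device names, sample data, and thresholds are all invented for illustration and are not Joyent's actual tooling:

```python
# Toy per-device tail-latency comparison of the kind that surfaces
# outliers like the ~2,700 ms firmware stalls described above.
# Device names, samples, and thresholds are all invented.

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    s = sorted(samples)
    k = max(0, min(len(s) - 1, int(round(p / 100 * len(s))) - 1))
    return s[k]

def flag_stalled(devices, p=99.9, threshold_ms=1000):
    """Return names of devices whose tail latency exceeds the threshold."""
    return [name for name, samples in devices.items()
            if percentile(samples, p) > threshold_ms]

fleet = {
    "drive-a": [5, 7, 6, 8, 9, 7, 6, 5, 8, 7],        # healthy
    "drive-b": [5, 7, 6, 8, 2700, 7, 6, 5, 8, 2700],  # firmware stalls
}
print(flag_stalled(fleet))  # only the stalling drive is flagged
```

Here the device with occasional multi-second stalls stands out against an otherwise identical one, which is the shape of signal that pointed at the substituted drives in the story.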
It's true for storage vendors, where the one that is left actually integrating these things and trying to make the whole thing work is the end user sitting in their data center. AWS / Google are not buying off the shelf hardware but you can't use it [00:08:42] Bryan: There's not a product that they can buy that gives them elastic infrastructure, a cloud, in their own DC. The product that you buy is the public cloud. Like, when you go in the public cloud, you don't worry about this stuff, because it's AWS's issue or it's GCP's issue. And they are the ones that get this to ground. [00:09:02] Bryan: And this was kind of, you know, the eye-opening moment. Not a surprise. Uh, they are not Dell customers. They're not HPE customers. They're not Supermicro customers. They have designed their own machines, to varying degrees, depending on which one you're looking at. But they've taken the clean sheet of paper, and the frustration that we had kind of at Joyent, and beginning to wonder, and then Samsung, and kind of wondering what was next, uh, is that what they built was not available for purchase in the data center. [00:09:35] Bryan: You could only rent it in the public cloud. And our big belief is that public cloud computing is a really important revolution in infrastructure. Doesn't feel like a deep thought, but cloud computing is a really important revolution. It shouldn't only be available to rent. You should be able to actually buy it. [00:09:53] Bryan: And there are a bunch of reasons for doing that. Uh, the one we saw at Samsung is economics, which I think is still the dominant reason, where it just does not make sense to rent all of your compute in perpetuity. But there are other reasons too. There's security, there's risk management, there's latency. [00:10:07] Bryan: There are a bunch of reasons why one might wanna own one's own infrastructure.
But, uh, that was very much the, the genesis for Oxide: coming out of this very painful experience, and a painful experience that, I mean, a long answer to your question about, like, what was it like to be at Samsung scale? [00:10:27] Bryan: Those are the kinds of things that, I mean, in our other data centers we didn't have Toshiba drives. We only had the HGST drives. But it's only when you get to this larger scale that you begin to see some of these pathologies. And these pathologies then are really debilitating for those who are trying to develop a service on top of them. [00:10:45] Bryan: So it was very educational in that regard. And we're very grateful for the experience at Samsung in terms of opening our eyes to the challenge of running at that kind of scale. [00:10:57] Jeremy: Yeah, because I, I think as software engineers, a lot of times we, we treat the hardware as a, as a given, where, [00:11:08] Bryan: Yeah. [00:11:08] Bryan: Yeah. There's software in hard drives [00:11:09] Jeremy: It sounds like in, in this case, I mean, maybe the issue is not so much that Dell or HP as a company doesn't own every single piece that they're providing you, but rather the fact that they're swapping pieces in and out without advertising them, and then when it becomes a problem, they're not necessarily willing to, to deal with the, the consequences of that. [00:11:34] Bryan: They just don't know. I mean, I think they just genuinely don't know. I mean, it's not like they're making a deliberate decision to kind of ship garbage. It's just that they are making, I mean, I think it's exactly what you said about not thinking about the hardware. It's like, what's a hard drive? [00:11:47] Bryan: I mean, it's a hard drive. It's got the same specs as this other hard drive and, you know, it's a little bit cheaper, so why not?
It's like, well, there's some reasons why not, and one of the reasons why not is, like, uh, even a hard drive, whether it's rotating media or flash, that's not just hardware. [00:12:05] Bryan: There's software in there. And that software's, like, not the same. I mean, there are components where, whether, you know, if you're looking at, like, a resistor or a capacitor or something like this, yeah, if you've got two parts that are within the same tolerance, yeah. [00:12:19] Bryan: Like, sure, maybe, although even the EEs, I think, would be, uh, objecting a little bit. But the more complicated you get, and certainly once you get to the kind of hardware that we think of, like a microprocessor, a network interface card, a hard drive, an NVMe drive. [00:12:38] Bryan: Those things are super complicated, and there's a whole bunch of software inside of those things, the firmware. And that's the stuff that, I mean, you say that software engineers don't think about that; it's like, no one can really think about that, because it's proprietary, it's kind of welded shut, and you've got this abstraction into it. [00:12:55] Bryan: But the way that thing operates is very core to how the thing in aggregate will behave. And I think the fundamental difference between Oxide's approach and the approach that you get at a Dell, HP, Supermicro, wherever, is really thinking holistically in terms of hardware and software together in a system that ultimately delivers cloud computing to a user. [00:13:22] Bryan: And there's a lot of software at many, many different layers. And it's very important to think about that software and that hardware holistically, as a single system.
[00:13:34] Jeremy: And during that time at Joyent, when you experienced some of these issues, was it more of a case of you didn't have enough servers experiencing this? So if it would happen, you might say, like, well, this one's not working, so maybe we'll just replace the hardware. What was the thought process when you were working at that smaller scale, and how did these issues affect you? UEFI / Baseboard Management Controller [00:13:58] Bryan: Yeah, at the smaller scale you, uh, you see fewer of them, right? What you might see is like, that's weird, we kinda saw this in one machine, versus seeing it in a hundred or a thousand or 10,000. Um, so you just see them, uh, less frequently, and as a result they are less debilitating. [00:14:16] Bryan: Um, I think that when you go to that larger scale, those things that were unusual now become routine, and they become debilitating. Um, so it really is in many regards a function of scale. Uh, and then I think it was also, you know, a little bit dispiriting that the substrate we were building on really had not improved. [00:14:39] Bryan: Um, and if you look at, you know, if you buy a computer server, buy an x86 server, there is a very low layer of firmware, the BIOS, the basic input/output system, the UEFI BIOS, and this is an abstraction layer that has existed since the eighties and hasn't really meaningfully improved. Um, the kind of the transition to UEFI happened, I mean, ironically, with Itanium, um, you know, two decades ago. [00:15:08] Bryan: But beyond that, this low layer, this lowest layer of platform enablement software, is really only impeding the operability of the system.
Um, you look at the baseboard management controller, which is kind of the computer within the computer: there is an element in the machine that needs to handle environmentals, that needs to, uh, operate the fans and so on. [00:15:31] Bryan: Uh, and that traditionally has been the baseboard management controller, and that architecturally just hasn't improved in the last two decades. And, you know, it's a proprietary piece of silicon, generally from a company that no one's ever heard of called ASPEED, uh, which is written in all caps, so I guess it needs to be screamed. [00:15:50] Bryan: Um, ASPEED has a proprietary part that, infamously, has a root password encoded effectively in silicon. Uh, which is just, and for, um, anyone who kind of goes deep into these things, like, oh my God, are you kidding me? Um, when we first started Oxide, the wifi password was a fraction of the ASPEED root password for the BMC. [00:16:16] Bryan: It's kind of a little BMC humor. Um, but those things, it was just dispiriting that the state of the art was still basically personal computers running in the data center. Um, and that's part of what was the motivation for doing something new. [00:16:32] Jeremy: And for the people using these systems, whether it's the baseboard management controller or it's the BIOS or UEFI component, what are the actual problems that people are seeing? Security vulnerabilities and poor practices in the BMC [00:16:51] Bryan: Oh man, I, you are going to have, like, some fraction of your listeners, maybe a big fraction, where it's like, yeah, what are the problems? That's a good question. And then you're gonna have the people that actually deal with these things, whose heads already hit the desk, being like, what are the problems? [00:17:06] Bryan: Like, what are the non-problems? Like, what works?
Actually, that's a shorter answer. Um, I mean, there are so many problems, and a lot of it is just, I mean, there are problems just architecturally. These things are just so, I mean, the problems spread to the horizon, so you can kind of start wherever you want. [00:17:24] Bryan: But as a really concrete example: okay, so the BMC, that computer within the computer, needs to be on its own network. So you now have not one network, you've got two networks. And that network, by the way, that's the network that you're gonna log into to, like, reset the machine when it's otherwise unresponsive. [00:17:44] Bryan: So going into the BMC, you're able to control the entire machine. Well, it's like, alright, so now I've got a second network that I need to manage. What is running on the BMC? Well, it's running some ancient, ancient version of Linux. It's like, well, how do I patch that? [00:18:02] Bryan: How do I, like, manage the vulnerabilities with that? Because if someone is able to root your BMC, they control the system. And now you've gotta go deal with all of the operational hair around that. How do you upgrade that system, updating the BMC? I mean, you've got this second, shadow, bad infrastructure that you have to go manage. [00:18:23] Bryan: Generally not open source. There's something called OpenBMC, um, which, um, people use to varying degrees, but you're generally stuck with the proprietary BMC, so you're generally stuck with iLO from HPE or iDRAC from Dell or, uh, Supermicro's BMC, and it is just excruciating pain. [00:18:49] Bryan: Um, and this is assuming, by the way, that everything is behaving correctly. The problem is that these things often don't behave correctly, and then the consequence of them not behaving correctly.
It's really dire, because it's at that lowest layer of the system. So, I mean, I'll give you a concrete example. [00:19:07] Bryan: A customer of theirs reported to me, so I won't disclose the vendor, but let's just say that a well-known vendor had an issue where their temperature sensors were broken. Um, and the thing would always read basically the wrong value. So it was the BMC that had to, like, invent a different kind of thermal control loop. [00:19:28] Bryan: And it would index on the actual inrush current. They would look at the current that's going into the CPU to adjust the fan speed. That's a great example of something that's an interesting idea that doesn't work, 'cause that's actually not the temperature. [00:19:45] Bryan: So that software would crank the fans whenever you had an inrush of current, and this customer had a workload that would spike the current, and when it would spike the current, the fans would kick up and then they would slowly degrade over time. Well, this workload was spiking the current faster than the fans would degrade, but not fast enough to actually heat up the part. [00:20:08] Bryan: And ultimately, over a very long time, in a very painful investigation, this customer determined that, like, my fans are cranked in my data center for no reason. We're blowing cold air. And this is on the order of, like, a hundred watts a server of energy that you shouldn't be spending, and ultimately what that comes down to is this kind of broken software-hardware interface at the lowest layer that has real, meaningful consequence, uh, in terms of hundreds of kilowatts, um, across a data center. So this stuff has very, very real consequence, and it's such a shadowy world.
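The failure mode Bryan describes, fans keyed to current rather than temperature, can be sketched in a toy simulation. All constants and the control policy below are invented for illustration; this is not the vendor's actual firmware logic:

```python
# Toy model of the misbehaving fan-control loop described above: fan
# duty cycle is keyed to CPU inrush current instead of temperature.
# A workload that spikes current faster than the fans decay keeps the
# fans cranked even though the part never heats up. All constants and
# the policy itself are invented for illustration.

def fan_loop(current_samples, spike_amps=50, spinup=40, decay=2,
             idle=20, full=100):
    """Return fan duty cycle (%) over time for a current-keyed loop."""
    duty, history = idle, []
    for amps in current_samples:
        if amps > spike_amps:    # "inrush" detected: crank the fans
            duty = min(full, duty + spinup)
        else:                    # otherwise decay slowly toward idle
            duty = max(idle, duty - decay)
        history.append(duty)
    return history

# Current spikes every 10 ticks -- faster than the fans can decay.
workload = ([60] + [10] * 9) * 5
duty = fan_loop(workload)
print(min(duty[10:]))  # fans never return to the 20% idle floor
```

Because each spike adds more duty cycle than the decay removes between spikes, the loop settles into a permanently elevated fan speed while the actual temperature, which the loop never sees, stays flat.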
Part of the reason that your listeners that have dealt with this, whose heads will hit the desk, is because it is really aggravating to deal with problems at this layer. [00:21:01] Bryan: You feel powerless. You don't control or really see the software that's on them. It's generally proprietary. You are relying on your vendor, and your vendor is telling you that, like, boy, I don't know, you're the only customer seeing this. I mean, the number of times I have heard that, and I have pledged that we're not gonna say that at Oxide, because it's such an unaskable thing to say, like, you're the only customer seeing this. [00:21:25] Bryan: It's like, it feels like, are you blaming me for my problem? It feels like you're blaming me for my problem. Um, and what you begin to realize is that, to a degree, these folks are speaking their own truth, because the folks that are running at real scale, at hyperscale, those folks aren't Dell, HP, Supermicro customers. [00:21:46] Bryan: They've actually done their own thing. So it's like, yeah, Dell's not seeing that problem, um, because they're not running at the same scale. Um, but you only have to run at modest scale before these things just become overwhelming in terms of the headwind that they present to people that wanna deploy infrastructure. The problem is felt with just a few racks [00:22:05] Jeremy: Yeah, so maybe to help people get some perspective, at what point do you think that people start noticing or start feeling these problems? Because I imagine that if you just have a few racks or [00:22:22] Bryan: Do you have a couple racks, or, or are you just wondering? Because no, no, no. I think anyone who deploys any number of servers, especially now, especially if your experience is only in the cloud, you're gonna be like, what the hell is this? I mean, just to get this thing working at all.
[00:22:39] Bryan: It is so hairy and so congealed, right? It's not designed. Um, it's accreted, and it's so obviously accreted that, I mean, nobody who is setting up a rack of servers is gonna think to themselves, like, yes, this is the right way to go do it, this all makes sense. Because it's just not. It feels like a kit, I mean, kit car is almost too generous, because it implies that there's, like, a set of plans to work to in the end. [00:23:08] Bryan: Uh, I mean, it's a bag of bolts. It's a bunch of parts that you're putting together. And so even at the smallest scales, that stuff is painful, just architecturally. It's painful at the small scale, but at least you can get it working. I think the stuff that then becomes debilitating at larger scale are the things that are worse than just, like, this thing is a mess to get working. [00:23:31] Bryan: It's like the fan issue, um, where you are now seeing this over, you know, hundreds of machines or thousands of machines. Um, so it is painful at more or less all levels of scale. There is no level of scale at which the PC, which is really what this is, the personal computer architecture from the 1980s, is the right unit. Running elastic infrastructure is the hardware but also, hypervisor, distributed database, api, etc [00:23:57] Bryan: I mean, where that's the right thing to go deploy, especially if what you are trying to run is elastic infrastructure, a cloud. Because the other thing is, we've kinda been talking a lot about that hardware layer. Like, hardware is just the start. You actually gotta go put software on that and actually run that as elastic infrastructure. [00:24:16] Bryan: So you need a hypervisor, yes, but you need a lot more than that.
You need a distributed database, you need web endpoints, you need a CLI, you need all the stuff that you need to actually go run an actual service of compute or networking or storage. And even for compute, there's a ton of work to be done. [00:24:39] Bryan: And compute is by far, I would say, the simplest of the three. When you look at network services and storage services, there's a whole bunch of stuff that you need to go build in terms of distributed systems to actually offer that as a cloud. So it is painful at more or less every level if you are trying to deploy cloud computing on this stuff. What's a control plane? [00:25:00] Jeremy: And for someone who doesn't have experience building or working with this type of infrastructure, when you talk about a control plane, what does that do in the context of this system? [00:25:16] Bryan: So the control plane is everything between your API request and that infrastructure actually being acted upon. So you go say, hey, I want to provision a VM. Okay, great. We've got a whole bunch of things we're gonna provision with that. We're gonna provision a VM, we're gonna get some storage that's gonna go along with that, which is gonna come out of a network storage service, and we've got a virtual network that we're gonna either create or attach to. [00:25:39] Bryan: We've got a whole bunch of things we need to go do for that. For all of these things, there are metadata components that we need to keep track of, beyond the actual infrastructure that we create. And then we need to go actually act on the actual compute elements, the host OS, the switches, what have you, and actually go [00:25:56] Bryan: create these underlying things and then connect them. And there's of course the challenge of just getting that working, which is a big challenge.
Um, but then there's getting that working robustly. When you go to provision a VM, there are all the steps that need to happen, and what happens if one of those steps fails along the way? [00:26:17] Bryan: One thing we're very mindful of is that you get these long tails: generally our VM provisioning happens within this time, but we get these long tails where it takes much longer. What's going on? Where in this process are we actually spending time? [00:26:33] Bryan: There's a whole lot of complexity that you need to deal with that effectively, this workflow that's gonna go create these things and manage them. Um, we use a pattern called sagas, which is actually a database pattern from the eighties. [00:26:51] Bryan: Caitie McCaffrey is a researcher who, I think, reintroduced the idea of sagas in the last decade or so. This is something that we picked up and have done a lot of really interesting things with, to allow these workflows to be managed, and managed robustly, in a way that you can restart them and so on. [00:27:16] Bryan: And then you get this whole distributed system that can do all this. That distributed system itself needs to be reliable and available. So what happens if you pull a sled, or if a sled fails? How does the system deal with that? [00:27:33] Bryan: How does the system deal with getting another sled added? How do you actually grow this distributed system? And then how do you update it? How do you actually go from one version to the next? And all of that has to happen across an air gap, where this is gonna run as part of the computer.
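The saga pattern Bryan describes can be sketched in a few lines of Rust. This is a minimal illustration with hypothetical step names, not Oxide's actual saga engine: each step either completes or fails, and on failure the compensating actions for the already-completed steps run in reverse order, so a half-provisioned VM is rolled back rather than leaked.

```rust
// A minimal sketch of the saga pattern, with hypothetical step names.
// Each step has a forward action that can fail. If a step fails, the
// compensating "undo" actions for the steps that already completed run
// in reverse order, leaving the system in a known state.
fn run_saga(steps: &[(&str, bool)]) -> Vec<String> {
    let mut log = Vec::new();
    let mut completed: Vec<&str> = Vec::new();
    for &(name, succeeds) in steps {
        if succeeds {
            log.push(format!("do {name}"));
            completed.push(name);
        } else {
            log.push(format!("fail {name}"));
            // Unwind: compensate completed steps in reverse order.
            for done in completed.iter().rev() {
                log.push(format!("undo {done}"));
            }
            break;
        }
    }
    log
}

fn main() {
    // Provisioning a VM: allocate storage, create a virtual NIC, boot the
    // instance. The boot step fails, so the NIC and storage are released.
    for entry in run_saga(&[
        ("alloc-storage", true),
        ("create-vnic", true),
        ("boot-vm", false),
    ]) {
        println!("{entry}");
    }
}
```

A real saga engine adds what a toy like this omits: the log is persisted, so that after a crash the workflow can be restarted or unwound from where it left off, which is the restartability Bryan mentions.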
[00:27:49] Bryan: So it is fractally complicated. There is a lot of complexity here in the software system, and all of that we call the control plane. Um, and this is what exists at AWS, at GCP, at Azure. When you are hitting an endpoint that's provisioning an EC2 instance for you, [00:28:10] Bryan: there is an AWS control plane that is doing all of this and has some of these similar aspects and certainly some of these similar challenges. Are vSphere / Proxmox / Hyper-V in the same category? [00:28:20] Jeremy: And for people who have run their own servers with something like, say, VMware or Hyper-V or Proxmox, are those in the same category? [00:28:32] Bryan: Yeah, I mean, a little bit. It's kind of like: vSphere, yes; VMware ESX by itself, no. VMware ESX is kind of a key building block upon which you can build something that is a more meaningful distributed system. When it's just a machine that you're provisioning VMs on, then you as the human might be the control plane. [00:28:52] Bryan: That's a much easier problem. Um, but when you've got tens, hundreds, thousands of machines, you need to do it robustly. You need something to coordinate that activity: you need to pick which sled you land on, you need to be able to move these things, you need to be able to update that whole system. [00:29:06] Bryan: That's when you're getting into a control plane. So some of these things have kind of edged into a control plane. Certainly VMware, now Broadcom, has delivered something that's kind of cloudish. Um, I think that for folks that are truly born on the cloud, it still feels somewhat like you're going backwards in time when you look at these kind of on-prem offerings. [00:29:29] Bryan: But it's got these aspects to it, for sure.
Um, and some of these other things, when you're just looking at KVM or just looking at Proxmox, you kind of need to connect to other, broader things to turn it into something that really looks like manageable infrastructure. [00:29:47] Bryan: And many of those projects are either proprietary products, like vSphere, or you are dealing with open source projects that are not necessarily aimed at the same level of scale. Um, you look at, again, Proxmox, or you look at OpenStack. [00:30:05] Bryan: And OpenStack is just a lot of things, right? OpenStack was kind of a free-for-all for every infrastructure vendor. Um, and there was a time people were like, aren't you worried about all these companies coming together for OpenStack? [00:30:24] Bryan: I'm like, haven't you ever worked for a company? Companies don't get along. Having multiple companies work together on a thing, that's bad news, not good news. And I think one of the things that OpenStack has definitely struggled with is that there are so many different vendor elements in there that it's very much not a product; it's a project that you're trying to run. [00:30:47] Bryan: But that very much is similar, certainly in spirit. [00:30:53] Jeremy: And so, like you were alluding to earlier, this is the piece that allows you to allocate compute and storage and manage networking, and gives you that experience of: I can go to a web console, or I can use an API, and I can spin up machines and get them all connected. At the end of the day, the control plane is allowing you to do that in, hopefully, a user-friendly way. [00:31:21] Bryan: That's right. Yep.
And in order to do that in a modern way, it's not just a user-friendly way. You really need to have a CLI and a web UI and an API, and those all need to be drawn from the same single ground truth. You don't wanna have any of those be an afterthought relative to the others. [00:31:39] Bryan: You wanna have the same way of generating all of those different endpoints and entries into the system. Building a control plane now has better tools (Rust, CockroachDB) [00:31:46] Jeremy: And if you take your time at Joyent as an example, what kind of tools existed for that, versus how much did you have to build in-house, as far as the hypervisor and managing the compute and all that? [00:32:02] Bryan: Yeah, so we built more or less everything in-house. I think over time we've gotten slightly better tools, and maybe it's a little bit easier to talk about the tools we started with at Oxide, because we started with a clean sheet of paper there. [00:32:16] Bryan: We knew we wanted to go build a control plane, but we were able to revisit some of the components, so maybe I'll talk about some of those changes. For example, at Joyent, when we were building a cloud, there wasn't really a good distributed database. [00:32:34] Bryan: So we were using Postgres as our database for metadata, and there were a lot of challenges. Postgres is not a distributed database; it's running with a primary/secondary architecture, and there's a bunch of issues there, many of which we discovered the hard way. When we were coming to Oxide, you had much better options to pick from in terms of distributed databases. [00:32:57] Bryan: You know, there was a period, one that now seems potentially brief in hindsight, of really high quality open source distributed databases.
So there were really some good ones to pick from. Um, we built on CockroachDB, on CRDB. So that was a really important component that we had at Oxide that we didn't have at Joyent. [00:33:19] Bryan: I wouldn't say we were rolling our own distributed database at Joyent; we were just using Postgres and dealing with an enormous amount of pain in terms of the surround. Um, on top of that, a control plane is much more than a database, obviously, and there's a whole bunch of software that you need to go write [00:33:40] Bryan: to be able to transform these API requests into something that is reliable infrastructure, right? And there's a lot to that, especially when networking gets in the mix, when storage gets in the mix. There are a whole bunch of complicated steps that need to be done. [00:33:59] Bryan: At Joyent, in part because of the history of the company, and look, this is just not gonna sound good, but it is what it is and I'm just gonna own it: we did it all in Node. Which I know right now just sounds like, well, you built it with Tinker Toys. [00:34:18] Bryan: Did you think you were gonna build a skyscraper with Tinker Toys? It's like, well, okay, we actually had greater aspirations for the Tinker Toys once upon a time, and it was better than, you know, Twisted in Python and EventMachine in Ruby, and we weren't gonna do it in Java, all right? [00:34:32] Bryan: But let's just say that that experiment did ultimately end in a predictable fashion. We decided that maybe Node was not gonna be the best decision long term. Joyent was the company behind Node.js back in the day; Ryan Dahl worked for Joyent.
[00:34:53] Bryan: We landed that in a foundation in about, uh, what, 2015, something like that, and began to consider our world beyond Node. Rust at Oxide [00:35:04] Bryan: A big tool that we had in the arsenal when we started Oxide is Rust. And indeed the name of the company is a tip of the hat to the language that we were pretty sure we were gonna be building a lot of stuff in. [00:35:16] Bryan: Namely, Rust. And Rust has been huge for us, a very important revolution in programming languages. You know, there have been different people coming in at different times, and I came to Rust in what I think is this big second expansion of Rust, in 2018, when a lot of technologists were, I think, sick of Node and also sick of Go. [00:35:43] Bryan: And also sick of C++, and wondering: is there gonna be something that gives me the performance that I get out of C, the robustness that I can get out of a C program but that is often difficult to achieve, and can I get that with some of the velocity of development, although I hate that term, some of the speed of development, that you get out of a more interpreted language? [00:36:08] Bryan: And then, by the way, can I actually have types? I think types would be a good idea. And Rust obviously hits the sweet spot of all of that. It has been absolutely huge for us. I mean, we knew when we started Oxide that we were gonna be using Rust in quite a few places, but we weren't doing it by fiat. [00:36:27] Bryan: We wanted to actually make sure we were making the right decision at every layer. I think what has been surprising is the sheer number of layers at which we use Rust. We've done our own embedded firmware in Rust. In the host operating system, which is still largely in C, very big components are in Rust.
[00:36:47] Bryan: The hypervisor, Propolis, is all in Rust. And then of course the control plane, that distributed system on top, is all in Rust. So that was a very important thing that we very much did not need to build ourselves; we were able to really leverage a terrific community. We were able to use, and we did this at Joyent as well, illumos as a host OS component; our variant is called Helios. [00:37:11] Bryan: We've used bhyve as that kind of internal hypervisor component. We've made use of a bunch of different open source components to build this thing, which has been really, really important for us. Open source components that didn't exist even five years prior. [00:37:28] Bryan: That's part of why we felt that 2019 was the right time to start the company. And so we started Oxide. The problems building a control plane in Node [00:37:34] Jeremy: You had mentioned that at Joyent, you had tried to build this in Node. What were the issues or the challenges that you had doing that? [00:37:46] Bryan: Oh boy. Yeah. Again, I kind of had higher hopes in 2010, I would say, when we set out on this. The problem that we had, writ large: JavaScript is really designed to allow as many people on earth to write a program as possible, which is good. I mean, that's a laudable goal. [00:38:09] Bryan: That is the goal, ultimately, such as it is, of JavaScript. It's actually hard to know what the goal of JavaScript is, unfortunately, because Brendan Eich never actually wrote a book, so there is not a canonical reference. You've got Doug Crockford and other people who've written things on JavaScript, but it's hard to know what the original intent of JavaScript is. [00:38:27] Bryan: The name doesn't even express original intent, right?
It was called LiveScript, and it was renamed to JavaScript during the Java frenzy of the late nineties. A name that makes no sense: there is no Java in JavaScript. That is, I think, revealing of the kind of unprincipled mess that is JavaScript. [00:38:47] Bryan: It's very pragmatic at some level, and it makes it very easy to write software. The problem is it's much more difficult to write really rigorous software. And here I should differentiate JavaScript from TypeScript, because this is really what TypeScript is trying to solve. [00:39:07] Bryan: I think TypeScript is a great step forward, because TypeScript is saying: how can we bring some rigor to this? Yes, it's great that it's easy to write JavaScript, but that's not the only problem we want to solve. [00:39:23] Bryan: We actually wanna be able to write rigorous software, and it's actually okay if it's a little harder to write, if that leads to more rigorous artifacts. Um, but in JavaScript, just as a concrete example: there's nothing to prevent you from referencing a property that doesn't actually exist. [00:39:43] Bryan: So if you fat-finger a property name, you are relying on something else to tell you, by the way, I think you've misspelled this, because there is no type definition for this thing. You've got one spelled correctly and one spelled incorrectly, and the latter is just undefined. So now you've got this typo lurking in what you want to be rigorous software. [00:40:07] Bryan: And if you don't execute that code, you won't know it's there. And then you do execute that code, and now you've got an undefined object.
And now that's either gonna be an exception or, depending on how it's handled, it can be really difficult to determine the origin of that error. [00:40:26] Bryan: And that is a programmer error. One of the big challenges that we had with Node is that programmer errors and operational errors, like, I'm out of disk space is an operational error, get conflated, and it becomes really hard. And in fact, I think the language wanted to make it easier to just kind of drive on in the event of all errors. [00:40:53] Bryan: And that's actually not what you wanna do if you're trying to build a reliable, robust system. So we had no end of issues. [00:41:01] Bryan: We've got a lot of experience developing rigorous systems, again coming out of operating systems development and so on, and we brought some of that rigor, if strangely, to JavaScript. So one of the things that we did is we brought a lot of postmortem diagnosability and observability to Node. [00:41:18] Bryan: If one of our Node processes died in production, we would actually get a core dump from that process, a core dump that we could actually meaningfully process. So we did a bunch of kind of wild stuff, actually wild stuff, where we could make sense of the JavaScript objects in a binary core dump. JavaScript values ease of getting started over robustness [00:41:41] Bryan: These were things that we thought were really important, and the rest of the world just looks at this like, what the hell is this? It's so out of step. The problem is that we were trying to bridge two disconnected cultures: one developing really rigorous software and really designing it for production diagnosability, and the other really designing software to run in the browser, for anyone to be able to, you know, kind of liven up a webpage, right?
[00:42:10] Bryan: That's kind of the origin of LiveScript and then JavaScript. And we were kind of the only ones sitting at the intersection of those cultures. And when you are the only ones sitting at that kind of intersection, you're kind of fighting a community all the time. And we just realized that there were so many things that the community wanted to do that we felt like, no, no, this is gonna make software less diagnosable, it's gonna make it less robust. The Node.js split and why people left [00:42:36] Bryan: And then you realize: we're the only voice in the room, because we have got desires for this language that it doesn't have for itself. And this is when you realize you're in a bad relationship with software and it's time to actually move on. And in fact, several years on, we'd already kind of broken up with Node. [00:42:55] Bryan: And it was a bit of an acrimonious breakup. There was a famous slash infamous fork of Node called io.js, and this happened because the community thought that Joyent was not being an appropriate steward of Node.js and was not allowing more things to come into Node. [00:43:19] Bryan: We, of course, felt that we were being a careful steward, and we were actively resisting those things that would cut against its fitness for a production system. But that's the way the community saw it, and they forked. And I think we knew before the fork that this is not working and we need to get this thing out of our hands. Platform as a reflection of values Node Summit talk [00:43:43] Bryan: We are the wrong hands for this; this needs to be in a foundation. And so we had kind of gone through that breakup, and maybe it was two years after that.
A friend of mine, who has unfortunately now passed away, was running Node Summit: Charles, a venture capitalist, great guy. And Charles came to me in 2017. [00:44:07] Bryan: He's like, I really want you to keynote Node Summit. And I'm like, Charles, I'm not gonna do that. I've got nothing nice to say. I'm the last person you want to keynote. He's like, oh, if you have nothing nice to say, you should definitely keynote. And I'm like, oh God, okay, here we go. [00:44:22] Bryan: He's like, no, I really want you to talk about the Joyent breakup with Node.js. I'm like, oh man. [00:44:29] Bryan: And that led to a talk that I'm really happy that I gave, 'cause it was a very important talk for me personally, called Platform as a Reflection of Values, really looking at the values that we had for Node and the values that Node had for itself. And they didn't line up. [00:44:49] Bryan: And the problem is that the values that Node had for itself and the values that we had for Node are all kind of positives, right? There's nobody in the Node community who's like, I don't want rigor, I hate rigor. It's just that if they had to choose between rigor and making the language approachable, [00:45:09] Bryan: they would choose approachability every single time. They would never choose rigor. And that was a big eye-opener. You'll see this if you watch the talk, [00:45:20] Bryan: because I knew that the audience was gonna be filled with people who had been a part of the fork, in 2014 I think it was, the io.js fork. And I knew that there were some people there who had been there for the fork, and [00:45:41] Bryan: I set a little bit of a trap for the audience.
And the trap: I talked about the values that we had and the aspirations we had for Node, the aspirations that Node had for itself, and how they were different. [00:45:53] Bryan: And I'm like, look, in hindsight, a fracture was inevitable. And in 2014 there was finally a fracture. And do people know what happened in 2014? If you listen to that talk, everyone almost says in unison: io.js. I'm like, oh right. io.js. Right. That's actually not what I was thinking of. [00:46:19] Bryan: And I go to the next slide, and it's a tweet from a guy named TJ Holowaychuk, who was the most prolific contributor to Node. And it was his tweet, also in 2014, before the io.js fork, explaining that he was leaving Node and that he was going to Go. And if you turn the volume all the way up, you can hear the audience gasp. [00:46:41] Bryan: And it's just delicious, because the community had never really confronted why TJ left. And I went through a couple of folks, Felix, a bunch of other early Node folks, who were there in 2010 and were leaving in 2014, and they were going primarily to Go. And they were going because they were sick of the same things that we were sick of. [00:47:09] Bryan: They had hit the same things that we had hit, and they were frustrated. I really do believe this: platforms do reflect their own values, and when you are making a software decision, you are selecting values. [00:47:26] Bryan: You should select values that align with the values that you have for that software. That's way more important than other things that people look at. I think people look at, for example, quote-unquote community size way too frequently. Community size is like, eh, maybe it can be fine. [00:47:44] Bryan: I've been in very large communities, like Node.
I've been in super small open source communities, like illumos, and a bunch of others. There are strengths and weaknesses to both, just as there's a strength to being in a big city versus a small town. Me personally, I'll take the small community more or less every time, because the small community is almost always self-selecting based on values, for the same reason that I like working at small companies or on small teams. [00:48:11] Bryan: There's a lot of value to be had in a small community. That's not to say that large communities are valueless. But, again, long answer to your question of where things went south with Joyent and Node: they went south because the values that we had and the values the community had didn't line up, and that was a very educational experience, as you might imagine. [00:48:33] Jeremy: Yeah. And given that you mentioned how, because of those values, some people moved from Node to Go, and in the end, for much of what Oxide is building, you ended up using Rust: what would you say are the values of Go and Rust, and how did you end up choosing Rust? Go's decisions regarding generics, versioning, compilation speed priority [00:48:56] Bryan: Yeah. So I understand why people moved from Node to Go. Go to me was kind of a lateral move. There were a bunch of things I didn't like: Go was still garbage collected, which I didn't like. Go also is very strange in that there are these kind of autocratic decisions that are very bizarre. [00:49:17] Bryan: Generics is kind of a famous one, right? Go, as a point of principle, didn't have generics, even though the innards of Go itself actually did have generics. It's just that you, a Go user, weren't allowed to have them.
[00:49:35] Bryan: And, you know, there was an old cartoon years and years ago about how, when a technologist is telling you that something is technically impossible, that actually means: I don't feel like it. And there was a certain degree of, generics are technically impossible in Go, and it's like, hey, actually, they're right there. [00:49:51] Bryan: I just think that the arguments against generics were kind of disingenuous, and indeed, they ended up adopting generics. And then there's some super weird stuff around, like, they're very anti-assertion, which is like, how is someone against assertions? It doesn't even make any sense, but it's like, nope, there's a whole screed on it, nope, we're against assertions. And against versioning: Rob Pike has kind of famously been like, you should always just run the latest commit. And you're like, does that make sense? I mean, we actually built it. [00:50:26] Bryan: And so there are a bunch of things like that where you're just like, okay, this is just exhausting. There are some things about Go that are great, and plenty of other things that I'm just not a fan of. In the end, Go cares a lot about compile time. That's super important for Go, right? [00:50:44] Bryan: Very quick compile times. And I'm like, okay, but compile time is not, it's not unimportant, it doesn't have zero importance, but I've got other things that are lots more important than that. What I really care about is a high-performing artifact. I wanted garbage collection out of my life.
Don't think garbage collection has good trade offs [00:51:00] Bryan: I gotta tell you, garbage collection to me is an embodiment of this larger problem of: where do you put cognitive load in the software development process? And garbage collection is right for plenty of other people and the software that they wanna develop. [00:51:21] Bryan: But for me and the software that I wanna develop, infrastructure software, I don't want garbage collection, because I can solve the memory allocation problem. I know when I'm done with something or not. I mean, whether that's in C: it's actually really not that hard to not leak memory in a C-based system. [00:51:44] Bryan: And you can give yourself a lot of tooling that allows you to diagnose where memory leaks are coming from. So that is a solvable problem. There are other challenges with that, but when you are developing a really sophisticated system that is using garbage collection, [00:51:59] Bryan: you spend as much time trying to dork with the garbage collector to convince it to collect the thing that you know is garbage. You're like, I've got this thing, I know it's garbage, and now I need to use these tips and tricks to get the garbage collector to collect it. It feels like every Java performance issue comes down to: go set some -XX flag, use the other garbage collector, whatever one you're using, use a different one, a different approach. [00:52:23] Bryan: So to me, you're in the worst of all worlds, where the reason that garbage collection is helpful is that the programmer doesn't have to think at all about this problem, but now you're actually dealing with these long pauses in production. [00:52:38] Bryan: You're dealing with all these other issues where actually you need to think a lot about it.
And it's kind of witchcraft. It's this black box that you can't see into. So it's like, what problem have we solved, exactly? So the fact that Go had garbage collection, it's like, no, I do not want that. And then you get all the other weird fatwas and everything else. [00:52:57] Bryan: I'm like, no thank you. Go is a no-thank-you for me. I get why people like it or use it, but that was not gonna be it. Choosing Rust [00:53:04] Bryan: I'm like, I want C. But there are things I didn't like about C, too. I was looking for something that was gonna give me the deterministic kind of artifact that I got out of C, but I wanted library support, and C is tough because it's all convention. You know, there's just a bunch of other things that are thorny. And I remember thinking vividly in 2018: it's Rust or bust. Ownership model, algebraic types, error handling [00:53:28] Bryan: I'm gonna go into Rust, and I hope I like it, because if it's not this, I'm gonna go back to C. I'm literally trying to figure out what the language is for the back half of my career. And I did what a lot of people were doing at that time and have been doing since: really getting into Rust, really learning it, appreciating the difference in the model, for sure. The ownership model people talk about, [00:53:54] Bryan: and that's obviously very important. But it was the error handling that blew me away, and the idea of algebraic types. I never really had algebraic types. And error handling is one of these things you really appreciate: how do you deal with a function that can either succeed and return something, or fail? The way C deals with that is bad, with these kind of sentinel values for errors.
[00:54:27] Bryan: Does negative one mean success? Does negative one mean failure? Does zero mean failure? In some C functions, zero means failure. Traditionally in Unix, zero means success. And what if you wanna return a file descriptor? Then it's like, okay, zero through positive N will be a valid result, negative numbers will be errors. And was it negative one and I set errno, or is it some other negative number? It's all convention, right? People do all those different things, and it's all convention, and it's easy to get wrong, easy to have bugs, and it can't be statically checked. And then what Go says is, you're gonna have two return values, and then you're gonna have to constantly check all of these all the time, which is also kind of gross. JavaScript is like, hey, let's throw an exception: if we see an error, we'll throw an exception. [00:55:15] Bryan: There are a bunch of reasons I don't like that. And then you get what Rust does, where it's like, no, we're gonna have these algebraic types, which is to say this thing can be a this or a that, but it has to be one of those. And by the way, you don't get to process this thing until you conditionally match on one of those cases. [00:55:35] Bryan: You're gonna have to pattern-match on this thing to determine if it's a this or a that. And the Result type is a generic: it's gonna be either an Ok that contains the thing you wanna return, or it's gonna be an Err that contains your error, and it forces your code to deal with that.
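The contrast Bryan is drawing can be sketched in a few lines of Rust. This is a minimal illustration, not code from the episode; `parse_port` is a made-up example function standing in for any fallible operation.

```rust
use std::num::ParseIntError;

// A fallible function returns Result: either Ok with the value,
// or Err with the error. There is no sentinel like C's -1, and no
// convention to remember about whether zero means success or failure.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    // The compiler will not let us use the value until we match
    // on which variant we actually got: this is the forced
    // pattern match Bryan describes.
    match parse_port("8080") {
        Ok(port) => println!("listening on {port}"),
        Err(e) => println!("bad port: {e}"),
    }

    match parse_port("not-a-port") {
        Ok(port) => println!("listening on {port}"),
        Err(e) => println!("bad port: {e}"),
    }
}
```

Because `Result` is marked `#[must_use]`, simply ignoring the return value produces a compiler warning, so the "forgot to check the error" bug class surfaces at build time rather than in production.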
[00:55:57] Bryan: And what that does is it shifts the cognitive load from the person operating this thing in production to the actual developer, at development time. And I love that shift; that shift to me is really important. That's what I was missing, and that's what Rust gives you. [00:56:23] Bryan: Rust forces you to think about your code as you write it, but as a result, you have an artifact that is much more supportable, much more sustainable, and much faster. Prefer to frontload cognitive load during development instead of at runtime [00:56:34] Jeremy: Yeah, it sounds like you would rather take the time during development to think about these issues, because whether it's garbage collection or error handling, dealing with it at runtime when you're trying to solve a problem is much more difficult than having dealt with it to start with. [00:56:57] Bryan: Yeah, absolutely. And I just think, if it's infrastructure software, the question you should ask when you're writing software is: how long is this software gonna live? How many people are gonna use it? If you are writing an operating system, the answer is that this thing is gonna live for a long time. [00:57:18] Bryan: If we just look at plenty of aspects of the system that have been around for decades, it's gonna live for a long time, and many, many people are gonna use it. Why would we not expect people writing that software to take on more cognitive load when they're writing it, to give us something that's gonna be a better artifact? [00:57:38] Bryan: Now conversely, you're like, hey, I kind of don't care about this. I just wanna see if this whole thing works, I'm just stringing this together.
No, the software will be lucky if it survives until tonight, and then who cares? [00:57:52] Bryan: Garbage collect away. You know, if you're prototyping something, whatever. And this is why you really do get different technology choices depending on the way that you wanna solve the problem at hand. And for the software that I wanna write, I do like that cognitive load being upfront. With LLMs maybe you can get the benefit of the robust artifact with less cognitive load [00:58:10] Bryan: Although I think the thing that is really wild, the twist that I don't think anyone really saw coming, is that in an LLM age, that cognitive load upfront almost needs an asterisk on it, because so much of it can be assisted by an LLM. And I would like to believe, and maybe this is me being optimistic, that in the LLM age we will see... I mean, Rust is a great fit for the LLM age, because the LLM itself can get a lot of feedback about whether the software that's written is correct or not. [00:58:44] Bryan: Much more so than you can in other environments. [00:58:48] Jeremy: Yeah, that is an interesting point, in that when people first started trying out LLMs to code, they were really good at these looser languages like Python or JavaScript, and initially weren't so good at something like Rust. But it sounds like as that improves, if it can write Rust, then because of the rigor, the memory management, the error handling the language forces you to do, it might actually end up being a better choice for people using LLMs. [00:59:27] Bryan: Absolutely. It gives you more certainty in the artifact that you've delivered. You know a lot about a Rust program that compiles correctly.
I mean, there are certain classes of errors that you don't have, that you can't rule out in a C program or a Go program or a JavaScript program. [00:59:46] Bryan: I think that's gonna be really important. I think we are on the cusp, maybe we've already seen it, of this kind of great bifurcation in the software that we write
Hey, it's Alex. Let me tell you why I think this week is an inflection point. Just this week: everyone is launching autonomous agents or features inspired by OpenClaw (Devin 2.2, Cursor, Claude Cowork, Microsoft, Perplexity, and Nous announced theirs); the METR and ARC-AGI 2 and 3 benchmarks are getting saturated; one-person companies are nearing $1M ARR within months of operation by running AI agents 24/7 (we chatted with one of them on the show today, live as he broke the $700K ARR barrier); and the US Department of War gave Anthropic an ultimatum to remove nearly all restrictions on Claude for war, and Anthropic said no. I've been covering AI every week for three years, and this week feels different. So if we are nearing the singularity, let me at least keep you up to date.
Hardware scarcity and price hikes spread to hard drives, the Bcachefs dev thinks his AI is ‘fully conscious’, an agent might have gone after a FOSS maintainer, Jim is disappointed with an Ars author, and ZFS and VMs in the homelab.

Plugs
Support us on Patreon and get an ad-free RSS feed with early episodes sometimes
Piss-up at The Shipwrights Arms (just next to London Bridge station) on Saturday 27th June from 6pm until late
Pool and VDEV Topology for Proxmox Workloads

News/discussion
AI blamed again as hard drives are sold out for this year
Hard drive pricing in the UK is so high someone flew to the US to buy drives, saving money despite flight and hotel costs
Bcachefs creator claims his custom LLM is ‘fully conscious’
An AI Agent Published a Hit Piece on Me
An AI Agent Published a Hit Piece on Me – More Things Have Happened
An AI Agent Published a Hit Piece on Me – Forensics and More Fallout
An AI Agent Published a Hit Piece on Me – The Operator Came Forward
Editor's Note: Retraction of article containing fabricated quotations
Sorry all this is my fault

Free consulting
We were asked about ZFS and VMs in the homelab. See our contact page for ways to get in touch.
In a landmark ruling, the Supreme Court has struck down the sweeping global tariffs. On this episode of Market Sense, Fidelity leaders unpack the implications of this decision, explore possible next steps, and discuss what it could mean for the markets, the economy, and consumers. Bookmark the Fidelity Viewpoints Market Insights webpage for the latest on market headlines and volatility. Read the full transcript View the slides Watch the video replay
AI agents differ from chatbots by pursuing autonomous goals through the ReACT loop rather than responding to turn-based prompts. While coding agents are currently the most reliable due to verifiable feedback loops, the market is expanding into desktop and browser automation via tools like Claude Cowork and OpenClaw.

Links
Notes and resources at ocdevel.com/mlg/mla-28
Try a walking desk - stay healthy & sharp while you learn & code
Generate a podcast - use my voice to listen to any AI generated content you want

Fundamental Definitions
Agent vs. Chatbot: Chatbots are turn-based and human-driven. Agents receive objectives and dynamically direct their own processes.
The ReACT Loop: Every modern agent uses the cycle Thought -> Action -> Observation. This interleaved reasoning and tool usage allows agents to update plans and handle exceptions.
Performance: Models using agentic loops with self-correction outperform stronger zero-shot models. GPT-3.5 with an agent loop scored 95.1% on HumanEval, while zero-shot GPT-4 scored 67.0%.

The Agentic Spectrum
Chat: No tools or autonomy.
Chat + Tools: Human-driven web search or code execution.
Workflows: LLMs used in predefined code paths. The human designs the flow; the AI adds intelligence at specific nodes.
Agents: LLMs dynamically choose their own path and tools based on observations.

Tool Categories and Market Players
Developer Frameworks: Use LangGraph for complex, stateful graphs or CrewAI for role-based multi-agent delegation. The OpenAI Agents SDK provides minimalist primitives (Handoffs, Sessions), while the Claude Agent SDK focuses on local computer interaction.
Workflow Automation: n8n and Zapier provide low-code interfaces. These are stable for repeatable business tasks but limited by fixed paths and a lack of persistent memory between runs.
Coding Agents: Claude Code, Cursor, and GitHub Copilot are the most advanced agents. They succeed because code provides an unambiguous feedback loop (pass/fail) for the ReACT cycle.
Desktop and Browser Agents: Claude Cowork (released Jan 2026) operates in isolated VMs to produce documents. ChatGPT Atlas is a Chromium-based browser with integrated agent capabilities for web tasks.
Autonomous Agents: OpenClaw is an open-source, local system with broad permissions across messaging, file systems, and hardware. While powerful, it carries high security risks, including 512 identified vulnerabilities and potential data exfiltration.

Infrastructure and Standards
MCP (Model Context Protocol): A universal standard for connecting agents to tools. It has 10,000+ servers and is used by Anthropic, OpenAI, and Google.
Future Outlook: By 2028, multi-agent coordination will be the default architecture. Gartner predicts 38% of organizations will utilize AI agents as formal team members, and the developer role will transition primarily to objective specification and output evaluation.
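The Thought -> Action -> Observation cycle described in the notes can be sketched as a plain loop. This is a toy illustration under stated assumptions: the `think` function stands in for an LLM call (here it is a hard-coded stub), and the single `calculator` tool only knows one sum; real agents replace both with model and tool invocations.

```rust
// Minimal ReACT-style agent loop. The agent alternates between
// reasoning (Thought), tool use (Action), and reading the tool's
// result (Observation) until it decides it has a final answer.

enum Step {
    Action { tool: &'static str, input: String }, // invoke a tool
    Finish(String),                               // return final answer
}

// Stub for the LLM: decide the next step from the observations so far.
// A real agent would send the history to a model here.
fn think(history: &[String]) -> Step {
    if history.is_empty() {
        Step::Action { tool: "calculator", input: "2+2".to_string() }
    } else {
        Step::Finish(format!("The answer is {}", history.last().unwrap()))
    }
}

// Stub tool: a "calculator" that only knows one sum.
fn run_tool(tool: &str, input: &str) -> String {
    match (tool, input) {
        ("calculator", "2+2") => "4".to_string(),
        _ => "unknown".to_string(),
    }
}

fn react_loop() -> String {
    let mut observations: Vec<String> = Vec::new();
    loop {
        match think(&observations) {                  // Thought
            Step::Action { tool, input } => {
                let obs = run_tool(tool, &input);     // Action
                observations.push(obs);               // Observation
            }
            Step::Finish(answer) => return answer,
        }
    }
}

fn main() {
    println!("{}", react_loop()); // prints "The answer is 4"
}
```

The point of the structure is visible even in the stub: because the observation feeds back into the next thought, the agent can notice a failed tool call and re-plan, which is exactly why verifiable feedback (like a compiler or test suite) makes coding agents the most reliable category.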
In this episode, we are joined by Ralph Shearing, CEO of Goldgroup Mining (TSX.V: GGA | OTC:GGAZF), for an in-depth discussion on the transformative business combination with Gold Resource Corp (NYSE-A: GORO). This merger is set to reposition the company as a diversified, multi-asset gold and silver producer with a significant footprint across Mexico and a high-potential development project in the United States. Key Discussion Points: Strategic Merger Overview: Ralph explains how joining forces with Gold Resource Corp transforms both entities from single-asset risk profiles into a diversified, multi-mine producer with sights set on mid-tier status. Production and Asset Roadmap: A deep dive into the 2026 production targets, including the restart of the San Francisco Mine and the expansion of the Don David Mine in Oaxaca. The Back Forty Project: An update on the advanced-stage VMS deposit in Michigan, detailing the 4-5 year timeline to production following upcoming feasibility studies and permitting. Leadership and Strategy: Insight into the new executive team and the group's aggressive strategy for further M&A in Mexico. Please email any questions you have for the team at Goldgroup - Fleck@kereport.com & Shad@kereport.com. Click here to visit the Goldgroup Mining website - https://goldgroupmining.com/ --------------- For more market commentary & interview summaries, subscribe to our Substacks: The KE Report: https://kereport.substack.com/ Shad's resource market commentary: https://excelsiorprosperity.substack.com/ Investment disclaimer: This content is for informational and educational purposes only and does not constitute investment advice, an offer, or a solicitation to buy or sell any security or investment product. Investing in equities, commodities, really everything involves risk, including the possible loss of principal. Do your own research and consult a licensed financial advisor before making any investment decisions. 
Guests and hosts may own shares in companies mentioned.
In this episode of Resilient Cyber, we sit down with Ari Marzuk, the researcher who published "IDEsaster", a novel vulnerability class in AI IDEs. We discuss the rise of AI-driven development and modern AI coding assistants, tools, and agents; how Ari discovered 30+ vulnerabilities impacting some of the most widely used AI coding tools; and the broader risks around AI coding.
Ari's background in offensive security — Ari has spent the past decade in offensive security, including time with Israeli military intelligence, NSO Group, Salesforce, and currently Microsoft, with a focus on AI security for the last two to three years.
IDEsaster: a new vulnerability class — Ari's research uncovered 30+ vulnerabilities and 24 CVEs across AI-powered IDEs, revealing not just individual bugs but an entirely new vulnerability class rooted in the shared base IDE layer that tools like Cursor, Copilot, and others are built on.
"Secure for AI" as a design principle — Ari argues that legacy IDEs were never built with autonomous AI agents in mind, and that the same gap likely exists across CI/CD pipelines, cloud environments, and collaboration tools as organizations race to bolt on AI capabilities.
Low barrier to exploitation — The vulnerabilities Ari found don't require nation-state sophistication to exploit; techniques like remote JSON schema exfiltration can be carried out with relatively simple prompt engineering and publicly known attack vectors.
Human-in-the-loop is losing its effectiveness — Even with diff preview and approval controls enabled, exfiltration attacks still triggered in Ari's testing, and approval fatigue from hundreds of agent-generated actions is pushing developers toward YOLO mode.
Least privilege and the capability vs. security trade-off — The same unrestricted access that makes AI coding agents so productive is what makes them vulnerable, and history suggests organizations will continue to optimize for utility over security without strong guardrails.
Top defensive recommendations — Ari emphasized isolation (containers, VMs) as the single most important control, followed by enforcing secure defaults that can't be easily overridden, and applying enterprise-level monitoring and governance to AI agent usage.
What's next — Ari is turning his attention to newer AI tools and attack surfaces but isn't naming targets yet. You can follow his work on LinkedIn, X, and his blog at makarita.com.
What's really going on with gold and silver? Whether you're just curious, feeling cautious, or feeling a little FOMO, this episode of Market Sense tackles the questions viewers keep asking about precious metals. A veteran portfolio manager joins us to talk about what he thinks is behind the record surge and recent volatility, why it's happening now, and what it could mean for your investments going forward. Click here to research the latest stock pricing and performance as well as investment ideas using Fidelity's free stock screener tool. Read the full transcript View the slides Watch the video replay
In this episode, hosts Lois Houston and Nikita Abraham take you inside how Oracle brings its industry-leading database technology directly to AWS customers. Senior Principal OCI Instructor Susan Jang unpacks what the OCI child site is, how Exadata hardware is deployed inside AWS data centers, and how the ODB network enables secure, low-latency connections so your mission-critical workloads can run seamlessly alongside AWS services. Susan also walks through the differences between Exadata Database Service and Autonomous Database, helping teams choose the right level of control and automation for their cloud databases. Oracle Database@AWS Architect Professional: https://mylearn.oracle.com/ou/course/oracle-databaseaws-architect-professional/155574 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Communications and Adoption with Customer Success Services. Lois: Hi there! Last week, we talked about multicloud and the partnerships Oracle has with Microsoft Azure, Google Cloud, and Amazon Web Services. If you missed that episode, do listen to it as it sets the foundation for today's discussion, which is going to be about Oracle Database@AWS. 00:59 Nikita: That's right. And we're joined by Susan Jang, a Senior Principal OCI Instructor. 
Susan, thanks for being here. To start us off, what is Oracle Database@AWS? Susan: Oracle Database@AWS is a service that allows Oracle Exadata infrastructure, managed by Oracle Cloud Infrastructure, or OCI, to run directly inside an AWS data center. 01:25 Lois: Susan, can you go through the key architecture components and networking relationships involved in this? Susan: The AWS Cloud is Amazon Web Services, a cloud computing platform. An AWS region is a distinct, isolated geographic location with multiple physically separated data centers, also known as availability zones. An availability zone is a physically isolated data center with its own independent power, cooling, and network connectivity. When we speak of the AWS data center, it's a highly secured, specialized physical facility that houses the compute servers, the storage servers, and the networking equipment. The VPC, the Virtual Private Cloud, is a logically isolated virtual network. The AWS ODB network is a private, user-created network that connects the Virtual Private Cloud network of Amazon resources with an Oracle Cloud Infrastructure Exadata system, all within an AWS data center. AWS-ODB peering is an established private network connection between the VPC, the Virtual Private Cloud, and the Oracle Database@AWS network, the ODB network. Within the AWS data center, you have something called the child site. An OCI child site is a physical data center managed by Oracle within the AWS data center; it's a seamless extension of Oracle Cloud Infrastructure. The site hosts the Exadata infrastructure that runs the Oracle databases. The Oracle Database@AWS service brings the power and performance of Oracle Exadata infrastructure, managed by Oracle Cloud Infrastructure, directly into an AWS data center.
03:57 Nikita: So essentially, Oracle Database@AWS lets you run your mission-critical Oracle database workloads close to your AWS applications, while keeping management simple. Susan, what advantages does Oracle Database@AWS bring to the table? Susan: Oracle Database@AWS offers a powerful and flexible solution for running Oracle workloads natively within AWS. It streamlines the process of moving your existing Oracle databases to AWS, making migration faster and easier. You get direct, low-latency connectivity between your applications and Oracle databases, ensuring high performance for your mission-critical workloads. Billing, resource management, and operational tasks are unified, allowing you to manage everything through familiar tools with reduced complexity. And finally, Oracle Database@AWS is designed to integrate smoothly with your AWS environment's workloads, making it much easier to build, deploy, and scale your solutions. 05:15 Lois: You mentioned the OCI child site earlier. What part does it play in how Oracle Database@AWS works? Susan: The OCI child site gives you the capability to combine the physical proximity and resources of AWS with the logical management capability of Oracle Cloud Infrastructure. This integrated approach enables you to run and manage your Oracle databases seamlessly in your AWS environment while still leveraging the power of OCI, our Oracle Cloud Infrastructure. 06:03 Did you know that Oracle University offers free courses on Oracle Cloud Infrastructure for subscribers? Whether you're interested in multicloud, databases, networking, security, AI, or machine learning, there's something for everyone. So, what are you waiting for? Pick your topic and get started by visiting mylearn.oracle.com. 06:29 Nikita: Welcome back! Susan, I'm curious about the Exadata infrastructure inside AWS. What does that setup look like?
Susan: The Exadata infrastructure consists of physical database servers as well as storage servers. The database and storage servers are interconnected using a high-speed, low-latency network fabric, ensuring optimal performance and reliable data transfer. Each database server runs one or more virtual machines, or VMs as we refer to them, providing flexible compute resources for different workloads. You can create and manage your VM clusters in this infrastructure using various methods: the AWS console, the Command-Line Interface (CLI), or the Application Programming Interface (API), giving you several options for automating and integrating with your existing tools. When you're creating your Exadata infrastructure, there are a few things you need to define and set up: the total number of database servers, the total number of storage servers, the model of your Exadata system, and the availability zone where all these resources will be deployed. This architecture delivers high performance, resiliency, and flexible management capability for running your Oracle databases on AWS. 08:18 Lois: Susan, can you explain the network architecture for Oracle Database deployments on AWS? Susan: The ODB network is an isolated network within AWS that is designed specifically for Exadata deployments. It includes both the client and the backup subnets, which are essential for secure and efficient database operations. When you create your Exadata infrastructure, you need to specify the ODB network, as you need the connectivity. This network is mapped directly to the corresponding network in the OCI child site, enabling seamless communication between AWS and Oracle Cloud Infrastructure. The ODB network requires two separate CIDR ranges.
And in addition, the client subnet is used for the Exadata VM clusters, providing connectivity for database operations. There is also another subnet, the backup subnet, which is used to manage database backups of those VM clusters, ensuring not only data protection but also data recovery. Within your AWS region and availability zone, the ODB network contains these dedicated client and backup subnets. It isolates the Exadata traffic for both day-to-day access (the client subnet) and backup operations (the backup subnet). This network design supports secure, high-performance connectivity and reliable backup management for the Oracle Database deployments running on AWS. 10:23 Nikita: Since we're on the topic of networking, can you tell us about ODB peering within the Oracle Database architecture? Susan: ODB peering establishes a secure private connection between your AWS Virtual Private Cloud, your VPC, and the ODB network that contains your Exadata infrastructure. This connection makes it possible for application servers running in your VPC, such as your Amazon EC2 instances, to access the Oracle databases hosted on Exadata within your ODB network. You specify the ODB network when you set up your Exadata infrastructure. This network includes dedicated client and backup subnets for efficient and secure connectivity. If you wish to enable multiple VPCs to connect to the same ODB network and access Oracle Database@AWS resources, you can leverage AWS Transit Gateway or AWS Cloud WAN for scalable and centralized connectivity. The Virtual Private Cloud contains your application servers and is securely peered with the ODB network, creating a seamless, high-performance path for your applications to interact with your Oracle Database.
ODB peering simplifies the connectivity between AWS application environments and the Oracle Exadata infrastructure, thus supporting flexible, high-performance, and secure database access. 12:23 Lois: Now, before we close, can you compare two key databases that are available with Oracle Database@AWS: Oracle Exadata Database Service and Oracle Autonomous Database? Susan: The Exadata Database Service offers a fully managed and dedicated infrastructure, with operational monitoring handled by you, the customer. In contrast, the Autonomous Database is fully managed by Oracle, which takes care of all the operational monitoring. Exadata provides very high scalability, though resources such as disk and compute must be sized manually, whereas the Autonomous Database offers high scalability through automatic elastic scaling. When we speak of performance, both services deliver strong results: Exadata offers ultra-low latency and Exadata-level performance, while the Autonomous Database delivers optimal performance with automation. Both services provide high migration capability: Exadata offers full compatibility, and the Autonomous Database includes a robust set of migration tools. When it comes to management, Exadata requires manual management and administration, and that's really there to give you the ability to customize it in the manner you desire, making it meet your very specific business needs, especially your database needs. In contrast, the Autonomous Database is fully managed by Oracle, including automated administration tasks and self-tuning features that further reduce management overhead. When we speak of feature sets, Exadata delivers the full suite of Oracle features, including Oracle Real Application Clusters (RAC), whereas the Autonomous Database offers a complete feature set that is specifically designed for optimized autonomous operations.
Finally, when we speak of integration, both of these services integrate seamlessly with AWS services such as EC2, your network (the VPC), your policies (Identity and Access Management, IAM), monitoring (CloudWatch), and of course your storage (S3), ensuring a consistent experience within your AWS ecosystem. 15:21 Nikita: So, you could say that the Exadata Database Service is better for customers who want dedicated infrastructure with granular control, while the Autonomous Database is built for customers who want a fully automated experience. Thank you, Susan, for taking the time to talk to us about Oracle Database@AWS. Lois: That's all we have for today. If you want to learn more about the topics we discussed, head over to mylearn.oracle.com and search for the Oracle Database@AWS Architect Professional course. In our next episode, we'll find out how to get started with the Oracle Database@AWS service. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 16:06 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Interview with Patrick Cruickshank, Director & CEO of Nine Mile Metals
Our previous interview: https://www.cruxinvestor.com/posts/nine-mile-metals-csenine-vms-drilling-unlocks-expansion-potential-5410
Recording date: 5th February 2026
Nine Mile Metals (CSE: NINE) is emerging as a significant explorer in New Brunswick's Bathurst Mining Camp, controlling 140 square kilometers of contiguous land across four volcanogenic massive sulfide (VMS) projects. Following a transformative year of advanced geophysical work, CEO Patrick Cruickshank has outlined an aggressive 2026 exploration strategy backed by a recently completed $5.5 million financing providing two years of capital.
The company's flagship Nine Mile Brook project hosts what Cruickshank describes as the highest-grade lens ever recorded in Bathurst: 12% copper, 38% lead-zinc, 1,200 grams per ton silver, and 2.4 grams per ton gold over 15 meters of solid mineralization. While these exceptional grades present processing challenges (the hybrid nature requires specialized milling solutions), the company is developing a unique bulk sample processing approach, with updates expected shortly.
The historic Wedge mine represents a different opportunity. This past-producing deposit, operated by Cominco in the 1950s-60s, was abandoned when its crown pillar collapsed, leaving two-thirds of known mineralization in place. Recent drilling discovered an eastern extension with intercepts up to 134 meters, while a just-completed program tested depth and southwestern extensions, with assays pending. Management targets proving 7-10 million tons at Wedge to reach economic thresholds.
Nine Mile Metals' competitive edge lies in the systematic application of modern technology. The company flew 1,400 line kilometers of UAV drone geophysics, identifying 11 new mineralized trends.
This approach delivered an 80% drill success rate at California Lake East, hitting mineralization in eight of ten holes. With 185 million shares outstanding, reduced overhead ($300,000 in annual savings from an office relocation), and drilling costs of just $85-100 per meter in a jurisdiction offering 30-day permitting, the company is positioned to execute multiple 2026 catalysts: pending Wedge assays, April drilling resumption at both Nine Mile Brook and Wedge, and government-funded California Lake exploration, all while maintaining optionality for strategic partnerships or sector consolidation.
View Nine Mile Metals' company profile: https://www.cruxinvestor.com/companies/nine-mile-metals
Sign up for Crux Investor: https://cruxinvestor.com
Malicious code is making its way into VS Code extensions this week, as two China-based AI coding assistants are identified as capturing every file on a user's computer and sending it to servers in China without their knowledge or consent. Please just be cautious about what you're installing on your machines, folks. In related news, the Deno team has introduced Deno sandboxes to create and deploy secure, isolated VMs in the cloud. Strict permissions, network policies, directories, and isolated secrets make these sandboxes great for AI agents, or any other dynamic workload where speed and security are paramount. And the software going viral this week is OpenClaw (aka Clawdbot, aka Moltbot), an open source, autonomous AI agent that runs locally on a user's machine. OpenClaw can connect to LLMs and perform tasks like managing emails, scheduling, and reorganizing local files, among other daily tasks, and is designed to be proactive rather than just reacting to prompts. It's truly the Wild West to give an AI agent access to read all the files on a machine or respond to emails on its own, so be careful out there, folks.
Timestamps:
1:07 - VS Code AI plugins are stealing data
10:47 - Deno sandboxes
16:09 - OpenClaw
43:41 - More Gemini features are coming to Chrome
45:33 - Apple containers
46:44 - What's making us happy
News:
Paige - VS Code AI plugins are silently stealing all the data on users' computers
Jack - Deno sandboxes
TJ - OpenClaw (aka Clawdbot aka Moltbot) and our lack of trust for AI agents
Lightning News:
More Gemini features are coming to Chrome
Apple Containers
What Makes Us Happy this Week:
Paige - Claude Code
Jack - Sneakers movie
TJ - Firefox browser
Thanks as always to our sponsor, the Blue Collar Coder channel on YouTube.
You can join us in our Discord channel, explore our website and reach us via email, or talk to us on X, Bluesky, or YouTube.
Front-end Fire website
Blue Collar Coder on YouTube
Blue Collar Coder on Discord
Reach out via email
Tweet at us on X @front_end_fire
Follow us on Bluesky @front-end-fire.com
Subscribe to our YouTube channel @Front-EndFirePodcast
Join Exploring Mining as we dive into Arizona Eagle Mining's exciting revival of the historic McCabe Mine in mining-friendly Yavapai County, Arizona. Host Cali Van Zant speaks with Kevin Reid, President and CEO of Arizona Eagle Mining. Featuring a past-producing high-grade gold-silver deposit with a historic resource of over 800,000 ounces of gold at a high grade of 11.7 g/t, plus significant silver credits. The Eagle Claims cover a broad area that includes 205 acres of patented land holdings and 3,350 acres of BLM ground available for mineral exploration. The Company's principal asset is the past-producing high-grade gold McCabe mine, which is located on 87 acres of patented land that includes water rights. With a 4,500m inaugural drill program now underway to test extensions of the high-grade McCabe structure, plus VMS potential and strong funding, discover why this could be the next major gold exploration breakthrough. Arizona Eagle Mining ("AEM") owns 100% of the Eagle Project in Yavapai County. The Eagle Project is made up of 225 acres of patented (private) land and 3,350 acres of unpatented Bureau of Land Management (BLM) claims. At the centre of the Eagle Project is the past-producing McCabe Mine, located on patented land, which has a historic resource estimate of 878,000 ounces of gold at a grade of 11.7 g/t, as well as 5 million ounces of silver at a grade of 69 g/t*. The McCabe deposit was last in production in 1987 and is open for expansion at depth and on strike. Surface sampling and helicopter VTEM completed by AEM have identified at least 12 parallel mineralized structures which have not been drill tested.
https://www.arizonaeaglemining.com/
About Investorideas.com - Big Investing Ideas
Investorideas.com is the go-to platform for big investing ideas.
From breaking stock news to top-rated investing podcasts, we cover it all. Disclaimer/Disclosure: This podcast and article featuring Arizona Eagle Mining is paid-for content as part of a monthly featured mining stock service (payment disclosure). Our site does not make recommendations for purchases or sale of stocks, services or products. Nothing on our sites should be construed as an offer or solicitation to buy or sell products or securities. All investing involves risk and possible losses. This is not investment opinion. This site is currently compensated for news publication and distribution, social media and marketing, content creation and more. Disclosure is posted for each compensated news release and for content published/created if required; otherwise the news was not compensated for and was published for the sole interest of our readers and followers. Contact management and IR of each company directly regarding specific questions. More disclaimer info: https://www.investorideas.com/About/Disclaimer.asp
Learn more about publishing your news release and our other news services on the Investorideas.com newswire: https://www.investorideas.com/News-Upload/
Global investors must adhere to regulations of each country. Please read Investorideas.com privacy policy: https://www.investorideas.com/About/Private_Policy.asp
Follow us on X @investorideas @Exploringmining
What's the next big stock? On this episode of Market Sense, we're sitting down with a Fidelity portfolio manager who specializes in spotting opportunities in mid‑cap companies—the potential “sweet spot” of the market. These businesses have often moved beyond the risky startup stage but still have plenty of room to grow, offering a blend of stability and potential upside. We'll dig into the trends shaping this space, from commercial aerospace and the evolving defense sector to the companies powering the AI build‑out. And of course, we'll bring you all the latest market headlines—all in just 20 minutes. If you want more insights from our Fidelity portfolio managers, bookmark this page for the latest articles and videos regarding potential investing opportunities and risks. Read the full transcript View the slides Watch the video replay
On this episode of Market Sense, a veteran Fidelity bond manager explores how bond investors might navigate periods of geopolitical tension and uncertainty, how to use bonds for diversification in the current environment, and what changes at the Fed may mean for the outlook for interest rates. Plus, Fidelity's Director of Global Macro breaks down what the latest headlines might mean for your portfolio. Want to research rates? Head here to look up CUSIP numbers, bond yields, and CDs. Read the full transcript Watch the video replay
GoldQuest Mining Corp. (CVE: GQC) CEO Luis Santana joins Mining Stock Daily to provide a comprehensive update on the Romero gold-copper project in the Dominican Republic. Santana outlines the path to full permitting and a bankable feasibility study in 2026, plans for construction later this decade, and the ongoing drilling aimed at expanding the high-grade VMS deposit. He also discusses Romero's responsible underground mine design, community engagement strategy, infrastructure build-out, and how higher gold and copper prices are reshaping project economics.
Lords: * Cort * Ben Topics: * Oh man, remember the Geek Code? * https://en.wikipedia.org/wiki/Geek_Code * I discovered a fun fact about my neighborhood * Every 5x6 Nonogram * https://puzzarium.com/every-5x6-nonogram * The Calf Path by Sam Foss * https://poets.org/poem/calf-path * Philip Rivers Microtopics: * Secret Playstation Things. * Transposing a matrix in SIMD. * Reclaiming your plug. * Trying and failing to use Libby. * Failing to check out library books from your toilet. * State-sanctioned piracy. * The restaurant where you had the dinner for your wedding. * The "my wife" voice. * Suddenly having nieces. * Hadad and Barney on Black Square Day. * The Talos Principle DLC. * The Geek Code Era. * VMS or OS/2 as a defining feature of your personality. * Dilbert, Perl, Doom, and X-Files: all equally culturally relevant to this day. * The Natural Bears Classification System for Gay Men. * The Human Code. * Your favorite grocery store's end-of-year recap. * The story of a Venezuelan woman between age 30 and 55, and her top 2000 interests. * Your favorite vowel in text messages you've sent this year. * What they call that AE vowel. * Speedrunning your entire email history in a weekend. * The Miracle Mile, where all the tar pits are. * Noted monster Sean "Diddy" Combs. * Two hip hop producers getting mad at each other. * Putting a plaque at your hotel explaining that Jim Morrison did not die at this hotel. * Being the plaque you want to see in the world. * Whether the Museum of Jurassic Technology has reopened after the fire. * A very earnest museum about a history that never was. * Wandering around dazed after every 5x5 nonogram is solved. * Doing your part to serve humanity by solving nonograms. * Do you remember where you were when Every 5x5 Nonogram Section 303 was finished? * Why a dippy bird can't keep you online. * Gesturing at the idea of collectively solving a problem. * Why we haven't heard from Peter Molyneux in a while.
* What three digit numbers nonogram solvers think are interesting. * Sending 96 million solved nonograms into space as proof to alien life that humans are still capable of collective action. * Feeling bad about having installed an ad blocker and loading up a four hour block of ads from 80s TV. * Being prosecuted for use of ad blockers. * Look at that smirk. That's a man who knows he's preaching but getting away with it. * Getting a deck of flash cards to learn all of the pentameters. * Three iambs and a reverse iamb. * Your favorite good poems and your favorite shitpost poems. * Tony Gang Flame War. * Refusing to tackle the 40 year old quarterback after his wallet falls out on the field and the photos of his ten children unfold. * The median age of football players rising into retirement age as teenagers learn about the health risks and refuse to participate. * Using NBA 2K as a metaphor for the decline of civilization. * LA finally getting a football team again. (They have two now.) * Whether they're still playing Starcraft. * Broken 19 year olds who can't play Starcraft any more because their APMs are too low. * The experience of attending a live e-Sports event. * Whether they sell hot dogs and beer to the crowd at the live League of Legends event or if it's all GamerGrub and Feastables. * A Youtube shitpost made in Garry's Mod. * Channing Tatum playing the toilet in the Skibidi Toilet movie. * Al Gore: still alive? * Whether Wilford Brimley got plastic surgery to look that old.
If you like what you hear, please subscribe, leave us a review and tell a friend!
We're tugging on your heartstrings this week as Wells introduces us to Maya, the very cute foster pup who may have found a permanent home in the Adams family… just maybe not at Wells' house. More on that. Brandi joins with freshly washed hair — a rare and celebrated occasion — and opens up about fine-hair problems, greasy recording days, and the humbling return of Pilates. The engagement glow continues too, with a very valid question on the table: why don't men get engagement rings? Hit us up in the DMs and VMs with your hot takes. Wells recaps a chaotic week that included a trip to Phoenix to watch Ole Miss lose (Hotty Toddy forever), and then it's all about Traitors, baby — because this season is FIRE. Your hosts do a full cast vibe check, from loving Johnny Weir and Lisa Rinna to embracing Michael Rapaport's chaos and pitching a Yam Yam / Rapaport spinoff we'd absolutely watch. Fave things round out the episode with rom-com love, vacation movies that hit just right, standout reads like Ready Player One, and TV recs including The Night Manager and 11.22.63. It's good to be back! Thanks to our awesome sponsors for supporting this episode! Quince: Treat your closet to a little summer glow-up with Quince. Go to Quince.com/yft for free shipping on your order and 365 day returns. Pique Life: Secure 20% off your order and begin your intentional wellness journey today at Piquelife.com/yft. BetterHelp: BetterHelp makes it easy to get matched online with a qualified therapist. Sign up and get 10% off at BetterHelp.com/yft. Don't forget to rate, review, and follow Your Favorite Podcast! Plus, keep up with us between episodes on our Instagram pages, @yftpodcast @wellsadams and @brandicyrus and be sure to leave us a voicemail with your fave things at 858-630-1856! This podcast is brought to you by Podcast Nation. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
An airhacks.fm conversation with Alvaro Hernandez (@ahachete) about: discussion about LLMs generating Java code with BCE patterns and architectural rules, Java being 20-30% better for LLM code generation than Python and TypeScript, embedding business knowledge in Java source code for LLM context, StackGres as a curated, opinionated stack for running Postgres on Kubernetes, Postgres requiring external tools for connection pooling and high availability and backup and monitoring, StackGres as a Helm package and Kubernetes operator, comparison with Oxide hardware for on-premise cloud environments, experimenting with Incus for system containers and VMs, limitations of Ansible for infrastructure automation and code reuse, Kubernetes as an API-driven architecture abstracting compute and storage, Custom Resource Definitions (CRDs) for declarative Postgres cluster management, StackGres supporting sharding with automated multi-cluster deployment, 13 lines of YAML to create 60-node sharded clusters, three interfaces for StackGres including CRDs and web console and REST API, operator written in Java with Quarkus unlike typical Go-based operators, Google study showing Java faster than Go, GraalVM native compilation for 80MB container images versus 400-500MB JVM images, fabric8 Kubernetes client for API communication, reconciliation cycle running every 10 seconds to maintain desired state, pod local controller as Quarkus sidecar for local Postgres operations, dynamic extension installation without rebuilding container images, gRPC bi-directional communication between control plane and control nodes, inverse connection pattern where nodes initiate connections to control plane, comparison with Jini and JavaSpaces leasing concepts from Sun Microsystems, quarter million lines of Java code in the operator mostly POJOs predating records, PostgreSQL configuration validation with 300+ parameters, automated tuning applied by default in StackGres, potential for LLM-driven optimization with clone clusters for testing, Framework Computer laptop automation with Ubuntu auto-install and Ansible and Nix, five to ten minute full system reinstall including BIOS updates. Alvaro Hernandez on Twitter: @ahachete
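The reconciliation cycle mentioned in the episode, where an operator periodically compares the desired state declared in CRDs against the observed cluster state, can be sketched in miniature. This is an illustrative Python sketch of the general Kubernetes operator pattern, not StackGres's actual code (the real operator is Java/Quarkus and uses the fabric8 client); the resource names and replica fields here are hypothetical.

```python
# Minimal sketch of a Kubernetes-style reconciliation loop (illustrative only).
# 'desired' stands in for a CRD spec; 'observed' for the live cluster state.

def reconcile(desired: dict, observed: dict) -> list:
    """Return the actions needed to drive observed state toward desired state."""
    actions = []
    for name, spec in desired.items():
        have = observed.get(name, {}).get("replicas", 0)
        want = spec["replicas"]
        if have < want:
            actions.append(f"scale-up {name}: {have} -> {want}")
        elif have > want:
            actions.append(f"scale-down {name}: {have} -> {want}")
    for name in observed:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions

# Hypothetical shard cluster spec vs. what is actually running:
desired = {"pg-shard-0": {"replicas": 3}, "pg-shard-1": {"replicas": 3}}
observed = {"pg-shard-0": {"replicas": 2}, "orphan": {"replicas": 1}}
print(reconcile(desired, observed))
# In a real operator this function would run on a timer (e.g. every 10
# seconds) and the actions would be API calls, not strings.
```

The appeal of the pattern is that the loop is idempotent: once observed matches desired, it returns no actions, so running it every few seconds is safe.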
On this episode of Market Sense, Fidelity leaders discuss why diversification is especially important in today's environment of geopolitical risks and concentrated exposure to tech and AI stocks. They highlight practical steps like rebalancing, adding international exposure, and considering bonds and alternatives to potentially reduce risk. Plus, all your latest market headlines in 20 minutes. We'd love to hear what you think! Take a minute to fill out our short feedback survey to help us make Market Sense even more valuable to you. Read the full transcript View the slides Watch the video replay
Adam Bell and Peter Nikolaidis The Blurring The Lines Podcast In this episode of Blurring the Lines, Adam Bell and Peter Nikolaidis kick things off with New Year's gym crowds and the realities of accountability-driven fitness before drifting—naturally—into a wide-ranging conversation on modern work and personal tech. From the evolution of cloud-based document storage and the frustrations of local file sync, to Windows 365, cloud VMs, and why Microsoft still struggles with documentation, the hosts unpack what actually works versus what just sounds good on paper. Along the way, they compare cloud desktops to local virtualization, debate frugal tech decisions, and wrap things up with home sauna experiments, cold plunges, wearable heart-rate sensors, vision correction tech, and the little tools that make everyday life more livable.
Links:
Upgraded Version Portable Sauna for Home Full Body Personal Sauna Steam Sauna Tent - https://amzn.to/49phLl6
Polar Verity Sense Optical Heart Rate Sensors - https://amzn.to/49vLzfV
YouTube - https://youtu.be/-R3OhsWJp-Y
Learn More About the Hosts:
PeterNikolaidis.com
https://YogaWithPeter.com
https://friendswithbrews.com
Adam Bell
Roaming Roan Lavender Farm
https://rrlavenderfarm.com
Discover how shifting risks can unlock new opportunities. Join the Market Sense team and Fidelity leaders for a practical playbook to help inform your investing strategy in the new year. Bookmark the Fidelity Viewpoints 2026 Investing Outlook page for the latest on market headlines, risks, opportunities, and ways to potentially safeguard your retirement. Read the full transcript View the slides Watch the video replay
In this episode, John and Dave talk about discovering ZimaOS and building an affordable home lab using repurposed hardware - and how that single setup quickly expands into media streaming, network-wide ad blocking, ROM organization, browser-based retro gaming, backups, and local services. Along the way, they also touch on the Commodore 64 Ultimate, a Sega Channel preservation project, a retro electronics repair simulator, and why digital preservation and self-hosting matter more than ever.
Topics & sub-topics discussed:
ZimaOS - app-based operating system that makes home labs approachable
https://www.zimaspace.com/zimaos
Home lab fundamentals - NAS, backups, ad blocking, VMs, and always-on servers
ROMM - organizing ROM libraries with metadata, artwork, and downloads
https://github.com/rommapp/romm
EmulatorJS - play retro games directly in your web browser
https://emulatorjs.org
Jellyfin - self-hosted media streaming without subscriptions
https://jellyfin.org
The 8-Bit Files Podcast & Discord
https://the8bitfiles.com
On Healthy Mind, Healthy Life, host Sayan sits down with Eric D. Coonrod, a boutique investment banker, to unpack the entrepreneurial mindset behind scaling a company and selling it without burning out. This episode gets real about why exits can feel like an identity shock, how deal stress quietly tanks decision quality, and why psychological readiness should be treated like a core KPI alongside valuation. Eric breaks down exit paths like full sale vs majority recap, then shares practical ways founders can protect sleep, health, and relationships while navigating diligence, negotiation, and team communication. About the Guest: Eric D. Coonrod is a 20-year investment banking veteran involved in 100+ transactions across healthcare, food and beverage, VMS, and sports. He is the founder of E. Coonrod and Company, LLC, advising founders on deals, debt, and capital raises with a full-stack approach that includes tax, communications, and personal health awareness. Key Takeaways: Selling a business can hit harder emotionally than founders expect because the company often becomes their identity. A big exit check does not automatically fix burnout. It can even amplify it if you have no plan post-deal. Prep for the exit should include psychological readiness, not just financial readiness and deal structure. Consider exit options early. A majority recap can reduce admin burden while keeping you close to the work you actually love. Neglecting self-care during a sale increases poor decisions. Deal fatigue is real and it kills outcomes. The sale process adds major weekly workload on top of running operations. Protect your health to protect judgment. Keep a non-negotiable wellness anchor. Training, walking, or family time keeps mental clarity stable under pressure. Employees fear layoffs and culture shifts. Timing and messaging matter, so disclose only when the deal is mature. Coach the transition narrative with the buyer. 
Clear reassurance reduces churn and stabilizes performance pre-close. Post-exit peace requires an actual plan. Define what you will do in the first 6 months and what comes after. How to Connect with the Guest: Website: http://www.ecoonrodco.com/ Email: https://www.podmatch.com/hostdetailpreview/avik Disclaimer: This video is for educational and informational purposes only. The views expressed are the personal opinions of the guest and do not reflect the views of the host or Healthy Mind By Avik™️. We do not intend to harm, defame, or discredit any person, organization, brand, product, country, or profession mentioned. All third-party media used remain the property of their respective owners and are used under fair use for informational purposes. By watching, you acknowledge and accept this disclaimer. Healthy Mind By Avik™️ is a global platform redefining mental health as a necessity, not a luxury. Born during the pandemic, it's become a sanctuary for healing, growth, and mindful living. Hosted by Avik Chakraborty. Storyteller, survivor, wellness advocate. This channel shares powerful podcasts and soul-nurturing conversations on: • Mental Health & Emotional Well-being • Mindfulness & Spiritual Growth • Holistic Healing & Conscious Living • Trauma Recovery & Self-Empowerment With over 4,400+ episodes and 168.4K+ global listeners, join us as we unite voices, break stigma, and build a world where every story matters. Subscribe and be part of this healing journey. Contact Brand: Healthy Mind By Avik™ Email: www.healthymindbyavik.com Based in: India & USA Open to collaborations, guest appearances, coaching, and strategic partnerships. Let's connect to create a ripple effect of positivity. 
CHECK PODCAST SHOWS & BE A GUEST: Listen our 17 Podcast Shows Here: https://www.podbean.com/podcast-network/healthymindbyavik Be a guest on our other shows: https://www.healthymindbyavik.com/beaguest Video Testimonial: https://www.healthymindbyavik.com/testimonials Join Our Guest & Listener Community: https://nas.io/healthymind Subscribe To Newsletter: https://healthymindbyavik.substack.com/ OUR SERVICES Business Podcast Management. https://ourofferings.healthymindbyavik.com/corporatepodcasting/ Individual Podcast Management. https://ourofferings.healthymindbyavik.com/Podcasting/ Share Your Story With World. https://ourofferings.healthymindbyavik.com/shareyourstory STAY TUNED AND FOLLOW US! Medium. https://medium.com/@contentbyavik YouTube. https://www.youtube.com/@healthymindbyavik Instagram. https://www.instagram.com/healthyminds.pod/ Facebook. https://www.facebook.com/podcast.healthymind Linkedin Page. https://www.linkedin.com/company/healthymindbyavik LinkedIn. https://www.linkedin.com/in/avikchakrabortypodcaster/ Twitter. https://twitter.com/podhealthclub Pinterest. https://www.pinterest.com/Avikpodhealth/ SHARE YOUR REVIEW Share your Google Review. https://www.podpage.com/bizblend/reviews/new/ Share a video Testimonial and it will be displayed on our website. https://famewall.healthymindbyavik.com/ Because every story matters and yours could be the one that lights the way! 
#podmatch #healthymind #healthymindbyavik #wellness #HealthyMindByAvik #MentalHealthAwareness #comedypodcast #truecrimepodcast #historypodcast, #startupspodcast #podcasthost #podcasttips, #podcaststudio #podcastseries #podcastformentalhealth #podcastforentrepreneurs, #podcastformoms #femalepodcasters, #podcastcommunity #podcastgoals #podcastrecommendations #bestpodcast, #podcastlovers, #podcastersofinstagram #newpodcastalert #podcast #podcasting #podcastlife #podcasts #spotifypodcast #applepodcasts #podbean #podcastcommunity #podcastgoals #bestpodcast #podcastlovers #podcasthost #podcastseries #podcastforspeakers #StorytellingAsMedicine #PodcastLife #PersonalDevelopment #ConsciousLiving #GrowthMindset #MindfulnessMatters #VoicesOfUnity #InspirationDaily #podcast #podcasting #podcaster #podcastlife #podcastlove #podcastshow #podcastcommunity #newpodcast #podcastaddict #podcasthost #podcastepisode #podcastinglife #podrecommendation #wellnesspodcast #healthpodcast #mentalhealthpodcast #wellbeing #selfcare #mentalhealth #mindfulness #healthandwellness #wellnessjourney #mentalhealthmatters #mentalhealthawareness #healthandwellnesspodcast #fyp #foryou #foryoupage #viral #trending #tiktok #tiktokviral #explore #trendingvideo #youtube #motivation #inspiration #positivity #mindset #selflove #success
Most enterprises today run workloads across multiple IT infrastructures rather than a single platform, creating significant operational challenges. According to Nutanix CTO Deepak Goel, organizations face three major hurdles: managing operational complexity amid a shortage of cloud-native skills, migrating legacy virtual machine (VM) workloads to microservices-based cloud-native platforms, and running VM-based workloads alongside containerized applications. Many engineers have deep infrastructure experience but lack Kubernetes expertise, making the transition especially difficult and increasing the learning curve for IT administrators. To address these issues, organizations are turning to platform engineering and internal developer platforms that abstract infrastructure complexity and provide standardized “golden paths” for deployment. Integrated development environments (IDEs) further reduce friction by embedding capabilities like observability and security. Nutanix contributes through its hyperconverged platform, which unifies compute and storage while supporting both VMs and containers. At KubeCon North America, Nutanix announced version 2.0 of Nutanix Data Services for Kubernetes (NDK), adding advanced data protection, fault-tolerant replication, and enhanced security through a partnership with Canonical to deliver a hardened operating system for Kubernetes environments.
Learn more from The New Stack about operational complexity in cloud native environments:
Q&A: Nutanix CEO Rajiv Ramaswami on the Cloud Native Enterprise
Kubernetes Complexity Realigns Platform Engineering Strategy
Platform Engineering on the Brink: Breakthrough or Bust?
Join our community of newsletter subscribers to stay on top of the news and at the top of your game. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Wes and Scott talk about their evolving home-server setups—Synology rigs, Mac minis, Docker vs. VMs, media servers, backups, Cloudflare Tunnels, and the real-world pros and cons of running your own hardware. Show Notes 00:00 Welcome to Syntax! 01:35 Why use a home server? 07:29 Apps for home servers 16:23 Home server hardware 18:27 Brought to you by Sentry.io 20:45 VMs vs containers and choosing the right software 25:53 How to expose services to the internet safely 30:38 Securing access to your server Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads
Maged Harby, General Partner at VMS, joins Jeremy Au to share his journey from publishing to building one of the Middle East's earliest EdTech venture programs, explain how Egypt and Saudi Arabia differ as innovation ecosystems, and guide founders on how to enter the region with cultural fit and strong partnerships. They discuss how EdTech adoption accelerated during COVID, why parents still steer children toward traditional fields, and how Gen Z is shifting toward entrepreneurship. Their conversation explores the contrast between Egypt's talent depth and Saudi Arabia's purchasing power, the need for localization in pricing and UX, and why Middle Eastern markets must be treated as distinct rather than homogeneous. Maged also outlines what he hopes to see next in personalized learning and why teacher training remains the region's biggest unlock.
00:25 VMS: a corporate venture studio based in Saudi Arabia that provides several programs to help startups grow, such as the Bridge program, which supports startups that need to expand their business to Saudi Arabia.
03:00 Parents push traditional paths: Egypt's university admissions are rigid and most families still guide children toward engineering or medicine.
07:00 EdVentures built from zero: Maged grew EdVentures into a major EdTech incubator and accelerator with more than 90 graduated startups and 23 investments.
14:00 Gen Z shifts to entrepreneurship: Young people are increasingly drawn to building startups and solving real problems instead of following traditional job tracks.
16:00 Localization defines success: Middle Eastern markets differ in pricing, UX, language and regulation, which makes adaptation essential for expansion.
19:00 Competition varies by country: FinTech is saturated in Saudi Arabia while EdTech and health tech remain more open in Egypt and the UAE.
27:00 Teacher quality is the bottleneck: Universities must modernize teacher training so classrooms can match Gen Z and Gen Alpha digital habits. Watch, listen or read the full insight at https://www.bravesea.com/blog/maged-harby-middle-east-playbook Get transcripts, startup resources & community discussions at www.bravesea.com WhatsApp: https://whatsapp.com/channel/0029VakR55X6BIElUEvkN02e TikTok: https://www.tiktok.com/@jeremyau Instagram: https://www.instagram.com/jeremyauz Twitter: https://twitter.com/jeremyau LinkedIn: https://www.linkedin.com/company/bravesea English: Spotify | YouTube | Apple Podcasts Bahasa Indonesia: Spotify | YouTube | Apple Podcasts Chinese: Spotify | YouTube | Apple Podcasts Vietnamese: Spotify | YouTube | Apple Podcasts #MiddleEastTech #EdTechInnovation #SaudiArabiaStartups #EgyptEcosystem #GenZEntrepreneurs #LocalizationStrategy #VentureStudios #GCCExpansion #PersonalizedLearning #BRAVEpodcast
Hola friends! My week has very much been about trying to turn around pentest dropboxes as quickly as possible. In that adventure, I came across two time-saving discoveries:
Using a Proxmox LXC as a persistent remote access method
Writing a Proxmox post-deployment script that installs Splashtop on the Windows VM and resets the admin passwords on both VMs, all from the Proxmox SSH console without touching the console on either VM
If you feel some of this is better seen than said, we show this in more detail on this week's 7MinSec.club Tuesday TOOLSday broadcast.
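As an illustration of the kind of post-deployment automation described here, the sketch below builds `qm guest exec` command lines (Proxmox's CLI for running a command inside a VM via the QEMU guest agent). This is a hypothetical sketch, not the script from the episode: the VM IDs, installer path, usernames, and silent-install flag are made up for illustration, and a real script would execute these commands on the Proxmox host rather than just assembling them.

```python
# Hypothetical sketch of a Proxmox post-deployment helper. It only BUILDS
# 'qm guest exec' command lines (which run inside a VM via the QEMU guest
# agent); a real script would pass each one to subprocess.run() on the
# Proxmox host. VM IDs, paths, usernames, and flags here are illustrative.

def guest_exec(vmid, *command):
    """Assemble a 'qm guest exec' invocation for the given VM."""
    return ["qm", "guest", "exec", str(vmid), "--"] + list(command)

def reset_password_cmd(vmid, user, new_password):
    # Windows guest: 'net user'; a Linux guest would use 'chpasswd' instead.
    return guest_exec(vmid, "net", "user", user, new_password)

def install_splashtop_cmd(vmid, installer):
    # Assumes the installer was already copied into the guest; the silent
    # flag is a placeholder -- check the vendor's documented switches.
    return guest_exec(vmid, installer, "/s")

cmds = [
    install_splashtop_cmd(101, r"C:\staging\splashtop_installer.exe"),
    reset_password_cmd(101, "Administrator", "N3w-P@ss!"),
]
for c in cmds:
    print(" ".join(c))
```

Note that `qm guest exec` requires the QEMU guest agent to be installed and running inside the VM, which is what makes the no-console-needed workflow possible.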
Over the years I've had no shortage of licensing questions for SQL Server. At times it's felt a little crazy. Look at the licensing guide. Choose EE or SE and the number of cores. Then check if you're using VMs. Oh, and consider the cloud, and which cloud you're running a workload on. It's simple, right? It can seem confusing, and at times I've wished Microsoft would make it simpler. And perhaps even give us some add-ons, like some additional hardware capabilities (*cough* more RAM *cough*) in SE. Read the rest of SQL Server Licensing is Simple
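To make the "choose an edition and count your cores" step concrete, here is a toy sketch. The per-core prices are placeholders, not Microsoft's actual price list (check the current licensing guide for real figures); the one real rule illustrated is the minimum of four core licenses in the per-core model.

```python
# Toy per-core licensing calculator. PLACEHOLDER prices -- consult the
# current Microsoft SQL Server licensing guide for real figures. The
# 4-core-license minimum is a real rule in per-core licensing.

PLACEHOLDER_PRICE_PER_CORE = {"EE": 15000, "SE": 4000}  # hypothetical USD

def core_licenses_needed(cores, minimum=4):
    """Per-core licensing rounds up to a minimum number of core licenses."""
    return max(cores, minimum)

def license_cost(edition, cores):
    return core_licenses_needed(cores) * PLACEHOLDER_PRICE_PER_CORE[edition]

# A 2-core VM still needs 4 core licenses:
print(core_licenses_needed(2))      # 4
print(license_cost("SE", 8))        # 8 cores at the placeholder SE price
```

Even this toy version shows why the question gets complicated fast: the core count you license for a small VM is not the core count the VM has.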
professorjrod@gmail.com
One idea rewired computing: what if one machine could convincingly pretend to be many? We start with mainframes and time sharing, then move through hypervisors, virtual machines, and the moment virtualization became the backbone of the cloud. From there, the story accelerates into public, private, hybrid, and community cloud models, and the service stack that defines modern IT—Infrastructure as a Service, Platform as a Service, and Software as a Service—so you know exactly who manages what and why it matters. We break down how to build reliable VMs, choose the right network mode, and use snapshots, templates, and live migration to save time and prevent outages. Then we climb the stack to cloud storage options like object, block, and file; explore VDI with thin and zero clients for secure, centralized desktops; and map the virtual network layer with routers, firewalls, load balancers, and VPN gateways. Security threads through everything: shared responsibility, IAM, MFA, encryption, and redundancy. You'll learn practical troubleshooting for slow VMs, networking quirks, sync failures, and common cloud gotchas like quota limits and VPN interference. Containers and Kubernetes bring speed and resilience, turning monoliths into microservices that scale on demand. We close on the frontier—edge and fog computing reduce latency at the source, while serverless lets code run only when needed. By the end, you'll see how virtualization delivered flexibility, the cloud delivered scale, containers delivered agility, and serverless delivered automation. Subscribe, share with a teammate, and leave a quick review to help more tech learners tap into the roots and future of cloud computing.
Support the show
Art By Sarah/Desmond
Music by Joakim Karud
Little chacha Productions
Juan Rodriguez can be reached at:
TikTok @ProfessorJrod
ProfessorJRod@gmail.com
@Prof_JRod
Instagram ProfessorJRod
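The "who manages what" question in the service-stack discussion above can be made concrete with a small table. This is an illustrative Python sketch of the commonly taught responsibility split for IaaS, PaaS, and SaaS; the layer names are a simplification, and real provider contracts draw the exact line per service.

```python
# Illustrative responsibility split across cloud service models: who manages
# each layer, the provider or the customer. This mirrors the commonly taught
# IaaS/PaaS/SaaS breakdown; real contracts vary by provider and service.

LAYERS = ["hardware", "virtualization", "os", "runtime", "application", "data"]

CUSTOMER_MANAGED = {
    "IaaS": {"os", "runtime", "application", "data"},
    "PaaS": {"application", "data"},
    "SaaS": {"data"},  # even in SaaS, the customer still owns its data
}

def who_manages(model, layer):
    return "customer" if layer in CUSTOMER_MANAGED[model] else "provider"

for model in CUSTOMER_MANAGED:
    row = ", ".join(f"{layer}={who_manages(model, layer)}" for layer in LAYERS)
    print(f"{model}: {row}")
```

Reading the rows top to bottom shows the episode's point: as you move from IaaS to SaaS, responsibility shifts from customer to provider, layer by layer.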
Interview with Trey Wasser, CEO of Dryden Gold Corp.
Our previous interview: https://www.cruxinvestor.com/posts/dryden-gold-tsxvdry-centerra-backed-explorer-targets-district-scale-gold-in-ontario-8109
Recording date: 17th November 2025

Dryden Gold Corp (TSXV: DRY) has emerged as a compelling strategic acquisition target in Ontario's gold sector following successful execution of its 2025 exploration program and explicit endorsement from major mining company partners. The company controls 70,000 hectares in northwest Ontario hosting multiple high-grade gold discoveries across four distinct mineralization types, with fully funded drilling planned for 2026 under management explicitly targeting a Great Bear Resources-style exit.

The investment thesis centers on systematic district-scale exploration designed to attract strategic buyers rather than pursue standalone mine development. Recent drilling fundamentally reshaped the geological understanding at the Gold Rock target area, revealing nine interconnected high-grade structures within a 300-meter span—including intercepts of 300 grams per ton over 3.9 meters and 55 grams per ton over 3.5 meters—connected by continuous one-gram-per-ton mineralization. This discovery transformed what appeared to be isolated veins into an integrated system where lower-grade material provides economic continuity while high-grade shoots create exploration upside.

Strategic validation provides perhaps the most compelling near-term catalyst. Centerra Gold invested in 2024 and has explicitly directed management to continue district-scale exploration rather than focus exclusively on infill drilling at known high-grade zones. Alamos Gold maintains similar engagement, while additional confidentiality agreements with unnamed major and mid-tier mining companies indicate active corporate interest.
These sophisticated mining companies endorse the systematic approach because it generates the comprehensive geological understanding and high-quality data they require for acquisition decisions.

The technical team significantly de-risks execution. President Maura Kolb led the Red Lake mine exploration team for five years, managing 90 personnel and a $50 million annual budget while reducing finding costs from $500 to $50 per ounce. Her major-mine experience directly informs Dryden's exploration protocols, including oriented core drilling, 100% core assaying, and property-wide geochemical surveys—practices that distinguish systematic explorers from promotion-focused juniors. Kolb's team discovered the hanging wall structures specifically because they assayed all rock types rather than only visible quartz veins.

The property's geological diversity creates multiple value pathways. Beyond the Archean lode gold system at Gold Rock—which Kolb compares directly to Red Lake geology—the company has confirmed intrusive-related mineralization at Sherridon, granite diorite-hosted stockwork at Hyndman analogous to NexGold's 1.5-million-ounce Goliath Gold project, and VMS-style mineralization elsewhere. CEO Trey Wasser characterizes this as a "Timmins-like camp" where exceptional gold endowment manifests across multiple geological settings, creating optionality for project-specific joint ventures or staged transactions.

Infrastructure advantages reduce development risk and enhance acquisition appeal. Highway 502 provides direct access from Sherridon through Gold Rock to the town of Dryden, while the Trans-Canada Highway accesses Hyndman. Both regional projects have been clear-cut for logging, creating existing access roads.
The northwest Ontario location provides political stability, established mining regulations, available contractors and skilled labor, and proximity to operating mines including Red Lake—attributes that command premium valuations as mining companies reassess exposure to jurisdictions with increasing political risk.

Dryden enters 2026 fully funded from its August 2025 financing to complete 20,000-25,000 meters of drilling, with approximately 50% dedicated to Gold Rock expansion and the remainder advancing multiple district targets. At $4,000 gold, the company offers leveraged exposure to exploration success, a strategic transaction, or both, backed by partner validation and a systematic technical approach designed specifically for strategic buyer requirements.

View Dryden Gold's company profile: https://www.cruxinvestor.com/companies/dryden-gold
Sign up for Crux Investor: https://cruxinvestor.com
Marina Moore, a security researcher and co-chair of the Security and Compliance TAG at the CNCF, shares her concerns about the security vulnerabilities of containers. She explains where the issues originate, offers solutions, and discusses micro-VMs as an alternative to containers. She also highlights the risks associated with AI inference. Read a transcript of this interview: https://bit.ly/4qUCcyi Subscribe to the Software Architects' Newsletter for your monthly guide to the essential news and experience from industry peers on emerging patterns and technologies: https://www.infoq.com/software-architects-newsletter Upcoming Events: QCon San Francisco 2025 (November 17-21, 2025) Get practical inspiration and best practices on emerging software trends directly from senior software developers at early adopter companies. https://qconsf.com/ QCon AI New York 2025 (December 16-17, 2025) https://ai.qconferences.com/ QCon London 2026 (March 16-19, 2026) https://qconlondon.com/ The InfoQ Podcasts: Weekly inspiration to drive innovation and build great teams from senior software leaders. Listen to all our podcasts and read interview transcripts: - The InfoQ Podcast https://www.infoq.com/podcasts/ - Engineering Culture Podcast by InfoQ https://www.infoq.com/podcasts/#engineering_culture - Generally AI: https://www.infoq.com/generally-ai-podcast/ Follow InfoQ: - Mastodon: https://techhub.social/@infoq - X: https://x.com/InfoQ?from=@ - LinkedIn: https://www.linkedin.com/company/infoq/ - Facebook: https://www.facebook.com/InfoQdotcom# - Instagram: https://www.instagram.com/infoqdotcom/?hl=en - Youtube: https://www.youtube.com/infoq - Bluesky: https://bsky.app/profile/infoq.com Write for InfoQ: Learn and share the changes and innovations in professional software development. - Join a community of experts. - Increase your visibility. - Grow your career. https://www.infoq.com/write-for-infoq
Dive into KubeVirt with the vBrownBag crew and guest Eric Shanks!
Slide into our VMs kicks off with a punch! Steve got a face full of snow, Taryn got a face full of dirt, but this caller got Mötley Crüe tickets!? Tune in to hear an awesome hero story about bullying. Then, stay tuned for a fun trend involving pumpkins and chickens!
[Ep 4 of 4 and more to come] We were all left with burning questions after the first three episodes. In the final installment of the original ‘NO' series from 2017, KP and the team play VMs, read letters from the audience, and have a big ol' processing party with Samara Breger, former Heart producer, sex educator and, more recently, queer romance novelist. What do I tell my kids? What if my no DOES mean yes? How bad DOES he have to feel? How are racial dynamics similar? How do I not be depressed about humanity? This is an extended version of the original episode from 2017, including more questions and more grappling. SEND KP YOUR THOUGHTS in VM form for the update-isode!

DONATE to support the creation of a final episode telling the story of all that happened after this famed series aired; friendships rekindled, apologies made, mistakes repeated, lessons learned and unlearned and never learned, and will she ever learn. THANK YOU to all who have donated 100+ dollars! Look out for an email soon where you will receive unreleased writing and the story of what happened to KP and Jay after she made the series. Learn about your ad choices: dovetail.prx.org/ad-choices