Listener, what's the craziest thing you've found growing where it shouldn't...? Whilst you get to leaving your answers in the comments, Roddy is taking us all through some of the wildest things that grow where they shouldn't. Then it's time for another feather segment with our pals Green Feathers, where we go all shmancy and don our eiderdown jackets to learn about the extraordinary feathers (and prices) that these ducks are responsible for. Then it's our fight segment, where the feathers will literally fly as Roddy does battle with the terminator finch hell-bent on making us all have to deal with cherry stones - the Hawfinch. Then we atone for our wildlife-brawling sins by putting our minds to a question on thinking outside the box to develop a new conservation strategy... Get your digital window to the natural world here with Green Feathers - https://www.green-feathers.co.uk Need more geese? Our Patreon is up and running! Come join the flock for extra episodes - https://www.patreon.com/howmanygeese
La Taberna 1053 - A music programme that aims to share music of many genres. We travel through the sounds of Chucho Valdés, Paquito D'Rivera, Escalandrum, Carlos Miyares, Carlos Averhoff Jr, ATK Epop, Eider, Heitxi, Caudi Pedro, Vënkman, Sonido Vegetal and Marie Petit. Subscribe to our episodes on iVoox so you don't miss any. On A Lucana Radio, the radio that is read, seen and heard. Alfonso Puyod on the melodies, JV on production and Francho Martínez announcing. Every week on Alto Jalón Radio, Radio Sobrarbe, Radio Albada and Radio Monegros.
WISSEN SCHAFFT GELD - Aktien und Geldanlage ("Knowledge Creates Wealth - Stocks and Investing"). How markets and finance really work.
Today, the third part on the key insights from #1 bestselling author Tony Robbins and his book "Money - Die sieben einfachen Schritte zur finanziellen Freiheit" ("Money: Master the Game"). If you are interested in my two-day finance seminar (spring 2025) and/or would like more information, just send me a short e-mail at: krapp@abatus-beratung.com Enjoy listening, Your Matthias Krapp (transcript of this episode below) NEW!!! Here you can register for my mini-course free of charge and have a look. It's worth it: https://portal.abatus-beratung.com/geldanlage-kurs/
What do eiders eat, how do they feed, why is their poop so stinky, how do they taste, how many are there, and what are the most pressing conservation concerns affecting them? These and other questions are explored as Kate Martin and Dr. Sarah Gutowsky rejoin Dr. Mike Brasher to wrap up our common eider species profile. Also discussed is new research that is improving our knowledge of eider population trends and identifying important breeding and winter sites. New telemetry results are revealing fascinating insights about eider migration, and we learn of the important cultural and economic tie between common eiders and local communities, including why an eider down duvet could set you back $10,000! Listen now: www.ducks.org/DUPodcast. Send feedback: DUPodcast@ducks.org
Common eiders are the largest duck in the Northern Hemisphere, with some tipping the scales at nearly 6 pounds. They are also the most widely distributed and heavily harvested sea duck in the world. In North America alone, there are 4 subspecies of the common eider. On this episode, Dr. Sarah Gutowsky and Kate Martin join Dr. Mike Brasher for Part One of our in-depth discussion about this highly prized bird. This episode covers all the basics, including how to identify them, where they breed and winter, what their nests look like, and what we've learned from recent research about their ecology and unexpected shenanigans during the nesting season. Tune in for a wealth of information as we lay the foundation for even more discussions to come. Listen now: www.ducks.org/DUPodcast. Send feedback: DUPodcast@ducks.org
Gear Guy Gavin is fresh off a trip to St. Paul Island to hunt King Eider. For most of us, this would be considered a dream bucket-list hunt - even the pinnacle trip for many. Gavin relives the trip and gives us a look at what it really takes to make a trip for king eiders work. From the planning and how to get there to the actual hunting, he gives a great view of what that trip really involves. Thanks so much for listening, and be sure to subscribe and review!
New waterfowl film out now! Dream Job: Nick Johnson
Stay comfortable, dry and warm: First Lite (Code MWF20)
Go to OnX Hunt to be better prepared for your hunt: OnX
Learn more about better ammo: Migra Ammunitions
Weatherby Sorix: Weatherby
Support conservation: DU (Code: Flyways)
Stop saying "Huh?" with better hearing protection: Soundgear
Real American Light Beer: Outlaw Beer
Live Free: Turtlebox
Add motion to your spread: Flashback
Better Merch: /SHOP
Season 4, Episode 36 - We've shared over the years all the great places and features across the region that make the Poconos so special, but one thing we take major pride in is our people: the folks on the frontlines of the hospitality industry who welcome, serve, and give of themselves day in, day out. One employee in particular stood out in 2024 more than all the others: Eider Prados at Camelback Resort. We featured Prados in our January Pocono Mountains Magazine after he took home the Hospitality Worker of the Year Award! The Poconos is a year-round destination for millions, and with 2,400 square miles of mountains, forests, lakes and rivers, plus historic downtowns and iconic family resorts, it's the perfect getaway for a weekend or an entire week. You can always find out more on PoconoMountains.com or watch Pocono Television Network streaming live 24/7.
The podcast of the radio programme La Taberna. This week we're visited by Atk Epop and Valeria, Chucho Valdés, Villanueva, Tcheka, Eider, STR, Vënkman, Ruge Boreal, Cosmética, CapiNàs, Julio Cable, El Mismo. La Taberna is a music programme that aims to share roots music. Every week on Alto Jalón Radio, Radio Sobrarbe, Radio Monegros and Radio Albada. Subscribe to our episodes on iVoox so you don't miss any. On A Lucana Radio, the radio that is read, seen and heard. Alfonso Puyod on the melodies, JV on production and Francho Martínez announcing.
Labrador Morning from CBC Radio Nfld. and Labrador (Highlights)
For two unlucky eider ducks, Point Amour was "le Point de la Mort." Labrador Morning's Stephen Roberts learned how two ducks crashed through the lantern at the Point Amour Lighthouse recently. We hear that tale.
In this third season, the Kinka programme will have a Bonus Track. It will complement the main edition published every two weeks: all interviewees will answer the same question, explaining how they imagine the future from the perspective of their work, practice or research. In this one, Ibone Ametzaga, a member of the UNESCO Chair, and Eider Gotxi, a member of the Guggenheim Urdaibai Stop platform, answer that question.
The Deputy General of Gipuzkoa acknowledged on the Faktoria news programme that the stance taken by the nationalist coalition has angered her: "the position they have taken is not fair, neither to the Provincial Council nor to the people of Gipuzkoa"...
The spokesperson for the Elkarrekin-Gipuzkoa group in the provincial assembly made it clear that in Gipuzkoa it is up to the EAJ-PSE provincial government to decide with whom it wants to agree the budget since, as they have been informed, "nothing has been signed and, that being the case, the game is still open"...
We have announced our first speaker, friend of the show Dylan Patel, and topic slates for Latent Space LIVE! at NeurIPS. Sign up for IRL/Livestream and to debate!

We are still taking questions for our next big recap episode! Submit questions and messages on Speakpipe here for a chance to appear on the show!

The vibe shift we observed in July - in favor of Claude 3.5 Sonnet, first introduced in June - has been remarkably long-lived and persistent, surviving multiple subsequent updates of 4o, o1 and Gemini versions. Anthropic's Claude ended 2024 as the preferred model for AI Engineers, and even the exclusive choice for new code agents like bolt.new (our next guest on the pod!), which unlocked so much performance from Claude Sonnet that it went from $0 to $4m ARR in 4 weeks when it launched last month.

Anthropic has now raised an additional $4b from Amazon and made an incredibly well-received update of Claude 3.5 Sonnet (and Haiku), with significant improvements in performance over its predecessors:

Solving SWE-Bench

As part of the October Sonnet release, Anthropic teased a blink-and-you'll-miss-it result:

The updated Claude 3.5 Sonnet shows wide-ranging improvements on industry benchmarks, with particularly strong gains in agentic coding and tool use tasks. On coding, it improves performance on SWE-bench Verified from 33.4% to 49.0%, scoring higher than all publicly available models—including reasoning models like OpenAI o1-preview and specialized systems designed for agentic coding. It also improves performance on TAU-bench, an agentic tool use task, from 62.6% to 69.2% in the retail domain, and from 36.0% to 46.0% in the more challenging airline domain. The new Claude 3.5 Sonnet offers these advancements at the same price and speed as its predecessor.

This was followed a week later by a blogpost from today's guest, Erik Schluntz, the engineer who implemented and scored this SOTA result using a simple, non-overengineered version of the SWE-Agent framework (you can see the submissions here). We have previously covered the SWE-Bench story extensively:

* Speaking with SWE-Bench/SWE-Agent authors at ICLR
* Speaking with Cosine Genie, the previous SOTA (43.8%) on SWE-Bench Verified (with a brief update at DevDay 2024)
* Speaking with Shunyu Yao on SWE-Bench and the ReAct paradigm driving SWE-Agent

One of the notable inclusions in this blogpost is the set of tools that Erik decided to give Claude, e.g. the "Edit Tool":

The tools teased in the SWE-Bench submission/blogpost were then polished up and released with Computer Use…

And you can also see even more computer use tools given in the new Model Context Protocol servers:

Claude Computer Use

Because it is one of the best-received AI releases of the year, we recommend watching the 2-minute Computer Use intro (and related demos) in its entirety:

Erik also worked on Claude's function calling, tool use, and computer use APIs, so we discuss that in the episode.

Erik [00:53:39]: With computer use, just give the thing a browser that's logged into what you want to integrate with, and it's going to work immediately. And I see that reduction in friction as being incredibly exciting. Imagine a customer support team where, okay, hey, you got this customer support bot, but you need to go integrate it with all these things. And you don't have any engineers on your customer support team.
But if you can just give the thing a browser that's logged into your systems that you need it to have access to, now, suddenly, in one day, you could be up and rolling with a fully integrated customer service bot that could go do all the actions you care about. So I think that's the most exciting thing for me about computer use, is reducing that friction of integrations to almost zero.

As you'll see, this is very top of mind for Erik as a former robotics founder whose company basically used robots to interface with human physical systems like elevators.

Full Video episode

Please like and subscribe!

Show Notes
* Erik Schluntz
* "Raising the bar on SWE-Bench Verified"
* Cobalt Robotics
* SWE-Bench
* SWE-Bench Verified
* HumanEval & other benchmarks
* Anthropic Workbench
* Aider
* Cursor
* Fireworks AI
* E2B
* Amanda Askell
* Toyota Research
* Physical Intelligence (Pi)
* Chelsea Finn
* Josh Albrecht
* Eric Jang
* 1X
* Dust
* Cosine Episode
* Bolt
* Adept Episode
* TauBench
* LMSys Episode

Timestamps
* [00:00:00] Introductions
* [00:03:39] What is SWE-Bench?
* [00:12:22] SWE-Bench vs HumanEval vs others
* [00:15:21] SWE-Agent architecture and runtime
* [00:21:18] Do you need code indexing?
* [00:24:50] Giving the agent tools
* [00:27:47] Sandboxing for coding agents
* [00:29:16] Why not write tests?
* [00:30:31] Redesigning engineering tools for LLMs
* [00:35:53] Multi-agent systems
* [00:37:52] Why XML so good?
* [00:42:57] Thoughts on agent frameworks
* [00:45:12] How many turns can an agent do?
* [00:47:12] Using multiple model types
* [00:51:40] Computer use and agent use cases
* [00:59:04] State of AI robotics
* [01:04:24] Robotics in manufacturing
* [01:05:01] Hardware challenges in robotics
* [01:09:21] Is self-driving a good business?

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners. And today we're in the new studio with my usual co-host, Shawn from Smol AI.

Swyx [00:00:14]: Hey, and today we're very blessed to have Erik Schluntz from Anthropic with us. Welcome.

Erik [00:00:19]: Hi, thanks very much. I'm Erik Schluntz. I'm a member of technical staff at Anthropic, working on tool use, computer use, and SWE-Bench.

Swyx [00:00:27]: Yeah. Well, how did you get into just the whole AI journey? I think you spent some time at SpaceX as well? Yeah. And robotics. Yeah. There's a lot of overlap between the robotics people and the AI people, and maybe there's some interlap or interest between language models for robots right now. Maybe just a little bit of background on how you got to where you are. Yeah, sure.

Erik [00:00:50]: I was at SpaceX a long time ago, but before joining Anthropic, I was the CTO and co-founder of Cobalt Robotics. We built security and inspection robots. These are sort of five-foot-tall robots that would patrol through an office building or a warehouse looking for anything out of the ordinary. Very friendly, no tasers or anything. We would just call a remote operator if we saw anything. We have about 100 of those out in the world, and had a team of about 100. We actually got acquired about six months ago, but I had left Cobalt about a year ago now, because I was starting to get a lot more excited about AI. I had been writing a lot of my code with things like Copilot, and I was like, wow, this is actually really cool. If you had told me 10 years ago that AI would be writing a lot of my code, I would say, hey, I think that's AGI.
And so I kind of realized that we had passed this level, like, wow, this is actually really useful for engineering work. That got me a lot more excited about AI and learning about large language models. So I ended up taking a sabbatical and then doing a lot of reading and research myself and decided, hey, I want to go be at the core of this, and joined Anthropic.

Alessio [00:01:53]: And why Anthropic? Did you consider other labs? Did you consider maybe some of the robotics companies?

Erik [00:02:00]: So I think at the time I was a little burnt out of robotics, and so also for the rest of this, any sort of negative things I say about robotics or hardware is coming from a place of burnout, and I reserve my right to change my opinion in a few years. Yeah, I looked around, but ultimately I knew a lot of people that I really trusted and I thought were incredibly smart at Anthropic, and I think that was the big deciding factor to come there. I was like, hey, this team's amazing. They're not just brilliant, but sort of like the most nice and kind people that I know, and so I just felt like I could be a really good culture fit. And ultimately, I do care a lot about AI safety and making sure that I don't build something that's used for bad purposes, and I felt like the best chance of that was joining Anthropic.

Alessio [00:02:39]: And from the outside, these labs kind of look like huge organizations that have these obscure ways to organize. How did you get, when you joined Anthropic, did you already know you were going to work on some of the stuff you publish, or do you kind of join and then figure out where you land? I think people are always curious to learn more.

Erik [00:02:57]: Yeah, I've been very happy that Anthropic is very bottoms-up and very receptive to whatever your interests are. And so I joined being very transparent of like, hey, I'm most excited about code generation and AI that can actually go out and touch the world or help people build things. And, you know, those weren't my initial projects. I also came in and said, hey, I want to do the most valuable possible thing for this company and help Anthropic succeed. And, you know, let me find the balance of those. So I was working on lots of things at the beginning, you know, function calling, tool use. And then as it became more and more relevant, I was like, oh, hey, it's time to go work on coding agents, and started looking at SWE-Bench as a really good benchmark for that.

Swyx [00:03:39]: So let's get right into SWE-Bench. That's one of the many claims to fame. I feel like there's just been a series of releases related to Claude 3.5 Sonnet. Around two or three months ago, 3.5 Sonnet came out, and it was a step ahead; a lot of people immediately fell in love with it for coding. And then last month you released a new updated version of Claude Sonnet. We're not going to talk about the training for that because that's still confidential. But I think Anthropic's done a really good job applying the model to different things. So you took the lead on SWE-Bench, but then also we're going to talk a little bit about computer use later on. So maybe just give us some context about why you looked at SWE-Bench Verified and actually came up with a whole system for building agents that would maximally use the model well. Yeah.

Erik [00:04:28]: So I'm on a sub-team called Product Research.
And basically the idea of product research is to really understand what end customers care about and want in the models, and then work to try to make that happen. So we're not focused on these more abstract general benchmarks like math problems or MMLU, but we really care about finding the things that are really valuable and making sure the models are great at those. And so because I've been interested in coding agents, I knew that this would be a really valuable thing. And I knew there were a lot of startups and our customers trying to build coding agents with our models. And so I said, hey, this is going to be a really good benchmark to be able to measure that and do well on it. And I wasn't the first person at Anthropic to find SWE-Bench; there are lots of people that already knew about it and had done some internal efforts on it. It fell to me to both implement the benchmark, which is very tricky, and then also to make sure we had an agent, basically a reference agent, maybe I'd call it, that could do very well on it. Ultimately, we want to show how we implemented that reference agent so that people can build their own agents on top of our system and get the most out of it as possible. So with this blog post we released on SWE-Bench, we released the exact tools and the prompt that we gave the model to be able to do well.

Swyx [00:05:46]: For people who don't know, who maybe haven't dived into SWE-Bench, I think the general perception is they're tasks that a software engineer could do. I feel like that's an inaccurate description because it is basically, one, it's a subset of like 12 repos. It's everything they could find, every issue with a matching commit that could be tested. So that's not every commit. And then SWE-Bench Verified is further manually filtered by OpenAI. Is that an accurate description, and anything you'd change about that? Yes.

Erik [00:06:14]: SWE-Bench certainly is a subset of all tasks. First of all, it's only Python repos, so already fairly limited there. And it's just 12 of these popular open source repos. And yes, it's only ones where there were tests that passed at the beginning and also new tests that were introduced that test the new feature that's added. So it is, I think, a very limited subset of real engineering tasks. But I think it's also very valuable because even though it's a subset, it is true engineering tasks. And I think a lot of other benchmarks are really kind of these much more artificial setups; even if they're related to coding, they're more like coding interview style questions or puzzles that I think are very different from day-to-day what you end up doing. I don't know how frequently you all get to use recursion in your day-to-day job, but whenever I do, it's like a treat. And I think it's almost comical, and a lot of people joke about this in the industry, is how different interview questions are.

Swyx [00:07:13]: Dynamic programming. Yeah, exactly.

Erik [00:07:15]: Like, LeetCode. From the day-to-day job. But I think one of the most interesting things about SWE-Bench is that all these other benchmarks are usually just isolated puzzles, and you're starting from scratch. Whereas SWE-Bench, you're starting in the context of an entire repository. And so it adds this entirely new dimension to the problem of finding the relevant files. And this is a huge part of real engineering; it's actually pretty rare that you're starting something totally greenfield. You need to go and figure out where in a codebase you're going to make a change and understand how your work is going to interact with the rest of the systems. And I think SWE-Bench does a really good job of presenting that problem.
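Quick aside for readers who want to poke at the benchmark themselves: it's a public dataset on Hugging Face. A minimal sketch with the datasets library; the field names below match our reading of the dataset card, so double-check them before relying on this:

```python
# pip install datasets
from datasets import load_dataset

# SWE-Bench Verified: the 500-task, human-filtered subset curated with OpenAI.
ds = load_dataset("princeton-nlp/SWE-bench_Verified", split="test")

task = ds[0]
print(task["repo"])               # one of the 12 popular Python repos, e.g. astropy/astropy
print(task["base_commit"])        # commit to check out before the agent starts
print(task["problem_statement"])  # the GitHub issue text the agent is shown
print(task["FAIL_TO_PASS"])       # hidden tests that must flip from failing to passing
print(task["PASS_TO_PASS"])       # existing tests that must keep passing
```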
Alessio [00:07:51]: Why do we still use HumanEval? It's like 92%, I think. I don't even know if you can actually get to 100%, because some of the data is not actually solvable. Do you see benchmarks like that, that they should just get sunsetted? Because when you look at the model releases, it's like, oh, it's 92% instead of 89%, 90% on HumanEval, versus, you know, SWE-Bench Verified is, you have 49%, right? Which is like, before 45% was state of the art, but maybe six months ago it was like 30%, something like that. So is that a benchmark that you think is going to replace HumanEval, or do you think they're just going to run in parallel?

Erik [00:08:27]: I think there's still need for many different varied evals. Like sometimes you do really care about just greenfield code generation. And so I don't think that everything needs to go to an agentic setup.

Swyx [00:08:39]: It would be very expensive to implement.

Erik [00:08:41]: The other thing I was going to say is that SWE-Bench is certainly hard to implement and expensive to run, because for each task, you have to parse, you know, a lot of the repo to understand where to put your code. And a lot of times you take many tries of writing code, running it, editing it. It can use a lot of tokens compared to something like HumanEval. So I think there's definitely a space for these more traditional coding evals that are easy to implement, quick to run, and do get you some signal. Maybe hopefully there's just harder versions of HumanEval that get created.

Alessio [00:09:14]: How do we get SWE-Bench Verified to 92%? Do you think that's something where there's line of sight to it, or it's like, you know, we need a whole lot of things to go right? Yeah, yeah.

Erik [00:09:23]: And actually, maybe I'll start with SWE-Bench versus SWE-Bench Verified, which is I think something I missed earlier. So SWE-Bench is, as we described, this big set of tasks that were scraped.

Swyx [00:09:33]: Like 12,000 or something?

Erik [00:09:34]: Yeah, I think it's 2,000 in the final set. But a lot of those, even though a human did them, are actually impossible given the information that comes with the task. The most classic example of this is the test looks for a very specific error string. You know, like assert message equals error, something, something, something. And unless you know that's exactly what you're looking for, there's no way the model is going to write that exact same error message, and so the tests are going to fail. So SWE-Bench Verified was actually made in partnership with OpenAI, and they hired humans to go review all these tasks and pick out a subset, to try to remove any obstacle like this that would make the tasks impossible. So in theory, all of these tasks should be fully doable by the model. And they also had humans grade how difficult they thought the problems would be: less than 15 minutes, I think, 15 minutes to an hour, an hour to four hours, and greater than four hours. So that's kind of this interesting measure of how big the problem is as well. To get SWE-Bench Verified to 90%, actually, maybe I'll start off with some of the remaining failures that I see when running our model on SWE-Bench.
I'd say the biggest cases are the model operating at the wrong level of abstraction. And what I mean by that is the model puts in maybe a smaller band-aid when really the task is asking for a bigger refactor. And some of those, you know, are the model's fault, but a lot of times if you're just seeing the GitHub issue, it's not exactly clear which way you should go. So even though these tasks are possible, there's still some ambiguity in how the tasks are described. That being said, I think in general, language models frequently will produce a smaller diff when possible, rather than trying to do a big refactor. I think another area: at least the agent we created didn't have any multimodal abilities, even though our models are very good at vision. So I think that's just a missed opportunity. And if I read through some of the traces, there's some funny things where, especially on the tasks on matplotlib, which is a graphing library, the test script will save an image and the model will just say, okay, it looks great, you know, without looking at it. So there's certainly extra juice to squeeze there of just making sure the model really understands all the sides of the input that it's given, including multimodal. But yeah, about getting to 92%. So this is something that I have not looked at, but I'm very curious about. I want someone to look at what is the union of all of the different tasks that have been solved by at least one attempt at SWE-Bench Verified. There's a ton of submissions to the benchmark, and so I'd be really curious to see how many of those 500 tasks at least someone has solved. And I think, you know, there's probably a bunch that none of the attempts have ever solved. And I think it'd be interesting to look at those and say, hey, is there some problem with these? Are these impossible? Or are they just really hard and only a human could do them?

Swyx [00:12:22]: Yeah, like specifically, is there a category of problems that are still unreachable by any LLM agent?

Erik [00:12:28]: Yeah, yeah. And I think there definitely are. The question is, are those fairly inaccessible, or are they just impossible because of the descriptions? But I think certainly some of the tasks, especially the ones that the human graders reviewed as taking longer than four hours, are extremely difficult. I think we got a few of them right, but not very many at all in the benchmark.

Swyx [00:12:49]: And did those take less than four hours?

Erik [00:12:51]: They certainly did less than, yeah, than four hours.

Swyx [00:12:54]: Is there a correlation of length of time with human estimated time? You know what I mean? Or do we have more Moravec's-paradox-type situations where it's something super easy for a model, but hard for a human?

Erik [00:13:06]: I actually haven't done the stats on that, but I think that'd be really interesting to see: how many tokens does it take, and how is that correlated with difficulty? What is the likelihood of success with difficulty? I think actually a really interesting thing that I saw: one of my coworkers who was also working on this, named Simon, was focusing just specifically on the very hard problems, the ones that are said to take longer than four hours. And he ended up creating a much more detailed prompt than I used. And he got a higher score on the most difficult subset of problems, but a lower score overall on the whole benchmark.
And the prompt that I made, which is much more simple and bare bones, got a higher score on the overall benchmark, but a lower score on the really hard problems. And I think some of that is the really detailed prompt made the model overcomplicate a lot of the easy problems, because honestly, a lot of the SWE-Bench problems really do just ask for a band-aid, where it's like, hey, this crashes if this is None, and really all you need to do is put a check if None. And so sometimes trying to make the model think really deeply, it'll think in circles and overcomplicate something, which certainly human engineers are capable of as well. But I think there's some interesting thing there: the best prompt for hard problems might not be the best prompt for easy problems.

Alessio [00:14:19]: How do we fix that? Are you supposed to fix it at the model level? How do I know what prompt I'm supposed to use?

Swyx [00:14:25]: Yeah.

Erik [00:14:26]: And I'll say this was a very small effect size, and so I think this isn't worth obsessing over. I would say that as people are building systems around agents, the more you can separate out the different kinds of work the agent needs to do, the better you can tailor a prompt for that task. And I think that also creates a lot of, for instance, if you were trying to make an agent that could both solve hard programming tasks and just write quick test files for something that someone else had already made, the best way to do those two tasks might be very different prompts. I see a lot of people build systems where they first have a classification, and then route the problem to two different prompts. And that's a very effective thing, because one, it makes the two different prompts much simpler and smaller, and it means you can have someone work on one of the prompts without any risk of affecting the other tasks. So it creates a nice separation of concerns. Yeah.

Alessio [00:15:21]: And the other model behavior thing you mentioned: they prefer to generate shorter diffs. Why is that? Like, is there a way? I think that's maybe the lazy model question that people have, like, why are you not just generating the whole code instead of telling me to implement it?

Swyx [00:15:36]: Are you saving tokens? Yeah, exactly. It's like a conspiracy theory. Yeah. Yeah.

Erik [00:15:41]: Yeah. So there's two different things there. One is, I'd say, maybe doing the easier solution rather than the hard solution. And I'd say the second one, I think what you're talking about with the lazy model, is when the model says, like, dot, dot, dot, code remains the same.

Swyx [00:15:52]: Code goes here. Yeah. I'm like, thanks, dude.

Erik [00:15:55]: But honestly, that just comes from how people on the internet will do stuff like that. And like, dude, if you're talking to a friend and you ask them to give you some example code, they would definitely do that. They're not going to reroll the whole thing. And so I think that's just a matter of, you know, sometimes you actually do just want the relevant changes. And this is something where a lot of times, you know, the models aren't good at mind reading of which one you want. So I think the more explicit you can be in prompting to say, hey, you know, give me the entire thing, no elisions, versus just give me the relevant changes. And that's something, you know, we want to make the models always better at following those kinds of instructions.
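Erik's classify-then-route pattern from a moment ago pairs naturally with this point about being explicit. A hypothetical sketch with the Anthropic Python SDK: a cheap Haiku call labels the task, and the label picks the system prompt. The prompt strings here are our illustrative stand-ins, not Anthropic's actual prompts:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Illustrative prompts -- not the ones from the SWE-Bench blog post.
SIMPLE_PROMPT = "Fix the issue with the smallest change possible. Output only the relevant edits."
DETAILED_PROMPT = "Think step by step, consider whether a refactor is needed, then output entire files with no elisions."

def classify(task: str) -> str:
    """Cheap first call: label the task 'easy' or 'hard'."""
    resp = client.messages.create(
        model="claude-3-5-haiku-latest",
        max_tokens=5,
        messages=[{"role": "user", "content": f"Reply with exactly one word, easy or hard:\n\n{task}"}],
    )
    return resp.content[0].text.strip().lower()

def solve(task: str) -> str:
    """Route to whichever prompt fits, so each prompt stays small and independently tunable."""
    system = DETAILED_PROMPT if classify(task) == "hard" else SIMPLE_PROMPT
    resp = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=4096,
        system=system,
        messages=[{"role": "user", "content": task}],
    )
    return resp.content[0].text
```

The separation-of-concerns benefit Erik mentions falls out of the structure: each prompt can be iterated on against its own task type without touching the other.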
Swyx [00:16:32]: I'll drop a couple of references here. We're recording this like a day after Lex Fridman dropped his five-hour pod with Dario and Amanda and the rest of the crew. And Dario actually made this interesting observation that we complain about models being too chatty in text and then not chatty enough in code. And so getting that right is kind of an awkward bar, because, you know, you don't want it to yap in its responses, but then you also want it to be complete in code. And then sometimes it's not complete. Sometimes you just want it to diff, which is something that Anthropic has also released with, you know, the fast edit stuff that you guys did. And then the other thing I wanted to double back on is the prompting stuff. You said it was a small effect, but it was a noticeable effect in terms of picking a prompt. I think we'll go into SWE-Agent in a little bit, but I kind of reject the idea that, you know, you need to choose one prompt and have your whole performance be predicated on that one prompt. I think something that Anthropic has done really well is meta prompting, prompting for a prompt. And so why can't you just develop a meta prompt for all the other prompts? And you know, if it's a simple task, make a simple prompt, if it's a hard task, make a hard prompt. Obviously I'm probably hand-waving a little bit, but I will definitely ask people to try the Anthropic Workbench meta prompting system if they haven't tried it yet. I went to the Build Day recently at Anthropic HQ, and it's the closest I've felt to an AGI, like learning how to operate itself. Yeah, it's really magical.

Erik [00:17:57]: Yeah, no, Claude is great at writing prompts for Claude.

Swyx [00:18:00]: Right, so meta prompting. Yeah, yeah.

Erik [00:18:02]: The way I think about this is that humans, even very smart humans, still use checklists and scaffolding for themselves. Surgeons will still have checklists, even though they're incredible experts. And certainly, you know, a very senior engineer needs less structure than a junior engineer, but there still is some of that structure that you want to keep. And so I always try to anthropomorphize the models and think about, for a human, what is the equivalent. And that's, you know, how I think about these things: how much instruction would you give a human with the same task? Would you need to give them a lot of instruction or a little bit of instruction?

Alessio [00:18:36]: Let's talk about the agent architecture, maybe. So first, runtime: you let it run until it thinks it's done or it reaches the 200k context window.

Swyx [00:18:45]: How did you come up with that? What's up with that?

Erik [00:18:49]: Yeah. I mean, so I'd say that a lot of previous agent work built these very hard-coded and rigid workflows, where the model is pushed through certain flows of steps. And I think to some extent, you know, that's needed with smaller models and models that are less smart. But one of the things that we really wanted to explore was, let's really give Claude the reins here and not force Claude to do anything, but let Claude decide, you know, how it should approach the problem, what steps it should do.
And so really, you know, what we did, the most extreme version of this, is just give it some tools that it can call, and it's able to keep calling the tools, keep thinking, and then, yeah, keep doing that until it thinks it's done. And that's the most minimal agent framework that we came up with. And I think that works very well. I think especially the new Sonnet 3.5 is very, very good at self-correction, has a lot of grit. Claude will try things that fail and then come back and try different approaches. And I think that's something that you didn't see in a lot of previous models. Some of the existing agent frameworks that I looked at had whole systems built to try to detect loops and see, oh, is the model doing the same thing, you know, more than three times, then we have to pull it out. And I think the smarter the models are, the less you need that kind of extra scaffolding. So yeah, just giving the model tools and letting it keep sampling and calling tools until it thinks it's done was the most minimal framework that we could think of. And so that's what we did.

Alessio [00:20:18]: So you're not pruning bad paths from the context. If it tries to do something and it fails, you just burn all these tokens.

Erik [00:20:26]: Yes. I would say the downside of this is that this is a very token-expensive way to do this.

Swyx [00:20:29]: But still, it's very common to prune bad paths because models get stuck. Yeah.

Erik [00:20:35]: But I'd say that, yeah, 3.5 is not getting stuck as much as previous models. And so, yeah, we wanted to at least just try the most minimal thing. Now, I would say that, you know, this is definitely an area of future research, especially if we talk about these problems that are going to take a human more than four hours. Those might be things where we're going to need to prune bad paths to let the model be able to accomplish the task within 200k tokens. So certainly I think there's future research to be done in that area, but it's not necessary to do well on these benchmarks.

Swyx [00:21:06]: Another thing I always have questions about on context window things: there's a mini cottage industry of code indexers that have sprung up for large code bases, like the ones in SWE-Bench. You didn't need them?

Erik [00:21:18]: We didn't. And I'd say there's two reasons for this. One is SWE-Bench specific and the other is a more general thing. The more general thing is that I think Sonnet is very good at what we call agentic search. And what this basically means is letting the model decide how to search for something. It gets the results and then it can decide, should it keep searching or is it done? Does it have everything it needs? So if you read through a lot of the SWE-Bench traces, the model is calling tools to view directories, list out things, view files. And it will do a few of those until it feels like it's found the file where the bug is. And then it will start working on that file. And again, this is all, everything we did was about just giving Claude the full reins. So there's no hard-coded system. There's no search system that you're relying on getting the correct files into context. This just totally lets Claude do it.

Swyx [00:22:11]: Or embedding things into a vector database. Exactly. Oops. No, no.

Erik [00:22:17]: This is very, very token expensive, and it also takes many, many turns. And so if you want to do something in a single turn, you need to do RAG and just push stuff into the first prompt.
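The "most minimal framework" Erik describes a few turns up really does fit in a page. A rough sketch, assuming the Anthropic Python SDK; the single bash tool and its runner are our own stand-ins, not the exact tools from the blog post:

```python
import subprocess
import anthropic

client = anthropic.Anthropic()

TOOLS = [{
    "name": "bash",
    "description": "Run a shell command in the repository and return its output.",
    "input_schema": {
        "type": "object",
        "properties": {"command": {"type": "string"}},
        "required": ["command"],
    },
}]

def run_bash(command: str) -> str:
    out = subprocess.run(command, shell=True, capture_output=True, text=True, timeout=120)
    return (out.stdout + out.stderr)[-10_000:]  # truncate so one result can't flood the context

messages = [{"role": "user", "content": "Fix the failing test in this repository."}]
while True:  # the real harness would also stop at the 200k-token context limit
    resp = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=4096,
        tools=TOOLS,
        messages=messages,
    )
    messages.append({"role": "assistant", "content": resp.content})
    if resp.stop_reason != "tool_use":  # Claude thinks it's done
        break
    tool_results = [
        {"type": "tool_result", "tool_use_id": block.id, "content": run_bash(block.input["command"])}
        for block in resp.content
        if block.type == "tool_use"
    ]
    messages.append({"role": "user", "content": tool_results})
```

No loop detection, no pruning, no hard-coded flow: the model samples, calls tools, and stops when it stops asking for them.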
Alessio [00:22:28]: And just to make it clear, it's using the bash tool, basically doing ls, looking at files, and then doing cat to pull in the relevant context.

Erik [00:22:35]: It can do that. But its file editing tool also has a command in it called view that can view a directory. It's very similar to ls, but it just has some nice quality-of-life improvements. So I think it'll only do an ls two directories deep, so that the model doesn't get overwhelmed if it does this on a huge directory. I would say actually we did more engineering of the tools than the overall prompt. But the one other thing I want to say about this agentic search is that for SWE-Bench specifically, a lot of the tasks are bug reports, which means they have a stack trace in them. And that means right in that first prompt, it tells you where to go. And so I think this is a very easy case for the model to find the right files, versus if you're using this as a general coding assistant where there isn't a stack trace, or you're asking it to insert a new feature; I think there it's much harder to know which files to look at. And that might be an area where you would need to do more of this exhaustive search, where an agentic search would take way too long.

Swyx [00:23:33]: As someone who spent the last few years in the JS world, it'd be interesting to see SWE-Bench JS, because these stack traces are useless because of so much virtualization that we do. So they're very, very disconnected with where the code problems are actually appearing.

Erik [00:23:50]: That makes me feel better about my limited front-end experience, as I've always struggled with that problem.

Swyx [00:23:55]: It's not your fault. We've gotten ourselves into a very, very complicated situation. And I'm not sure it's entirely needed. But if you talk to our friends at Vercel, they will say it is.

Erik [00:24:04]: I will say SWE-Bench just released SWE-Bench Multimodal, which I believe is either entirely JavaScript or largely JavaScript. And it's entirely things that have visual components to them.

Swyx [00:24:15]: Are you going to tackle that?

Erik [00:24:17]: We will see. I think it's on the list and there's interest, but no guarantees yet.

Swyx [00:24:20]: Just as a side note, it occurs to me that every model lab, including Anthropic, but the others as well, should have their own SWE-Bench for whatever your bug tracker tool is. This is a general methodology that you can use to track progress, I guess.

Erik [00:24:34]: Yeah, sort of running on our own internal code base.

Swyx [00:24:36]: Yeah, that's a fun idea.

Alessio [00:24:37]: Since you spent so much time on the tool design: you have this edit tool that can make changes and whatnot. Any learnings from that that you wish the AI IDEs would take in? Is there some special way to look at files, feed them in?

Erik [00:24:50]: I would say the core of that tool is string replace. And so we did a few different experiments with different ways to specify how to edit a file. And string replace, basically, the model has to write out the existing version of the string and then a new version, and that just gets swapped in. We found that to be the most reliable way to do these edits. Other things that we tried were having the model directly write a diff, and having the model fully regenerate files. That one is actually the most accurate, but it takes so many tokens, and if you're in a very big file, it's cost prohibitive. There's basically a lot of different ways to represent the same task, and they actually have pretty big differences in terms of model accuracy. I think Aider has a really good blog where they explore some of these different methods for editing files, and they post results about them, which I think is interesting.
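That string-replace core is worth seeing concretely. A toy version (our own sketch, not Anthropic's released text editor tool) might look like this; the uniqueness guard is the kind of mistake-proofing that comes up again later in the conversation:

```python
from pathlib import Path

def str_replace(path: str, old_str: str, new_str: str) -> str:
    """Swap exactly one occurrence of old_str in the file at path for new_str."""
    text = Path(path).read_text()
    count = text.count(old_str)
    if count == 0:
        return f"Error: old_str not found in {path}. No edit made."
    if count > 1:
        return f"Error: old_str appears {count} times in {path}; include more context to make it unique. No edit made."
    Path(path).write_text(text.replace(old_str, new_str, 1))
    return f"Edited {path}."
```

Note that failures come back as strings rather than exceptions: the message goes back to the model as a tool result, so it can retry with a longer, unambiguous old_str.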
But I think this is a really good example of the broader idea that you need to iterate on tools rather than just a prompt. And I think a lot of people, when they make tools for an LLM, kind of treat it like they're just writing an API for a computer, and it's very minimal. It's just the bare bones of what you'd need, and honestly, it's so hard for the models to use those. Again, I come back to anthropomorphizing these models. Imagine you're a developer, and you just read this for the very first time, and you're trying to use it. You can do so much better than just the bare API spec of what you'd often see. Include examples in the description. Include really detailed explanations of how things work. And, again, also think about what is the easiest way for the model to represent the change that it wants to make. For file editing, as an example, writing a diff is actually... Let's take the most extreme example: you want the model to literally write a patch file. I think patch files have, at the very beginning, numbers of how many total lines change. That means before the model has actually written the edit, it needs to decide how many lines are going to change.

Swyx [00:26:52]: Don't quote me on that.

Erik [00:26:54]: I think it's something like that, but I don't know if that's exactly the diff format. But you can certainly have formats that are much easier to express without messing up than others. And I like to think about how much human effort goes into designing human interfaces for things. It's incredible. This is entirely what front-end is about: creating better interfaces to do the same things. And I think that same amount of attention and effort needs to go into creating agent-computer interfaces.

Swyx [00:27:19]: It's a topic we've discussed, ACI or whatever that looks like. I would also shout out that I think you released some of these toolings as part of computer use as well. And people really liked it. It's all open source if people want to check it out. I'm curious if there's an environment element that complements the tools. So how do you... Do you have a sandbox? Is it just Docker? Because that can be slow or resource-intensive. Do you have anything else that you would recommend?

Erik [00:27:47]: I don't think I can talk about public details or private details about how we implement our sandboxing. But obviously, we need to have safe, secure, and fast sandboxes for training, for the models to be able to practice writing code and working in an environment.

Swyx [00:28:03]: I'm aware of a few startups working on agent sandboxing. E2B is a close friend of ours that Alessio has led a round in, but also I think there's others where they're focusing on snapshotting memory so that it can do time travel for debugging, computer use where you can control the mouse or keyboard, or something like that. Whereas here, I think that the kinds of tools that we offer are very, very limited to coding agent work cases like bash, edit, you know, stuff like that. Yeah.
Erik [00:28:30]: I think the computer use demo that we released is an extension of that. It has the same bash and edit tools, but it also has the computer tool that lets it get screenshots and move the mouse and keyboard. So I definitely think there's more general tools there. And again, the tools we released as part of SWE-Bench were, I'd say, very specific to editing files and doing bash, but at the same time, that's actually very general if you think about it. Anything that you would do on a command line, or editing files, you can do with those tools. And so we do want those tools to feel like any sort of computer terminal work could be done with those same tools, rather than making tools that were very specific for SWE-Bench, like run-tests as its own tool, for instance. Yeah.

Swyx [00:29:15]: You had a question about tests.

Alessio [00:29:16]: Yeah, exactly. I saw there's no test writer tool. Is it because it generates the code and then you're running it against SWE-Bench anyway, so it doesn't really need to write the test, or?

Erik [00:29:27]: Yeah. So this is one of the interesting things about SWE-Bench: the tests that the model's output is graded on are hidden from it. That's basically so that the model can't cheat by looking at the tests and writing the exact solution. And I'd say typically the first thing the model does is it usually writes a little script to reproduce the error. And again, most SWE-Bench tasks are like, hey, here's a bug that I found. I run this and I get this error. So the first thing the model does is try to reproduce that. So it kind of keeps rerunning that script as a mini test. But yeah, sometimes the model will accidentally introduce a bug that breaks some other tests and it doesn't know about that.

Alessio [00:30:05]: And should we be redesigning any tools? We kind of talked about this with having more examples, but I'm thinking even of things like Q as a query parameter in many APIs; it's easier for the model to re-query than to read the Q. I'm sure it learned the Q by this point, but is there anything you've seen building this where it's like, hey, if I were to redesign some CLI tools, some API tools, I would change the structure to make it better for LLMs?

Erik [00:30:31]: I don't think I've thought enough about that off the top of my head, but certainly just making everything more human-friendly, like having more detailed documentation and examples. I think examples are really good in things like descriptions. So many times, just using the Linux command line, how many times do I do dash-dash-help or look at the man page or something? It's like, just give me one example of how I actually use this. I don't want to go read through a hundred flags. Just give me the most common example. But again, you know, things that would be useful for a human, I think, are also very useful for a model.

Swyx [00:31:03]: Yeah. I mean, there's one thing that you cannot give to code agents that is useful for humans: access to the internet. I wonder how to design that in, because one of the issues that I also had with just the idea of SWE-Bench is that you can't do follow-up questions. You can't look around for similar implementations. These are all things that I do when I try to fix code, and we don't do that.
It's not, it wouldn't be fair; it'd be too easy to cheat. But then it's also kind of not being fair to these agents, because they're not operating in a real-world situation. Like, if I had a real-world agent, of course I'm giving it access to the internet, because I'm not trying to pass a benchmark. I don't have a question in there, more just like, I feel like the most obvious tool, access to the internet, is not being used.

Erik [00:31:47]: I think that's really important for humans, but honestly the models have so much general knowledge from pre-training that it's less important for them. Versioning, you know, if you're working on a newer thing that came after the knowledge cutoff, then yes, I think that's very important. I think actually this is a broader problem: there is a divergence between SWE-Bench and what customers will actually care about who are working on a coding agent for real use. And I think one of those is internet access and being able to pull in outside information. I think another one is, if you have a real coding agent, you don't want to have it start on a task and spin its wheels for hours because you gave it a bad prompt. You want it to come back immediately and ask follow-up questions and really make sure it has a very detailed understanding of what to do, then go off for a few hours and do work. So I think that real tasks are going to be much more interactive with the agent, rather than this kind of one-shot system. And right now there's no benchmark that measures that. And I think it'd be interesting to have some benchmark that is more interactive. I don't know if you're familiar with TauBench, but it's a customer service benchmark where there's basically one LLM that's playing the user or the customer that's getting support, and another LLM that's playing the support agent, and they interact and try to resolve the issue.

Swyx [00:33:08]: Yeah. We talked to the LMSys guys. Awesome. And they also did MT-Bench, for people listening along. So maybe we need MT-SWE-Bench.

Erik [00:33:16]: Sure. Yeah. So maybe, you know, you could have something where, before the SWE-Bench task starts, you have a few back-and-forths with kind of the author, who can answer follow-up questions about what they want the task to do. And of course you'd need to do that in a way where it doesn't cheat and just get the exact thing out of the human, or out of the simulated user. But I think that would be a really interesting thing to see. If you look at existing agent work, like Replit's coding agent, I think one of the really great UX things they do is first having the agent create a plan, and then having the human approve that plan or give feedback. I think for agents in general, having a planning step at the beginning: one, just having that plan will improve performance on the downstream task, because it's kind of like a bigger chain of thought; but also, it's just such a better UX. It's way easier for a human to iterate on a plan with a model, rather than iterating on the full task, which has a much slower time through each loop. And if the human has approved this implementation plan, I think it makes the end result a lot more auditable and trustable. So I think there's a lot of things outside of SWE-Bench that will be very important for real agent usage in the world. Yeah.
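A TauBench-style setup is easy to picture: two models sharing one transcript, one prompted as the customer and one as the agent. A bare-bones sketch (our illustration, not TauBench's actual harness, which also scores outcomes against ground truth):

```python
import anthropic

client = anthropic.Anthropic()

# Illustrative personas -- TauBench's real tasks come with detailed scenarios and tools.
USER_SYS = "You are a customer with a billing problem. Only reveal details when asked."
AGENT_SYS = "You are a support agent. Ask clarifying questions, then resolve the issue."

def speak(system: str, transcript: list[str], speaker: str) -> None:
    """One turn: show the shared transcript to this persona and append its reply."""
    resp = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=500,
        system=system,
        messages=[{"role": "user", "content": "\n".join(transcript) or "(start the conversation)"}],
    )
    transcript.append(f"{speaker}: {resp.content[0].text}")

transcript: list[str] = []
for _ in range(4):  # a few back-and-forth rounds
    speak(AGENT_SYS, transcript, "agent")
    speak(USER_SYS, transcript, "user")
print("\n".join(transcript))
```

The same shape would give you the MT-SWE-Bench idea floated above: swap the customer for an LLM playing the issue author, answering the coding agent's follow-up questions without leaking the solution.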
Swyx [00:34:27]: I will say also, there's a couple of comments on names that you dropped. Copilot also does the plan stage before it writes code. I feel like those approaches have generally been less Twitter-successful, because it's not prompt-to-code, it's prompt-plan-code. You know, so there's a little bit of friction in there, but it's not much. And actually, you get a lot for what it's worth. I also like the way that Devin does it, where you can edit the plan as it goes along. And then the other thing with Replit: we hosted a sort of Dev Day pregame with Replit, and they also commented about multi-agents, so having two agents kind of bounce off of each other. I think it's a similar approach to what you're talking about with kind of the few-shot example, just as in the prompts of clarifying what the agent wants. But typically I think this would be implemented as a tool calling another agent, like a sub-agent. I don't know if you explored that; do you like that idea?

Erik [00:35:20]: I haven't explored this enough, but I've definitely heard of people having good success with this. Of almost basically having a few different personas of agents, even if they're all the same LLM. I think this is one thing with multi-agent that a lot of people will get confused by: they think it has to be different models behind each thing. But really it's usually the same model with different prompts. And having them have different personas, to bring different thoughts and priorities to the table, I've seen that work very well and create a much more thorough and thought-out response. I think the downside is just that it adds a lot of complexity and a lot of extra tokens. So I think it depends what you care about. If you want a plan that's very thorough and detailed, I think it's great. If you want a really quick, just "write this function," you probably don't want to do that and have a bunch of different calls before it does this.

Alessio [00:36:11]: And just talking about the prompt: why are XML tags so good in Claude? I think initially people were like, oh, maybe you're just getting lucky with XML. But I saw obviously you use them in your own agent prompts, so they must work. And why is it so specific to your model family?

Erik [00:36:26]: Yeah, I think that, again, I'm not sure how much I can say, but I think there's historical reasons that internally we've preferred XML. I think also the one broader thing I'll say is that if you look at certain kinds of outputs, there is overhead to outputting in JSON. If you're trying to output code in JSON, there's a lot of extra escaping that needs to be done, and that actually hurts model performance across the board. Versus if you're in just a single XML tag, there's none of that escaping that needs to happen. That being said, I haven't tried having it write HTML inside XML, where maybe then you start running into weird escaping things. I'm not sure. But yeah, I'd say that's some historical reasons, and there's less overhead of escaping.

Swyx [00:37:12]: I use XML in other models as well, and it's just a really nice way to make sure that the thing that ends is tied to the thing that starts. That's the only way to do code fences where you're pretty sure example one start, example one end, that is one cohesive unit.
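The escaping overhead Erik describes is easy to see side by side: the same three-line function as a JSON value versus inside an XML tag (illustrative only):

```python
import json

code = 'def greet(name):\n    print(f"hello, {name}")'

# As a JSON string value, every quote and newline must be escaped --
# extra characters the model has to get right on every single line:
print(json.dumps({"code": code}))
# {"code": "def greet(name):\n    print(f\"hello, {name}\")"}

# Inside an XML tag, the code appears verbatim; the model only has to
# produce a matching open and close tag:
print(f"<code>\n{code}\n</code>")
```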
Alessio [00:37:30]: Because the braces are nondescriptive. Yeah, exactly.

Swyx [00:37:33]: That would be my simple reason. XML is good for everyone, not just Claude. Claude was just the first one to popularize it, I think.

Erik [00:37:39]: I do definitely prefer to read XML than read JSON.

Alessio [00:37:43]: Any other details that are maybe underappreciated? I know, for example, you had the absolute paths versus relative. Any other fun nuggets?

Erik [00:37:52]: I think that's a good anecdote to mention about iterating on tools. Like I said, spend time prompt engineering your tools, and don't just write the prompt, but write the tool, and then actually give it to the model and read a bunch of transcripts about how the model tries to use the tool. I think by doing that, you will find areas where the model misunderstands a tool or makes mistakes, and then basically change the tool to make it foolproof. There's this Japanese term, poka-yoke, about making tools mistake-proof. You know, the classic idea is you can have a plug that can fit either way, and that's dangerous, or you can make it asymmetric so that it can't fit this way, it has to go like this, and that's a better tool because you can't use it the wrong way. So for this example of absolute paths, one of the things that we saw while testing these tools is, oh, if the model has done cd and moved to a different directory, it would often get confused when trying to use the tool, because it's now in a different directory and so the paths aren't lining up. So we said, oh, well, let's just force the tool to always require an absolute path, and then that's easy for the model to understand. It knows where it is. It knows where the files are. And once we have it always giving absolute paths, it never messes up, no matter where it is, because if you're using an absolute path, it doesn't matter where you are. So iterations like that, you know, let us make the tool foolproof for the model. I'd say there's other categories of things where we see, oh, if the model, you know, opens vim, it's never going to return, and so the tool is stuck.

Swyx [00:39:28]: Did it get stuck? Yeah. Get out of vim. What?

Erik [00:39:31]: Well, because the tool is just text in, text out. It's not interactive. So it's not that the model doesn't know how to get out of vim. It's that the way the tool is hooked up to the computer is not interactive. Yes, I mean, there is the meme of no one knowing how to get out of vim. You know, basically, we just added instructions in the tool of, like, hey, don't launch commands that don't return.

Swyx [00:39:54]: Yeah, like, don't launch vim.

Erik [00:39:55]: Don't launch whatever. If you do need to do something, you know, put an ampersand after it to launch it in the background. And so just putting instructions like that right in the description for the tool really helps the model. And I think that's an underutilized space of prompt engineering, where people might try to do that in the overall prompt, but just put that in the tool itself, so the model knows that, for this tool, this is what's relevant.
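Those lessons (absolute paths, nothing interactive, an example right in the description) end up living in the tool definition itself. A hypothetical bash tool in that spirit, in the Anthropic tools format; the wording is ours, not the released tool's:

```python
BASH_TOOL = {
    "name": "bash",
    "description": (
        "Run a shell command.\n"
        "- Always use absolute paths (e.g. /repo/src/main.py); the working "
        "directory may change between calls.\n"
        "- Never launch interactive programs (vim, less, a bare python REPL): "
        "this session is text-in/text-out and the call will never return.\n"
        "- For long-running commands, append '&' to run them in the background.\n"
        "Example input: {\"command\": \"grep -rn 'TODO' /repo/src\"}"
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "command": {"type": "string", "description": "The command to execute."},
        },
        "required": ["command"],
    },
}
```

The poka-yoke point is that this guidance sits exactly where the model reads it at the moment of use, instead of being buried in a long system prompt.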
Swyx [00:40:20]: You said you worked on function calling and tool use before you actually started this SWE-bench work, right? You basically went from creator of that API to user of that API. Any surprises, or changes you would make now that you have extensively dogfooded it in a state-of-the-art agent?

Erik [00:40:39]: I'd want us to make a little bit less verbose SDK. Right now we sort of force people into the best practice of writing out these full JSON schemas, but it would be really nice if you could just pass in a Python function as a tool. I think that could be something nice.
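The SDK doesn't do this today, per Erik, but here is a rough sketch of what "pass in a Python function as a tool" could look like, deriving the schema from the function's signature and docstring. The type mapping is deliberately minimal, and everything here is an assumption about a possible API, not the Anthropic SDK.

```python
import inspect

# Minimal Python-type to JSON-schema-type mapping; unknown annotations
# fall back to "string".
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def function_to_tool(fn):
    """Build a JSON-schema tool definition from a plain Python function."""
    props, required = {}, []
    for name, param in inspect.signature(fn).parameters.items():
        props[name] = {"type": PY_TO_JSON.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "input_schema": {"type": "object", "properties": props, "required": required},
    }

def read_file(path: str, max_lines: int = 200) -> str:
    """Return up to max_lines lines of the file at an absolute path."""
    with open(path) as f:
        return "".join(f.readlines()[:max_lines])

print(function_to_tool(read_file))  # ready to drop into a tools=[...] list
```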
Swyx [00:40:58]: There are a lot of Python helper libraries for structured output. I don't know if there's anyone else specializing for Anthropic; maybe Jeremy Howard's and Simon Willison's stuff. They all have Claude-specific things they're working on. Claudette, exactly. I also wanted to spend a little bit of time on SWE-agent. It seems like a very general framework. Is there a reason you picked it, apart from it having the same authors as SWE-bench?

Erik [00:41:21]: The main thing was that it had the same authors as SWE-bench, so it felt like the safest, most neutral option. And it was very high quality: very easy to modify and work with. I would say their underlying framework is this think, act, observe loop, which is a little bit more hard-coded than what we wanted to do, but it's still very close, and still very general. So it felt like a good match as the starting point for our agent. And we had already worked with and talked to the SWE-bench people directly, so it felt nice that we already knew the authors and it would be easy to work with.

Swyx [00:42:00]: I'll share a little bit of: this all seems disconnected, but once you figure out the people and where they went to school, it all makes sense. It's all Princeton, SWE-bench and SWE-agent.

Erik [00:42:11]: It's a group out of Princeton.

Swyx [00:42:12]: Yeah, and we had Shunyu on the pod, and he came up with the ReAct paradigm, which is think, act, observe. That's all ReAct. So they're all friends.

Erik [00:42:22]: Yep, yeah, exactly. And if you actually read the traces of our submission, you can see "think, act, observe" in our logs; we didn't even change the printing code. So it's still doing function calls under the hood, and the model can do multiple function calls in a row without thinking in between if it wants to. But yeah, a lot of similarities and a lot of things we inherited from SWE-agent as a starting point for the framework.
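For readers who want the shape of that loop, here is a bare-bones think/act/observe agent written directly against the Anthropic Messages API, with no framework. The model alias, stop-reason check, and message shapes follow my reading of the public Python SDK and should be checked against current docs; max_turns and the error handling are assumptions.

```python
import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the env

client = anthropic.Anthropic()

def run_agent(task, tools, handlers, model="claude-3-5-sonnet-latest", max_turns=100):
    """Think/act/observe: the model 'thinks' in its text, 'acts' by requesting
    tool calls, and 'observes' the tool results we append back, looping until
    it stops asking for tools or we run out of turns."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        resp = client.messages.create(
            model=model, max_tokens=4096, tools=tools, messages=messages
        )
        messages.append({"role": "assistant", "content": resp.content})
        if resp.stop_reason != "tool_use":
            return resp  # done: no more actions requested
        # Execute every tool call in this turn and feed back the observations.
        messages.append({
            "role": "user",
            "content": [
                {
                    "type": "tool_result",
                    "tool_use_id": block.id,
                    "content": str(handlers[block.name](**block.input)),
                }
                for block in resp.content if block.type == "tool_use"
            ],
        })
    raise RuntimeError("agent exceeded max_turns")
```

Paired with the function_to_tool sketch above, handlers is just a dict mapping each tool function's name to the function itself.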
Alessio [00:42:47]: Any thoughts about other agent frameworks? There's a whole gamut, from very simple to very complex.

Swyx [00:42:53]: AutoGen, CrewAI, LangGraph.

Erik [00:42:56]: I haven't explored a lot of them in detail. I would say agent frameworks in general can certainly save you some boilerplate, but there's actually a downside to making agents too easy: you very quickly end up building a much more complex system than you need. Suddenly, instead of having one prompt, you have five agents talking to each other and doing a dialogue, because the framework made that ten lines of code, and you end up building something that's way too complex. So I would actually caution people to try to start without these frameworks if you can, because you'll be closer to the raw prompts and able to directly understand what's going on. A lot of these frameworks, by trying to make everything feel really magical, also end up hiding what the actual prompt and output of the model are, and that can make things much harder to debug. So certainly these things have a place, and they do really help at getting rid of boilerplate, but they come with the cost of obfuscating what's really happening and making it too easy to very quickly add a lot of complexity. So yeah, I would recommend people try it from scratch; it's not that bad.

Alessio [00:44:08]: Would you rather have a framework of tools? Do you almost see it as, hey, maybe it's easier to get tools that are already well curated, like the ones you build, if I had an easy way to get the best tool from you and you maintained the definition? Any thoughts on how you'd want to formalize tool sharing?

Erik [00:44:26]: Yeah, that's something we're certainly interested in exploring, and I think there is space for general tools that are very broadly applicable. But at the same time, most people building on these have much more specific things they're trying to do. General tools might be useful for hobbyists and demos, but the ultimate end applications are going to be bespoke. So we just want to make sure the model is great at any tool it uses. But it's certainly something we're exploring.

Alessio [00:44:52]: So everything bespoke, no frameworks, no anything.

Swyx [00:44:55]: Just for now.

Erik [00:44:56]: For now. I would say the best thing I've seen is people building up from good util functions and then using those as building blocks.

Alessio [00:45:05]: I have a utils folder with all these scripts. My framework is def call_anthropic, and then I just put in all the defaults.

Swyx [00:45:12]: Yeah, exactly. There's a startup hidden in every utils folder, you know? If you use it enough, it's a startup at some point. I'm kind of curious: was there a maximum number of turns it took? What was the longest run?

Erik [00:45:27]: I actually don't know; I should have looked this up. It had basically infinite turns until it ran into the 200k context. For some of the failed cases that eventually ran out of context, it was over 100 turns. I'm trying to remember the longest successful run, but I think some of those were definitely over 100 turns as well.

Swyx [00:45:48]: Which is not that much. It's a coffee break.

Erik [00:45:52]: Yeah. But certainly these things can be a lot of turns, and I think that's because some of these problems are really hard and will take many tries. Think about a task that takes a human four hours to do: think about how many different files you read, and how many times you edit a file, in four hours. That's a lot more than 100.

Alessio [00:46:10]: And how many times you open Twitter because you get distracted. But if you had a lot more compute, what's the return on the extra compute? If you had thousands of turns, how much better would it get?

Erik [00:46:23]: Yeah, this I don't know, and I think it's one of the open areas of research with agents in general: memory, and how you get something to do work beyond its context length when you're just purely appending. You mentioned earlier things like pruning bad paths; I think there's a lot of interesting work around that. Can you roll back but summarize, "Hey, don't go down this path; there be dragons"? I think it's very interesting that you could have something that uses way more tokens in total without ever holding more than 200k at a time. The biggest question is whether you can make the model losslessly summarize what it's learned from trying different approaches and bring that back. I think that's the big challenge.

Swyx [00:47:11]: What about different models?

Alessio [00:47:12]: You have Haiku, which is cheaper. So you think, what if I have Haiku do a lot of these smaller things and then pass it back up?

Erik [00:47:20]: I think Cursor might have said that they actually have a separate model for file editing. I'm trying to remember; I think they were on the Lex Fridman podcast, where they said they have a bigger model write what the code should be and then a different model apply it. So I think there's a lot of interesting room for stuff like that. Yeah, fast apply.

Swyx [00:47:37]: We actually did a pod with Fireworks, who they worked with on that. It's speculative decoding.

Erik [00:47:41]: But I think there are also really interesting things about paring down input tokens, especially when the model is trying to read, say, a 10,000-line file. That's a lot of tokens, and most of it is actually not going to be relevant. It would be really interesting to delegate that to Haiku: have Haiku read the file and pull out just the most relevant functions, and then Sonnet reads only those, and you save 90% on tokens. I think there's a lot of really interesting room for things like that. And again, we were just trying to do the simplest, most minimal thing and show that it works. I'm really hoping that the agent community builds things like that on top of our models. That's, again, why we released these tools. We're not going to keep making submissions to SWE-bench and prompt-engineering this into a bigger system; we want the ecosystem to do that on top of our models. But yeah, I think that's a really interesting one.
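A naive sketch of that delegate-to-Haiku idea: a small, cheap model reads the big file and returns only what matters, and the condensed view goes into the larger model's context. The model name, prompt, and token budget are assumptions, and a real version would need chunking for files that exceed the small model's own context window.

```python
import anthropic

client = anthropic.Anthropic()

def distill_file(path: str, goal: str, model: str = "claude-3-5-haiku-latest") -> str:
    """Have a cheap model extract only the parts of a large file that are
    relevant to the current task, so the big model never pays for the rest."""
    with open(path) as f:
        source = f.read()
    resp = client.messages.create(
        model=model,
        max_tokens=2048,
        messages=[{
            "role": "user",
            "content": (
                f"Task: {goal}\n\n"
                "From the file below, copy out verbatim only the imports, "
                "functions, and classes relevant to the task. Omit everything "
                "else.\n\n"
                f'<file path="{path}">\n{source}\n</file>'
            ),
        }],
    )
    # Hand this condensed view to the larger model instead of the raw file.
    return resp.content[0].text
```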
Swyx [00:48:32]: It turns out, I think you did do 3.5 Haiku with your tools and it scored a 40.6.

Erik [00:48:38]: Yes, so it did very well. It itself is actually very smart, which is great. We haven't done any experiments with the combination of the two models, but I think that's one of the exciting things: how well Haiku 3.5 did on SWE-bench shows that even our smallest, fastest model is very good at thinking agentically and working on hard problems. It's not just for writing simple text anymore.

Alessio [00:49:02]: And I know you're not going to talk about it, but Sonnet is not even supposed to be the best model, you know? Opus is still back at 3, where we left it in the intro. At some point, I'm sure the new Opus will come out, and if you had Opus plus these tools, that sounds very, very good.

Swyx [00:49:19]: There's a run with SWE-agent plus Opus, but that's the official SWE-bench guys doing it.

Erik [00:49:24]: That was the older, you know, 3.0.

Swyx [00:49:25]: You didn't do yours. Did you want to? I mean, you could just change the model name.

Erik [00:49:31]: I think we didn't submit it, but we included it in our model card. We included the score as a comparison. And Sonnet and Haiku, actually, the new ones, both outperformed the original Opus.

Swyx [00:49:44]: Yeah, I did see that. It's a little bit hard to find.

Erik [00:49:47]: It's not an exciting score, so we didn't feel the need to submit it to the benchmark.

Swyx [00:49:52]: We can cut over to computer use if we're okay with moving on to other topics, if there's anything else.

Erik [00:49:58]: I'm trying to think if there's anything else SWE-bench related.

Swyx [00:50:02]: It doesn't have to be specifically SWE-bench; just your thoughts on building agents, because you are one of the few people who have reached this leaderboard building a coding agent. This is the state of the art, and it's surprisingly not that hard to reach with some good principles. There's obviously a ton of low-hanging fruit that we covered. Your thoughts, if you were to build a coding agent startup: what next?

Erik [00:50:24]: I think the really interesting question for me, for all the startups out there, is this divergence between the benchmarks and what real customers will want. So I'm curious: maybe the next time you have a coding agent startup on the podcast, you should ask them what differences they're starting to see.

Swyx [00:50:40]: Tomorrow. Oh, perfect, perfect.

Erik [00:50:41]: I'm actually very curious what they'll say, because I also feel like it's slowed down a little bit; I don't see the startups submitting to SWE-bench that much anymore.

Swyx [00:50:52]: Because of the traces. So we had Cosine on; they had a 50-something on SWE-bench Full, which is the hardest one, and they were rejected because they didn't want to submit their traces. IP, you know? Actually, tomorrow we're talking to Bolt, which is a Claude customer. You guys actually published a case study with them. I assume you weren't involved with that, but they were very happy with Claude. One of the biggest launches of the year. We actually happened to b
Urdin Euskal Herri Irratia in Basque / France Bleu's Basque-language chronicles
duration: 00:53:05 - Mikel Markez, Eider, Kepa Junkera
The farmer James Rebanks recounts a season he spent on a remote Norwegian island learning the ancient trade of caring for wild eider ducks and gathering their down. In The Place of Tides he tells the story of Anna, a ‘duck woman’ who helped revive this centuries-old tradition. As he traces the rough pattern of her work and her relationship with the wild, Rebanks reassesses his own relationship with his Lake District farm, his family and home. Alice Robinson is a designer who has never shied away from the inescapable link between agriculture and luxury fashion. In Field, Fork, Fashion she looks back at the origins of her chosen material, leather, and traces the full life of Bullock 374 from farm to abattoir, tannery to cutting table. And in retelling this story she asks whether it's possible to create a more transparent, traceable and sustainable system. There are many crafts classified as ‘endangered’ in Britain, but one that has had a renaissance in the last 50 years is the ancient tradition of thatching. Protections put in place by Historic England in the 1970s not only kickstarted a thatching revival, but also helped save heritage crop varieties. Andrew Raffle from the National Thatching Association says the relationship with local farmers is vital for the tradition, and there are an estimated 600-900 thatchers working today. Producer: Katy Hickman
Hoy en "Boulevard" de Radio Euskadi ha estado Eider Mendoza Larrañaga, la diputada general de Gipuzkoa....
Eider Arruti and Ane Antxustegi Etxarte have just returned from the competition they take part in on the Atlantic around traditional boats. The two Basque representatives joined the competition, which promotes the values of traditional boatbuilding, at the organizers' invitation. ...
We never sleep! We are deep house. ASMR Music. This week is a sweet treat of luscious sonic grooves. With two THREE FROM… features, from The Atjazz Record Company and Oh So Coy Recordings, and double features from Piston Recordings and HOUPH. It's a big show full of deep shit! From…Shift K3Y | Bazza Ranks | Retrogroove | Medlar | Tony Lionni | Mo'Cream | Yuu Udagawa | A Fish Called Wanda & Alex Moiss | Miguel Palhares | Timofey | AnAmStyle | IMGADSDEN | Akio Imai | Andj C | Jason Hersco | DJ Christian B | Sam Tyler | N.W.N | Tony Deledda | LUISA | KVSA | Halo feat. Maiya | Magic Number | Playground (feat. Rona Ray) | Fouk feat. Debórah Bond | Peter Mac | Dave Anthony | Curol, BRUNN, TOSZ | Guri & Eider ft Life On Planets | Raz & Afla | Cee ElAssaad feat. QVLN | beatsbyhand featuring Kali Mija | Kennedy | Roy McLaren | Terrasoul | Lesny Deep | Mario Franca | ZaVen |
In an interview with Radio Euskadi's Crónica de Euskadi Fin de Semana, the Deputy General of Gipuzkoa, Eider Mendoza, expressed surprise at the words of Óscar Puente, Minister of Transport and Sustainable Mobility. ...
Labrador Morning from CBC Radio Nfld. and Labrador (Highlights)
The common eider is an important part of Inuit diets on the north coast. Now research is underway to understand threats to them in the future. Labrador Morning's Heidi Atter spoke to Michelle Saunders, the research manager for the Nunatsiavut Government.
Summary: How do tanuki hunt for food? Join Kiersten as she shares some surprising behaviors that tanuki use to catch prey. For my hearing-impaired listeners, a complete transcript of this podcast follows the show notes on Podbean. Show Notes: Nyctereutes procyonoides, Raccoon Dog. Animal Diversity Web. https://animaldiversity.org “Raccoon Dog (Nyctereutes procyonoides) in the Community of Medium-sized Carnivores in Europe: Its Adaptations, Impact on Native Fauna and Management of the Population,” by Kaarina Kauhala and Rafal Kowalczyk. https://researchgate.net Music written and performed by Katherine Camp Transcript (Piano music plays) Kiersten - This is Ten Things I Like About…a ten-minute, ten-episode podcast about unknown or misunderstood wildlife. (Piano music stops) Welcome to Ten Things I Like About… I'm Kiersten, your host, and this is a podcast about misunderstood or unknown creatures in nature. Some we'll find right outside our doors and some are continents away, but all are fascinating. This podcast will focus ten ten-minute episodes on different animals and their amazing characteristics. Please join me on this extraordinary journey; you won't regret it. We're more than halfway through tanuki, and the sixth thing I like about them is how they hunt and forage. Since tanuki are omnivores, they do a little of both. I know we have talked about their diet already, but we'll talk a bit more about how they find their food in this episode, and we will also talk about what's eating them. As you may remember from previous episodes, we don't know as much about tanuki behavior in the wild as we should, so this episode will be a bit shorter than average, but I will do my best to enlighten you on this episode's topic. We have already established that raccoon dogs are omnivores, which means they eat both protein and vegetation. Looking at the proteins that they eat, we can see a pattern. Raccoon dogs, regardless of where they are found, tend to eat similar proteins: insects, frogs, bird eggs, shrews, crabs, fish, small reptiles, carrion, and human refuse. Can you see the pattern? They are all small prey items. What does this tell us? Raccoon dogs rely on their own capabilities to catch food. They do not hunt in packs, like some other canines, which means that they are restricted to hunting small prey or eating carrion. From radio telemetry studies that have been done in the last few years, we know that some raccoon dogs remain together in pairs throughout the year, and we assume they hunt together. But this doesn't mean they are going after larger prey together. These animals are approximately the size of red foxes, so two won't be able to take down any larger prey than a single raccoon dog could. Tanuki that live near enough to water will eat fish, crabs, and other aquatic life. I haven't found many descriptive accounts, but it is known that they will dive under water to catch their prey. This truly surprised me, because no other canids do this to catch prey. I'd love to see some video! They have also been seen catching fish from the shore, using their paws to snag this slippery prey. This is a unique behavior in the canid family; few, if any, other canines hunt this way. Raccoon dogs will also climb trees in search of food, which explains the bird eggs and the passerines, or songbirds, that are found in their feces. In Europe raccoon dogs have been blamed for the downswing in the populations of certain game birds, but no evidence has been found that supports this hypothesis.
Eider eggs and meat have been found in the feces of Finnish raccoon dogs, but there is no evidence that they are hunting healthy eiders. It is postulated that they may have taken advantage of a disease that spread through this population of waterfowl. As of the recording of this episode, there is no established correlation between raccoon dog presence and the decline of bird populations in any habitat in which they are found. When resources are low, tanuki take advantage of human trash. We throw away a lot of stuff these critters can eat, and it is not beneath them to take an easy meal where they can get it. When it comes to vegetation, tanuki will eat berries, fruits, flowers, seeds, bulbs, and roots of various plants. They love a little human garden and have no problem taking a nibble when they can. They are small and usually forage at night, so they can easily get in and out of areas without being seen. Their coloration, brown fur and a black-masked face, helps them blend in like little thieves in the night. Now that we know how they are finding food, let's find out who hunts raccoon dogs. You're not going to believe this, listeners, but we don't know what kinds of antipredator behaviors raccoon dogs possess; we only know who eats them. I know, how can we know so much about this animal and also know so little? It really is amazing. Raccoon dogs must worry about a plethora of animals that might be interested in hunting them, including gray wolves, Eurasian lynx, wolverines, Japanese martens, golden eagles, sea eagles, Eurasian eagle owls, domestic dogs, and humans. Yep, that's right, humans eat these guys too. In Japan, tanuki are on the menu. That's all she wrote for this episode of tanuki. I'm glad you joined me for this one, because how tanuki find their food is my sixth favorite thing about them. If you're enjoying this podcast, please recommend me to friends and family and take a moment to give me a rating on whatever platform you're listening on. It will help me reach more listeners and give the animals I talk about an even better chance at change. Join me next week for another fascinating episode about tanuki. (Piano music plays) This has been an episode of Ten Things I Like About with Kiersten and Company. Original music written and performed by Katherine Camp, piano extraordinaire.
Welcome to my new Radio Show Deep Sunday #8. For this mix I went back to my roots of playing Afro & Organic House with some fresh Deep House tracks to kick it off. Special thanks to RUZE, Solo Music, Jenia Vice, Ventt and Guri & Eider for their overwhelming sounds & beats to cook those magnificent songs.
Welcome to a new episode of Hospitalidad Emprendedora. In today's episode we're joined by Eider Bueno, founder of Aloha Experience, a consulting agency specializing in creating memorable experiences for travelers. Eider shares her inspiring career in the tourism sector and explains how her professional mission centers on improving the user experience in both accommodations and travel agencies. We explore how her project came about and how, through Aloha Experience, she helps companies in the sector implement innovative, personalized strategies. We talk about the importance of active listening to customers, the balanced integration of technology into the tourism experience, and how small improvements can have a big impact on traveler satisfaction. Eider also shares practical examples of how she has helped companies optimize their services and increase their sales through a better understanding of their customers' needs. Don't miss this episode full of insights and practical tips for innovating in the tourism sector. Contact Eider: https://www.linkedin.com/in/eider-bueno/ https://www.eiderbueno.com/ Listen to the interview Eider did with Gian Franco on the Aloha Experience podcast: https://open.spotify.com/episode/0Tn8fGcbHkKPYuVZnlBZJn ------------------------------------------------------------------------------------------------------------ Want to be a Hospitality Punk? We're preparing something very innovative that will revolutionize the sector. Be among the first to find out and secure access to a leading community in tourism innovation: https://www.hospitalidademprendedora.xyz/hospitality-punks/ ------------------------------------------------------------------------------------------------------- Subscribe to our free weekly newsletter with the best in tourism innovation: https://www.hospitalidademprendedora.xyz/suscripcion-newsletter/ -------------------------------------------------------------------------------------------------------- Web: https://www.hospitalidademprendedora.xyz/ Discord: https://discord.gg/ePkHdBmW Instagram: https://bit.ly/2FoU9TG LinkedIn: https://bit.ly/2ZuwZC8 Twitter: https://bit.ly/3mleIAY Email: hola@cursoweb3turismo.com Fountain.fm (the podcast app that pays you to listen to us): https://www.fountain.fm/show/UO8m8gJpSPJxDULVQaoy Spotify: https://spoti.fi/2C5Xrcz Ivoox: https://bit.ly/3e6TIth iTunes: https://apple.co/3e5Z9bN YouTube: https://bit.ly/2N0Mifa Follow Albert: LinkedIn: https://www.linkedin.com/in/albertperezllanos/ Twitter: https://twitter.com/albertperezll Follow Gian Franco: Web: www.gianfrancomercado.com LinkedIn: https://www.linkedin.com/in/gian-franco-mercado-emprendimiento/ Instagram: https://www.instagram.com/gf_merc/ Share this broadcast and spread the #ActitudEmprendedora!
Beginning a week of programs from inside the arctic circle at European Capital of Culture 2024, Bodø, we meet the man who makes music from an implement traditionally used by Eider gatherers.
TRACKLIST : Saand - Hooloo Dawn Again - Wonky friendz Tomahawk Bang, The Baangbrothers & CEE - Persuade you (Coflo remix) Guri & Eider & Eider - Made on Wood Rayko & Elena Hikari - Nunca jamas Pupkulies & Rebecca & Tibau Tavares - Nha Badjam (Rampue remix) Trippin & Mara Tieles - Turia Konrad Dycke - Luminescence Rigopolar - Untitled (Childs remix) Konrad Dycke - Pulsarium Yassen - To break Sangeet - Easy Skwirl - Inside Moods - Music saved my life Rafael Aragon & Mandruvá - Pássaro cantor autômato Tempura & The Purple Boy - Big room, slow music Laaar - Drifting (Farn & Bawab remix) lvnd - Competent (AmuAmu remix) Goyanu - Novo samba (Atemporal remix) Funky Destination - So meme ye Joshua - Overnight DJ Buhle - Insync Jean Caillou - Shelter Nic Arizona - Malibu storm (Budino remix) Ykonosh, Mundai, Veerle & Don Jongle - Lost in Mozambique (Unders & Kondo remix) Ünam & Valameyali - Fly (Erhan Yılmaz remix) Nada & Carlos Pulsar - Cane (Nährwerk remix) Latteo & Nsiries - Humans (Antaares remix) Jonas Schilling - Embers Brigade - To cats we go Lucky Sun & Alison David - Rain and sunshine (Tim Haze remix) Marc Holstege - Wolves Frida Darko, Atric & KataHaifisch - Canvas Beatmörtelz & Acud - Verbrennungsmotor Rafael Aragon & Mandruvá - Mandragora (Ground remix) La-African Musique - Flowers on luna China Charmeleon & Mc'Pour - Prism Tom Ellis - For five Max Von - Rio Jakob Reiter - Osho Leo Baroso - Kagami Yør Kultura - Megane Avidus - Together (Santiago Garcia remix) Orda - Echoer Agja - Der alte mann Kon Faber - Odyssey Peacey, Cl_yde & Atjazz - Hold me back Thomas Von Party & Mera De La Rosa - Tu no te na
A number of eider ducks found themselves on the ground in the southeast with no way to get back up last week. Several of them are not recovering from injuries at the Atlantic Wildlife Institute. We'll check in with Pam Novak.
Welcome to my new Radio Show Deep Sunday #5. Spring has arrived in Germany and the sunshine is giving me a taste of what summer 2024 could look like. This set includes some puristic deep house tracks, but also contains a few warm, vocal Afro House songs towards the end. Special thanks to Guri & Eider for their production Upside Down, which perfectly closed this radio show. It's lovely!
We all want to show off a beautiful smile, and we give more and more importance to looking after our oral health. But why do so many children need orthodontic braces? How do you achieve well-aligned teeth? In today's episode, dentist Eider Unamuno, a specialist in preventive orthodontics, explains how to support our children's craniofacial development, and how breastfeeding, chewing and breathing have a significant impact on preventing the need for orthodontics. Enjoy the interview, and show off your smile! Enter the code COMOCURAR for a 10% discount on your first purchase: https://store.dracocomarch.com/es/inicio/475-835-happy-tummy.html#/191-cant-1_unidad https://store.dracocomarch.com/es/inicio/392-514-elixir-vita-minerales.html#/191-cant-1_unidad https://store.dracocomarch.com/es/inicio/472-825-silkface.html#/191-cant-1_unidad EPISODE GUIDE: 04:00 Why so many children need orthodontics 09:50 Bruxism in children and adults 15:00 Sports dentistry 31:00 Chewing real food 39:30 Amalgam vs. resin fillings 45:50 Snoring in children https://www.facebook.com/CocoMarchNMD https://www.instagram.com/cocomarch.nmd/ https://www.youtube.com/channel/UCyT1tdUjfnbA-4Cqrz8BwFg https://blog.dracocomarch.com https://store.dracocomarch.com/es/ https://podcast.comocurar.com/
On this episode of The Journey Within Podcast, Mark Peterson is joined by the founder of AVES Hunting, Rhett Strickland. Rhett not only shares the story behind the innovation of AVES waterfowl hunting gear, but also the ins and outs of his latest trip to Greenland to hunt the King Eider with the WTA Sweepstakes winner. The King Eider is the ultimate bucket-list bird to add to a fowler's trophy case, and the scenery while hunting them in Greenland is epic. Enjoy your journey! Partners and Promo Codes in this Episode Dominate the skies with a Benelli Shotgun - benelliusa.com Start a WTA Tags Portfolio or Book The Adventure of a Lifetime at - worldwidetrophyadventures.com Follow Me: Instagram: https://www.instagram.com/markvpeterson/ Facebook: https://www.facebook.com/MarkPeterson... TikTok: tiktok.com/@markvpeterson Web: http://markvpeterson.com/ This podcast is a part of the Waypoint TV Podcast Network. Waypoint is the ultimate outdoor network featuring streaming of full-length fishing and hunting television shows, short films and instructional content, a social media network, and a podcast network. Waypoint is available on Roku, Samsung Smart TV, Amazon Fire TV, Apple TV, Chromecast, Android TV, iOS devices, Android devices and at www.waypointtv.com, all for FREE! Join the Waypoint Army by following them on Instagram at the following accounts @waypointtv @waypointfish @waypointhunt @waypointpodcasts Learn more about your ad choices. Visit megaphone.fm/adchoices
Thank this podcast for so many hours of entertainment and enjoy exclusive episodes like this one. Support it on iVoox! Early access for fans - We interview the Nonconformist Dentist, Eider Unamuno, about her book "La boca no se equivoca": https://www.rbalibros.com/rba-no-ficcion/la-boca-no-se-equivoca_7352 We ask her about periodontal disease and its consequences, and in the second part of the interview we focus on sports dentistry: how does oral health influence physical performance and even injuries? You can learn more about Eider at https://eiderunamuno.com/ Episode notes at https://slowmedicineinstitute.com/podcast/ Listen to this full episode and access all the exclusive content of Slow Medicine Revolution. Discover new episodes before anyone else, and take part in the exclusive listener community at https://go.ivoox.com/sq/1110678
He talks about the challenges of camp management on film shoots. Evan also highlights his time on the Summer Arctic shoot and the valuable lessons he learned from the team. In this part of the conversation, Evan discusses his experience joining an amazing crew of filmmakers and photographers. He shares how he learned the art of filming and the importance of storytelling. Evan also talks about encountering the elusive Steller's Eider and the excitement of capturing rare shots. He reflects on his journey into photography and the realization that all cameras are essentially the same. Evan shares his love for the Arctic and the significance of the region for wildlife. He recounts being left behind in the Arctic and making the most of the experience. Evan also discusses his passion for flying and the valuable lessons he has learned from experienced pilots. He highlights the challenges and rewards of living and working on a boat in remote locations. Finally, Evan talks about the unique lighting in Alaska and its impact on photography and filming. In this conversation, Evan shares his experiences filming in the Arctic and the challenges they faced, including a tsunami warning. He discusses the preparations they made and the lessons they learned from the experience. Evan also talks about his future plans and offers advice for young people interested in pursuing similar careers. Additionally, he highlights the importance of stabilized binoculars for wildlife observation.
Inertia's on Data Transmission Radio. At Inertia you will find the best in house, deep house and afro, presented by our main man Shiach. TRACKLIST: After Saturday Night (Monkey Safari Remix) - Starving Yet Full, Sparrow & Barbossa, Francis Coletta Parce E Amore (Original Mix) - CIOZ You Are (Patrice Baumel Mix) - Kollmorgen More Love (Rampa & ME Remix) - Moderat, Keinemusik Be Your Girl (Arabic Piano & Dr Feel Mix) - Nomi Ruiz Forever Young (Sebas Ramis Remix) - Guri, Eider, Round Shaped Triangles Mandaro (Original Mix) - Kali Mija, Don Jongle Home (Original Mix) - Rhodes, Camelphat Music Is The Answer feat. SLO (Hot Since 82 Remix) - Slo, Joe Goddard Everyone Else (Original Mix) Miracle (RÜFÜS DU SOL Remix) - WhoMadeWho, Adriatique, RÜFÜS DU SOL
We reach Pam Novak at the Atlantic Wildlife Institute to hear about an eider duck paddling about in her bathtub.
A new 808 on Radio Castilla-La Mancha, discovering the latest creations from Tycho, Logic1000 and Parquesvur, among many others. The Generador de Ideas segment gets under way with Alberto Sola and Pablo Ferrer to look at a new book, essential for understanding what we are today, "Génesis: escena clubber posbacalao en la Comunitat Valenciana (1996-2010)", and Guri & Eider are on the line telling us all the details of their new record, their debut album: "Modern Mantra". La Lista I: Shishi - Happy Birthday (Skelesys Remix) [Good Skills] Bellamy - I Need You [Aparell Music] Full Bloom - Speed FM [Ilian Tape] Dying & Barakat - The Storytellers [Mitsubasa] Bawrut - Clapa [Speicher] Al Habla: Guri & Eider. La Lista II: Guri & Eider – Make It Rain [Sub Urban] George Riley - Elixir [Ninja Tune] Zoo Brazil - Not My Love (Dub Mix) [John Henry Records] Gunjack - The Hatman (Not A Headliner Remix) [GOC] The NM Band - She Wants [Isle Of Jura] Low Contrast - Early 90 (Extended Mix) [Embliss] Thomas Roussel - No Artificial Light [Ed Banger] La Lista III: Tycho - Small Sanctuary [Ninja Tune] Ron Elliot - Photon (Extended Mix) [17 Steps] Roy Davis Jr. - About Love (Jaden Thompson Extended Remix) [Classic] Logic1000 & Kayla Blackmon - Self To Blame [Because Music] Lars Huismann - See You Soon [Dolly] Maison Blanche - Dexter On The Dancefloor [Pont Neuf] Generador de Ideas: "Génesis: escena clubber posbacalao en la Comunitat Valenciana (1996-2010)" with Alberto Sola and Pablo Ferrer. La Lista IV: Parquesvr - Todos menos tú (feat. I-Ice) [Raso] Parris & untold - Lip Locked [Oro] Yan Cook - Late Night Kyiv [Delsin Records] Kuba Sojka - Groove Form [Kantastra Records] Judy - Zelai Errea [Inguma Records] Whitesquare - N-ergia [Permanent Vacation] Midland - You Never Take Me Dancing [Graded]
Host Chris Jennings is joined by Jay Anglin and Jeremy Ullmann, owner and operator of Mi Guide Service, which offers big-water diving duck hunts in Michigan. Ullmann explains what a day hunting with his outfit looks like and how hunters should prepare to hunt divers. Anglin offers his experience growing up hunting divers on Lake Michigan and surrounding lakes, including shotshell recommendations and shooting techniques. Often overlooked by hunters, diving duck hunting can offer spectacular decoying shots, and the abundance of divers on Lake St. Clair, Lake Michigan, and Lake Huron makes this Upper Midwest hunting style as unique as it is exciting. www.ducks.org/DUPodcast
Eighty-two-year-young Phil Stanton cut his teenage duck hunting teeth among the salty, yesteryear watermen who braved Massachusetts's rocky coast for sea ducks, quandy and coots, as they were colloquially called. Eider ducks have predominated his entire life since. His incredibly colorful then-versus-now stories about eiders, hunters, techniques and gear; about oil-spill rehabilitation attempts; and about pioneering new eider breeding colonies leave little doubt that he is, indeed, the Eider Godfather. Related Links: Upton's Phil Stanton is our eider godfather. https://www.telegram.com/story/sports/2021/01/04/outdoors-uptons-phil-stanton-our-eider-godfather/4115940001/ Podcast Sponsors: BOSS Shotshells https://bossshotshells.com/ Benelli Shotguns https://www.benelliusa.com/shotguns/waterfowl-shotguns Tetra Hearing https://tetrahearing.com/ Ducks Unlimited https://www.ducks.org Mojo Outdoors https://www.mojooutdoors.com/p Tom Beckbe https://tombeckbe.com/ Flash Back Decoys https://www.duckcreekdecoys.com/ Voormi https://voormi.com/ GetDucks.com USHuntList.com It really is duck season somewhere for 365 days per year. Follow Ramsey Russell's worldwide duck hunting adventures as he chases real duck hunting experiences year-round: Instagram @ramseyrussellgetducks YouTube @GetDucks Facebook @GetDucks.com Please subscribe, rate and review the Duck Season Somewhere podcast. Share your favorite episodes with friends! Business inquiries and comments contact Ramsey Russell ramsey@getducks.com
We chat with Eider Unamuno, a dentist specializing in sports dentistry, about the enormous and little-known importance of oral health for athletic performance and overall health. Oral health goes far beyond avoiding cavities: it affects how we breathe, our oral and digestive microbiota, and therefore our capacity to produce energy and to recover after efforts and injuries. If you're an average cyclist, you're doing very badly on the oral health front, and Eider gives us a good series of tips to improve. Find Eider at https://eiderunamuno.com/ and on social media @eider_unamuno. I hope you like it, and if you do, you'd help me a lot by sharing this episode with friends and on social media. ______________________________________________________________________ 📚📌 Book La Naturaleza del Entrenamiento https://amzn.to/3zQQmbi 💎🔵 Telegram channel: https://t.me/ciclismoevolutivo 💻✅ Courses to learn more: https://ciclismoevolutivo.com 👁🗨🦉 Everything else: https://linktr.ee/solaarjona
“Icebergs break off the Breiðamerkurjökull glacier and drift through the water into the Atlantic Ocean, some of them glowing a surreal, bright blue – others vast and blackened like old ships. […]
JOIN OUR TOTE FANTASY LEAGUE HERE: http://bit.ly/3JaPrHe COMPLETE THE TOTE FANTASY SURVEY FOR A CHANCE TO WIN £50 TOTE CREDIT: http://bit.ly/3zmpdff Bruce Millington is joined by Robbie Wilders, James Stevens and Tote Fantasy's Jamie Benson to preview the weekend action in Britain and Ireland. The panel begin by looking at Saturday's card at Sandown, where the jumps season finale takes place. Kitty's Light is bidding to complete a hat-trick after recent wins in the Eider and the Scottish Grand National for the Christian Williams team. The Postcast team then turn their attentions to the rest of the terrestrial TV action from Haydock and Leicester before delving into the final day of the Punchestown festival. The panel finish by giving their best bets and selecting their Tote Fantasy stable. Brought to you by Tote Fantasy, daily fantasy racing with real cash prizes up for grabs. 18+ BeGambleAware - Key Terms: £/€7 entry per stable. Top 25% of entries receive a payout. Full T&Cs apply. 18+.
George Elek is joined by Andy Holding to look ahead to this weekend's Scottish Grand National at Ayr. It's an eight-race card on Saturday, with the highlight being the Grand National at 3.35. Last year's runner-up Kitty's Light is currently the 9/2 favourite with our sponsors Unibet. He'll be looking to go one better and warmed up nicely with a success in the Eider back in February. Monbeg Genius, Your Own Story and Dusart are also towards the top of the market after 23 were declared on Thursday morning. It's Unibet ambassadors Nico de Boinville and Nicky Henderson who run top weight and course winner Dusart in the race. Their blogs previewing all of their Saturday runners will be live on Unibet on Friday; they'll be well worth reading after Nicky gave 28/1 winner Caribean Boy a great write-up in Wednesday's blog ahead of racing at Cheltenham. Unibet will also be offering extra places in the Scottish National – six places at 1/5th odds, T&Cs apply – as well as extra places on the 1.15pm, 1.50pm and 4.25pm. As well as the action at Ayr, there'll be Money Back as cash on the 2.05 Newbury – the Dubai Duty Free Stakes – if your horse finishes second. Unibet currently have a sign-up offer for new customers: deposit £10 and get Money Back up to £40 if your first racing bet loses, plus a £10 Casino bonus. 18+. BeGambleAware.org. New customers only. T&Cs apply: https://welcome.unibet.co.uk/uk/sportsbook/horses/racing/aff/2023/index.html?mktid=1:81735833:92680458-30090&btag=81735833_F1D30D841B634CA7819DD24EC2BB8405&bid=30090&campaignId=1799188&pid=92680458 Hosted on Acast. See acast.com/privacy for more information.
This is Eider Rodríguez's first novel, a work of autofiction in which she addresses her father's alcoholism and which ends up becoming a reflection on grief.
In association with Betdaq and All About Sunday: Emmet Kennedy is joined by top jockey Denis O'Regan and Barry Caul to discuss the key horses from the last few days' racing with a view to Cheltenham and Aintree. Nusret continued the Irish domination of juvenile hurdles, but will he be as effective at the Festival? Denis breaks down Kitty's Light's jumping technique and the brilliant ride from Jack Tudor to win the Eider. There was more Welsh success with Our Power taking the feature at Kempton, but what are its prospects like in the Ultima? Corbetts Cross looks a very exciting prospect for Emmet Mullins, and Denis shares some strong views on the horse. Brandy Love was disappointing on her comeback, but Denis gives you reasons to keep the faith for the Mares' Hurdle. Plus we review Kemboy, Solo, Rubaud and Brewin'upastorm, and Denis talks about his Grade 3 win on Rebel Gold. There are also ante-post picks for the Grand National and the Boodles! Pay no commission for your first 100 days at BETDAQ. Join BETDAQ.com, the sports betting exchange, today. New customers only, terms and conditions apply. https://www.betdaq.com AllAboutSunday is fulfilling the dreams of racing fans with affordable racehorse ownership. The AllAboutSunday Ownership Experience is unparalleled as we deliver the ultimate ownership experience through our exclusive owners app. Owners are brought closer to ownership like never before https://www.allaboutsunday.com Show Your Support for The FFP with Likes & Shares on Twitter, Instagram or Facebook
This week we are joined by Alex and Ryan Fagan, the winners of this year's Ducks on the Bay experience. They make the twenty-plus-hour trek across the country from Iowa to join us on the crazy Atlantic Ocean in the northeast to chase eiders, scoters, longtails and divers. Not only do we talk about the great times chasing ducks, but we also dig into the lives of Alex and Ryan and how they go out of their comfort zone to really enjoy the great outdoors. Buckle your seat belt and take the ride across the country with Ryan and Alex. https://ducksonthebay.com/ https://www.theoutdoordrive.com/ Sponsors: Huntworth Gear: https://huntworthgear.com/ Nor'easter Game Calls: https://nor-eastergamecalls.com/ Zeus Broadheads: https://neweraarchery.com/
The North American Waterfowl Tour winds its way through Vermont and Maine, where Ramsey finally scratches off duck hunting in these two new-to-him states. Thanks to host Steve Caron and friends, it was a memorable visit despite warm Indian summer weather. With their belts still stretching after a steamed lobsta' and mussels dinner, Ramsey and Caron talk about waterfowl hunting in Maine and Vermont, recounting memorable highlights where waterfowl were only a gateway to best experiencing this corner of the US. Podcast Sponsors: BOSS Shotshells https://bossshotshells.com/ Benelli Shotguns https://www.benelliusa.com/shotguns/waterfowl-shotguns Tetra Hearing https://tetrahearing.com/ Ducks Unlimited https://www.ducks.org Mojo Outdoors https://www.mojooutdoors.com/p Tom Beckbe https://tombeckbe.com/ Flash Back Decoys https://www.duckcreekdecoys.com/ Voormi https://voormi.com/ GetDucks.com USHuntList.com It really is duck season somewhere for 365 days per year. Follow Ramsey Russell's worldwide duck hunting adventures as he chases real duck hunting experiences year-round: Instagram @ramseyrussellgetducks YouTube @GetDucks Facebook @GetDucks.com Please subscribe, rate and review the Duck Season Somewhere podcast. Share your favorite episodes with friends! Business inquiries and comments contact Ramsey Russell ramsey@getducks.com