Podcasts about Malmo

City in Scania, Sweden

  • 453 podcasts
  • 939 episodes
  • 47m avg duration
  • 1 episode every other week
  • Mar 14, 2025 latest

Latest podcast episodes about Malmo

New Worlder
Episode #105: Lotta & Per-Anders Jörgensen

New Worlder

Play Episode Listen Later Mar 14, 2025 70:18


Based in Malmo, Sweden, Lotta & Per-Anders Jörgensen are the founders of the legendary food magazine Fool. Lotta is an art director and Per-Anders, or P.A. as I have come to call him, is a photographer. The magazine launched in 2012 and has put out, thus far, 8 issues, very sporadically. It has been a few years since the last issue, but as they reveal in the episode, there will be a #9. Aside from its unpredictable publishing schedule, Fool is a rare kind of magazine. In a world where everything moves so fast, where writing about food is mostly oriented towards minuscule bits of information on social media that keep coming at a rapid pace, one after the other, Fool is slow. It's thoughtful. It's reflective. Its stories are about interesting humans who work in food and their ideas, regardless of how well known they are. It's creative, with beautiful illustrations and photography, and stories that have always gone a little bit deeper than anywhere else. I had the pleasure of writing a few feature stories there, and there was never any indication of what the word count should be. Just make it as long as you think it should be, they would say. That kind of collaboration is a dream for a writer or contributor of any sort. When you pick up an issue, you can read it like a book. A decade later, the stories remain relevant. Lotta and P.A. also create books, such as the Burnt Ends book, which we talked about with that restaurant's chef, Dave Pynt, in the previous episode. They've also worked with Andoni Luis Aduriz of Mugaritz, and quite a few other truly iconic chefs. There is also a documentary series they have created that they will launch soon, or at least soonish, or when it feels right. Anyway, their work has always been a big inspiration for me, so it was a pleasure to have them on. READ MORE AT NEW WORLDER.

AnthroDish
146: Flavour's Role in Food System Fixes with Franco Fubini

AnthroDish

Play Episode Listen Later Mar 12, 2025 32:12


The idea of industrial food systems is flat, heavy, and feels complex to access. It brings up connotations of very bland, hyper-processed foods made to reach a large number of people at a low cost. There are important consequences to these food system choices, though some are louder than others. My guest today, Franco Fubini, tackles an often under-appreciated one: the flavour of our ingredients. Franco Fubini is the founder and CEO of Natoora, and takes a unique approach to seasonality and sourcing for chefs and consumers across London, Paris, Milan, Copenhagen, Malmo, New York, LA, Miami, and Melbourne. He is also a professor of Sustainability Management at Columbia University in NYC. Franco is driven by the belief that by engaging people with the real flavour of fruits and vegetables, we can collectively transform how food is farmed and supplied, provided we focus on a supply chain rooted in flavour, transparency, and direct relationships. He is also the author of In Search of the Perfect Peach: Why Flavour Holds the Answer to Fixing Our Food System. In today's episode, we look at the role that flavour plays in our food systems, and how flavour's decline has been connected to wartime economies and contemporary agricultural systems. Franco speaks to the work he's doing through Natoora, and how both old and new strategies are needed to model more sustainable, resilient, and locally-grounded food systems for the future. Learn more about Franco: In Search of the Perfect Peach | Natoora website | Instagram: @natoora and @francofubini

Montel Weekly
Hybrid energy war in the Baltic

Montel Weekly

Play Episode Listen Later Jan 17, 2025 32:20


The recent damage to the Estlink 2 electricity cable running between Finland and Estonia has highlighted the precarious nature of power supply amid increased fears of a hybrid war with Russia. Finnish police are investigating whether a ship with links to Russia was involved in sabotage, with the cable – set to be offline for months for repairs – thought to have been damaged by the vessel's trailing anchor. In response, Nato has enhanced its military presence in the Baltic Sea, while Estonia has sent a patrol ship to protect the remaining Estlink 1 power cable. This week's pod discusses the spate of cuts to data and power links that could point to such nefarious activities, the market impact, and the volatile political situation in northern Europe, which evokes memories of the Cold War. Host: Richard Sverrisson, Editor-in-Chief, Montel. Guests: Prof Paula Kivimaa, Research Professor at the Finnish Environment Institute in Helsinki; Fredrik Bodecker, Founding Partner and CEO of Bodecker Partners in Malmo; Olav Vilnes, Nordics Editor, Montel News.

Emil Amos' Drifter's Sympathy
MOVING TO PORTLAND II

Emil Amos' Drifter's Sympathy

Play Episode Listen Later Jan 15, 2025 60:45


This cast ties together the 90's era of our story to the current ERA.💫 (if you want some XTRA background you can re-listen to the episodes 'Moving to LA' & 'Moving to Portland' as those two set this one UP.) The opening catches listeners up to Portland, Oregon in 1999 and the emotional STATE of things... but this cast could ultimately be called "The beginning of GRAILS". The bottom line is that one often has to throw everything out the window to open up a new horizon of possibilities. And that horizon, by definition, can often begin with a pathetic and destitute situation by LAW. But from that desert floor,, grow the WEEDS that become the terrain in every hero's journey rite?  This cast features a couple very special guests... Alex Hall (Grails co-founder, who's living out in Malmo, Sweden) weighs in on the frame by frame situation from his perspective. And then Emil & Alex realize that they didn't so much 'start' Grails but that their mutual friend Brad Adkins (who introduced them) forced them to start the band against their WILL. So not only do you often begin the dream out in the desert with no food or water,, but sometimes there's a GUN to yr back too.🔥🚣‍♂️🔥 Learn more about your ad choices. Visit megaphone.fm/adchoices

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break, bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all, all our LS supporters who helped fund the gorgeous venue and A/V production!

For NeurIPS last year we did our standard conference podcast coverage, interviewing selected papers (as we have now also done for ICLR and ICML); however, we felt we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap the year 2024 with experts. As a result, we organized the first Latent Space LIVE!, our first in-person miniconference, at NeurIPS 2024 in Vancouver.

The single most requested domain was computer vision, and we could think of no one better to help us recap 2024 than our friends at Roboflow, who were among our earliest guests in 2023 and had one of this year's top episodes again in 2024. Roboflow has since raised a $40m Series B!

Links

Their slides are here:

All the trends and papers they picked:

* Isaac Robinson
  * Sora (see our Video Diffusion pod) - extending diffusion from images to video
  * SAM 2: Segment Anything in Images and Videos (see our SAM2 pod) - extending prompted masks to full video object segmentation
  * DETR Dominance: DETRs show Pareto improvement over YOLOs
    * RT-DETR: DETRs Beat YOLOs on Real-time Object Detection
    * LW-DETR: A Transformer Replacement to YOLO for Real-Time Detection
    * D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement
* Peter Robicheaux
  * MMVP (Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs)
  * Florence-2 (Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks)
  * PaliGemma / PaliGemma 2
    * PaliGemma: A versatile 3B VLM for transfer
    * PaliGemma 2: A Family of Versatile VLMs for Transfer
  * AIMv2 (Multimodal Autoregressive Pre-training of Large Vision Encoders)
* Vik Korrapati - Moondream

Full Talk on YouTube

Want more content like this? Like and subscribe to stay updated on our latest talks, interviews, and podcasts.

Transcript/Timestamps

[00:00:00] Intro

[00:00:05] AI Charlie: Welcome to Latent Space Live, our first mini conference held at NeurIPS 2024 in Vancouver. This is Charlie, your AI co-host. When we were thinking of ways to add value to our academic conference coverage, we realized that there was a lack of good talks just recapping the best of 2024, going domain by domain.

[00:00:36] AI Charlie: We sent out a survey to the over 900 of you who told us what you wanted, and then invited the best speakers in the Latent Space Network to cover each field. 200 of you joined us in person throughout the day, with over 2,200 watching live online. Our second featured keynote is The Best of Vision 2024, with Peter Robicheaux and Isaac [00:01:00] Robinson of Roboflow, with a special appearance from Vik Korrapati of Moondream.

[00:01:05] AI Charlie: When we did a poll of our attendees, the highest interest domain of the year was vision. And so our first port of call was our friends at Roboflow. Joseph Nelson helped us kickstart our vision coverage in episode 7 last year, and this year came back as a guest host with Nikhila Ravi of Meta to cover Segment Anything 2.

[00:01:25] AI Charlie: Roboflow have consistently been the leaders in open source vision models and tooling, with their Supervision library recently eclipsing PyTorch's Vision library. 
And Roboflow Universe hosting hundreds of thousands of open source vision datasets and models. They have since announced a 40 million Series B led by Google Ventures.[00:01:46] AI Charlie: Woohoo.[00:01:48] Isaac's picks[00:01:48] Isaac Robinson: Hi, we're Isaac and Peter from Roboflow, and we're going to talk about the best papers of 2024 in computer vision. So, for us, we defined best as what made [00:02:00] the biggest shifts in the space. And to determine that, we looked at what are some major trends that happened and what papers most contributed to those trends.[00:02:09] Isaac Robinson: So I'm going to talk about a couple trends, Peter's going to talk about a trend, And then we're going to hand it off to Moondream. So, the trends that I'm interested in talking about are These are a major transition from models that run on per image basis to models that run using the same basic ideas on video.[00:02:28] Isaac Robinson: And then also how debtors are starting to take over the real time object detection scene from the YOLOs, which have been dominant for years.[00:02:37] Sora, OpenSora and Video Vision vs Generation[00:02:37] Isaac Robinson: So as a highlight we're going to talk about Sora, which from my perspective is the biggest paper of 2024, even though it came out in February. Is the what?[00:02:48] Isaac Robinson: Yeah. Yeah. So just it's a, SORA is just a a post. So I'm going to fill it in with details from replication efforts, including open SORA and related work, such as a stable [00:03:00] diffusion video. And then we're also going to talk about SAM2, which applies the SAM strategy to video. And then how debtors, These are the improvements in 2024 to debtors that are making them a Pareto improvement to YOLO based models.[00:03:15] Isaac Robinson: So to start this off, we're going to talk about the state of the art of video generation at the end of 2023, MagVIT MagVIT is a discrete token, video tokenizer akin to VQ, GAN, but applied to video sequences. And it actually outperforms state of the art handcrafted video compression frameworks.[00:03:38] Isaac Robinson: In terms of the bit rate versus human preference for quality and videos generated by autoregressing on these discrete tokens generate some pretty nice stuff, but up to like five seconds length and, you know, not super detailed. And then suddenly a few months later we have this, which when I saw it, it was totally mind blowing to me.[00:03:59] Isaac Robinson: 1080p, [00:04:00] a whole minute long. We've got light reflecting in puddles. That's reflective. Reminds me of those RTX demonstrations for next generation video games, such as Cyberpunk, but with better graphics. You can see some issues in the background if you look closely, but they're kind of, as with a lot of these models, the issues tend to be things that people aren't going to pay attention to unless they're looking for.[00:04:24] Isaac Robinson: In the same way that like six fingers on a hand. You're not going to notice is a giveaway unless you're looking for it. So yeah, as we said, SORA does not have a paper. So we're going to be filling it in with context from the rest of the computer vision scene attempting to replicate these efforts. So the first step, you have an LLM caption, a huge amount of videos.[00:04:48] Isaac Robinson: This, this is a trick that they introduced in Dolly 3, where they train a image captioning model to just generate very high quality captions for a huge corpus and then train a diffusion model [00:05:00] on that. 
Their Sora and their application efforts also show a bunch of other steps that are necessary for good video generation.[00:05:09] Isaac Robinson: Including filtering by aesthetic score and filtering by making sure the videos have enough motion. So they're not just like kind of the generators not learning to just generate static frames. So. Then we encode our video into a series of space time latents. Once again, SORA, very sparse in details.[00:05:29] Isaac Robinson: So the replication related works, OpenSORA actually uses a MAG VIT V2 itself to do this, but swapping out the discretization step with a classic VAE autoencoder framework. They show that there's a lot of benefit from getting the temporal compression, which makes a lot of sense as the Each sequential frames and videos have mostly redundant information.[00:05:53] Isaac Robinson: So by compressing against, compressing in the temporal space, you allow the latent to hold [00:06:00] a lot more semantic information while avoiding that duplicate. So, we've got our spacetime latents. Possibly via, there's some 3D VAE, presumably a MAG VATV2 and then you throw it into a diffusion transformer.[00:06:19] Isaac Robinson: So I think it's personally interesting to note that OpenSORA is using a MAG VATV2, which originally used an autoregressive transformer decoder to model the latent space, but is now using a diffusion diffusion transformer. So it's still a transformer happening. Just the question is like, is it?[00:06:37] Isaac Robinson: Parameterizing the stochastic differential equation is, or parameterizing a conditional distribution via autoregression. It's also it's also worth noting that most diffusion models today, the, the very high performance ones are switching away from the classic, like DDPM denoising diffusion probability modeling framework to rectified flows.[00:06:57] Isaac Robinson: Rectified flows have a very interesting property that as [00:07:00] they converge, they actually get closer to being able to be sampled with a single step. Which means that in practice, you can actually generate high quality samples much faster. Major problem of DDPM and related models for the past four years is just that they require many, many steps to generate high quality samples.[00:07:22] Isaac Robinson: So, and naturally, the third step is throwing lots of compute at the problem. So I didn't, I never figured out how to manage to get this video to loop, but we see very little compute, medium compute, lots of compute. This is so interesting because the the original diffusion transformer paper from Facebook actually showed that, in fact, the specific hyperparameters of the transformer didn't really matter that much.[00:07:48] Isaac Robinson: What mattered was that you were just increasing the amount of compute that the model had. So, I love how in the, once again, little blog posts, they don't even talk about [00:08:00] like the specific hyperparameters. They say, we're using a diffusion transformer, and we're just throwing more compute at it, and this is what happens.[00:08:08] Isaac Robinson: OpenSora shows similar results. The primary issue I think here is that no one else has 32x compute budget. So we end up with these we end up in the middle of the domain and most of the related work, which is still super, super cool. It's just a little disappointing considering the context. 
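A minimal sketch of the rectified-flow idea Isaac contrasts with DDPM-style sampling above: a velocity field is regressed along straight noise-to-data paths, and sampling is plain Euler integration whose step count can shrink as the flow straightens. The toy MLP, toy data, and step counts are illustrative assumptions, not anything from Sora or Open-Sora.

```python
# Minimal rectified-flow sketch (illustrative, not Sora/Open-Sora code).
# A velocity field v(x_t, t) is fit along straight paths x_t = (1-t)*x0 + t*x1;
# straighter learned flows can be sampled with very few Euler steps.
import torch
import torch.nn as nn

dim = 2
velocity = nn.Sequential(nn.Linear(dim + 1, 64), nn.SiLU(), nn.Linear(64, dim))
opt = torch.optim.Adam(velocity.parameters(), lr=1e-3)

def rf_loss(x1):
    """One rectified-flow regression step: predict x1 - x0 on the linear path."""
    x0 = torch.randn_like(x1)                    # noise endpoint
    t = torch.rand(x1.shape[0], 1)               # random time in [0, 1]
    xt = (1 - t) * x0 + t * x1                   # point on the straight path
    target = x1 - x0                             # constant velocity of that path
    pred = velocity(torch.cat([xt, t], dim=-1))
    return ((pred - target) ** 2).mean()

x1 = torch.randn(256, dim) + 3.0                 # toy "data" standing in for latents
opt.zero_grad()
rf_loss(x1).backward()
opt.step()

@torch.no_grad()
def sample(n, steps):
    """Euler integration from noise toward data; fewer steps as the flow straightens."""
    x = torch.randn(n, dim)
    for i in range(steps):
        t = torch.full((n, 1), i / steps)
        x = x + velocity(torch.cat([x, t], dim=-1)) / steps
    return x

few_step = sample(8, steps=4)                    # near-one-step sampling is the appeal
```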
So I think this is a beautiful extension of the framework that was introduced in 22 and 23 for these very high quality per image generation and then extending that to videos.[00:08:39] Isaac Robinson: It's awesome. And it's GA as of Monday, except no one can seem to get access to it because they keep shutting down the login.[00:08:46] SAM and SAM2[00:08:46] Isaac Robinson: The next, so next paper I wanted to talk about is SAM. So we at Roboflow allow users to label data and train models on that data. Sam, for us, has saved our users 75 years of [00:09:00] labeling time.[00:09:00] Isaac Robinson: We are the, to the best of my knowledge, the largest SAM API that exists. We also, SAM also allows us to have our users train just pure bounding box regression models and use those to generate high quality masks which has the great side effect of requiring less training data to have a meaningful convergence.[00:09:20] Isaac Robinson: So most people are data limited in the real world. So anything that requires less data to get to a useful thing is that super useful. Most of our users actually run their object per frame object detectors on every frame in a video, or maybe not most, but many, many. And so Sam follows into this category of taking, Sam 2 falls into this category of taking something that really really works and applying it to a video which has the wonderful benefit of being plug and play with most of our Many of our users use cases.[00:09:53] Isaac Robinson: We're, we're still building out a sufficiently mature pipeline to take advantage of that, but it's, it's in the works. [00:10:00] So here we've got a great example. We can click on cells and then follow them. You even notice the cell goes away and comes back and we can still keep track of it which is very challenging for existing object trackers.[00:10:14] Isaac Robinson: High level overview of how SAM2 works. We there's a simple pipeline here where we can give, provide some type of prompt and it fills out the rest of the likely masks for that object throughout the rest of the video. So here we're giving a bounding box in the first frame, a set of positive negative points, or even just a simple mask.[00:10:36] Isaac Robinson: I'm going to assume people are somewhat familiar with SAM. So I'm going to just give a high level overview of how SAM works. You have an image encoder that runs on every frame. SAM two can be used on a single image, in which case the only difference between SAM two and SAM is that image encoder, which Sam used a standard VIT [00:11:00] Sam two replaced that with a hara hierarchical encoder, which gets approximately the same results, but leads to a six times faster inference, which is.[00:11:11] Isaac Robinson: Excellent, especially considering how in a trend of 23 was replacing the VAT with more efficient backbones. In the case where you're doing video segmentation, the difference is that you actually create a memory bank and you cross attend the features from the image encoder based on the memory bank.[00:11:31] Isaac Robinson: So the feature set that is created is essentially well, I'll go more into it in a couple of slides, but we take the features from the past couple frames, plus a set of object pointers and the set of prompts and use that to generate our new masks. Then we then fuse the new masks for this frame with the.[00:11:57] Isaac Robinson: Image features and add that to the memory bank. [00:12:00] It's, well, I'll say more in a minute. 
The just like SAM, the SAM2 actually uses a data engine to create its data set in that people are, they assembled a huge amount of reference data, used people to label some of it and train the model used the model to label more of it and asked people to refine the predictions of the model.[00:12:20] Isaac Robinson: And then ultimately the data set is just created from the engine Final output of the model on the reference data. It's very interesting. This paradigm is so interesting to me because it unifies a model in a dataset in a way that is very unique. It seems unlikely that another model could come in and have such a tight.[00:12:37] Isaac Robinson: So brief overview of how the memory bank works, the paper did not have a great visual, so I'm just, I'm going to fill in a bit more. So we take the last couple of frames from our video. And we take the last couple of frames from our video attend that, along with the set of prompts that we provided, they could come from the future, [00:13:00] they could come from anywhere in the video, as well as reference object pointers, saying, by the way, here's what we've found so far attending to the last few frames has the interesting benefit of allowing it to model complex object motion without actually[00:13:18] Isaac Robinson: By limiting the amount of frames that you attend to, you manage to keep the model running in real time. This is such an interesting topic for me because one would assume that attending to all of the frames is super essential, or having some type of summarization of all the frames is super essential for high performance.[00:13:35] Isaac Robinson: But we see in their later ablation that that actually is not the case. So here, just to make sure that there is some benchmarking happening, we just compared to some of the stuff that's came out prior, and indeed the SAM2 strategy does improve on the state of the art. This ablation deep in their dependencies was super interesting to me.[00:13:59] Isaac Robinson: [00:14:00] We see in section C, the number of memories. One would assume that increasing the count of memories would meaningfully increase performance. And we see that it has some impact, but not the type that you'd expect. And that it meaningfully decreases speed, which justifies, in my mind, just having this FIFO queue of memories.[00:14:20] Isaac Robinson: Although in the future, I'm super interested to see A more dedicated summarization of all of the last video, not just a stacking of the last frames. So that another extension of beautiful per frame work into the video domain.[00:14:42] Realtime detection: DETRs > YOLO[00:14:42] Isaac Robinson: The next trend I'm interested in talking about is this interesting at RoboFlow, we're super interested in training real time object detectors.[00:14:50] Isaac Robinson: Those are bread and butter. And so we're doing a lot to keep track of what is actually happening in that space. We are finally starting to see something change. So, [00:15:00] for years, YOLOs have been the dominant way of doing real time object detection, and we can see here that they've essentially stagnated.[00:15:08] Isaac Robinson: The performance between 10 and 11 is not meaningfully different, at least, you know, in this type of high level chart. And even from the last couple series, there's not. A major change so YOLOs have hit a plateau, debtors have not. So we can look here and see the YOLO series has this plateau. 
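A toy sketch of the FIFO memory-bank conditioning described here: current-frame features cross-attend to a bounded queue of recent fused frame features plus object pointers, and the fused result is pushed back onto the queue. Shapes, module choices, and the residual fusion are illustrative assumptions, not the SAM 2 implementation.

```python
# Toy FIFO frame-memory with cross-attention, in the spirit of the SAM 2
# description above; shapes and modules are illustrative, not Meta's code.
from collections import deque
import torch
import torch.nn as nn

class FrameMemory:
    def __init__(self, max_frames=6, dim=256):
        self.frames = deque(maxlen=max_frames)   # FIFO: oldest frame features drop out
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def condition(self, curr_feats, object_pointers):
        """Cross-attend current-frame features to memory frames + object pointers."""
        if not self.frames:
            return curr_feats
        memory = torch.cat(list(self.frames) + [object_pointers], dim=1)  # (B, M, D)
        attended, _ = self.cross_attn(curr_feats, memory, memory)
        return curr_feats + attended             # residual fusion

    def push(self, fused_feats):
        """After predicting this frame's mask, store its fused features."""
        self.frames.append(fused_feats.detach())

mem = FrameMemory()
for _ in range(10):                               # pretend 10-frame video
    feats = torch.randn(1, 64, 256)               # (batch, tokens, dim) for this frame
    pointers = torch.randn(1, 4, 256)             # coarse per-object summary tokens
    fused = mem.condition(feats, pointers)
    mem.push(fused)
```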
And then these RT debtor, LW debtor, and Define have meaningfully changed that plateau so that in fact, the best Define models are plus 4.[00:15:43] Isaac Robinson: 6 AP on Cocoa at the same latency. So three major steps to accomplish this. The first RT deditor, which is technically a 2023 paper preprint, but published officially in 24, so I'm going to include that. I hope that's okay. [00:16:00] That is showed that RT deditor showed that we could actually match or out speed YOLOs.[00:16:04] Isaac Robinson: And then LWdebtor showed that pre training is hugely effective on debtors and much less so on YOLOs. And then DeFine added the types of bells and whistles that we expect from these types, this, this arena. So the major improvements that RTdebtor shows was Taking the multi scale features that debtors typically pass into their encoder and decoupling them into a much more efficient transformer encoder.[00:16:30] Isaac Robinson: The transformer is of course, quadratic complexity. So decreasing the amount of stuff that you pass in at once is super helpful for increasing your runtime or increasing your throughput. So that change basically brought us up to yellow speed and then they do a hardcore analysis on. Benchmarking YOLOs, including the NMS step.[00:16:54] Isaac Robinson: Once you once you include the NMS in the latency calculation, you see that in fact, these debtors [00:17:00] are outperforming, at least this time, the the, the YOLOs that existed. Then LW debtor goes in and suggests that in fact, the frame, the huge boost here is from pre training. So, this is the define line, and this is the define line without pre training.[00:17:19] Isaac Robinson: It's within range, it's still an improvement over the YOLOs, but Really huge boost comes from the benefit of pre training. When YOLOx came out in 2021, they showed that they got much better results by having a much, much longer training time, but they found that when they did that, they actually did not benefit from pre training.[00:17:40] Isaac Robinson: So, you see in this graph from LWdebtor, in fact, YOLOs do have a real benefit from pre training, but it goes away as we increase the training time. Then, the debtors converge much faster. LWdebtor trains for only 50 epochs, RTdebtor is 60 epochs. So, one could assume that, in fact, [00:18:00] the entire extra gain from pre training is that you're not destroying your original weights.[00:18:06] Isaac Robinson: By relying on this long training cycle. And then LWdebtor also shows superior performance to our favorite data set, Roboflow 100 which means that they do better on the real world, not just on Cocoa. Then Define throws all the bells and whistles at it. Yellow models tend to have a lot of very specific complicated loss functions.[00:18:26] Isaac Robinson: This Define brings that into the debtor world and shows consistent improvement on a variety of debtor based frameworks. So bring these all together and we see that suddenly we have almost 60 AP on Cocoa while running in like 10 milliseconds. Huge, huge stuff. So we're spending a lot of time trying to build models that work better with less data and debtors are clearly becoming a promising step in that direction.[00:18:56] Isaac Robinson: The, what we're interested in seeing [00:19:00] from the debtors in this, this trend to next is. Codetter and the models that are currently sitting on the top of the leaderboard for large scale inference scale really well as you switch out the backbone. 
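A small sketch of the benchmarking point made above: when timing a YOLO-style detector end to end, the NMS post-processing step belongs inside the measurement, which is exactly what end-to-end DETR-style detectors avoid. The dummy box generator and thresholds are placeholders; only torchvision's `nms` call is real.

```python
# YOLO-style latency should include NMS; DETR-style detectors skip this step.
# The "model" below is a random stand-in, so the printed numbers mean nothing.
import time
import torch
from torchvision.ops import nms

def fake_yolo_head(n=3000):
    """Stand-in for raw YOLO outputs: many overlapping boxes with confidence scores."""
    boxes = torch.rand(n, 4) * 640
    boxes[:, 2:] = boxes[:, :2] + torch.rand(n, 2) * 100   # ensure x2 > x1, y2 > y1
    scores = torch.rand(n)
    return boxes, scores

t0 = time.perf_counter()
boxes, scores = fake_yolo_head()                  # placeholder for the forward pass
keep = nms(boxes, scores, iou_threshold=0.5)      # post-processing that must be timed too
t1 = time.perf_counter()
print(f"kept {keep.numel()} boxes, end-to-end {1000 * (t1 - t0):.2f} ms (toy numbers)")
```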
We're very interested in seeing and having people publish a paper, potentially us, on what happens if you take these real time ones and then throw a Swingy at it.[00:19:23] Isaac Robinson: Like, do we have a Pareto curve that extends from the real time domain all the way up to the super, super slow but high performance domain? We also want to see people benchmarking in RF100 more, because that type of data is what's relevant for most users. And we want to see more pre training, because pre training works now.[00:19:43] Isaac Robinson: It's super cool.[00:19:48] Peter's Picks[00:19:48] Peter Robicheaux: Alright, so, yeah, so in that theme one of the big things that we're focusing on is how do we get more out of our pre trained models. And one of the lenses to look at this is through sort of [00:20:00] this, this new requirement for like, how Fine grained visual details and your representations that are extracted from your foundation model.[00:20:08] Peter Robicheaux: So it's sort of a hook for this Oh, yeah, this is just a list of all the the papers that I'm going to mention I just want to make sure I set an actual paper so you can find it later[00:20:18] MMVP (Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs)[00:20:18] Peter Robicheaux: Yeah, so sort of the big hook here is that I make the claim that LLMs can't see if you go to if you go to Claude or ChatGPT you ask it to see this Watch and tell me what time it is, it fails, right?[00:20:34] Peter Robicheaux: And so you could say, like, maybe, maybe the Like, this is, like, a very classic test of an LLM, but you could say, Okay, maybe this, this image is, like, too zoomed out, And it just, like, it'll do better if we increase the resolution, And it has easier time finding these fine grained features, Like, where the watch hands are pointing.[00:20:53] Peter Robicheaux: Nodice. And you can say, okay, well, maybe the model just doesn't know how to tell time from knowing the position of the hands. But if you actually prompt [00:21:00] it textually, it's very easy for it to tell the time. So this to me is proof that these LLMs literally cannot see the position of the watch hands and it can't see those details.[00:21:08] Peter Robicheaux: So the question is sort of why? And for you anthropic heads out there, cloud fails too. So the, the, my first pick for best paper of 2024 Envision is this MMVP paper, which tries to investigate the Why do LLMs not have the ability to see fine grained details? And so, for instance, it comes up with a lot of images like this, where you ask it a question that seems very visually apparent to us, like, which way is the school bus facing?[00:21:32] Peter Robicheaux: And it gets it wrong, and then, of course, it makes up details to support its wrong claim. And so, the process by which it finds these images is sort of contained in its hypothesis for why it can't. See these details. So it hypothesizes that models that have been initialized with, with Clip as their vision encoder, they don't have fine grained details and the, the features extracted using Clip because Clip sort of doesn't need to find these fine grained [00:22:00] details to do its job correctly, which is just to match captions and images, right?[00:22:04] Peter Robicheaux: And sort of at a high level, even if ChatGPT wasn't initialized with Clip and wasn't trained contrastively at all. The vision encoder wasn't trained contrastively at all. 
Still, in order to do its job of capturing the image it could do a pretty good job without actually finding the exact position of all the objects and visual features in the image, right?[00:22:21] Peter Robicheaux: So This paper finds a set of difficult images for these types of models. And the way it does it is it looks for embeddings that are similar in clip space, but far in DynaV2 space. So DynaV2 is a foundation model that was trained self supervised purely on image data. And it kind of uses like some complex student teacher framework, but essentially, and like, it patches out like certain areas of the image or like crops with certain areas of the image and tries to make sure that those have consistent representations, which is a way for it to learn very fine grained visual features.[00:22:54] Peter Robicheaux: And so if you take things that are very close in clip space and very far in DynaV2 space, you get a set of images [00:23:00] that Basically, pairs of images that are hard for a chat GPT and other big language models to distinguish. So, if you then ask it questions about this image, well, as you can see from this chart, it's going to answer the same way for both images, right?[00:23:14] Peter Robicheaux: Because to, to, from the perspective of the vision encoder, they're the same image. And so if you ask a question like, how many eyes does this animal have? It answers the same for both. And like all these other models, including Lava do the same thing, right? And so this is the benchmark that they create, which is like finding clip, like clip line pairs, which is pairs of images that are similar in clip space and creating a data set of multiple choice questions based off of those.[00:23:39] Peter Robicheaux: And so how do these models do? Well, really bad. Lava, I think, So, so, chat2BT and Jim and I do a little bit better than random guessing, but, like, half of the performance of humans who find these problems to be very easy. Lava is, interestingly, extremely negatively correlated with this dataset. It does much, much, much, much worse [00:24:00] than random guessing, which means that this process has done a very good job of identifying hard images for, for Lava, specifically.[00:24:07] Peter Robicheaux: And that's because Lava is basically not trained for very long and is initialized from Clip, and so You would expect it to do poorly on this dataset. So, one of the proposed solutions that this paper attempts is by basically saying, Okay, well if clip features aren't enough, What if we train the visual encoder of the language model also on dyno features?[00:24:27] Peter Robicheaux: And so it, it proposes two different ways of doing this. One, additively which is basically interpolating between the two features, and then one is interleaving, which is just kind of like training one on the combination of both features. So there's this really interesting trend when you do the additive mixture of features.[00:24:45] Peter Robicheaux: So zero is all clip features and one is all DynaV2 features. So. It, as you, so I think it's helpful to look at the right most chart first, which is as you increase the number of DynaV2 features, your model does worse and worse and [00:25:00] worse on the actual language modeling task. And that's because DynaV2 features were trained completely from a self supervised manner and completely in image space.[00:25:08] Peter Robicheaux: It knows nothing about text. These features aren't really compatible with these text models. 
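A sketch of the pair-mining criterion described above: keep image pairs whose CLIP embeddings are nearly identical but whose DINOv2 embeddings are far apart. It assumes the two embedding matrices have already been computed by the respective encoders; the thresholds and random placeholder arrays are illustrative.

```python
# Mining "CLIP-blind" pairs: similar in CLIP space, dissimilar in DINOv2 space.
# Embeddings are assumed precomputed; thresholds are illustrative.
import numpy as np

def cosine(a, b):
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def clip_blind_pairs(clip_emb, dino_emb, clip_thresh=0.95, dino_thresh=0.6):
    """Return index pairs (i, j) that look identical to CLIP but different to DINOv2."""
    clip_sim = cosine(clip_emb, clip_emb)
    dino_sim = cosine(dino_emb, dino_emb)
    i, j = np.where((clip_sim > clip_thresh) & (dino_sim < dino_thresh))
    return [(a, b) for a, b in zip(i, j) if a < b]   # drop duplicates and the diagonal

clip_emb = np.random.randn(100, 512)    # placeholder CLIP image embeddings
dino_emb = np.random.randn(100, 768)    # placeholder DINOv2 image embeddings
pairs = clip_blind_pairs(clip_emb, dino_emb)
```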
And so you can train an adapter all you want, but it seems that it's in such an alien language that it's like a very hard optimization for this. These models to solve. And so that kind of supports what's happening on the left, which is that, yeah, it gets better at answering these questions if as you include more dyna V two features up to a point, but then you, when you oversaturate, it completely loses its ability to like.[00:25:36] Peter Robicheaux: Answer language and do language tasks. So you can also see with the interleaving, like they essentially double the number of tokens that are going into these models and just train on both, and it still doesn't really solve the MMVP task. It gets Lava 1. 5 above random guessing by a little bit, but it's still not close to ChachiPT or, you know, Any like human performance, obviously.[00:25:59] Peter Robicheaux: [00:26:00] So clearly this proposed solution of just using DynaV2 features directly, isn't going to work. And basically what that means is that as a as a vision foundation model, DynaV2 is going to be insufficient for language tasks, right?[00:26:14] Florence 2 (Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks)[00:26:14] Peter Robicheaux: So my next pick for best paper of 2024 would be Florence 2, which tries to solve this problem by incorporating not only This dimension of spatial hierarchy, which is to say pixel level understanding, but also in making sure to include what they call semantic granularity, which ends up, the goal is basically to have features that are sufficient for finding objects in the image, so they're, they're, they have enough pixel information, but also can be talked about and can be reasoned about.[00:26:44] Peter Robicheaux: And that's on the semantic granularity axis. So here's an example of basically three different paradigms of labeling that they do. So they, they create a big dataset. One is text, which is just captioning. And you would expect a model that's trained [00:27:00] only on captioning to have similar performance like chat2BT and like not have spatial hierarchy, not have features that are meaningful at the pixel level.[00:27:08] Peter Robicheaux: And so they add another type, which is region text pairs, which is essentially either classifying a region or You're doing object detection or doing instance segmentation on that region or captioning that region. And then they have text phrased region annotations, which is essentially a triple. And basically, not only do you have a region that you've described, you also find it's like, It's placed in a descriptive paragraph about the image, which is basically trying to introduce even more like semantic understanding of these regions.[00:27:39] Peter Robicheaux: And so like, for instance, if you're saying a woman riding on the road, right, you have to know what a woman is and what the road is and that she's on top of it. And that's, that's basically composing a bunch of objects in this visual space, but also thinking about it semantically, right? And so the way that they do this is they take basically they just dump Features from a vision encoder [00:28:00] straight into a encoder decoder transformer.[00:28:03] Peter Robicheaux: And then they train a bunch of different tasks like object detection and so on as a language task. And I think that's one of the big things that we saw in 2024 is these, these vision language models operating in, on pixel space linguistically. 
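A sketch of the two mixing strategies discussed here, additive interpolation versus interleaving of CLIP and DINOv2 patch tokens before the language-model adapter. The projection layers, dimensions, and alpha value are assumptions made for illustration, not the paper's code.

```python
# Additive vs. interleaved mixing of CLIP and DINOv2 visual tokens for a VLM adapter.
import torch
import torch.nn as nn

dim = 1024
proj_clip = nn.Linear(512, dim)      # project CLIP patch features to adapter width
proj_dino = nn.Linear(768, dim)      # project DINOv2 patch features to adapter width

clip_tokens = torch.randn(1, 256, 512)
dino_tokens = torch.randn(1, 256, 768)

alpha = 0.25                                          # 0 = all CLIP, 1 = all DINOv2
additive = (1 - alpha) * proj_clip(clip_tokens) + alpha * proj_dino(dino_tokens)

# interleaving doubles the number of visual tokens fed to the language model
interleaved = torch.cat([proj_clip(clip_tokens), proj_dino(dino_tokens)], dim=1)
```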
So they introduced a bunch of new tokens to point to locations and[00:28:22] Peter Robicheaux: So how does it work? How does it actually do? We can see if you look at the graph on the right, which is using the, the Dino, the the Dino framework your, your pre trained Florence 2 models transfer very, very well. They get 60%, 60 percent map on Cocoa, which is like approaching state of the art and they train[00:28:42] Vik Korrapati: with, and they[00:28:43] Peter Robicheaux: train with a much more more efficiently.[00:28:47] Peter Robicheaux: So they, they converge a lot faster, which both of these things are pointing to the fact that they're actually leveraging their pre trained weights effectively. So where is it falling short? So these models, I forgot to mention, Florence is a 0. 2 [00:29:00] billion and a 0. 7 billion parameter count. So they're very, very small in terms of being a language model.[00:29:05] Peter Robicheaux: And I think that. This framework, you can see saturation. So, what this graph is showing is that if you train a Florence 2 model purely on the image level and region level annotations and not including the pixel level annotations, like this, segmentation, it actually performs better as an object detector.[00:29:25] Peter Robicheaux: And what that means is that it's not able to actually learn all the visual tasks that it's trying to learn because it doesn't have enough capacity.[00:29:32] PalíGemma / PaliGemma 2[00:29:32] Peter Robicheaux: So I'd like to see this paper explore larger model sizes, which brings us to our next big paper of 2024 or two papers. So PolyGemma came out earlier this year.[00:29:42] Peter Robicheaux: PolyGemma 2 was released, I think like a week or two ago. Oh, I forgot to mention, you can actually train You can, like, label text datasets on RoboFlow and you can train a Florence 2 model and you can actually train a PolyGemma 2 model on RoboFlow, which we got into the platform within, like, 14 hours of release, which I was really excited about.[00:29:59] Peter Robicheaux: So, anyway, so [00:30:00] PolyGemma 2, so PolyGemma is essentially doing the same thing, but instead of doing an encoder decoder, it just dumps everything into a decoder only transformer model. But it also introduced the concept of location tokens to point to objects in pixel space. PolyGemma 2, so PolyGemma uses Gemma as the language encoder, and it uses Gemma2B.[00:30:17] Peter Robicheaux: PolyGemma 2 introduces using multiple different sizes of language encoders. So, the way that they sort of get around having to do encoder decoder is they use the concept of prefix loss. Which basically means that when it's generating, tokens autoregressively, it's all those tokens in the prefix, which is like the image that it's looking at and like a description of the task that it's trying to do.[00:30:41] Peter Robicheaux: They're attending to each other fully, full attention. Which means that, you know, it can sort of. Find high level it's easier for the, the prefix to color, to color the output of the suffix and also to just find like features easily. 
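A sketch of the three annotation granularities described above (image-level text, region-text pairs, and text-phrase-region triples) written out as plain data records. Field names and coordinates are hypothetical, chosen only to make the distinction concrete.

```python
# Three granularities of supervision in the Florence-2 description above
# (illustrative records, not the paper's actual schema).
image_level = {"image": "street.jpg", "text": "A woman riding a bike down a wet road."}

region_level = {
    "image": "street.jpg",
    "region": [120, 80, 340, 420],     # xyxy box for detection / region captioning
    "label": "woman",
}

phrase_region_level = {
    "image": "street.jpg",
    "caption": "A woman riding a bike down a wet road.",
    "groundings": [                    # each phrase in the caption is tied to a region
        {"phrase": "A woman", "region": [120, 80, 340, 420]},
        {"phrase": "a bike", "region": [150, 200, 330, 430]},
        {"phrase": "a wet road", "region": [0, 300, 640, 480]},
    ],
}
```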
So this is sort of [00:31:00] an example of like one of the tasks that was trained on, which is like, you describe the task in English and then you give it all these, like, You're asking for it to segment these two classes of objects, and then it finds, like, their locations using these tokens, and it finds their masks using some encoding of the masks into tokens.[00:31:24] Peter Robicheaux: And, yeah, so, one of my critiques, I guess, of PolyGemma 1, at least, is that You find that performance saturates as a pre trained model after only 300 million examples seen. So, what this graph is representing is each blue dot is a performance on some downstream task. And you can see that after seeing 300 million examples, It sort of does equally well on all of the downtrend tasks that they tried it on, which was a lot as 1 billion examples, which to me also kind of suggests a lack of capacity for this model.[00:31:58] Peter Robicheaux: PolyGemma2, [00:32:00] you can see the results on object detection. So these were transferred to to Coco. And you can see that this sort of also points to an increase in capacity being helpful to the model. You can see as. Both the resolution increases, and the parameter count of the language model increases, performance increases.[00:32:16] Peter Robicheaux: So resolution makes sense, obviously, it helps to find small images, or small objects in the image. But it also makes sense for another reason, which is that it kind of gives the model a thinking register, and it gives it more tokens to, like, process when making its predictions. But yeah, you could, you could say, oh, 43.[00:32:30] Peter Robicheaux: 6, that's not that great, like Florence 2 got 60. But this is not Training a dino or a debtor on top of this language or this image encoder. It's doing the raw language modeling task on Cocoa. So it doesn't have any of the bells and whistles. It doesn't have any of the fancy losses. It doesn't even have bipartite graph matching or anything like that.[00:32:52] Peter Robicheaux: Okay, the big result and one of the reasons that I was really excited about this paper is that they blow everything else away [00:33:00] on MMVP. I mean, 47. 3, sure, that's nowhere near human accuracy, which, again, is 94%, but for a, you know, a 2 billion language, 2 billion parameter language model to be chat2BT, that's quite the achievement.[00:33:12] Peter Robicheaux: And that sort of brings us to our final pick for paper of the year, which is AIMV2. So, AIMV2 sort of says, okay, Maybe this language model, like, maybe coming up with all these specific annotations to find features and with high fidelity and pixel space isn't actually necessary. And we can come up with an even simpler, more beautiful idea for combining you know, image tokens and pixel tokens in a way that's interfaceable for language tasks.[00:33:44] Peter Robicheaux: And this is nice because it can scale, you can come up with lots more data if you don't have to come up with all these annotations, right? So the way that it works. is it does something very, very similar to PolyGemo, where you have a vision encoder that dumps image tokens into a decoder only transformer.[00:33:59] Peter Robicheaux: But [00:34:00] the interesting thing is that it also autoregressively tries to learn the mean squared error of the image tokens. 
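A sketch of the prefix-loss attention pattern described here: image and task-description tokens attend to one another with full attention, while generated suffix tokens remain causal. The mask-building helper is illustrative, not PaliGemma's actual implementation.

```python
# Prefix-LM attention mask: bidirectional over the prefix (image + task tokens),
# causal over the generated suffix. Illustrative helper, not PaliGemma code.
import torch

def prefix_lm_mask(prefix_len: int, total_len: int) -> torch.Tensor:
    """Boolean mask; True where a query token (row) may attend to a key token (col)."""
    mask = torch.tril(torch.ones(total_len, total_len, dtype=torch.bool))  # causal base
    mask[:, :prefix_len] = True   # every token may attend to the whole prefix block
    return mask                   # prefix rows still cannot see the suffix (tril keeps False)

m = prefix_lm_mask(prefix_len=4, total_len=7)
# rows 0-3 (prefix) see only columns 0-3; rows 4-6 see the prefix plus earlier suffix tokens
```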
So instead of having to come up with fancy object detection or semantic, or segment, or segmentation labels, you can just try to reconstruct the image and have it learn fine grained features that way.[00:34:16] Peter Robicheaux: And it does this in kind of, I think, a beautiful way that's kind of compatible with the PolyGemma line of thinking, which is randomly sampling a prefix line of thinking Prefix length and using only this number of image tokens as the prefix. And so doing a similar thing with the causal. So the causal with prefix is the, the attention mask on the right.[00:34:35] Peter Robicheaux: So it's doing full block attention with some randomly sampled number of image tokens to then reconstruct the rest of the image and the downstream caption for that image. And so, This is the dataset that they train on. It's image or internet scale data, very high quality data created by the data filtering networks paper, essentially which is maybe The best clip data that exists.[00:34:59] Peter Robicheaux: [00:35:00] And we can see that this is finally a model that doesn't saturate. It's even at the highest parameter count, it's, it appears to be, oh, at the highest parameter account, it appears to be improving in performance with more and more samples seen. And so you can sort of think that. You know, if we just keep bumping the parameter count and increasing the example scene, which is the, the, the line of thinking for language models, then it'll keep getting better.[00:35:27] Peter Robicheaux: So how does it actually do at finding, oh, it also improves with resolution, which you would expect for a model that This is the ImageNet classification accuracy, but yeah, it does better if you increase the resolution, which means that it's actually leveraging and finding fine grained visual features.[00:35:44] Peter Robicheaux: And so how does that actually do compared to CLIP on Cocoa? Well, you can see that if you slap a transformer detection head on it, Entry now in Cocoa, it's just 60. 2, which is also within spitting distance of Soda, which means that it does a very good job of [00:36:00] finding visual features, but you could say, okay, well, wait a second.[00:36:03] Peter Robicheaux: Clip got to 59. 1, so. Like, how does this prove your claim at all? Because doesn't that mean like clip, which is known to be clip blind and do badly on MMVP, it's able to achieve a very high performance on fine, on this fine grained visual features task of object detection, well, they train on like, Tons of data.[00:36:24] Peter Robicheaux: They train on like objects, 365, Cocoa, Flickr and everything else. And so I think that this benchmark doesn't do a great job of selling how good of a pre trained model MV2 is. And we would like to see the performance on fewer data as examples and not trained to convergence on object detection. So seeing it in the real world on like a dataset, like RoboFlow 100, I think would be quite interesting.[00:36:48] Peter Robicheaux: And our, our, I guess our final, final pick for paper of 2024 would be Moondream. So introducing Vic to talk about that.[00:36:54] swyx: But overall, that was exactly what I was looking for. Like best of 2024, an amazing job. Yeah, you can, [00:37:00] if there's any other questions while Vic gets set up, like vision stuff,[00:37:07] swyx: yeah,[00:37:11] swyx: Vic, go ahead. Hi,[00:37:13] Vik Korrapati / Moondream[00:37:13] question: well, while we're getting set up, hi, over here, thanks for the really awesome talk. 
One of the things that's been weird and surprising is that the foundation model companies Even these MLMs, they're just like worse than RT Tether at detection still. Like, if you wanted to pay a bunch of money to auto label your detection dataset, If you gave it to OpenAI or Cloud, that would be like a big waste.[00:37:37] question: So I'm curious, just like, even Pali Gemma 2, like is worse. So, so I'm curious to hear your thoughts on like, how come, Nobody's cracked the code on like a generalist that really you know, beats a specialist model in computer vision like they have in in LLM land.[00:38:00][00:38:01] Isaac Robinson: Okay. It's a very, very interesting question. I think it depends on the specific domain. For image classification, it's basically there. In the, in AIMv2 showed, a simple attentional probe on the pre trained features gets like 90%, which is as well as anyone does. The, the, the, the bigger question, like, why isn't it transferring to object detection, especially like real time object detection.[00:38:25] Isaac Robinson: I think, in my mind, there are two answers. One is, object detection is really, really, really the architectures are super domain specific. You know, we see these, all these super, super complicated things, and it's not super easy to, to, to build something that just transfers naturally like that, whereas image classification, you know, clip pre training transfers super, super quickly.[00:38:48] Isaac Robinson: And the other thing is, until recently, the real time object detectors didn't even really benefit from pre training. Like, you see the YOLOs that are like, essentially saturated, showing very little [00:39:00] difference with pre training improvements, with using pre trained model at all. It's not surprising, necessarily, that People aren't looking at the effects of better and better pre training on real time detection.[00:39:12] Isaac Robinson: Maybe that'll change in the next year. Does that answer your question?[00:39:17] Peter Robicheaux: Can you guys hear me? Yeah, one thing I want to add is just like, or just to summarize, basically, is that like, Until 2024, you know, we haven't really seen a combination of transformer based object detectors and fancy losses, and PolyGemma suffers from the same problem, which is basically to say that these ResNet, or like the convolutional models, they have all these, like, extreme optimizations for doing object detection, but essentially, I think it's kind of been shown now that convolution models like just don't benefit from pre training and just don't like have the level of intelligence of transformer models.[00:39:56] swyx: Awesome. Hi,[00:39:59] Vik Korrapati: can [00:40:00] you hear me?[00:40:01] swyx: Cool. I hear you. See you. Are you sharing your screen?[00:40:04] Vik Korrapati: Hi. Might have forgotten to do that. Let me do[00:40:07] swyx: that. Sorry, should have done[00:40:08] Vik Korrapati: that.[00:40:17] swyx: Here's your screen. Oh, classic. You might have to quit zoom and restart. What? It's fine. We have a capture of your screen.[00:40:34] swyx: So let's get to it.[00:40:35] Vik Korrapati: Okay, easy enough.[00:40:49] Vik Korrapati: All right. Hi, everyone. My name is Vic. I've been working on Moondream for almost a year now. Like Shawn mentioned, I just went and looked and it turns out the first version I released December [00:41:00] 29, 2023. It's been a fascinating journey. So Moonbeam started off as a tiny vision language model. 
Since then, we've expanded scope a little bit to also try and build some tooling, client libraries, et cetera, to help people really deploy it.[00:41:13] Vik Korrapati: Unlike traditional large models that are focused at assistant type use cases, we're laser focused on building capabilities that developers can, sorry, it's yeah, we're basically focused on building capabilities that developers can use to build vision applications that can run anywhere. So, in a lot of cases for vision more so than for text, you really care about being able to run on the edge, run in real time, etc.[00:41:40] Vik Korrapati: So That's really important. We have we have different output modalities that we support. There's query where you can ask general English questions about an image and get back human like answers. There's captioning, which a lot of our users use for generating synthetic datasets to then train diffusion models and whatnot.[00:41:57] Vik Korrapati: We've done a lot of work to minimize those sessions there. [00:42:00] So that's. Use lot. We have open vocabulary object detection built in similar to a couple of more recent models like Palagem, et cetera, where rather than having to train a dedicated model, you can just say show me soccer balls in this image or show me if there are any deer in this image, it'll detect it.[00:42:14] Vik Korrapati: More recently, earlier this month, we released pointing capability where if all you're interested in is the center of an object you can just ask it to point out where that is. This is very useful when you're doing, you know, I automation type stuff. Let's see, LA we, we have two models out right now.[00:42:33] Vik Korrapati: There's a general purpose to be para model, which runs fair. Like it's, it's it's fine if you're running on server. It's good for our local Amma desktop friends and it can run on flagship, flagship mobile phones, but it never. so much for joining us today, and we'll see you in the [00:43:00] next one. Less memory even with our not yet fully optimized inference client.[00:43:06] Vik Korrapati: So the way we built our 0. 5b model was to start with the 2 billion parameter model and prune it while doing continual training to retain performance. We, our objective during the pruning was to preserve accuracy across a broad set of benchmarks. So the way we went about it was to estimate the importance of different components of the model, like attention heads, channels MLP rows and whatnot using basically a technique based on the gradient.[00:43:37] Vik Korrapati: I'm not sure how much people want to know details. We'll be writing a paper about this, but feel free to grab me if you have more questions. Then we iteratively prune a small chunk that will minimize loss and performance retrain the model to recover performance and bring it back. The 0. 5b we released is more of a proof of concept that this is possible.[00:43:54] Vik Korrapati: I think the thing that's really exciting about this is it makes it possible for for developers to build using the 2B param [00:44:00] model and just explore, build their application, and then once they're ready to deploy figure out what exactly they need out of the model and prune those capabilities into a smaller form factor that makes sense for their deployment target.[00:44:12] Vik Korrapati: So yeah, very excited about that. Let me talk to you folks a little bit about another problem I've been working on recently, which is similar to the clocks example we've been talking about. 
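A sketch of the gradient-based importance scoring Vik outlines for pruning: accumulate a first-order (weight times gradient) saliency per unit on a calibration batch, drop the lowest-scoring units, then retrain to recover accuracy and repeat. The tiny model and the per-channel scoring rule are assumptions; the Moondream recipe itself is unpublished at the time of the talk.

```python
# First-order (Taylor-style) importance for pruning, as a rough sketch of the idea.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
x, y = torch.randn(32, 64), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()                                       # calibration pass to get gradients

lin = model[0]
# importance of each output channel of the first linear layer: |w * dL/dw| summed over inputs
importance = (lin.weight * lin.weight.grad).abs().sum(dim=1)

n_prune = 16
prune_idx = importance.argsort()[:n_prune]            # least important channels
keep_mask = torch.ones(lin.out_features, dtype=torch.bool)
keep_mask[prune_idx] = False
# a real pipeline would rebuild the layer with keep_mask.sum() channels,
# retrain briefly to recover accuracy, then repeat the prune/retrain loop.
```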
We had a customer reach out who was talking about, like, who had a bunch of gauges out in the field. This is very common in manufacturing and oil and gas, where you have a bunch of analog devices that you need to monitor.[00:44:34] Vik Korrapati: It's expensive to. And I was like, okay, let's have humans look at that and monitor stuff and make sure that the system gets shut down when the temperature goes over 80 or something. So I was like, yeah, this seems easy enough. Happy to, happy to help you distill that. Let's, let's get it going. Turns out our model couldn't do it at all.[00:44:51] Vik Korrapati: I went and looked at other open source models to see if I could just generate a bunch of data and learn from that. Did not work either. So I was like, let's look at what the folks with [00:45:00] hundreds of billions of dollars in market cap have to offer. And yeah, that doesn't work either. My hypothesis is that like the, the way these models are trained are using a large amount of image text data scraped from the internet.[00:45:15] Vik Korrapati: And that can be biased. In the case of gauges, most gauge images aren't gauges in the wild, they're product images. Detail images like these, where it's always set to zero. It's paired with an alt text that says something like GIVTO, pressure sensor, PSI, zero to 30 or something. And so the models are fairly good at picking up those details.[00:45:35] Vik Korrapati: It'll tell you that it's a pressure gauge. It'll tell you what the brand is, but it doesn't really learn to pay attention to the needle over there. And so, yeah, that's a gap we need to address. So naturally my mind goes to like, let's use synthetic data to, Solve this problem. That works, but it's problematic because it turned out we needed millions of synthetic gauge images to get to reasonable performance.[00:45:57] Vik Korrapati: And thinking about it, reading a gauge is like [00:46:00] not a one, like it's not a zero short process in our minds, right? Like if you had to tell me the reading in Celsius for this, Real world gauge. There's two dials on there. So first you have to figure out which one you have to be paying attention to, like the inner one or the outer one.[00:46:14] Vik Korrapati: You look at the tip of the needle, you look at what labels it's between, and you count how many and do some math to figure out what that probably is. So what happens if we just add that as a Chain of thought to give the model better understanding of the different sub, to allow the model to better learn the subtasks it needs to perform to accomplish this goal.[00:46:37] Vik Korrapati: So you can see in this example, this was actually generated by the latest version of our model. It's like, okay, Celsius is the inner scale. It's between 50 and 60. There's 10 ticks. So the second tick, it's a little debatable here, like there's a weird shadow situation going on, the dial is off, so I don't know what the ground truth is, but it works okay.[00:46:57] Vik Korrapati: There's points on there that are, the points [00:47:00] over there are actually grounded. I don't know if this is easy to see, but when I click on those, there's a little red dot that moves around on the image. The model actually has to predict where this points are, I was already trying to do this with bounding boxes, but then Malmo came out with pointing capabilities.[00:47:15] Vik Korrapati: And it's like pointing is a much better paradigm to to represent this. We see pretty good results. This one's actually for clock reading. 
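An illustrative record of what a grounded chain-of-thought target for gauge reading might look like, following the steps Vik walks through (pick the scale, locate the needle tip, identify the neighbouring labels, count ticks, compute the value). The JSON-like format, field names, and coordinates are hypothetical, not Moondream's actual training format.

```python
# Hypothetical grounded chain-of-thought training record for gauge reading.
example = {
    "image": "gauge_0001.jpg",
    "question": "What is the reading in Celsius?",
    "chain_of_thought": [
        {"step": "Celsius is the inner scale.", "point": [0.52, 0.61]},
        {"step": "The needle tip lies between the 50 and 60 labels.", "point": [0.47, 0.38]},
        {"step": "There are 10 ticks between labels; the tip is on the 4th tick."},
    ],
    "answer": "54 C",   # each pointed step is checkable, so failures are easy to localize
}
```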
We see pretty good results. This one's actually for clock reading. I couldn't find our chart for gauge reading at the last minute, so the light blue chart is with our grounded chain of thought. We built a clock-reading benchmark of about 500 images,[00:47:37] Vik Korrapati: and this measures accuracy on that. You can see it's a lot more sample efficient when you train the model with the chain of thought. Another big benefit of this approach is that you can kind of understand how the model is doing it and how it's failing. So in this example, the actual correct reading is 54 Celsius and the model output [00:48:00] 56. Not too bad, but you can actually go and see where it messed up. It got a lot of these right, except instead of saying it was on the 7th tick, it actually predicted that it was the 8th tick, and that's why it went with 56.[00:48:14] Vik Korrapati: So now that you know that it's failing in this way, you can adjust how you're doing the chain of thought to maybe say, actually count out each tick from 40, instead of just trying to say it's the eighth tick. Or you might say, okay, I see that there's that middle thing, I'll count from there instead of all the way from 40.[00:48:31] Vik Korrapati: So that helps a ton. The other thing I'm excited about is few-shot prompting or test-time training with this. If a customer has a specific gauge that we're seeing minor errors on, they can give us a couple of examples where, if it's mis-detecting the needle, they can go in and correct that in the chain of thought.[00:48:49] Vik Korrapati: And hopefully that works the next time. Now, it's an exciting approach, but we've only applied it to clocks and gauges. The real question is, is it going to generalize? Probably; there's some evidence [00:49:00] from text models that when you train on a broad number of tasks, it does generalize, and I'm seeing some signs of that with our model as well.[00:49:05] Vik Korrapati: So, in addition to the image-based chain of thought stuff, I also added some spelling-based chain of thought to help it better understand OCR, I guess. I don't understand why everyone doesn't do this, by the way. It's a trivial benchmark question, very, very easy to nail. But I also wanted to support it for stuff like license plate partial matching, like, hey, does any license plate in this image start with WHA or whatever?[00:49:29] Vik Korrapati: So yeah, that sort of worked. All right, that ends my story about the gauges.
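The few-shot correction idea, letting a customer fix a mis-detected needle point or a miscounted tick directly in the chain of thought and feeding the fixed record back as an example, is easy to picture as a structured CoT record. A hedged sketch: the JSON shape and field names are assumptions for illustration, not Moondream's actual chain-of-thought format.

```python
import json

# A grounded chain-of-thought record for one gauge image. All field names and
# values are illustrative, not Moondream's format.
model_cot = {
    "scale": "inner (Celsius)",
    "needle_tip": {"x": 0.46, "y": 0.31},   # grounded point predicted by the model
    "between_labels": [40, 60],
    "num_ticks": 10,
    "tick_index": 8,                          # miscounted tick -> wrong final reading
    "reading": 56,
}

# A reviewer corrects only the step that failed; the fixed record then serves as
# a few-shot example (or a test-time training sample) for that customer's gauge.
corrected = dict(model_cot, tick_index=7, reading=54)

few_shot_prompt = (
    "Worked example of reading this gauge:\n"
    + json.dumps(corrected, indent=2)
    + "\nRead the new image the same way, step by step."
)
print(few_shot_prompt)
```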
If you think about what's going on here, it's interesting that LLMs are showing enormous progress in reasoning, especially with the latest set of models we've seen, but I have a feeling that VLMs are lagging behind, as we can see with these tasks that should be very simple for a human to do [00:50:00] and yet are very easy to find VLMs failing at.[00:50:04] Vik Korrapati: My hypothesis on why this is the case is that on the internet, there's a ton of data that talks about how to reason. There's books about how to solve problems. There's books critiquing the books about how to solve problems. But humans are just so good at perception that we never really talk about it.[00:50:20] Vik Korrapati: Like, maybe in art books where it's like, hey, to show that that mountain is further away, you need to desaturate it a bit or whatever. But the actual data on how to, like, look at images isn't really present. Also, the data we have is kind of sketchy. The best source of data we have is image alt-text pairs on the internet, and that's pretty low quality.[00:50:40] Vik Korrapati: So yeah, I think our solution here is really just that we need to teach them how to operate on individual tasks and figure out how to scale that out. All right. Yep. So, conclusion. At Moondream we're trying to build amazing VLMs that run everywhere. Very hard problem. Much work ahead, but we're making a ton of progress and I'm really excited [00:51:00] about it. If anyone wants to chat about more technical details about how we're doing this, or is interested in collaborating, please, please hit me up.[00:51:08] Isaac Robinson: Yeah.[00:51:09] swyx: When people say multimodality, you know, I always think about vision as the first among equals in all the modalities. So, I really appreciate having the experts in the room. Get full access to Latent Space at www.latent.space/subscribe

MMH - The Home Of Rock Radio Podcasts
Losin It With Luscious #223 Punx around the world from 1977 thru this week!

MMH - The Home Of Rock Radio Podcasts

Play Episode Listen Later Dec 16, 2024 120:50


DJ Jesse Luscious takes us on a musical ride from Edinburgh to San Francisco to London to Philly to Malmo to NYC to Oslo to Brooklyn and beyond! Hear new tracks from The Phase Problem, Śmierć, Zorn, Neutrals, Lawns, & Vessel, classics from NOFX, Sex Pistols, Joan Jett and the Blackhearts, Joey Ramone, Basic Bitches, Rudimentary Peni, Official Negazione, The Rocky Horror Picture Show, Generation X, Turbonegro, Liar Thief Bandit, The Linda Lindas, Angelic Upstarts, Problem Patterns, The Lewd, The Exploited, Ramones, Evil Conduct, Rollins Band, Social Distortion, Rubber City Rebels, Rocket From The Crypt, Agnostic Front, The Rezillos, The Lincolns, 4 Skins, & Zolar X!              Sex Pistols- Bodies (edit) Angelic Upstarts- Last Night Another Soldier Phase Problem- A.D. 2024 Problem Patterns- Poverty Tourist Lewd- I'm Not Pretty NOFX- Lori Myers Exploited- Punks Not Dead Evil Conduct- Skinhead Till I Die Rollins Band- You Look At You Rudimentary Peni- The Cloud Song Negazione- Tutti Pazzi Agnostic Front- Hiding Inside Agnostic Front- United And Strong Smierc- KTO Zorn- Warpath Rezillos- Somebody's Gonna Get (Their Head Kicked In) Lincolns- Murder Is My God-Given Right Lawns- Friends Neutrals- That's Him On That Daft Stuff Again Ramones- I Don't Want To Grow Up Four Skins- 1984 Zolar X- Timeless Joey Ramone- 1969 Rocket From The Crypt- Born in 69 Linda Lindas- Racist Sexist Boy (Live At L.A. Public Library) Rubber City Rebels- Young And Dumb Joan Jett And The Blackhearts- AC/DC Generation X- Day By Day Social Distortion- Mommy's Little Monster Liar Thief Bandit- Brand New Day Turbonegro- Wipe It 'Til It Bleeds Basic Bitches- The Rocky Horror Picture Show Rocky Horror Picture Show OST- Sweet Transvestite Vessel- Image Rehearsal Reaction

Highlights from Moncrieff
Is Eurovision's new code of conduct fair?

Highlights from Moncrieff

Play Episode Listen Later Dec 12, 2024 6:45


In light of controversies at this year's Eurovision in Malmo, a new code of conduct is being introduced by the European Broadcasting Union...but what does it entail? Joining Seán to discuss is Richard Taylor, contributor to Eurovision Ireland - the official Irish Eurovision Blog.

The Euro Trip | Eurovision Podcast
Exclusive: Eurovision's new Director on the big changes coming to the contest

The Euro Trip | Eurovision Podcast

Play Episode Listen Later Dec 10, 2024 32:44


Just over a month into his new role as Director of the Eurovision & Junior Eurovision Song Contest, Martin Green CBE sits down for an exclusive conversation. He first appeared on The Euro Trip after a successful year as Managing Director of the contest in Liverpool; now he tells us more about his new role, helping implement changes aimed at strengthening brand Eurovision and improving artist welfare for those taking part in the competition. He tells us more about Eurovision's “future road map” process, launched after Malmo 2024, with the EBU making several key changes to improve communication and wellbeing and to enhance the positive experience of everyone attending the Eurovision Song Contest. These include the introduction of a new Code of Conduct and Duty of Care Protocol, based on the existing rules, which will ensure clear roles and responsibilities for all those involved in the event. There will also be some changes to the rehearsal schedule coming for 2025. To support the podcast, head to Buy Me A Coffee. Follow us on Twitter, Instagram & TikTok or email hello@eurotrippodcast.com, and find us online at eurotrippodcast.com. Hosted on Acast. See acast.com/privacy for more information.

Story Radio Podcast
Interview with Hanna Nordenhök about her novel Caesaria

Story Radio Podcast

Play Episode Listen Later Dec 1, 2024 38:29


In 19th-century Sweden, Caesaria is kept in a doctor's mansion as a trophy: she is the first baby to be born alive from one of his c-sections. In a Gothic ambience, Caesaria narrates in first person her experiences in the mansion and her encounters with its mysterious inhabitants and visitors. Does she know where she comes from? Where is her mother? Is there a world beyond these walls? We interview Hanna Nordenhök about her Gothic tale, published for the first time in English by Heloise Press on the 24th October 2024. Inspired by a real-life nineteenth-century medical miracle, it explores issues - women's bodies and women's rights - that are vitally contemporary. Our wide-ranging discussion covers some international writers and film-makers whose work listeners might not be familiar with, so we thought we would list them here. Authors: Ágota Kristóf (1935 – 2011), Hungarian author; The Notebook Trilogy and The Illiterate are available in translation. Birgitta Trotzig (1929 – 2011), Swedish author; her work seems currently only available in Swedish or translated into French or Spanish. Fernanda Melchor (b. 1982), Mexican; Paradais and Hurricane Season published by Fitzcarraldo. Films: The Wild Child (Francois Truffaut, 1970), The Enigma of Kaspar Hauser (Werner Herzog, 1974), The Knick (Steven Soderbergh, TV series, 2014-15). Hanna Nordenhök (Malmo, 1977) has been awarded several major literary honors for her work, both as a novelist, poet and essayist. Her novel Caesaria (2020) scooped Swedish Radio's Literary Prize and was shortlisted for Vi's Literature Prize. Nordenhök also works as a translator from the Spanish and has been praised for her translations of Fernanda Melchor, Andrea Abreu and Alia Trabucco Zerán. Her latest novel Wonderland (2023) was listed among the Best Books of the Year in Dagens Nyheter, Svenska Dagbladet, Expressen, Borås Tidning, Hufvudstadsbladet and Magasinet ETC, as well as shortlisted for Vi's Literature Prize. Saskia Vogel is a writer and translator of over two-dozen Swedish-language books. Her novel Permission was published in five languages. She is a recipient of the Berlin Senate grant for non-German literature, the Bernard Shaw Prize and two English PEN Translates Awards, and was a PEN America Translation Prize finalist. She was Princeton's Fall 2022 Translator in Residence. Born and raised in Los Angeles, she lives in Berlin. This episode was produced by Martin Nathan. Martin Nathan's short fiction and poetry has appeared in a range of journals and his novel, A Place of Safety, is published by Salt Publishing. His dramatic writing has been shortlisted for the Nick Darke award and the Woodward International Prize. Donate: We are a volunteer-led organisation and appreciate any donations towards our running costs. Buy us a coffee. Become a patron. Contact us. Visit our website Storyradio.org

Irish Tech News Audio Articles
Doro launches three new easy-to-use phones for Irish seniors

Irish Tech News Audio Articles

Play Episode Listen Later Nov 28, 2024 4:22


Doro, the European technology leader for seniors, today announces the launch of three new 4G feature phones, the Doro Leva L10, Doro Leva L20 and Doro Leva L30. The devices are rolling out in select retailers nationwide, starting in November. The phones have been developed to meet the evolving needs of users, futureproofing Doro's user-friendly offering. They combine intuitive features with the classic Doro interface, 4G connectivity, and a fresh, modern finish. Each component has been refined to ensure users feel a sense of familiarity and simplicity when using the devices. New senior-friendly phones from Doro: All three phones feature large, tactile buttons with high-contrast graphics, and talking keys that confirm each number press, making it easier for users to navigate. With loud and clear sound quality powered by HD voice, the devices ensure crystal-clear communication. The ergonomic design includes a soft-feel rear case with a textured surface, reducing the risk of accidental drops, and is splashproof (IP54), adding durability to its list of features. They come in a variety of designs: the Leva L10 in a classic candy bar style, and the Leva L20 and Leva L30 in clamshell formats. The Doro Leva L30 comes with an extra external display for Caller ID and notifications. These devices are perfect for users seeking basic functionality paired with excellent sound quality and user-friendly features. Ben Crompton, Managing Director UK & Ireland at Doro, says: "Our new 4G feature phones elevate accessible technology. Designed for senior users, they combine simplicity with functionality. High-contrast displays, talking keys, HD voice, and an integrated assistance button make these phones easy to use, hear, and see - setting a new standard in accessible technology. The Doro Leva feature phones integrate seamlessly into the Doro ecosystem, designed to meet users' evolving needs and enable enjoyment of modern technology and social participation, regardless of age or technical expertise." To help users feel safe wherever life's adventures take them, all three handsets include the signature Doro assistance button, now with GPS location. The button, located on the back of every Doro phone, can be used to alert up to five trusted friends or relatives if help is needed - providing users and their loved ones with peace of mind. The 4G capabilities of the devices offer faster speeds, better voice call quality, and improved battery life. By introducing the three 4G phones, Doro ensures a high-performance user experience tailored for seniors. The Swedish company recently launched innovative smart devices tailored to the needs of senior users, including the Doro Hemma Doorbell, the Doro HearingBuds and the Doro Watch. Availability and Pricing: Doro Leva L10: €95 Doro Leva L20: €109.99 Doro Leva L30: €119.99 The Doro Leva L20 and L30 will be available from retailers including Three, Eir and Vodafone in November, and Harvey Norman from December. For further information and test samples, please contact: doro@eulogy.co.uk About Doro: Doro is a leading technology brand for seniors, developing consumer products and services to support an active and independent life. Doro's technology enables generations to connect digitally both while at home and when out and about. Doro is a Swedish company listed on the Stockholm Nasdaq Stock Exchange. The company is headquartered in Malmo and has sales operations in more than 20 countries.
In 2023, Doro had 118 employees and net sales amounted to SEK 973.6 million (EUR 35 million), making it the European market leader for senior specialised mobile phones. Read more about Doro on our website www.doro.com See more product reviews here.

Gate 7 International Podcast
Episode 354: Malmo 0-1 Olympiacos | Thrylos win first away match in the Europa League and aim for top-8 finish

Gate 7 International Podcast

Play Episode Listen Later Oct 25, 2024 57:22


After a disastrous performance against Levadiakos, Olympiacos fought back by winning their first away match in the Europa League this season against Malmo. Jose Luis Mendilibar's side is starting to look like a serious contender for a spot in the top eight, which leads to automatic qualification for the Last 16 --- Support this podcast: https://podcasters.spotify.com/pod/show/gate7/support

The Eurovision Showcase on Forest FM
Warning Signs of Cupid's Bow! - 20th October 2024

The Eurovision Showcase on Forest FM

Play Episode Listen Later Oct 20, 2024 61:11


Eurovision fans, tune in for the **Eurovision Showcase Radio Show** with Ciaran Urry-Tuttiett on Forest FM!   **When?** Today at 5pm UK / 6pm CET   Coming up on today's show: **Brand new music** from Eurovision stars: - **Olly Alexander** with his new single *"Cupid's Bow"* after his 18th place at this year's contest in Malmo! - **Loreen** is back with *"Warning Signs"*, following her stunning win in 2023 with *"Tattoo"* and her 2012 triumph with *"Euphoria"*.   Plus, don't miss: Rob's Random Request, Live & Kicking, The Best of the Rest, and the latest ESC Showcase News! Follow us on Facebook, Instagram, and Threads! More info at: www.escshowcase.com   This show was broadcast on Forest FM on Sunday 20th October 2024 at 5pm!   Get ready to sing along to the best of Eurovision! #EurovisionShowcase #ForestFM #ESCShowcase #Eurovision

Warhammer Meta Chasers
The Fall SoCal Malmo! | Warhammer Meta Chasers

Warhammer Meta Chasers

Play Episode Listen Later Oct 18, 2024 51:32


Warhammer Meta Chasers is a weekly competitive Warhammer 40k hype show.  We run down some of the biggest and best events coming up this weekend where we discuss Warhammer 40k Factions in attendance and highlight army lists from some of the top ranked players around the globe. We talk about what the meta is, what it will be and how you can stack up against it. The show is hosted by Paul Murphy, Adam Camilleri, and Dustin Henshaw. The show runs LIVE every week on YouTube around 9pm EST every Thursday.  We sincerely invite you to join us in chat if you can make it.  The show is pushed to the Podcast aggregators soon after!  We have an amazing chat community.  Check out our Patreon here: https://www.patreon.com/WarhammerMetaChasers Join us live each and every Thursday on YouTube and join in our awesome chat community. Want to message the show another way?  Hit up Paul on twitter @warmaster_tpm or on Instagram @fightswithdice

The Euro Trip | Eurovision Podcast
An important Euro Trip podcast announcement!

The Euro Trip | Eurovision Podcast

Play Episode Listen Later Oct 8, 2024 58:04


We share some big news ahead of the start of our Eurovision 2025 season and reflect on the last four years of the podcast, including memorable trips to Melfest, Turin, Liverpool and Malmo, while remembering some of the brilliant guests who have joined us on the show. To support the podcast, head to Buy Me A Coffee. Follow us on Twitter, Instagram & TikTok or email hello@eurotrippodcast.com, and find us online at eurotrippodcast.com. Hosted on Acast. See acast.com/privacy for more information.

12 Points - le Podcast qui décrypte l'Eurovision
Slimane Eurovision 2024 - Souvenir de Fabien [BEST OF]

12 Points - le Podcast qui décrypte l'Eurovision

Play Episode Listen Later Sep 30, 2024 9:07


This time I catch up with Fabien, who shares his favourite 12 Points moment. It's a best-of, but also previously unreleased material: he looks back at Slimane's performance at Eurovision 2024. Here you can find the full moment from the Malmo press room, with Slimane's performance and our live commentary. The goal: creating the first French-language podcast about Eurovision. Since September 2021, Thomas, Quentin, Agathe and Vincent have been analysing, debating and discussing this extraordinary contest.
From debriefs of the latest editions to more in-depth geopolitical analysis, they dissect from every angle a show that is as admired as it is criticised. Every year, more than 200 million viewers around the world (including more than 5 million in France) are captivated by this great celebration of the European continent. Memorable songs, flamboyance and colourful atmospheres certainly leave their mark each year, but make no mistake, Eurovision is also about many other issues. Whether historical, cultural, economic or political, the podcast decodes all these surprising and fascinating facets with talent and humour.
The podcast organises its episodes around several strands: thematic analyses of Eurovision (geopolitical, historical, linguistic…), interviews with people who have played a role at Eurovision on stage or behind the scenes, and event coverage (under press accreditation). Hosted by Ausha. Visit ausha.co/politique-de-confidentialite for more information.

Gersnet Podcast
Gersnet Podcast 355 - Hibernian Preview

Gersnet Podcast

Play Episode Listen Later Sep 28, 2024 37:30


After a fine performance and result over Malmo on Thursday, Craig is joined by Stewart Weir to discuss that game and look ahead to what should be a full Ibrox on Sunday to welcome Hibs in the Scottish Premiership. The pod is brought to you in association with our partners: Forrest Precision Engineering and Football Prizes. The Gersnet Podcast: the independent Rangers FC podcast, by fans, for fans. LIVE and FREE every Sunday on YouTube at 9.30pm with match preview shows ahead of each game as well. All available from a range of other platforms on the following day (including iTunes and Spotify).

Davor Suker's Left Foot
The Truth: How Good Was Zlatan?

Davor Suker's Left Foot

Play Episode Listen Later Sep 27, 2024 33:52


It's time for The Truth!Sam Tighe and Dougie Critchley take a look back through the annals of history to analyse the career of Zlatan Ibrahimovic, and ask the question - was he a world class striker in the pantheons of the all-time greats, or just an excellent player in his day?Zlatan has long divided opinion, even in a world before the hyper-sensationalisation of social media that we see today. From dancing through players on his debut, scoring some of the best goals that the game has ever seen, referring to himself exclusively in the third person, needing to be the lightning rod for whichever team he played for, and constant references to lions and gods, there is no denying the man was pure entertainment. But entertaining and being one of the greats are two different things. So is Zlatan right up there with the greatest ever to play the game? Or was his own perception of himself bigger than his legacy on the pitch?The Truth is somewhere in the middle... Hosts: Sam Tighe & Dougie CritchleyProduction & Editing: Jack Collins Studio Recording: Footwork Media And remember, if you'd like more from the Rank Squad, including extra podcasts every Monday and Friday (including our weekly Postbox taking a look at the whole weekend of football) and access to our brilliant Discord community, then why not join us here on Patreon?

Heart and Hand - The Rangers Podcast
Heart and Hand Extra - Success in Sweden

Heart and Hand - The Rangers Podcast

Play Episode Listen Later Sep 27, 2024 58:51


Cammy is here with this week's Extra alongside Alan Bradley as they discuss a fantastic night in Sweden with a 2-0 win over Malmo in the Europa League opener, and look forward to a fully open Ibrox on Sunday as we return to league action against Hibs. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Scottish Football
'Building blocks' for Rangers plus Premiership preview

Scottish Football

Play Episode Listen Later Sep 27, 2024 22:51


Jane Lewis is joined by Jordan Campbell and Cammy Bell to reflect on Rangers' impressive win over Malmo and ask if it could be a catalyst for their season. They also take a look ahead to the weekend's Premiership fixtures as Aberdeen look to claim a record 12th win in a row and managerless Hearts and St Johnstone aim to get their campaigns on track. Get in touch. Our email address is Scottishfootball@bbc.co.uk

Superscoreboard
Friday 27th September | Malmo v Rangers Reaction

Superscoreboard

Play Episode Listen Later Sep 27, 2024 90:02


Gordon Duncan is joined in the studio by Hugh Keevins & Scott Allan, reacting to Rangers' victory against Malmo last night and looking forward to the weekend's Scottish football...

Scottish Football
Sportsound: Reaction as Rangers make a perfect start to their Europa League campaign with victory over Malmo

Scottish Football

Play Episode Listen Later Sep 26, 2024 40:07


Kenny Macintyre, Neil McCann, Steven Thompson and Tom English react to Malmo 0-2 Rangers

Scottish Football
Malmo v Rangers preview

Scottish Football

Play Episode Listen Later Sep 26, 2024 25:18


Alasdair Lamont is joined by Rory Loy and Joachim Bjorklund to preview a huge night for Rangers as they open their Europa League campaign away at Malmo.

Superscoreboard
Thursday 26th September | Malmo v Rangers

Superscoreboard

Play Episode Listen Later Sep 26, 2024 91:38


Gordon Duncan is joined in the studio by Gordon Dalziel & Mark Wilson reacting to Malmo v Rangers and they speak to Gabriel Antoniazzi live in Sweden for updates of the match.

Aye Ready Podcast - A Rangers Podcast
Match Reaction - Thu 26th Sept 2024 - Malmo FF 0-2 Rangers

Aye Ready Podcast - A Rangers Podcast

Play Episode Listen Later Sep 26, 2024 14:09


Dave's reaction to the 2-0 win against Malmo FF.

PLZ Soccer Podcast
Will Rangers secure a win against Malmo? | The Football Show LIVE

PLZ Soccer Podcast

Play Episode Listen Later Sep 26, 2024 61:45


Hearts FC is on the hunt for a new manager after the sacking of Steven Naismith. In this episode, we cover the fallout and explore whether Derek McInnes could be the next in line for the role. We dive deep into the swirling rumours about who might take over at Tynecastle, with plenty of speculation and analysis. SUBSCRIBE NOW:  @PLZSoccer  We also preview the upcoming Malmo vs. Rangers clash, and look ahead to Borussia Dortmund vs. Celtic. Plus, catch up on the latest football headlines including the Premier Sports Cup semi-final draw, Rodri's injury update, and Scott McTominay's debut for Napoli.

Superscoreboard
Wednesday 25th September | Malmo v Rangers Build Up

Superscoreboard

Play Episode Listen Later Sep 25, 2024 90:10


Gordon Duncan is joined in the studio by Marvin Bartley & Cammy Bell as they speak to Gabriel Antoniazzi live in Sweden ahead of Rangers' Europa League match.

Scottish Football
Hearts, Helander and hereditary talent

Scottish Football

Play Episode Listen Later Sep 24, 2024 25:19


Phil Goodlad is joined by Lee Miller to dissect all the big news in Scottish football. They discuss the latest in Hearts' search for a new manager, look ahead to the Masters tournament and find out what it was like for Lee to watch his 18-year-old son take a last-minute penalty in the league cup quarter-final. They also hear from former Rangers and Sweden defender Filip Helander about what the Ibrox side can expect when they face another of his old teams, Malmo, in the Europa League.

Heart and Hand - The Rangers Podcast
Heart and Hand - Shaping Up

Heart and Hand - The Rangers Podcast

Play Episode Listen Later Sep 23, 2024 53:31


Cammy is here with this week's flagship as he and Caroline Morrison discuss the league cup win over Dundee at Ibrox on Saturday, the semi-final opponents and the start of our Europa League campaign against Malmo on Thursday. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Solo Travel with Derron
#081: Southern Sweden's Hidden Gems: Gothenburg and Malmo

Solo Travel with Derron

Play Episode Play 59 sec Highlight Listen Later Sep 3, 2024 10:23


Venture beyond Stockholm to explore southern Sweden, including Gothenburg and Malmö. This episode reveals how to save 75% on accommodation by staying in Malmö, Sweden, rather than Copenhagen, Denmark. Also, you'll hear about an excellent nightclub in Gothenburg and my problems at the Stockholm train station. If you want help to take your first solo international trip, check out my website at solomaletravel.com

En mörk historia
Gängkriget i Malmö - Tärningen kastas 1/6

En mörk historia

Play Episode Listen Later Aug 26, 2024 35:12


In the summer of 2018, the police are alerted to an ongoing kidnapping at Katrinelund in Malmö. There and then, no one can imagine how this event will cast its dark shadow over an entire city for a long time to come. But when Sweden's most brutal gang shooting takes place only a short time later, the police begin to suspect a connection. A programme by Josefin Patzauer. Producer: David Mehr. Sound design and mix: Elin Rosenberg. Graphics and cover art: David Mehr. Original music written by Joel Lyssarides. To get exclusive access to this series and all episodes of En mörk historia, subscribe to ThirdEar+ or Podme at thirdear.studio or podme.com! As a paying subscriber you get exclusive access to this series and all episodes of En mörk historia. Contact: hello@thirdear.studio Instagram: @thirdear.studio @enmorkhistoria Facebook: Third Ear Studio. The responsible publisher at Third Ear Studio is Martin Johnson. This is a production from Third Ear Studio.

En mörk historia
Trailer - Gängkriget i Malmö

En mörk historia

Play Episode Listen Later Aug 26, 2024 1:40


OUT NOW! In the summer of 2018, a conflict begins between two men who, according to the police, control much of Malmö's drug trade. It marks the start of the most violent period in Malmö's history and turns Malmö into a symbol of criminal gang violence. In six episodes we tell the story behind several brutal and astonishing events tied to the conflict, such as the murder of young mother Karolin Hakim, the murder of a Swede in London and outright executions on Spain's Costa del Sol. To get exclusive access to this series and all episodes of En mörk historia, subscribe to ThirdEar+ or Podme at thirdear.studio or podme.com! As a paying subscriber you get exclusive access to this series and all episodes of En mörk historia.

En mörk historia
Gängkriget i Malmö - Los Suecos 2/6

En mörk historia

Play Episode Listen Later Aug 26, 2024 39:06


Gang leader Amir came close to dying when he was shot at outside the internet café Galaxy in Malmö, but he survived with a gunshot wound to the leg. Now he heads south, towards Spain, in order, the police suspect, to run his drug business from there. But soon Amir and his accomplices find themselves in the middle of Spanish murder investigations, suspected of contract killings. And lurking in the shadows is the notorious drug boss Ridouan Taghi. A programme by Josefin Patzauer. Producer: David Mehr. Sound design and mix: Elin Rosenberg. Graphics and cover art: David Mehr. Original music written by Joel Lyssarides. To get exclusive access to this series and all episodes of En mörk historia, subscribe to ThirdEar+ or Podme at thirdear.studio or podme.com! As a paying subscriber you get exclusive access to this series and all episodes of En mörk historia. Contact: hello@thirdear.studio Instagram: @thirdear.studio @enmorkhistoria Facebook: Third Ear Studio. The responsible publisher at Third Ear Studio is Martin Johnson. This is a production from Third Ear Studio.

En mörk historia
Trailer - Gängkriget i Malmö

En mörk historia

Play Episode Listen Later Aug 19, 2024 1:40


In the summer of 2018, a conflict begins between two men who, according to the police, control much of Malmö's drug trade. It marks the start of the most violent period in Malmö's history and turns Malmö into a symbol of criminal gang violence. In six episodes we tell the story behind several brutal and astonishing events tied to the conflict, such as the murder of young mother Karolin Hakim, the murder of a Swede in London and outright executions on Spain's Costa del Sol.

The Food Chain
The business of food halls

The Food Chain

Play Episode Listen Later Aug 14, 2024 26:28


Have you visited a food hall recently? It's a venue bringing together multiple independent food and drink businesses, often with communal seating. We look at the ways in which food halls are being used to bring consumers and spend to new areas, raising the value of surrounding offices, apartments and other businesses. In this programme Devina Gupta visits Society food hall in Manchester in the UK, where she meets Julia Martinelli, who manages the pizza offering from Noi Quattro restaurant and Reece Gibson, operations manager for Vocation Brewery which runs the bar. Mariko Oi in Singapore reports from the Maxwell Hawker Centre in Singapore, to explore how today's food halls have evolved from street food traders. Frode Rønne Malmo from Mathallen in Oslo, Norway and Spiros Loukopoulos, from Reffen in Copenhagen, Denmark talk about the ways in which their food halls have brought people to the surrounding area. Food hall consultant Philip Colicchio in New York in the US explains why this business model has been so popular. Presented by Devina Gupta. Produced by Beatrice Pickup. Additional reporting by Mariko Oi. (Image: a man and a woman enjoying plates of food in a food hall. Credit: Getty Images/BBC)

Hellas Footy Pod
Hellas Football Podcast S5 Ep.5 - Disappointment in Europe, EPO continue their good decisions, even more transfers plus your questions

Hellas Footy Pod

Play Episode Listen Later Aug 9, 2024 62:33


The boys return for another week to talk about Greek football, the gift that keeps on giving. Europe: PAOK hold Malmo to a draw; PAO narrowly lose at home to Ajax; AEK down but not out against Noah. EPO: Greek Cup Final at OAKA; Fortounis returning to the Ethniki?; talks with diaspora abroad. Transfers: Max to PAO; Valbuena returns to Greece with Kalithea. Kits: Olympiakos release the 100th season home kit. Plus your questions.

Run Come Riddims
RCR 24.1 Summer '24 Part 1

Run Come Riddims

Play Episode Listen Later Aug 1, 2024 109:58


Mambo Nairobi mbogi, tucheze! I see you in Freetown and Malmo, big up yourselves too! If you want a shout-out, get your friends to listen and rise in the rankings :) Adding artists from Ghana and Zimbabwe, do you know which ones? Long mix with some very hot tracks and riddims for those long summer days. Jah Frozen - Healing 0:01 Anaicon - Is This Love 2:04 SoulFyah Productions - Wood And Fire Riddim 3:52 Proverb Nesta I - South of Samora 5:58 Rastaman Chant Riddim - Filo Muzik Records 7:27 * Mikey General - Rastaman Chant 7:28 * Big Simon - Mama Said 8:57 * Mikelino Rutz - Quello Che Sei 10:59 * Luciano - Ethiopia Here I Come 12:44 A#keem - Change 15:34 James Lakay Feat. Burning Spectacular - Como El Rio 18:50 Perfect Giddimani - Ah Mi Yard 21:12 Lone Ranger - Forward Ever 23:24 Black Roots - Roots 24:21 Zion Head Feat. Anthony B - Enemies 25:37 Protoje - Legend Legend (Zion I Kings Remix) 28:28 Nuttea & Kabaka Pyramid - Egaux 30:42 Marlon Asher Feat. Sizzla - Never See Us Fall 32:29 Alborosie Feat. Jaz Elise - Faith 34:34 Exco Levi - One Life 36:52 Jah Rain - Garden of Eden 38:29 Allure Riddim - Diligence Music Life 41:00 * David Conscious - I Will Be Burning 41:02 * Diligence - Forever Love You Jah 43:04 * Bushy - Can't Hear 44:45 * AB Zion - Out Of Babylon 46:17 Jah Ziek - Dans Mon Truc 48:14 Stranjah Miller Feat. Spirit Revolution - Call On Jah 50:41 King Lorenzo - New Energy 53:05 Ginjah - Fire 55:12 Stranjah Miller - Time To Rise 58:18 Brimstone Riddim - Dutty Rock Productions 1:00:33 * Quan-Dajai – Brimstone 1:00:35 * Busy Signal – Jah You Know 1:02:50 * Aiesha – Delight 1:04:08 * Ras Ajai – No Drop U Guard 1:05:15 Lutan Fyah - Journey 1:07:11 The Meditations Feat. Major Popular - Carpenter To Rebuild 1:08:51 Ezekiah Rose Feat. Fahmulah - Flip 1:10:33 Hector Roots Lewis - Possibility 1:12:28 Richie Spice - Crazy World 1:14:19 Stranjah Miller - Born To Be A Star 1:15:58 Chezidek - Action Man 1:19:01 Blakk Rasta - Ohba Ohba Generation 1:20:50 Osagyefo - Live The Life U Love 1:23:26 Blakk Rasta - Bua 1:24:55 Top Shelf Feat. Lutan Fyah - Rise 1:26:52 Osagyefo - Ghana Is Suffering 1:29:35 King Lorenzo - I Like 1:31:39 Stick With You Riddim - DJ Densen & Oneness Records 1:33:47 * Kumar - Stick With You 1:33:48 * Tatik - Far Away 1:35:40 * Lyricson - Have Some Love 1:37:21 * Treesha, Denham Smith, DJ Densen - Vampire 1:39:05 Harvest For Life Riddim - Ambassador Musik Production 1:40:49 * Winstrong - Khaki Suit 1:40:54 * Robbie Rule - Never Give Up 1:42:28 * Inezi - Firm Up 1:44:36 * Jah Lil - Seasaw 1:46:45

the Joshua Schall Audio Experience
[MONDAY MINUTE] Oatly Thinks "Outside the Carton" With Minor League Baseball Sponsorship | Malmo Oat Milkers

the Joshua Schall Audio Experience

Play Episode Listen Later Jul 29, 2024 0:54


Can we blame those "turn ahead the clock" Seattle Mariners uniforms from 1998 for minor league baseball creating a 121st team this year sponsored by Oatly? Because it was that fun (and successful) one-off promotion that led to MLB creating a league-wide version the following year…complete with a corporate advertiser, Century 21, which is why the futuristic uniforms were supposedly based on the year 2021. But the idea behind the Malmö Oat Milkers required a little "outside the carton" thinking for Oatly. See…those one-off food themed jerseys (that Oatly's marketing team enjoyed) are based on each team's local culture, but that's an issue for the Swedish plant-based milk brand. So, their solution was to have every existing minor league baseball team transform into the Oat Milkers for one game this season. So, which CPG brand is up next? FOLLOW ME ON MY SOCIAL MEDIA ACCOUNTS: LINKEDIN, YOUTUBE, TWITTER, INSTAGRAM, FACEBOOK --- Support this podcast: https://podcasters.spotify.com/pod/show/joshua-schall/support

Portugal - The Simple Life
Portugal's Eurovision star

Portugal - The Simple Life

Play Episode Listen Later Jul 1, 2024 80:15


Iolanda is a singer and songwriter, born and raised in Portugal, who recently represented Portugal at Eurovision in Malmo, finishing 10th. She is the latest guest on the Portugal The Simple Life Podcast and chats to Dylan about what inspires her music, writing a hit song, what she loves about Portugal and her experience of representing her country on one of the biggest stages. FOLLOW OUR GUESTS: Iolanda on Instagram, Iolanda on YouTube, Iolanda on Spotify. ABOUT PORTUGAL THE SIMPLE LIFE PODCAST: "Portugal - The Simple Life", an insider's perspective on Portugal. We already know about Portugal's fantastic weather, food and people. In this podcast, we go deeper to meet the people who make this country so wonderful. Dylan, who has made his life in Portugal, shares an insider's perspective on what makes Portugal the unique, beautiful and fantastic country it is. Join him and his guests weekly as they shed light on the incredible people, culture, history and lifestyle that make Portugal so appealing. A country where everyone feels like they belong. Don't forget to subscribe to our Podcast to receive more stories about living and moving to Portugal! SPONSOR: Portugal Realty, a Leisure Launch group company, sponsors this episode.

The Coach's Journey
#69: Myles Downey - What is effective coaching? Performance coaching, when to be directive, changing clients' maps of reality and more

The Coach's Journey

Play Episode Listen Later Jun 20, 2024 100:24


Myles Downey has been practising coaching for more than 40 years. His book, Effective Modern Coaching, has sold more than 330,000 copies and is a recommended read on many coaching training courses.In this episode, Myles returns for his second appearance on The Coach's Journey Podcast, with the questions asked by Robbie and Neil – doubling the hosts in order to double the insight!We dive deep into the nuances of coaching, exploring themes of self-actualisation, performance, and the delicate balance between directive and non-directive coaching, leveraging Myles' experience coaching senior executives and leadership teams across the globe.This episode coincides with the release of a new edition of Effective Modern Coaching, which has enabled him to fine-tune everything he has learned about effective coaching over the last 40 years, including through founding The School of Coaching in 1996, the first institution in Europe to focus solely on the development of coaching skills for coaches, managers and leaders.The episode is a treasure trove of insights for new and experienced coaches alike. Myles' wisdom provides a deep dive into the art and science of coaching. Whether you're a seasoned coach or just starting your journey, this episode offers valuable perspectives on how to elevate your practice and truly make a difference in your clients' lives.In particular, we talk about:Pushing back against the culture contagion of compliance: how we get engineered in society and lose the capacity to trust ourselves.Performance coaching and self-actualisation: how to align personal expression with professional performance.Tim Gallwey's Inner Game and its influence on Myles and Neil, including diving into Gallwey's Self One and Self Two.Construing, constructs and creating our maps of reality (and how non-directive coaching makes all the difference).‘Transcend and include' as an underpinning principle in our development as coaches and that of our clients.Proposing: the delicate balance required to offer suggestions to our clients without imposing them.Plus, we talk about whether the shine has come off coaching and other changes Myles has noticed over his decades in the industry, and Neil invites Myles to get into the business-building parts of his work: how he structures engagements, how his sales process looks and more...To listen to Myles' first appearance on The Coach's Journey Podcast, visit https://www.thecoachsjourney.com/podcast/episode-31-myles-downeyFor more information about Myles, visit www.mylesdowney.com or find him on LinkedIn at www.linkedin.com/in/mylesdowneyFor more information about Robbie Swale, visit www.robbieswale.com and for more information about Neil Mackinnon, visit www.neilmackinnon.net.Read more about The Coach's Journey at www.thecoachsjourney.com.Music by My Good Man William: listen on Spotify: https://open.spotify.com/artist/4KmeQUcTbeE31uFynHQLQgTo support the Coach's Journey, visit www.patreon.com/thecoachsjourney and to join the Coach's Journey Community visit www.thecoachsjourney.com/community.Things and people we mentioned (that you might be interested in):Myles' previous appearance on The Coach's Journey Podcast: https://www.thecoachsjourney.com/podcast/episode-31-myles-downey Myles on The Edge of Coaching Podcast: https://open.spotify.com/episode/0EWTGZa8XnMF1n13Q76wSr?si=YF9zAEILScCunFai85RIaQ The Inner Game of Tennis by Timothy Gallwey: https://www.amazon.co.uk/Inner-Game-Tennis-ultimate-performance/dp/1447288505/ The Inner Game of Music by Timothy Gallwey: 
https://www.amazon.co.uk/Inner-Game-Music-Timothy-Gallwey/dp/1447291727 The new edition of Effective Modern Coaching: https://www.amazon.co.uk/Effective-Modern-Coaching-principles-successful/dp/191595116X/ David Clutterbuck: https://clutterbuck-cmi.com/ Aristotle Onassis: https://en.wikipedia.org/wiki/Aristotle_Onassis Enabling Genius by Myles Downey: https://www.amazon.co.uk/Enabling-Genius-Myles-Downey/dp/1910649538 Academy of Executive Coaching: https://www.aoec.com/ Mihaly Csikszentmihalyi: https://en.wikipedia.org/wiki/Mihaly_Csikszentmihalyi Roberto Assagioli: https://en.wikipedia.org/wiki/Roberto_Assagioli The Craftsman by Richard Sennett: https://www.amazon.co.uk/Craftsman-Richard-Sennett/dp/0141022094 Jennifer Garvey Berger on The Coach's Journey Podcast: https://www.thecoachsjourney.com/podcast/episode-42-jennifer-garvey-berger-the-answer-to-either-or-is-both The Prosperous Coach book: https://richlitvin.com/book/ BIOGRAPHY FROM MYLES: Myles Downey is one of the leading executive coaches in Europe, with global experience (Europe, North and South America, Asia-Pacific, UAE). He has worked across the C-suite in many prestigious organisations over the last 30 years, in a wide variety of industries, including Banking and Financial Services, Manufacturing, Oil and Gas, Professional Services, Tech and the Public Sector. Myles is a recognized authority on Performance, Coaching and Leadership and the author of three classics in the coaching and performance arena: 'Effective Modern Coaching' and 'Effective Coaching' (between them 300,000+ books sold); 'Enabling Genius – a mindset for success in the 21st Century'; and 'The Enabling Manager - how to get the best out of your team', published in July 2022. Myles has deployed his performance coaching programs with the Senior Coaches of the England Rugby Team and with the New Zealand Elite Rugby Coaches. Myles has been a speaker at many leadership conferences, including the BBC's Conference for its top 400 senior executives, the McKinsey Global Partners Conference in Singapore and the ICF Global Conference in Malmo, to name a few. After 33 years living in London, Myles now lives in Norfolk, England. He was born in Dublin, Ireland in 1959. Myles plays for the Norfolk Veterans Tennis team and competes occasionally on the ITF Masters Tour.

The Ben Shapiro Show
Ep. 1962 - The Greta Thunberg Idiots' Revolt

The Ben Shapiro Show

Play Episode Listen Later May 10, 2024 50:14


Greta Thunberg joins anti-Semitic protesters in Malmo, Sweden; Joe Biden declares that he won't leave anyone behind while abandoning American hostages to Hamas; and The New York Times says that the Republicans are the real antisemites. Click here to join the member exclusive portion of my show: https://utm.io/ueSEj Ep.1962 - - -  DailyWire+: Watch the premiere of our new animated sitcom Mr. Birchum this Sunday, May 12th at 9 PM ET on DailyWire+: https://bit.ly/4akO7wC Get 25% off your DailyWire+ Membership here: https://bit.ly/4akO7wC  - - -  Today's Sponsors: Eight Sleep - Exclusive discount for my listeners with promo code: SHAPIRO at https://www.eightsleep.com/shapiro/ Beam - Get 40% off for a limited time! http://www.ShopBeam.com/BEN Bambee - Visit https://www.bambee.com/ and type in ‘Ben Shapiro' when you sign up. Blinds - Exclusive Discount for my Listeners! Tell them The Ben Shapiro Show sent you! https://www.blinds.com/ Robinhood - Learn more by downloading the Robinhood app or by visiting http://www.Robinhood.com *Returns are not guaranteed. Interest is earned on uninvested cash swept from your brokerage account to program banks. The cash sweep program is offered through Robinhood Financial LLC. Terms apply. Robinhood is not a bank. Bigger instant deposits are only available if your instant deposit status is in good standing. Robinhood Financial LLC (member SIPC) is a registered broker-dealer. - - - Socials: Follow on Twitter: https://bit.ly/3cXUn53  Follow on Instagram: https://bit.ly/3QtuibJ  Follow on Facebook: https://bit.ly/3TTirqd  Subscribe on YouTube: https://bit.ly/3RPyBiB 

Ukrainecast
Eurovision: how important is it to Ukraine?

Ukrainecast

Play Episode Listen Later May 10, 2024 30:20


What does the competition mean for Ukrainians? Lucy is joined by Eurovision reporter Daniel Rosney, who is in the Swedish city of Malmo, which plays host to Eurovision's grand final on Saturday. Tymofii Muzychuk, who won Eurovision in 2022 with the Kalush Orchestra, tells us what life has been like since their performance and why the song contest matters so much to his country. And we catch up with Ukraine's Eurovision TV commentator, Timur Miroshnychenko, who will again be covering the competition from a bunker. Today's episode is presented by Lucy Hockings. The producers were Arsenii Sokolov and Cordelia Hemming. The technical producers were Hannah Montgomery and Emma Crowe. The series producer is Tim Walklate. The senior news editor is Sam Bonham. Email Ukrainecast@bbc.co.uk with your questions and comments. You can also send us a message or voice note via WhatsApp, Signal or Telegram to +44 330 1239480. You can join the Ukrainecast discussion on Newscast's Discord server here: tinyurl.com/ukrainecastdiscord

The Times of Israel Daily Briefing
Day 217 - Blinken report to exonerate Israel; will US arms follow?

The Times of Israel Daily Briefing

Play Episode Listen Later May 10, 2024 20:40


Welcome to The Times of Israel's Daily Briefing, your 20-minute audio update on what's happening in Israel, the Middle East and the Jewish world. It is day 216 of the war with Hamas. US bureau chief Jacob Magid and news editor Amy Spiro join host Amanda Borschel-Dan for today's episode. US Secretary of State Antony Blinken is expected to deliver a report to Congress this week that will criticize Israel but ultimately conclude that the Biden administration has accepted assurances from Jerusalem that the IDF is using American weapons in accordance with international law. How could this shift the contentious US-Israel relationship -- or would it? The conceptual dissonance over the Gaza war between Israel and the US was highlighted Thursday with statements by White House National Security Council spokesperson John Kirby who stated, “Any kind of major Rafah ground operation would actually strengthen Hamas's hands at the negotiating table, not Israel's. That's our view." Magid looks into the differing stances. Israel's Eden Golan advanced to the grand final of the Eurovision on Thursday night in Malmo, Sweden, qualifying with her song “Hurricane” despite months of anti-Israel protests against her participation. Spiro gives the full picture. For the latest updates, please see The Times of Israel's ongoing live blog. Discussed articles include: Report: State Department set to confirm Israel not breaking international law in Gaza Despite Biden's pause, billions of dollars in US arms for Israel still in pipeline ‘Didn't fall from the sky': Biden threat follows months of feeling PM ignored his warnings US says it's not abandoning Israel, asserts Rafah offensive would embolden Hamas Defying haters, Israel's Eden Golan advances to the Eurovision grand final on Saturday THOSE WE HAVE LOST: Civilians and soldiers killed in Hamas's onslaught on Israel THOSE WE ARE MISSING: The hostages and victims whose fate is still unknown Subscribe to The Times of Israel Daily Briefing on Apple Podcasts, Spotify, YouTube, or wherever you get your podcasts. This episode was produced by the Pod-Waves.  IMAGE: Israeli soldiers at a staging area near the Israeli-Gaza Border, southern Israel, May 9, 2024. (Flash90)See omnystudio.com/listener for privacy information.

O'Connor & Company
Jonna Spilbor, Greta Thunberg Joins Anti-Israel Protests, Eden Gafner, Geomagnetic Impacts

O'Connor & Company

Play Episode Listen Later May 10, 2024 25:48


In the 7 AM Hour: Larry O'Connor and Patrice Onwuka discussed: WMAL GUEST: 7:05 AM - INTERVIEW - JONNA SPILBOR - attorney and legal analyst Website: https://jonnaspilbor.com/ Stormy Daniels under fire: 5 takeaways from Day 14 of the Trump trial. Greta Thunberg was just in Malmo, Sweden wearing a Keffiyeh while protesting Jews outside the Eurovision venue. Greta is merging antisemitism with her environmentalism. John Ondrasik on X: "Eden Golan, our fellow artist, cannot leave her hotel room in fear for her life because she is Jewish. This is 2024. I call on every artist to join me in condemning publicly this despicable act of hate. This is a time for choosing. Your silence is complicit. @Eurovision" WMAL GUEST: 7:35 AM - INTERVIEW - EDEN GAFNER - survivor of Oct 7th in Israel WEBSITE to get more info: www.ChabadGainesville.com She is sharing her story of experiencing Oct. 7th in Israel and will be at an event in HAYMARKET THIS WEEKEND. Eden Gafner (28) was in Kibbutz Re'im, the same Kibbutz that hosted the NOVA festival, on October 7th. She was visiting her parents' home when terrorists entered the kibbutz and broke into their home. Together with her family she hid in an attic for 26 hours, while under fire, until they were rescued by the IDF and evacuated from the Kibbutz. Her family survived, but many of her neighbors and 400+ festival goers trying to escape from the surrounding fields did not. Eden will be sharing her story for the wider community at a Shabbat Lunch at 12:45 pm TOMORROW, SATURDAY MAY 11TH. The event is hosted by Chabad Center for Jewish Life in Greater Gainesville & Manassas (servicing the Jewish Community in Gainesville, Haymarket, Bristow, Manassas and Prince William County): www.ChabadGainesville.com World told to brace for 'severe geomagnetic storm' today - the first in nearly 20 years - which could bring chaos to mobile phone networks, GPS satellites and power grids. Where to find more about WMAL's morning show: Follow the Show Podcasts on Apple podcasts, Audible and Spotify. Follow WMAL's "O'Connor and Company" on X: @WMALDC, @LarryOConnor, @Jgunlock, @patricepinkfile and @heatherhunterdc. Facebook: WMALDC and Larry O'Connor Instagram: WMALDC Show Website: https://www.wmal.com/oconnor-company/ How to listen live weekdays from 5 to 9 AM: https://www.wmal.com/listenlive/ Episode: Friday, May 10, 2024 / 7 AM Hour. O'Connor and Company is proudly presented by Veritas Academy. See omnystudio.com/listener for privacy information.

We Speak English Good
Episode 633 - Music News Eurovison Protest Roger Waters Podcast

We Speak English Good

Play Episode Listen Later May 10, 2024 70:26


On this Episode of WSEG, we delve into the unfolding events in Malmo, Sweden, where protests in favor of Palestine are taking place. This is in response to the decision by Eurovision to permit Israel's participation. Furthermore, we will discuss the notable endorsement of these pro-Palestine demonstrations by the esteemed musician, Roger Waters.

Inside Europe | Deutsche Welle
Inside Europe 9 May 2024

Inside Europe | Deutsche Welle

Play Episode Listen Later May 9, 2024 54:59


It's Eurovision Finals week so we've gone all out on a Euro-Culture special! Alongside the hottest-takes from Malmo 2024, we'll be bringing you the best of Liveurope in Brussels, and the arrival of the Olympic torch in Marseille. Enjoy… because this is about as lycra-packed as Inside Europe is ever likely to get! Plus: DW's Don't Drink the Milk podcast explores the backstory of the bagel

The Times of Israel Daily Briefing
Day 215 - What is Israelis' top priority: War or hostages?

The Times of Israel Daily Briefing

Play Episode Listen Later May 8, 2024 21:42


Welcome to The Times of Israel's Daily Briefing, your 20-minute audio update on what's happening in Israel, the Middle East and the Jewish world. It is day 215 of the war with Hamas. Senior analyst Haviv Rettig Gur and news editor Amy Spiro join host Amanda Borschel-Dan for today's episode. The Biden administration on Tuesday night confirmed reports that it had recently held up a large shipment of 2,000- and 500-pound bombs that it feared Israel might use in a major ground operation in the densely populated southern Gaza city of Rafah. But it also appeared to signal its initial approval of the operation launched by Israel early Tuesday morning to take over the Palestinian side of the Rafah border crossing with Egypt. Rettig Gur weighs in on these push-pull announcements. According to polling by the Israel Democracy Institute (IDI) that was released yesterday, a majority of Israelis believe that reaching a hostage deal with Hamas should be the country's top national priority — more important than launching a military operation against the terror group in Rafah. We hear whether this accurately reflects Israeli thinking and what the numbers truly mean. The Eurovision Song Contest in Malmo, Sweden, officially began Tuesday evening with the first live semifinal. Israel's contestant is set to take the stage only on Thursday, but there's plenty to talk about in the meantime. Spiro fills us in. For the latest updates, please see The Times of Israel's ongoing live blog. Discussed articles include: US confirms holding up sale of heavy bombs it feared Israel would use in Rafah US signals backing for ‘limited op' after IDF takes over Gazan side of Rafah crossing US completes construction of Gaza aid pier, but weather preventing installation Poll: Majority of Israelis support prioritizing hostage deal over Rafah operation Hostage families urge US, other countries to press Israel to reach deal with Hamas Eurovision organizers rebuke performer who wore keffiyeh during first semifinal show THOSE WE HAVE LOST: Civilians and soldiers killed in Hamas's onslaught on Israel THOSE WE ARE MISSING: The hostages and victims whose fate is still unknown Subscribe to The Times of Israel Daily Briefing on Apple Podcasts, Spotify, YouTube, or wherever you get your podcasts. This episode was produced by the Pod-Waves.  IMAGE: Einav Zangauker holds a sign identifying her son Matan (24), one of the hostages taken captive by Hamas in the Gaza Strip during the October 7 massacre, as she stands on the roof of a car during a demonstration by hostages' relatives and supporters in the Israeli coastal city of Tel Aviv on May 6, 2024. (Jack Guez / AFP)See omnystudio.com/listener for privacy information.

Premier League Gambling Podcast
Eurovision Song Contest 2024 Picks

Premier League Gambling Podcast

Play Episode Listen Later May 8, 2024 37:02


Oh yes, we're covering the Eurovision Song Contest. Following Loreen's win in Liverpool last year, Malmo in Sweden will host this year's competition. Malcolm Bamford is joined by Eurovision expert Chris Ogle to mark your card for the event, giving us the trends and analysis to make winning bets. #eurovision #malmo2024

JOIN the SGPN community #DegensOnly
Exclusive Merch, Contests and Bonus Episodes ONLY on Patreon - https://sg.pn/patreon
Discuss with fellow degens on Discord - https://sg.pn/discord
Download The Free SGPN App - https://sgpn.app
Check out the Sports Gambling Podcast on YouTube - https://sg.pn/YouTube
Check out our website - http://sportsgamblingpodcast.com

SUPPORT us by supporting our partners
NYRA Racing code SGPN25 - $25 FREE BET and $200 Deposit Bonus - https://racing.nyrabets.com/sign-up-bonus/sgpn25?utm_source=sgpn&utm_medium=paid_social&utm_campaign=sgpn_25&utm_content=1080x1080
Underdog Fantasy code SGPN - 100% Deposit Match up to $100 - https://play.underdogfantasy.com/p-sgpn
Royal Retros code SGPN - 10% off - https://www.royalretros.com/
Gametime code SGPN - Download the Gametime app, create an account, and use code SGPN for $20 off your first purchase - https://gametime.co/
Football Contest Proxy - Use promo code SGP to save $50 at - https://www.footballcontestproxy.com/

ADVERTISE with SGPN - Interested in advertising? Contact sales@sgpn.io

Watch the Premier League Gambling Podcast on YouTube - @premierleaguegamblingpodcast

Follow The Premier League Gambling Podcast On Social Media
Twitter - @sgpnpremier
TikTok - @toonbazfootball

Follow The Hosts On Social Media
Malcolm Bamford - @mal_b_sport
Barry Penaluna - @toonbazza

Gambling problem? Call 1-800-GAMBLER (CO, DC, IL, IN, LA, MD, MS, NJ, OH, PA, TN, VA, WV, WY)
Call 877-8-HOPENY or text HOPENY (467369) (NY)
Call 1-800-327-5050 (MA)

Switched on Pop
Eurovision 2024: from Baby Lasagna to Windows95Man

Switched on Pop

Play Episode Listen Later May 7, 2024 33:22


It's that time of year again, when the entirety of Europe (and a few other countries) comes together to celebrate kitschy, bombastic songwriting through the Eurovision Song Contest! This year's competition, held in Malmo, Sweden, features everything from rave-pop on behalf of the Netherlands to folk-rap hybrids courtesy of Ukraine – and Charlie and Nate are here to musicologically unpack the craziest tracks that have the potential to win it all. For more on the controversy surrounding this year's contest, check out Charlie's appearance on Vox's podcast Today, Explained.

Songs discussed:
Joost – Europapa
Angelina Mango – La noia
alyona alyona, Jerry Heil – Teresa & Maria
Nemo – The Code
Baby Lasagna – Rim Tim Tagi Dim
Windows95Man – No Rules!
Kaleen – We Will Rave
Olly Alexander – Dizzy
Bambie Thug – Doomsday Blue
Ladaniva – Jako

Learn more about your ad choices. Visit podcastchoices.com/adchoices

ESC Insight: The Eurovision Song Contest Podcast
Eurovision Insight Podcast: Daily News From Malmö, Tuesday 7th May

ESC Insight: The Eurovision Song Contest Podcast

Play Episode Listen Later May 7, 2024 28:00


Fin Ross Russell & Dude Points are joined by Malmö-born Anton Rasegard to preview Semi Final One. They discuss fantastic Swedish production, Irish momentum and Luxembourg's return before ending with some unique moments from around Malmö. The post Eurovision Insight Podcast: Daily News From Malmö, Tuesday 7th May appeared first on ESC Insight - Home of the Unofficial Eurovision Song Contest Podcast.

Ask a Jew
Good Evening Ramalmo! Eurovision talk with Sharon Davidovitch

Ask a Jew

Play Episode Listen Later May 2, 2024 63:44


Forget college campuses, the real culture war will take place next week in Malmo, Sweden, where the Eurovision Song Contest (otherwise known as the "gay olympics") takes place. We are joined by sports journalist and Eurovision rainman Sharon Davidovitch to talk about how Israel will fare this year on the glitter-soaked battlefield - even if you know nothing of an event as big as the Super Bowl across the pond, this episode offers fascinating takes on the intersection between geopolitics and culture. We talk about why Israel had to change the name of its song this year or risk being kicked out, how Dana International blew the world's mind in 1998 as the first transgender winner (from Israel!), why the 1983 outfits of the Israeli team were a big F U to Germany, how we determine who is antisemitic based on how they vote, and a little about what it's like to play and cover sports on behalf of Israel abroad. Come for the music, stay for the weird insights into Scandinavian incestuous voting habits.

Show notes:
Israel's banger 2024 entry, Hurricane by Eden Golan (previously titled "October Rain")
Israeli contestant warned to stay in her hotel room in Sweden
Israel's first win, A-Ba-Ni-Bi (Paris, 1978)
Israel's second win, Hallelujah (Jerusalem, 1979)
Israel's third win, Diva (Birmingham, 1998)
Israel's fourth win, Toy (Lisbon, 2018)
Honorary mention - "Chai" (Munich, 1983)
Spotify playlist of Israeli Eurovision entries
Sharon's Twitter
Sharon's website (Hebrew)

Join the AAJ conversation on Substack! askajew.substack.com
Email us your questions: askajewpod@gmail.com

⭐ ⭐ ⭐ ⭐ ⭐ Want to help us grow? Rate and review us 5 stars on Apple Podcasts and Spotify ⭐ ⭐ ⭐ ⭐ ⭐