Podcasts about VQ

  • 93 podcasts
  • 189 episodes
  • 33m avg duration
  • 1 monthly new episode
  • Latest: May 27, 2025



Best podcasts about VQ

Latest podcast episodes about VQ

Quick Spin
2025 Nissan Murano Review: Elevate the Name

May 27, 2025 · 14:30


Nissan's Murano has been giving Nissan customers another crossover option for over two decades. Since its launch in 2002, the Murano has sat near the top of the brand's pricing strategy. While the Murano has ventured into some strange body styles, with the CrossCabriolet leading the pack, the latest, fourth-generation Murano is back on track, albeit a track that leans toward the Infiniti side of Nissan's lineup. The '25 Murano also ditches the popular 3.5-liter V6 in favor of a variable-compression turbocharged I4, which sends 241 hp and 260 lb-ft of torque through a nine-speed automatic transmission. Those figures are almost exactly flipped from the VQ-powered predecessor.

On this episode of Quick Spin, Autoweek Executive Editor Tom Murphy hops behind the wheel of the 2025 Murano and puts it through its paces. Murphy takes you on a guided tour of the fourth-generation Murano and highlights some of his favorite features, then takes you along for a live drive review. Adding to these segments, Murphy chats with host Wesley Wren about the Murano, how it stacks up against the competition, and more. Closing the show, the pair breaks down what makes the 2025 Nissan Murano special.

Happily EVERything Disney
2025-02-19: No more VQs at WDW

Feb 19, 2025 · 11:02


Volcano Bay nights are on sale. But more importantly, TBA and Guardians are off VQ! We break down the impact.

Twitter/X handles:
Dizhappenings: https://twitter.com/dizhappenings
Shaun: https://twitter.com/rankingthemouse
Matt: https://twitter.com/mattpeto

Before/After Watch music in Dizhappenings copyrighted by Audio Jungle

The KE Report
Quetzal Copper - Maiden Drill Program At The Princeton Copper Project, BC, Adjacent To The Copper Mountain Mine

Jan 21, 2025 · 13:55


Matt Badiali, President and CEO of Quetzal Copper (TSX.V:Q), joins us to discuss the maiden drill program at the Princeton Copper Project in BC. The Project is adjacent to the Copper Mountain Mine; see the map below.

Matt outlines the Company's journey from its listing on the TSX Venture Exchange to the commencement of drilling. He provides an overview of Quetzal's projects, including the previous drilling in Mexico and current efforts in Princeton, and discusses the logistical advantages and key target areas for the drill program.

Key exploration targets include Bud South and Knob Hill, prioritized based on historical data, geophysical anomalies, and surface indications of copper and gold mineralization. Matt also emphasizes the Company's approach of balancing risk and potential reward while providing details on the planned drilling activities and goals.

Please email me with any follow-up questions you have for Matt. My email address is Fleck@kereport.com.

Click here to visit the Quetzal Copper website.

Figure 1: Location of Princeton Project Claims and Targets

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break, bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all to all our LS supporters who helped fund the gorgeous venue and A/V production!

For NeurIPS last year we did our standard conference podcast coverage, interviewing selected papers (as we have now also done for ICLR and ICML), but we felt we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap the 2024 year in review with experts. As a result, we organized the first Latent Space LIVE!, our first in-person miniconference, at NeurIPS 2024 in Vancouver.

The single most requested domain was computer vision, and we could think of no one better to help us recap 2024 than our friends at Roboflow, who were one of our earliest guests in 2023 and had one of this year's top episodes in 2024 again. Roboflow has since raised a $40m Series B!

Links

Their slides are here:

All the trends and papers they picked:

* Isaac Robinson
* Sora (see our Video Diffusion pod) - extending diffusion from images to video
* SAM 2: Segment Anything in Images and Videos (see our SAM2 pod) - extending prompted masks to full video object segmentation
* DETR dominance: DETRs show Pareto improvement over YOLOs
* RT-DETR: DETRs Beat YOLOs on Real-time Object Detection
* LW-DETR: A Transformer Replacement to YOLO for Real-Time Detection
* D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement
* Peter Robicheaux
* MMVP (Eyes Wide Shut?
Exploring the Visual Shortcomings of Multimodal LLMs)
* Florence 2 (Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks)
* PaliGemma / PaliGemma 2
* PaliGemma: A versatile 3B VLM for transfer
* PaliGemma 2: A Family of Versatile VLMs for Transfer
* AIMv2 (Multimodal Autoregressive Pre-training of Large Vision Encoders)
* Vik Korrapati - Moondream

Full Talk on YouTube

Want more content like this? Like and subscribe to stay updated on our latest talks, interviews, and podcasts.

Transcript/Timestamps

[00:00:00] Intro

[00:00:05] AI Charlie: Welcome to Latent Space Live, our first mini conference held at NeurIPS 2024 in Vancouver. This is Charlie, your AI co-host. When we were thinking of ways to add value to our academic conference coverage, we realized that there was a lack of good talks just recapping the best of 2024, going domain by domain.

[00:00:36] AI Charlie: We sent out a survey to the over 900 of you who told us what you wanted, and then invited the best speakers in the Latent Space network to cover each field. 200 of you joined us in person throughout the day, with over 2,200 watching live online. Our second featured keynote is The Best of Vision 2024, with Peter Robicheaux and Isaac [00:01:00] Robinson of Roboflow, with a special appearance from Vik Korrapati of Moondream.

[00:01:05] AI Charlie: When we did a poll of our attendees, the highest-interest domain of the year was vision. And so our first port of call was our friends at Roboflow. Joseph Nelson helped us kickstart our vision coverage in episode 7 last year, and this year came back as a guest host with Nikhila Ravi of Meta to cover Segment Anything 2.

[00:01:25] AI Charlie: Roboflow have consistently been the leaders in open source vision models and tooling, with their supervision library recently eclipsing PyTorch's vision library, and Roboflow Universe hosting hundreds of thousands of open source vision datasets and models.
They have since announced a $40 million Series B led by Google Ventures.

[00:01:46] AI Charlie: Woohoo.

[00:01:48] Isaac's picks

[00:01:48] Isaac Robinson: Hi, we're Isaac and Peter from Roboflow, and we're going to talk about the best papers of 2024 in computer vision. For us, we defined best as what made [00:02:00] the biggest shifts in the space. And to determine that, we looked at what major trends happened and what papers most contributed to those trends.

[00:02:09] Isaac Robinson: So I'm going to talk about a couple of trends, Peter's going to talk about a trend, and then we're going to hand it off to Moondream. The trends that I'm interested in talking about are a major transition from models that run on a per-image basis to models that run, using the same basic ideas, on video, and then also how DETRs are starting to take over the real-time object detection scene from the YOLOs, which have been dominant for years.

[00:02:37] Sora, OpenSora and Video Vision vs Generation

[00:02:37] Isaac Robinson: So as a highlight, we're going to talk about Sora, which from my perspective is the biggest paper of 2024, even though it came out in February. And Sora isn't even a paper; it's just a blog post. So I'm going to fill it in with details from replication efforts, including OpenSora and related work such as Stable [00:03:00] Video Diffusion. And then we're also going to talk about SAM 2, which applies the SAM strategy to video, and then the improvements in 2024 to DETRs that are making them a Pareto improvement over YOLO-based models.

[00:03:15] Isaac Robinson: So to start this off, we're going to talk about the state of the art of video generation at the end of 2023: MAGVIT. MAGVIT is a discrete-token video tokenizer akin to VQ-GAN, but applied to video sequences.
And it actually outperforms state-of-the-art handcrafted video compression frameworks

[00:03:38] Isaac Robinson: in terms of the bit rate versus human preference for quality. And videos generated by autoregressing on these discrete tokens are pretty nice, but only up to about five seconds long and, you know, not super detailed. And then suddenly, a few months later, we have this, which when I saw it was totally mind-blowing to me.

[00:03:59] Isaac Robinson: 1080p, [00:04:00] a whole minute long. We've got light reflecting in puddles. It reminds me of those RTX demonstrations for next-generation video games such as Cyberpunk, but with better graphics. You can see some issues in the background if you look closely, but as with a lot of these models, the issues tend to be things that people aren't going to pay attention to unless they're looking for them, in the same way that six fingers on a hand is a giveaway you won't notice unless you're looking for it. So yeah, as we said, Sora does not have a paper, so we're going to fill it in with context from the rest of the computer vision scene attempting to replicate these efforts. The first step: you have an LLM caption a huge amount of videos.

[00:04:48] Isaac Robinson: This is a trick that they introduced in DALL-E 3, where they train an image captioning model to generate very high-quality captions for a huge corpus and then train a diffusion model [00:05:00] on that. The Sora replication efforts also show a bunch of other steps that are necessary for good video generation,

[00:05:09] Isaac Robinson: including filtering by aesthetic score and filtering to make sure the videos have enough motion, so the generator isn't just learning to generate static frames. Then we encode our video into a series of space-time latents.
Once again, Sora is very sparse in details.

[00:05:29] Isaac Robinson: So among the replication-related works, OpenSora actually uses MAGVIT-v2 itself to do this, but swaps out the discretization step for a classic VAE autoencoder framework. They show that there's a lot of benefit from the temporal compression, which makes a lot of sense, as sequential frames in videos have mostly redundant information.

[00:05:53] Isaac Robinson: So by compressing in the temporal dimension, you allow the latent to hold [00:06:00] a lot more semantic information while avoiding that duplication. So, we've got our space-time latents, possibly via some 3D VAE, presumably a MAGVIT-v2, and then you throw it into a diffusion transformer.

[00:06:19] Isaac Robinson: I think it's personally interesting to note that OpenSora is using MAGVIT-v2, which originally used an autoregressive transformer decoder to model the latent space, but is now using a diffusion transformer. So it's still a transformer; the question is just whether it's [00:06:37] parameterizing the stochastic differential equation, or parameterizing a conditional distribution via autoregression. It's also worth noting that most diffusion models today, the very high-performance ones, are switching away from the classic DDPM (denoising diffusion probabilistic models) framework to rectified flows.

[00:06:57] Isaac Robinson: Rectified flows have a very interesting property: as [00:07:00] they converge, they actually get closer to being sampleable in a single step, which means that in practice you can generate high-quality samples much faster. A major problem of DDPM and related models for the past four years is just that they require many, many steps to generate high-quality samples.

[00:07:22] Isaac Robinson: And naturally, the third step is throwing lots of compute at the problem.
So I never figured out how to get this video to loop, but we see very little compute, medium compute, lots of compute. This is so interesting because the original diffusion transformer paper from Facebook actually showed that the specific hyperparameters of the transformer didn't really matter that much.

[00:07:48] Isaac Robinson: What mattered was that you were just increasing the amount of compute that the model had. So I love how in the, once again, little blog post, they don't even talk about [00:08:00] the specific hyperparameters. They say, we're using a diffusion transformer and we're just throwing more compute at it, and this is what happens.

[00:08:08] Isaac Robinson: OpenSora shows similar results. The primary issue, I think, is that no one else has a 32x compute budget, so we end up in the middle of the domain in most of the related work, which is still super, super cool. It's just a little disappointing considering the context. So I think this is a beautiful extension of the framework that was introduced in '22 and '23 for very high-quality per-image generation, extending that to videos.

[00:08:39] Isaac Robinson: It's awesome. And it's GA as of Monday, except no one can seem to get access to it because they keep shutting down the login.

[00:08:46] SAM and SAM2

[00:08:46] Isaac Robinson: The next paper I wanted to talk about is SAM. We at Roboflow allow users to label data and train models on that data. SAM, for us, has saved our users 75 years of [00:09:00] labeling time.

[00:09:00] Isaac Robinson: We are, to the best of my knowledge, the largest SAM API that exists.
SAM also allows us to have our users train pure bounding-box regression models and use those to generate high-quality masks, which has the great side effect of requiring less training data for meaningful convergence.

[00:09:20] Isaac Robinson: Most people are data-limited in the real world, so anything that requires less data to get to something useful is super useful. Many of our users run their per-frame object detectors on every frame in a video. And so SAM 2 falls into this category of taking something that really, really works and applying it to video, which has the wonderful benefit of being plug-and-play with many of our users' use cases.

[00:09:53] Isaac Robinson: We're still building out a sufficiently mature pipeline to take advantage of that, but it's in the works. [00:10:00] So here we've got a great example. We can click on cells and then follow them. You'll even notice the cell goes away and comes back and we can still keep track of it, which is very challenging for existing object trackers.

[00:10:14] Isaac Robinson: High-level overview of how SAM 2 works: there's a simple pipeline here where we can provide some type of prompt, and it fills out the likely masks for that object throughout the rest of the video. So here we're giving a bounding box in the first frame, a set of positive/negative points, or even just a simple mask.

[00:10:36] Isaac Robinson: I'm going to assume people are somewhat familiar with SAM, so I'll just give a high-level overview of how SAM works. You have an image encoder that runs on every frame.
SAM 2 can be used on a single image, in which case the only difference between SAM 2 and SAM is the image encoder: SAM used a standard ViT, [00:11:00] while SAM 2 replaced it with the Hiera hierarchical encoder, which gets approximately the same results but leads to six times faster inference, which is

[00:11:11] Isaac Robinson: excellent, especially considering how a trend of '23 was replacing the ViT with more efficient backbones. In the case where you're doing video segmentation, the difference is that you actually create a memory bank, and you cross-attend the features from the image encoder against the memory bank.

[00:11:31] Isaac Robinson: The feature set that is created is essentially, well, I'll go more into it in a couple of slides, but we take the features from the past couple of frames, plus a set of object pointers and the set of prompts, and use that to generate our new masks. We then fuse the new masks for this frame with the [00:11:57] image features and add that to the memory bank. [00:12:00] I'll say more in a minute.

[00:12:00] Isaac Robinson: Just like SAM, SAM 2 actually uses a data engine to create its dataset, in that they assembled a huge amount of reference data, used people to label some of it, trained the model, used the model to label more of it, and asked people to refine the predictions of the model.

[00:12:20] Isaac Robinson: And then ultimately the dataset is just created from the final output of the model on the reference data. This paradigm is so interesting to me because it unifies a model and a dataset in a way that is very unique. It seems unlikely that another model could come in and have such a tight coupling.

[00:12:37] Isaac Robinson: So, a brief overview of how the memory bank works. The paper did not have a great visual, so I'm going to fill in a bit more. We take the last couple of frames from our video.
We attend to those frames, along with the set of prompts that we provided, which could come from the future, [00:13:00] or from anywhere in the video, as well as reference object pointers saying, by the way, here's what we've found so far. Attending to the last few frames has the interesting benefit of allowing the model to handle complex object motion without attending to the whole video.

[00:13:18] Isaac Robinson: By limiting the number of frames that you attend to, you manage to keep the model running in real time. This is such an interesting topic for me, because one would assume that attending to all of the frames, or having some type of summarization of all the frames, is essential for high performance.

[00:13:35] Isaac Robinson: But we see in their later ablation that that actually is not the case. So here, just to make sure that there is some benchmarking happening, we compare to some of the stuff that came out prior, and indeed the SAM 2 strategy does improve on the state of the art. This ablation deep in their appendices was super interesting to me.

[00:13:59] Isaac Robinson: [00:14:00] We see in section C the number of memories. One would assume that increasing the count of memories would meaningfully increase performance, and we see that it has some impact, but not the type that you'd expect, and that it meaningfully decreases speed, which justifies, in my mind, just having this FIFO queue of memories.

[00:14:20] Isaac Robinson: Although in the future, I'm super interested to see a more dedicated summarization of all of the past video, not just a stacking of the last frames.
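The FIFO queue of memories just described, a bounded window of recent frame features alongside persistent object pointers, can be sketched as a small data structure. This is a toy illustration of the idea only; the class and parameter names (`MemoryBank`, `max_frames`) are assumptions, not the SAM 2 API:

```python
from collections import deque

class MemoryBank:
    """FIFO memory of recent frame features plus persistent object pointers,
    loosely following the SAM 2 design described in the talk (illustrative)."""

    def __init__(self, max_frames=6):
        self.frames = deque(maxlen=max_frames)  # oldest frames fall off
        self.object_pointers = []               # compact per-object summaries

    def add_frame(self, fused_features):
        # Mask predictions fused with image features for one frame.
        self.frames.append(fused_features)

    def add_object_pointer(self, pointer):
        self.object_pointers.append(pointer)

    def context(self):
        # What the mask decoder would cross-attend to for the next frame.
        return list(self.frames), list(self.object_pointers)

bank = MemoryBank(max_frames=3)
for t in range(5):
    bank.add_frame(f"features[{t}]")
frames, pointers = bank.context()
print(frames)  # ['features[2]', 'features[3]', 'features[4]']
```

The bounded `maxlen` is what keeps the cross-attention cost constant per frame, which is the real-time property discussed above.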
So that's another extension of beautiful per-frame work into the video domain.

[00:14:42] Realtime detection: DETRs > YOLO

[00:14:42] Isaac Robinson: The next trend I'm interested in talking about is this: at Roboflow, we're super interested in training real-time object detectors.

[00:14:50] Isaac Robinson: Those are bread and butter, and so we're doing a lot to keep track of what is actually happening in that space. We are finally starting to see something change. So, [00:15:00] for years, YOLOs have been the dominant way of doing real-time object detection, and we can see here that they've essentially stagnated.

[00:15:08] Isaac Robinson: The performance between YOLOv10 and v11 is not meaningfully different, at least in this type of high-level chart, and even over the last couple of series there's not a major change. So YOLOs have hit a plateau; DETRs have not. We can look here and see the YOLO series has this plateau, and then RT-DETR, LW-DETR, and D-FINE have meaningfully changed that plateau, so that in fact the best D-FINE models are +4.6 AP on COCO at the same latency.

[00:15:43] Isaac Robinson: So, three major steps to accomplish this. The first, RT-DETR, is technically a 2023 preprint but was published officially in '24, so I'm going to include it; I hope that's okay. [00:16:00] RT-DETR showed that we could actually match or out-speed YOLOs.

[00:16:04] Isaac Robinson: Then LW-DETR showed that pre-training is hugely effective on DETRs and much less so on YOLOs. And then D-FINE added the types of bells and whistles that we expect in this arena. The major improvement that RT-DETR showed was taking the multi-scale features that DETRs typically pass into their encoder and decoupling them into a much more efficient transformer encoder.

[00:16:30] Isaac Robinson: The transformer is, of course, quadratic complexity.
So decreasing the amount of stuff that you pass in at once is super helpful for increasing your runtime, or increasing your throughput. That change basically brought us up to YOLO speed, and then they do a hardcore analysis on benchmarking YOLOs, including the NMS step.

[00:16:54] Isaac Robinson: Once you include NMS in the latency calculation, you see that in fact these DETRs [00:17:00] were outperforming, at least at the time, the YOLOs that existed. Then LW-DETR goes in and suggests that in fact the huge boost here is from pre-training. So, this is the D-FINE line, and this is the D-FINE line without pre-training.

[00:17:19] Isaac Robinson: It's within range, it's still an improvement over the YOLOs, but the really huge boost comes from the benefit of pre-training. When YOLOX came out in 2021, they showed that they got much better results with a much, much longer training time, but they found that when they did that, they actually did not benefit from pre-training.

[00:17:40] Isaac Robinson: So, you see in this graph from LW-DETR that YOLOs do in fact have a real benefit from pre-training, but it goes away as we increase the training time. The DETRs, meanwhile, converge much faster: LW-DETR trains for only 50 epochs, RT-DETR for 60 epochs. So one could assume that, in fact, [00:18:00] the entire extra gain from pre-training is that you're not destroying your original weights [00:18:06] by relying on this long training cycle. And then LW-DETR also shows superior performance on our favorite dataset, Roboflow 100, which means that they do better on the real world, not just on COCO. Then D-FINE throws all the bells and whistles at it. YOLO models tend to have a lot of very specific, complicated loss functions.

[00:18:26] Isaac Robinson: D-FINE brings that into the DETR world and shows consistent improvement on a variety of DETR-based frameworks.
So bring these all together and we see that suddenly we have almost 60 AP on COCO while running in like 10 milliseconds. Huge, huge stuff. So we're spending a lot of time trying to build models that work better with less data, and DETRs are clearly becoming a promising step in that direction.

[00:18:56] Isaac Robinson: What we're interested in seeing [00:19:00] from the DETRs next is this: Co-DETR and the models that are currently sitting at the top of the leaderboard for large-scale inference scale really well as you switch out the backbone. We're very interested in seeing someone publish a paper, potentially us, on what happens if you take these real-time ones and then throw a Swin at them.

[00:19:23] Isaac Robinson: Like, do we have a Pareto curve that extends from the real-time domain all the way up to the super, super slow but high-performance domain? We also want to see people benchmarking on RF100 more, because that type of data is what's relevant for most users. And we want to see more pre-training, because pre-training works now.

[00:19:43] Isaac Robinson: It's super cool.

[00:19:48] Peter's Picks

[00:19:48] Peter Robicheaux: Alright, so, yeah, in that theme, one of the big things that we're focusing on is how we get more out of our pre-trained models. And one of the lenses to look at this through is this [00:20:00] new requirement for fine-grained visual details in the representations that are extracted from your foundation model.

[00:20:08] Peter Robicheaux: So as a hook for this, oh yeah, this is just a list of all the papers that I'm going to mention. I just want to make sure I cite the actual paper so you can find it later.

[00:20:18] MMVP (Eyes Wide Shut?
Exploring the Visual Shortcomings of Multimodal LLMs)

[00:20:18] Peter Robicheaux: Yeah, so the big hook here is that I make the claim that LLMs can't see. If you go to Claude or ChatGPT and ask it to look at this watch and tell me what time it is, it fails, right?

[00:20:34] Peter Robicheaux: And so you could say, okay, this is a very classic test of an LLM, but maybe this image is too zoomed out, and it'll do better if we increase the resolution so it has an easier time finding these fine-grained features, like where the watch hands are pointing.

[00:20:53] Peter Robicheaux: No dice. And you can say, okay, well, maybe the model just doesn't know how to tell time from knowing the position of the hands. But if you actually prompt [00:21:00] it textually, it's very easy for it to tell the time. So this to me is proof that these LLMs literally cannot see the position of the watch hands; it can't see those details.

[00:21:08] Peter Robicheaux: So the question is sort of why? And for you Anthropic heads out there, Claude fails too. So my first pick for best paper of 2024 in vision is this MMVP paper, which tries to investigate why LLMs don't have the ability to see fine-grained details. For instance, it comes up with a lot of images like this, where you ask a question that seems very visually apparent to us, like: which way is the school bus facing?

[00:21:32] Peter Robicheaux: And it gets it wrong, and then, of course, it makes up details to support its wrong claim. The process by which it finds these images is sort of contained in its hypothesis for why it can't see these details. It hypothesizes that models that have been initialized with CLIP as their vision encoder don't have fine-grained details in the features extracted using CLIP, because CLIP sort of doesn't need to find these fine-grained
So it hypothesizes that models that have been initialized with, with Clip as their vision encoder, they don't have fine grained details and the, the features extracted using Clip because Clip sort of doesn't need to find these fine grained [00:22:00] details to do its job correctly, which is just to match captions and images, right?[00:22:04] Peter Robicheaux: And sort of at a high level, even if ChatGPT wasn't initialized with Clip and wasn't trained contrastively at all. The vision encoder wasn't trained contrastively at all. Still, in order to do its job of capturing the image it could do a pretty good job without actually finding the exact position of all the objects and visual features in the image, right?[00:22:21] Peter Robicheaux: So This paper finds a set of difficult images for these types of models. And the way it does it is it looks for embeddings that are similar in clip space, but far in DynaV2 space. So DynaV2 is a foundation model that was trained self supervised purely on image data. And it kind of uses like some complex student teacher framework, but essentially, and like, it patches out like certain areas of the image or like crops with certain areas of the image and tries to make sure that those have consistent representations, which is a way for it to learn very fine grained visual features.[00:22:54] Peter Robicheaux: And so if you take things that are very close in clip space and very far in DynaV2 space, you get a set of images [00:23:00] that Basically, pairs of images that are hard for a chat GPT and other big language models to distinguish. So, if you then ask it questions about this image, well, as you can see from this chart, it's going to answer the same way for both images, right?[00:23:14] Peter Robicheaux: Because to, to, from the perspective of the vision encoder, they're the same image. And so if you ask a question like, how many eyes does this animal have? It answers the same for both. 
And like all these other models, including Lava do the same thing, right? And so this is the benchmark that they create, which is like finding clip, like clip line pairs, which is pairs of images that are similar in clip space and creating a data set of multiple choice questions based off of those.[00:23:39] Peter Robicheaux: And so how do these models do? Well, really bad. Lava, I think, So, so, chat2BT and Jim and I do a little bit better than random guessing, but, like, half of the performance of humans who find these problems to be very easy. Lava is, interestingly, extremely negatively correlated with this dataset. It does much, much, much, much worse [00:24:00] than random guessing, which means that this process has done a very good job of identifying hard images for, for Lava, specifically.[00:24:07] Peter Robicheaux: And that's because Lava is basically not trained for very long and is initialized from Clip, and so You would expect it to do poorly on this dataset. So, one of the proposed solutions that this paper attempts is by basically saying, Okay, well if clip features aren't enough, What if we train the visual encoder of the language model also on dyno features?[00:24:27] Peter Robicheaux: And so it, it proposes two different ways of doing this. One, additively which is basically interpolating between the two features, and then one is interleaving, which is just kind of like training one on the combination of both features. So there's this really interesting trend when you do the additive mixture of features.[00:24:45] Peter Robicheaux: So zero is all clip features and one is all DynaV2 features. So. It, as you, so I think it's helpful to look at the right most chart first, which is as you increase the number of DynaV2 features, your model does worse and worse and [00:25:00] worse on the actual language modeling task. 
And that's because DINOv2 features were trained in a completely self-supervised manner, entirely in image space.

[00:25:08] Peter Robicheaux: It knows nothing about text. These features aren't really compatible with these text models, and so you can train an adapter all you want, but it seems that it's such an alien language that it's a very hard optimization for these models to solve. And that kind of supports what's happening on the left, which is that it gets better at answering these questions as you include more DINOv2 features up to a point, but when you oversaturate, it completely loses its ability to [00:25:36] answer language and do language tasks. You can also see with the interleaving: they essentially double the number of tokens going into these models and train on both, and it still doesn't really solve the MMVP task. It gets LLaVA 1.5 above random guessing by a little bit, but it's still not close to ChatGPT or, you know, any human performance, obviously.

[00:25:59] Peter Robicheaux: [00:26:00]
And basically what that means is that, as a vision foundation model, DINOv2 is going to be insufficient for language tasks, right?[00:26:14] Florence 2 (Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks)[00:26:14] Peter Robicheaux: So my next pick for best paper of 2024 would be Florence 2, which tries to solve this problem by incorporating not only this dimension of spatial hierarchy, which is to say pixel-level understanding, but also making sure to include what they call semantic granularity. The goal is basically to have features that are sufficient for finding objects in the image, so they have enough pixel information, but that can also be talked about and reasoned about.[00:26:44] Peter Robicheaux: And that's on the semantic granularity axis. So here's an example of basically three different paradigms of labeling that they do. So they create a big dataset. One is text, which is just captioning. And you would expect a model that's trained [00:27:00] only on captioning to have similar performance to CLIP and not have spatial hierarchy, not have features that are meaningful at the pixel level.[00:27:08] Peter Robicheaux: And so they add another type, which is region-text pairs, which is essentially either classifying a region, or doing object detection or instance segmentation on that region, or captioning that region. And then they have text-phrase-region annotations, which is essentially a triple. And basically, not only do you have a region that you've described, you also find its place in a descriptive paragraph about the image, which is basically trying to introduce even more semantic understanding of these regions.[00:27:39] Peter Robicheaux: And so, for instance, if you're saying a woman riding on the road, right, you have to know what a woman is and what the road is and that she's on top of it.
And that's basically composing a bunch of objects in this visual space, but also thinking about it semantically, right? And so the way that they do this is they basically just dump features from a vision encoder [00:28:00] straight into an encoder-decoder transformer.[00:28:03] Peter Robicheaux: And then they train a bunch of different tasks, like object detection and so on, as a language task. And I think that's one of the big things that we saw in 2024: these vision language models operating on pixel space linguistically. So they introduced a bunch of new tokens to point to locations.[00:28:22] Peter Robicheaux: So how does it actually do? We can see, if you look at the graph on the right, which is using the DINO framework, your pre-trained Florence 2 models transfer very, very well. They get 60 percent mAP on COCO, which is approaching state of the art, and they train much more efficiently.[00:28:47] Peter Robicheaux: So they converge a lot faster, and both of these things are pointing to the fact that they're actually leveraging their pre-trained weights effectively. So where is it falling short? So these models, I forgot to mention, Florence 2 comes in a 0.2 billion and a 0.7 billion parameter count. So they're very, very small in terms of being a language model.[00:29:05] Peter Robicheaux: And I think that in this framework, you can see saturation.
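Those new location tokens are what let a language decoder emit boxes: continuous coordinates are quantized into a fixed vocabulary of bins so detection becomes a token-generation task. A hedged sketch of the idea (the bin count and token spelling here are illustrative, not necessarily the paper's exact scheme):

```python
def box_to_loc_tokens(box, img_w, img_h, bins=1000):
    """Quantize an (x1, y1, x2, y2) pixel box into discrete location
    tokens, so object detection can be emitted as a token sequence."""
    x1, y1, x2, y2 = box

    def quantize(v, size):
        # Map a pixel coordinate to one of `bins` buckets, clamped at the edge.
        return min(int(v / size * bins), bins - 1)

    coords = (quantize(x1, img_w), quantize(y1, img_h),
              quantize(x2, img_w), quantize(y2, img_h))
    return [f"<loc_{c}>" for c in coords]

tokens = box_to_loc_tokens((64, 48, 640, 480), img_w=640, img_h=480)
```

Decoding reverses the map: a `<loc_c>` token becomes roughly `c / bins * image_size`, which is why resolution and bin count bound localization precision.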
So, what this graph is showing is that if you train a Florence 2 model purely on the image-level and region-level annotations, and not including the pixel-level annotations, like segmentation, it actually performs better as an object detector.[00:29:25] Peter Robicheaux: And what that means is that it's not able to actually learn all the visual tasks that it's trying to learn, because it doesn't have enough capacity.[00:29:32] PaliGemma / PaliGemma 2[00:29:32] Peter Robicheaux: So I'd like to see this paper explore larger model sizes, which brings us to our next big paper of 2024, or two papers. So PaliGemma came out earlier this year.[00:29:42] Peter Robicheaux: PaliGemma 2 was released, I think, a week or two ago. Oh, I forgot to mention, you can actually label text datasets on Roboflow and train a Florence 2 model, and you can actually train a PaliGemma 2 model on Roboflow, which we got into the platform within, like, 14 hours of release, which I was really excited about.[00:29:59] Peter Robicheaux: So, anyway, [00:30:00] PaliGemma is essentially doing the same thing, but instead of doing an encoder-decoder, it just dumps everything into a decoder-only transformer model. But it also introduced the concept of location tokens to point to objects in pixel space. PaliGemma uses Gemma as the language encoder, specifically Gemma 2B.[00:30:17] Peter Robicheaux: PaliGemma 2 introduces using multiple different sizes of language encoders. So, the way that they sort of get around having to do encoder-decoder is they use the concept of prefix loss, which basically means that when it's generating tokens autoregressively, all those tokens in the prefix, which is the image that it's looking at and a description of the task that it's trying to do, are attending to each other fully, with full attention.
And so it's easier for the prefix to color the output of the suffix, and also to just find features easily. So this is sort of [00:31:00] an example of one of the tasks it was trained on: you describe the task in English, you ask it to segment two classes of objects, and then it finds their locations using these location tokens, and it finds their masks using some encoding of the masks into tokens.[00:31:24] Peter Robicheaux: And, yeah, so, one of my critiques, I guess, of PaliGemma 1, at least, is that you find that performance saturates as a pre-trained model after only 300 million examples seen. So, what this graph is representing is each blue dot is the performance on some downstream task. And you can see that after seeing 300 million examples, it does about equally well on all of the downstream tasks that they tried it on, which was a lot, as after 1 billion examples, which to me also kind of suggests a lack of capacity for this model.[00:31:58] Peter Robicheaux: For PaliGemma 2, [00:32:00] you can see the results on object detection. So these were transferred to COCO. And you can see that this also points to an increase in capacity being helpful to the model: as both the resolution increases and the parameter count of the language model increases, performance increases.[00:32:16] Peter Robicheaux: So resolution makes sense, obviously, it helps to find small objects in the image. But it also makes sense for another reason, which is that it kind of gives the model a thinking register, and it gives it more tokens to process when making its predictions. But yeah, you could say, oh, 43.6, that's not that great, like Florence 2 got 60. But this is not training a DINO or a DETR on top of this image encoder. It's doing the raw language modeling task on COCO.
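The prefix-loss attention pattern described here, full attention within the prefix and causal attention over the generated suffix, can be sketched as a mask (a toy illustration, with 1 meaning "may attend"):

```python
import numpy as np

def prefix_lm_mask(prefix_len: int, total_len: int) -> np.ndarray:
    """Prefix-LM attention mask: prefix tokens (image + task text)
    attend to each other fully; suffix tokens are causal but can see
    the whole prefix. Entry [i, j] == 1 means token i may attend to j."""
    mask = np.tril(np.ones((total_len, total_len), dtype=int))  # causal base
    mask[:prefix_len, :prefix_len] = 1  # full attention inside the prefix
    return mask

m = prefix_lm_mask(prefix_len=3, total_len=5)
```

The loss is then computed only on the suffix tokens; the prefix is pure conditioning, which is how a decoder-only model recovers some of the benefits of an encoder.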
So it doesn't have any of the bells and whistles. It doesn't have any of the fancy losses. It doesn't even have bipartite graph matching or anything like that.[00:32:52] Peter Robicheaux: Okay, the big result, and one of the reasons that I was really excited about this paper, is that they blow everything else away [00:33:00] on MMVP. I mean, 47.3, sure, that's nowhere near human accuracy, which, again, is 94 percent, but for a 2 billion parameter language model to beat ChatGPT, that's quite the achievement.[00:33:12] Peter Robicheaux: And that sort of brings us to our final pick for paper of the year, which is AIMv2. So, AIMv2 sort of says, okay, maybe coming up with all these specific annotations to find features with high fidelity in pixel space isn't actually necessary, and we can come up with an even simpler, more beautiful idea for combining, you know, image tokens and pixel tokens in a way that's interfaceable for language tasks.[00:33:44] Peter Robicheaux: And this is nice because it can scale. You can come up with lots more data if you don't have to come up with all these annotations, right? So the way that it works is it does something very, very similar to PaliGemma, where you have a vision encoder that dumps image tokens into a decoder-only transformer.[00:33:59] Peter Robicheaux: But [00:34:00] the interesting thing is that it also autoregressively tries to minimize the mean squared error of the image tokens. So instead of having to come up with fancy object detection or segmentation labels, you can just try to reconstruct the image and have it learn fine-grained features that way.[00:34:16] Peter Robicheaux: And it does this in, I think, a beautiful way that's compatible with the PaliGemma line of thinking, which is randomly sampling a prefix length and using only that number of image tokens as the prefix.
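That objective can be sketched as plain mean squared error over the tokens after the sampled prefix (illustrative only; real AIMv2 works on image patches and adds a cross-entropy term for the caption tokens, which we omit here):

```python
import numpy as np

def prefix_reconstruction_loss(image_tokens, predictions, prefix_len):
    """MSE over the image tokens after a sampled prefix: the prefix is
    visible context, and everything after it must be reconstructed
    autoregressively."""
    target = image_tokens[prefix_len:]
    pred = predictions[prefix_len:]
    return float(np.mean((pred - target) ** 2))

rng = np.random.default_rng(0)
tokens = rng.normal(size=(16, 32))       # 16 patch tokens, dim 32
prefix_len = int(rng.integers(1, 16))    # randomly sampled prefix length
loss = prefix_reconstruction_loss(tokens, tokens, prefix_len)  # perfect prediction
```

Because the supervision is the image itself, this scales with raw data rather than with annotation effort, which is the point Peter makes next.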
And so it's doing a similar thing with the causal attention: the causal-with-prefix is the attention mask on the right.[00:34:35] Peter Robicheaux: It's doing full block attention with some randomly sampled number of image tokens, to then reconstruct the rest of the image and the downstream caption for that image. And so, this is the dataset that they train on. It's internet-scale image data, very high quality data created by the Data Filtering Networks paper, essentially, which is maybe the best CLIP data that exists.[00:34:59] Peter Robicheaux: [00:35:00] And we can see that this is finally a model that doesn't saturate. Even at the highest parameter count, it appears to be improving in performance with more and more samples seen. And so you can sort of think that, you know, if we just keep bumping the parameter count and increasing the examples seen, which is the line of thinking for language models, then it'll keep getting better.[00:35:27] Peter Robicheaux: So how does it actually do? Oh, it also improves with resolution, which you would expect. This is the ImageNet classification accuracy, but yeah, it does better if you increase the resolution, which means that it's actually leveraging and finding fine-grained visual features.[00:35:44] Peter Robicheaux: And so how does it actually do compared to CLIP on COCO? Well, you can see that if you slap a transformer detection head on it, on COCO it's at 60.2, which is also within spitting distance of SOTA, which means that it does a very good job of [00:36:00] finding visual features. But you could say, okay, well, wait a second.[00:36:03] Peter Robicheaux: CLIP got to 59.1, so, like, how does this prove your claim at all?
Because doesn't that mean that CLIP, which is known to be CLIP-blind and do badly on MMVP, is able to achieve very high performance on this fine-grained visual feature task of object detection? Well, they train on, like, tons of data.[00:36:24] Peter Robicheaux: They train on Objects365, COCO, Flickr, and everything else. And so I think that this benchmark doesn't do a great job of selling how good of a pre-trained model AIMv2 is. And we would like to see the performance on less data and not trained to convergence on object detection. So seeing it in the real world on a dataset like Roboflow 100, I think, would be quite interesting.[00:36:48] Peter Robicheaux: And our final, final pick for paper of 2024 would be Moondream. So, introducing Vik to talk about that.[00:36:54] swyx: But overall, that was exactly what I was looking for. Like, best of 2024, an amazing job. Yeah, if there's any other questions while Vik gets set up, like, vision stuff,[00:37:07] swyx: yeah,[00:37:11] swyx: Vik, go ahead. Hi,[00:37:13] Vik Korrapati / Moondream[00:37:13] question: Well, while we're getting set up, hi, over here, thanks for the really awesome talk. One of the things that's been weird and surprising is that the foundation model companies, even these MLLMs, they're just, like, worse than RT-DETR at detection still. Like, if you wanted to pay a bunch of money to auto-label your detection dataset, if you gave it to OpenAI or Claude, that would be, like, a big waste.[00:37:37] question: So I'm curious, just like, even PaliGemma 2, like, is worse. So I'm curious to hear your thoughts on, like, how come nobody's cracked the code on, like, a generalist that really, you know, beats a specialist model in computer vision like they have in LLM land.[00:38:00][00:38:01] Isaac Robinson: Okay. It's a very, very interesting question. I think it depends on the specific domain.
For image classification, it's basically there. AIMv2 showed a simple attentional probe on the pre-trained features gets like 90 percent, which is as well as anyone does. The bigger question is, like, why isn't it transferring to object detection, especially real-time object detection?[00:38:25] Isaac Robinson: I think, in my mind, there are two answers. One is, for object detection the architectures are really, really, really domain-specific. You know, we see all these super, super complicated things, and it's not super easy to build something that just transfers naturally like that, whereas with image classification, you know, CLIP pre-training transfers super, super quickly.[00:38:48] Isaac Robinson: And the other thing is, until recently, the real-time object detectors didn't even really benefit from pre-training. Like, you see the YOLOs that are, like, essentially saturated, showing very little [00:39:00] difference with pre-training improvements, with using a pre-trained model at all. So it's not surprising, necessarily, that people aren't looking at the effects of better and better pre-training on real-time detection.[00:39:12] Isaac Robinson: Maybe that'll change in the next year. Does that answer your question?[00:39:17] Peter Robicheaux: Can you guys hear me? Yeah, one thing I want to add, or just to summarize, basically, is that, like, until 2024, you know, we haven't really seen a combination of transformer-based object detectors and fancy losses, and PaliGemma suffers from the same problem, which is basically to say that these ResNets, or, like, the convolutional models, they have all these extreme optimizations for doing object detection, but essentially, I think it's kind of been shown now that convolutional models just don't benefit from pre-training and just don't have the level of intelligence of transformer models.[00:39:56] swyx: Awesome.
Hi,[00:39:59] Vik Korrapati: can [00:40:00] you hear me?[00:40:01] swyx: Cool. I hear you. See you. Are you sharing your screen?[00:40:04] Vik Korrapati: Hi. Might have forgotten to do that. Let me do[00:40:07] swyx: that. Sorry, should have done[00:40:08] Vik Korrapati: that.[00:40:17] swyx: Here's your screen. Oh, classic. You might have to quit Zoom and restart. What? It's fine. We have a capture of your screen.[00:40:34] swyx: So let's get to it.[00:40:35] Vik Korrapati: Okay, easy enough.[00:40:49] Vik Korrapati: All right. Hi, everyone. My name is Vik. I've been working on Moondream for almost a year now. Like Shawn mentioned, I just went and looked, and it turns out the first version I released was December [00:41:00] 29, 2023. It's been a fascinating journey. So Moondream started off as a tiny vision language model. Since then, we've expanded scope a little bit to also try and build some tooling, client libraries, et cetera, to help people really deploy it.[00:41:13] Vik Korrapati: Unlike traditional large models that are focused on assistant-type use cases, we're laser-focused on building capabilities that developers can use to build vision applications that can run anywhere. So, in a lot of cases for vision, more so than for text, you really care about being able to run on the edge, run in real time, et cetera.[00:41:40] Vik Korrapati: So that's really important. We have different output modalities that we support. There's query, where you can ask general English questions about an image and get back human-like answers. There's captioning, which a lot of our users use for generating synthetic datasets to then train diffusion models and whatnot.[00:41:57] Vik Korrapati: We've done a lot of work to minimize hallucinations there, [00:42:00] so that's used a lot.
We have open-vocabulary object detection built in, similar to a couple of more recent models like PaliGemma, et cetera, where rather than having to train a dedicated model, you can just say, show me soccer balls in this image, or, show me if there are any deer in this image, and it'll detect it.[00:42:14] Vik Korrapati: More recently, earlier this month, we released pointing capability, where if all you're interested in is the center of an object, you can just ask it to point out where that is. This is very useful when you're doing, you know, UI automation type stuff. Let's see. We have two models out right now.[00:42:33] Vik Korrapati: There's a general-purpose 2B parameter model, which runs fine if you're running on a server, is good for our local Llama desktop friends, and can run on flagship mobile phones. And there's a 0.5B model that uses [00:43:00] less memory, even with our not-yet-fully-optimized inference client.[00:43:06] Vik Korrapati: So the way we built our 0.5B model was to start with the 2 billion parameter model and prune it while doing continual training to retain performance. Our objective during the pruning was to preserve accuracy across a broad set of benchmarks. So the way we went about it was to estimate the importance of different components of the model, like attention heads, channels, MLP rows, and whatnot, using basically a technique based on the gradient.[00:43:37] Vik Korrapati: I'm not sure how much people want to know details. We'll be writing a paper about this, but feel free to grab me if you have more questions. Then we iteratively prune a small chunk that will minimize loss in performance, and retrain the model to recover performance and bring it back. The 0.
5B we released is more of a proof of concept that this is possible.[00:43:54] Vik Korrapati: I think the thing that's really exciting about this is that it makes it possible for developers to build using the 2B parameter model and just explore, build their application, and then once they're ready to deploy, figure out what exactly they need out of the model and prune those capabilities into a smaller form factor that makes sense for their deployment target.[00:44:12] Vik Korrapati: So yeah, very excited about that. Let me talk to you folks a little bit about another problem I've been working on recently, which is similar to the clocks example we've been talking about. We had a customer reach out who had a bunch of gauges out in the field. This is very common in manufacturing and oil and gas, where you have a bunch of analog devices that you need to monitor.[00:44:34] Vik Korrapati: It's expensive to have humans look at them and monitor stuff and make sure that the system gets shut down when the temperature goes over 80 or something. So I was like, yeah, this seems easy enough. Happy to help you distill that. Let's get it going. Turns out our model couldn't do it at all.[00:44:51] Vik Korrapati: I went and looked at other open source models to see if I could just generate a bunch of data and learn from that. That did not work either. So I was like, let's look at what the folks with [00:45:00] hundreds of billions of dollars in market cap have to offer. And yeah, that doesn't work either. My hypothesis is that the way these models are trained is using a large amount of image-text data scraped from the internet.[00:45:15] Vik Korrapati: And that can be biased. In the case of gauges, most gauge images aren't gauges in the wild, they're product images, detail images like these, where it's always set to zero.
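The pruning loop Vik describes above, score components with a gradient-based importance, drop the least important chunk, then retrain, can be illustrated roughly like this (the importance metric here is a standard first-order Taylor saliency; it is our assumption, since the exact method is unpublished pending their paper):

```python
import numpy as np

def taylor_importance(weights: np.ndarray, grads: np.ndarray) -> np.ndarray:
    """First-order saliency per row: |w * dL/dw| summed over the row.
    A common gradient-based proxy for how much the loss would change if
    that row (e.g. an MLP channel) were removed."""
    return np.abs(weights * grads).sum(axis=1)

def prune_lowest_rows(weights: np.ndarray, grads: np.ndarray, frac: float) -> np.ndarray:
    """Drop the lowest-importance fraction of rows, preserving row order."""
    scores = taylor_importance(weights, grads)
    n_keep = len(scores) - int(len(scores) * frac)
    keep = np.sort(np.argsort(scores)[::-1][:n_keep])
    return weights[keep]

w = np.array([[1.0, 1.0], [10.0, 10.0], [0.1, 0.1], [5.0, 5.0]])
g = np.ones_like(w)
pruned = prune_lowest_rows(w, g, frac=0.5)  # keeps the two highest-importance rows
```

In the real pipeline each prune step is small and followed by recovery training, so the model never loses much accuracy at once.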
It's paired with an alt text that says something like, GIVTO pressure sensor, PSI, zero to 30, or something. And so the models are fairly good at picking up those details.[00:45:35] Vik Korrapati: It'll tell you that it's a pressure gauge. It'll tell you what the brand is. But it doesn't really learn to pay attention to the needle over there. And so, yeah, that's a gap we need to address. So naturally my mind goes to, let's use synthetic data to solve this problem. That works, but it's problematic, because it turned out we needed millions of synthetic gauge images to get to reasonable performance.[00:45:57] Vik Korrapati: And thinking about it, reading a gauge is [00:46:00] not a one-shot process in our minds, right? Like, if you had to tell me the reading in Celsius for this real-world gauge: there's two dials on there. So first you have to figure out which one you have to be paying attention to, the inner one or the outer one.[00:46:14] Vik Korrapati: You look at the tip of the needle, you look at what labels it's between, and you count how many ticks there are and do some math to figure out what the reading probably is. So what happens if we just add that as a chain of thought, to allow the model to better learn the subtasks it needs to perform to accomplish this goal?[00:46:37] Vik Korrapati: So you can see in this example, this was actually generated by the latest version of our model. It's like, okay, Celsius is the inner scale. It's between 50 and 60. There's 10 ticks. So the second tick. It's a little debatable here, like, there's a weird shadow situation going on, the dial is off, so I don't know what the ground truth is, but it works okay.[00:46:57] Vik Korrapati: The points on there are [00:47:00] actually grounded. I don't know if this is easy to see, but when I click on those, there's a little red dot that moves around on the image.
The model actually has to predict where these points are. We were originally trying to do this with bounding boxes, but then Molmo came out with pointing capabilities,[00:47:15] Vik Korrapati: and pointing is a much better paradigm to represent this. We see pretty good results. This one's actually for clock reading; I couldn't find our chart for gauge reading at the last minute. So the light blue chart is with our grounded chain of thought. We built a clock reading benchmark of about 500 images, and this measures accuracy on that. You can see the model is a lot more sample-efficient when you're using the chain of thought.[00:47:37] Vik Korrapati: Another big benefit from this approach is that you can kind of understand how the model is doing it and how it's failing. So in this example, the actual correct reading is 54 Celsius, and the model output [00:48:00] 56. Not too bad, but you can actually go and see where it messed up. Like, it got a lot of these right, except instead of saying it was on the 7th tick, it actually predicted that it was the 8th tick, and that's why it went with 56.[00:48:14] Vik Korrapati: So now that you know that it's failing in this way, you can adjust how you're doing the chain of thought, to maybe say, actually count out each tick from 40, instead of just trying to say it's the 8th tick. Or you might say, okay, I see that there's that middle marking, I'll count from there instead of all the way from 40.[00:48:31] Vik Korrapati: So that helps a ton. The other thing I'm excited about is few-shot prompting, or test-time training with this. Like, if a customer has a specific gauge that we're seeing minor errors on, they can give us a couple of examples where, if it's mis-detecting the needle, they can go in and correct that in the chain of thought.[00:48:49] Vik Korrapati: And hopefully that works the next time. Now, it's an exciting approach, but we've only applied it to clocks and gauges.
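The arithmetic at the end of that chain of thought is just interpolation between the two labeled values the needle sits between. A sketch using the 54-Celsius example (the values come from the transcript; the function name is ours):

```python
def gauge_reading(lower_label: float, upper_label: float,
                  n_ticks: int, tick_index: int) -> float:
    """Final step of the gauge chain of thought: the needle sits on
    tick_index of n_ticks between two labeled values on the scale."""
    return lower_label + (upper_label - lower_label) * tick_index / n_ticks

# Counting from the 40 mark with 10 ticks up to 60: the 7th tick reads 54,
# and the model's off-by-one (8th tick) yields the mistaken 56.
reading = gauge_reading(40, 60, 10, 7)
mistake = gauge_reading(40, 60, 10, 8)
```

Breaking the task into these sub-steps is what makes errors legible: a wrong answer traces back to a wrong tick count rather than an opaque prediction.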
The real question is, is it going to generalize? Probably. Like, there's some science [00:49:00] from text models that when you train on a broad number of tasks, it does generalize, and I'm seeing some signs of that with our model as well.[00:49:05] Vik Korrapati: So, in addition to the image-based chain of thought stuff, I also added some spelling-based chain of thought to help it better understand OCR, I guess. I don't understand why everyone doesn't do this, by the way. Like, it's a trivial benchmark question that's very, very easy to nail. But I also wanted to support it for stuff like license plate partial matching, like, hey, does any license plate in this image start with WHA or whatever?[00:49:29] Vik Korrapati: So yeah, that sort of worked. All right, that ends my story about the gauges. If you think about what's going on over here, it's interesting that LLMs are showing enormous progress in reasoning, especially with the latest set of models that we've seen, but I have a feeling that VLMs are lagging behind, as we can see with these tasks that should be very simple for a human to do [00:50:00] and that are very easy to find VLMs failing at.[00:50:04] Vik Korrapati: My hypothesis on why this is the case is because, on the internet, there's a ton of data that talks about how to reason. There's books about how to solve problems. There's books critiquing the books about how to solve problems. But humans are just so good at perception that we never really talk about it.[00:50:20] Vik Korrapati: Like, maybe in art books, where it's like, hey, to show that that mountain is further away, you need to desaturate it a bit or whatever. But the actual data on how to look at images isn't really present. Also, the data we have is kind of sketchy.
The best source of data we have is image alt-text pairs on the internet, and that's pretty low quality.[00:50:40] Vik Korrapati: So yeah, I think our solution here is really just, we need to teach them how to operate on individual tasks and figure out how to scale that out. All right. Yep. So, conclusion: at Moondream, we're trying to build amazing VLMs that run everywhere. Very hard problem. Much work ahead, but we're making a ton of progress, and I'm really excited. [00:51:00] If anyone wants to chat about more technical details about how we're doing this, or is interested in collaborating, please, please hit me up.[00:51:08] Isaac Robinson: Yeah,[00:51:09] swyx: like, when people say multi-modality, you know, I always think about vision as the first among equals in all the modalities. So, I really appreciate having the experts in the room. Get full access to Latent Space at www.latent.space/subscribe

The CPG Guys
Integrating Store Planning, Merchandising, Sales & Marketing with 345 Global's Mark Edwards & Think Blue's Parag Shah

The CPG Guys

Play Episode Listen Later Oct 5, 2024 46:04


The CPG Guys are joined in this episode by Mark Edwards, founder & CEO of 345 Global, which integrates store planning, merchandising, sales & marketing into one cloud-based platform, and by Parag Shah, Chief Growth Officer at Think Blue.
Follow Mark Edwards on LinkedIn at: https://www.linkedin.com/in/mark-edwards-51713b4/
Follow 345 Global on LinkedIn at: https://www.linkedin.com/company/345global/
Follow 345 Global online at: https://www.345.global/
Follow Parag Shah online at: https://www.linkedin.com/in/omnigrowthparag/
Follow Think Blue on LinkedIn at: https://www.linkedin.com/in/omnigrowthparag/
Follow Think Blue online at: https://thethinkblue.com/
Mark & Parag answer these questions:
Mark - you've been in stealth mode for a while from what I know. Who is Mark, what is 345, and how does one come up with a name such as 345?
Parag - tell us about all things going on with space planning. Why do you feel the world is changing in retail, and why does retail need transformation in this space?
Mark - why were you in stealth mode for so long? What have you been building? What's the vision of 345?
Parag - what is the partnership between you and 345? What do you aim to launch?
Mark - what is the 345 tech platform? Why is it revolutionary?
Mark - take us through the various pieces of 345 - we believe those are VQ, SQ, IQ, EQ?
Parag - the omnichannel world is 24-7 and 360 in terms of how she connects, how she shops, where she discovers, and where she browses - shopping baskets are very fragmented. How can 345 help retailers chase this with these capabilities? Is it data, or is it something else?
Parag - how does 345 empower merchants to win every day? Why should they pay attention?
Mark - looking forward, what are you focused on, and where can we find you?
CPG Guys Website: http://CPGguys.comFMCG Guys Website: http://FMCGguys.comCPG Scoop Website: http://CPGscoop.comRhea Raj's Website: http://rhearaj.comLara Raj on PopStar Academy: https://www.netflix.com/us/title/81587828?s=i&trkid=258593161&vlang=enDISCLAIMER: The content in this podcast episode is provided for general informational purposes only. By listening to our episode, you understand that no information contained in this episode should be construed as advice from CPGGUYS, LLC or the individual author, hosts, or guests, nor is it intended to be a substitute for research on any subject matter. Reference to any specific product or entity does not constitute an endorsement or recommendation by CPGGUYS, LLC. The views expressed by guests are their own and their appearance on the program does not imply an endorsement of them or any entity they represent.CPGGUYS LLC expressly disclaims any and all liability or responsibility for any direct, indirect, incidental, special, consequential or other damages arising out of any individual's use of, reference to, or inability to use this podcast or the information we presented in this podcast.

Saúde com Fisioterapia
#38 - Troca Gasosa e V/Q (Gas Exchange and V/Q)

Saúde com Fisioterapia

Play Episode Listen Later Sep 30, 2024 6:11


Gas exchange is the process by which oxygen (O2) is transported from the alveoli into the blood and carbon dioxide (CO2) is removed from the blood into the alveoli, taking place in the lungs. It depends on the efficiency of alveolar ventilation (air entering the alveoli) and of capillary perfusion (blood flow through the pulmonary capillaries). The balance between ventilation and perfusion is fundamental for adequate gas exchange, and this relationship is expressed by the V/Q ratio.
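The V/Q relationship described above is simply alveolar ventilation divided by pulmonary perfusion. A minimal sketch with typical textbook resting values (the function name is ours):

```python
def vq_ratio(alveolar_ventilation_l_min: float, pulmonary_perfusion_l_min: float) -> float:
    """Ventilation/perfusion ratio: alveolar ventilation (V) over
    pulmonary capillary blood flow (Q), both in liters per minute."""
    return alveolar_ventilation_l_min / pulmonary_perfusion_l_min

# Typical resting values: ~4 L/min alveolar ventilation, ~5 L/min perfusion.
ratio = vq_ratio(4.0, 5.0)  # ~0.8, the classic overall V/Q
```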

The KE Report
Quetzal Copper - Permits Received For The Princeton Property, BC, Beside The Copper Mountain Mine, Overview Of 4 Targets

The KE Report

Play Episode Listen Later Sep 25, 2024 15:57


Matt Badiali, President and CEO of Quetzal Copper (TSX.V:Q) joins us to discuss the path to drilling the Princeton Project, in BC, beside the Copper Mountain Mine. Quetzal holds an 80% interest in this project.    Matt provides an in-depth look at the upcoming drill program, by explaining the four primary targets: Bud South, Knob Hill, Aura, and Contact. He highlights the strategic importance of the project's location near the historic Copper Mountain mine and sheds light on the preparatory work already undertaken, including soil sampling, geophysics, and mapping.   Please email me with any follow up questions you have for Matt. My email address is Fleck@kereport.com.    Click here to visit the Quetzal Copper website.

Happily EVERything Disney
107. The Stormalong Bay Vibe - Matt's Trip Report

Happily EVERything Disney

Play Episode Listen Later Aug 8, 2024 109:46


Can Disney World be fun in the summer? Matt takes a summer trip to Disney World at Beach Club. A much different vibe of chilling during the day and hitting the parks at night.
Disney Springs Day! New restaurant experiences ranked. Learn a Tiana's VQ hack. Even running into obstacles still made this a very fun trip!
Closing thoughts on D23, and what are the bare minimums of success? And a Before/After about a party that Matt will be attending in November.
Send us a Text Message.
Twitter/X Handles:
Dizhappenings: https://twitter.com/dizhappenings
Shaun: https://twitter.com/rankingthemouse
Matt: https://twitter.com/mattpeto
Before/After Watch Music in Dizhappenings copyrighted by Audio Jungle

EM Board Bombs
230. PE & Pregnancy

EM Board Bombs

Play Episode Listen Later Jul 22, 2024 22:32


PE workup? Sure. Oh, by the way, they're pregnant... yep. Let's discuss how this workup needs to be simplified and not feared. Also, say goodbye to VQ scans. Want to experience the greatest in board studying? Check out our interactive question bank podcast- the FIRST of its kind here. Cite this podcast as: Briggs, Blake; Husain, Iltifat. 230. PE and Pregnancy. July 22nd, 2024. Accessed [date].

THE RISE with Sara Connell
Visibility Intelligence: The Thing That Will Determine Your Success For The Rest Of This Year

THE RISE with Sara Connell

Play Episode Listen Later Jul 11, 2024 31:21


In this new series, learn about VQ: Visibility Intelligence, and how this one thing can set you on a new trajectory for impact and income for the next 6 months and beyond. Key Topics: * What is VQ and how can it determine your success this year? * How to find out your VQ level * The first step in becoming virally visible right now to your ideal people. Want to make a big leap in your Visibility Quotient now? Here's our new FREE Viral Visibility Roadmap that will walk you through it step by step: https://www.saraconnell.com/viral-visibility-masterclass

DBC Pod
Inside Out 2's Strong Box Office, Strategy for Tiana's Opening Day, and Ranking Recent Disney/PIXAR Releases

DBC Pod

Play Episode Listen Later Jun 25, 2024 74:13


Episode 212 ... for the week of June 24th, 2024, and this is what is going on in our Disney World...
Inside Out 2's Strong Box Office
- Inside Out 2 is a hit, taking in over $700m worldwide in 9 days - we discuss reasons for this
- Phil provides a high-level review of the film and we discuss where we could see this IP being added to Walt Disney World
Starts @1:45 ...
Strategy for Tiana's Opening Day
- Phil's wife and oldest will be at Magic Kingdom this Friday, which is the full opening of Tiana's Bayou Adventure ... meaning at 7am there will be a VQ for that, one for Tron, an ILL for Tron, and your first G+LL to try for
- What is the proper priority for everything to increase the chance of getting everything you want?
Starts @24:22 ...
Holiday Discounts at WDW
- As part of Halfway to the Holidays, Disney announced event information but also some hotel discounts and a free park hopper option!
Starts @29:07 ...
Game: Ranking the Disney/PIXAR Animated Releases Since 2020
- Inspired by Shaun Ranks the Mouse, we rank the recent animated releases from Disney, but using our more technical approach, scoring them across several categories
- Do the results surprise you?
Source: Shaun Ranks the Mouse
Starts @34:58 ...
* Reminder to like, subscribe, rate, and review the DBC Pod wherever you get your podcast *
Send us an e-mail! .... thedbcpodcast@gmail.com
Follow us on social media:
- LinkTree: https://linktr.ee/thedbcpod
- Instagram: https://www.instagram.com/TheDBCPod/
- Twitter: https://twitter.com/TheDBCPod
- Facebook: https://www.facebook.com/TheDBCPod
- YouTube Channel: https://www.youtube.com/thedbcpod
- Discord Server: https://discord.com/invite/cJ8Vxf4BmQ
Note: This podcast is not affiliated with any message boards, blogs, news sites, or other podcasts

Tennis Traverse: Exploring the Game
Tennis Traverse Episode 42- Roland Garros Week 2

Tennis Traverse: Exploring the Game

Play Episode Listen Later Jun 10, 2024 29:18


Welcome to Tennis Traverse! In my 42nd episode, I will be talking about week 2 of Roland Garros!
SUBSCRIBE TO MY NEWSLETTER: https://ivannazhang428.substack.com/
Time stamps:
Women's Quarterfinal Matches (00:03:32)
Women's Semifinals (00:09:21)
Women's Final and Iga Swiatek's Victory (00:11:28)
Men's Fourth Round Matches (00:13:35)
Men's Quarterfinal Matches (00:18:02)
Men's Semifinal Match (00:22:18)
Predicting the Finals Matchup (00:26:27)
Links Mentioned:
Iga Swiatek vs Marketa Vondrousova: https://www.youtube.com/watch?v=GZjXUrU-KJg
Coco Gauff vs Ons Jabeur: https://www.youtube.com/watch?v=usJTwY_Px7I
Mirra Andreeva vs Aryna Sabalenka: https://www.youtube.com/watch?v=1vy209m9drc
Jasmine Paolini vs Elena Rybakina: https://www.youtube.com/watch?v=wzb5Yn3oT2w
Jasmine Paolini vs Mirra Andreeva: https://www.youtube.com/watch?v=yibat6HHEjY
Iga Swiatek vs Coco Gauff: https://www.youtube.com/watch?v=JAySM-f8YN4
Iga Swiatek vs Jasmine Paolini: https://www.youtube.com/watch?v=bX7mKlcUWQw
Novak Djokovic vs Francisco Cerundolo: https://www.youtube.com/watch?v=TJKEwqiaA40
Daniil Medvedev vs Alex De Minaur: https://www.youtube.com/watch?v=FbyB1rIe5VU
Carlos Alcaraz vs Stefanos Tsitsipas: https://www.youtube.com/watch?v=Pqh3L-Ytnhk
Grigor Dimitrov vs Jannik Sinner: https://www.youtube.com/watch?v=lnU5KAbpA5s
Alexander Zverev vs Alex De Minaur: https://www.youtube.com/watch?v=VQ_i-hPaIoI
Casper Ruud vs Alexander Zverev: https://www.youtube.com/watch?v=IIXe4Xe9Vro
Carlos Alcaraz vs Jannik Sinner: https://www.youtube.com/watch?v=ZKAexfNzwh0
Social Media:
Instagram: https://www.instagram.com/tennistraverse/
Twitter: https://twitter.com/tennistraverse
Youtube: https://www.youtube.com/@TennisTraverse/videos
Linktree: https://linktr.ee/tennistraverse
Substack: https://ivannazhang428.substack.com/

PICU Doc On Call
PICU Doc on Call Shorts: Alveolar Gas Equation

PICU Doc On Call

Play Episode Listen Later Apr 28, 2024 20:06


Welcome to PICU Doc On Call, where Dr. Pradip Kamat from Children's Healthcare of Atlanta/Emory University School of Medicine and Dr. Rahul Damania from Cleveland Clinic Children's Hospital delve into the intricacies of Pediatric Intensive Care Medicine. In this special episode of PICU Doc on Call Shorts, we dissect the Alveolar Gas Equation, a fundamental concept in respiratory physiology with significant clinical relevance.
Key Concepts Covered:
Alveolar Gas Equation Demystified: Dr. Rahul explains the Alveolar Gas Equation, which calculates the partial pressure of oxygen in the alveoli (PAO2). This equation, PAO2 = FiO2 (Patm - PH2O) - (PaCO2/R), is essential in understanding hypoxemia and the dynamics of gas exchange in the lungs.
Calculating PAO2: Using the Alveolar Gas Equation, the hosts demonstrate how to calculate PAO2 at sea level, emphasizing the influence of atmospheric pressure, fraction of inspired oxygen (FiO2), water vapor pressure, arterial carbon dioxide pressure (PaCO2), and respiratory quotient (R) on oxygenation.
A-a Gradient and Hypoxemia: The A-a gradient, derived from the Alveolar Gas Equation, is discussed in the context of hypoxemia evaluation. Understanding the causes of hypoxemia, including ventilation/perfusion (V/Q) mismatch, anatomical shunt, diffusion defects, and hypoventilation, is crucial for clinical diagnosis and management.
Clinical Scenarios and A-a Gradient Interpretation: Through a clinical scenario, the hosts elucidate how different conditions affect the A-a gradient and oxygenation, providing insights into respiratory pathophysiology and differential diagnosis.
Clinical Implications and Management Strategies: The hosts highlight the clinical significance of the Alveolar Gas Equation in assessing oxygenation status, diagnosing gas exchange abnormalities, and tailoring respiratory management strategies in the pediatric intensive care setting.
Key Takeaways:
Utility of the Alveolar Gas Equation: Understanding and applying the Alveolar Gas Equation is essential for evaluating oxygenation and diagnosing respiratory abnormalities.
Interpreting the A-a Gradient: A normal A-a gradient suggests alveolar hypoventilation as the likely cause of hypoxemia, whereas elevated gradients indicate other underlying pathologies.
Clinical Relevance: Recognizing the clinical implications of the Alveolar Gas Equation aids in accurate diagnosis and optimal management of respiratory conditions in pediatric intensive care patients.
Conclusion:
Join Dr. Kamat and Dr. Damania as they unravel the complexities of the Alveolar Gas Equation, providing valuable insights into respiratory physiology and its clinical applications. Don't forget to subscribe, share your feedback, and visit picudoconcall.org for more educational content and resources.
References:
Fuhrman & Zimmerman - Textbook of Pediatric Critical Care. Chapter 42: Physiology of the respiratory system. Khemani et al. Pages 470-481.
Rogers Textbook of Pediatric Intensive Care: Chapter 44.
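The equation quoted in this episode can be turned into a short worked example. This is a minimal sketch assuming the standard sea-level constants (Patm = 760 mmHg, PH2O = 47 mmHg, R = 0.8); the function names are ours, not the show's.

```python
def alveolar_po2(fio2, paco2, patm=760.0, ph2o=47.0, r=0.8):
    """Alveolar oxygen tension PAO2 (mmHg): FiO2*(Patm - PH2O) - PaCO2/R."""
    return fio2 * (patm - ph2o) - paco2 / r

def a_a_gradient(pao2_alveolar, pao2_arterial):
    """A-a gradient: alveolar PO2 minus measured arterial PO2 (mmHg)."""
    return pao2_alveolar - pao2_arterial

# Room air (FiO2 0.21) at sea level with a normal PaCO2 of 40 mmHg:
pao2 = alveolar_po2(0.21, 40.0)
print(round(pao2, 1))  # ~99.7 mmHg
```

With a measured arterial PO2, `a_a_gradient(pao2, pao2_arterial)` gives the gradient used in the episode's hypoxemia discussion.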

On en parle - La 1ere
Abus financiers/ témoignage AI/ octroi AI

On en parle - La 1ere

Play Episode Listen Later Apr 16, 2024 86:15


Insurance, consumer issues, new technologies... "On en parle" guides you through everything that shapes your daily life. On today's program: 1. Financial abuse of people over 55 amounts to 675 million francs 2. VQ: Gaëtan's account of his arduous journey with disability insurance (AI) 3. Help desk: the conditions for qualifying for disability insurance (AI)

The KE Report
Quetzal Copper - Copper Exploration In Southern BC, 3 Projects, Flagship Project Beside Copper Mountain Mine

The KE Report

Play Episode Listen Later Mar 21, 2024 21:44


We have Matt Badiali back on the show to introduce a new copper exploration company, Quetzal Copper ("Quetzal" or the "Company") (TSX.V:Q). Matt is the CEO of Quetzal, which just listed on the Venture exchange on Monday, March 18th. Quetzal has 3 projects in southern BC: Princeton, Big Kidd, and Dot. Princeton, the flagship project, is next to Hudbay's Copper Mountain Mine with three drill-ready targets. Matt provides a background on the Company, which was built during the bear market in copper. We discuss the exploration strategy at each project and general timelines to drilling. We also recap the management team, share structure, and cash position. If you have any follow-up questions for Matt please email me at Fleck@kereport.com. Click here to visit the Quetzal Copper website to learn more about the Company and Projects.

MotorInc First
Aprilia RS 457 | Motorinc First S02E04

MotorInc First

Play Episode Listen Later Jan 28, 2024 58:17


The Aprilia RS 457 is their first-ever motorcycle in India, and it's looking more and more like a smashing start for the Italian brand! Shumi rode it at the racetrack to get a taste of it, and Kartikeya cannot wait to get a taste of it himself after this episode of MotorInc First! ~ MotorInc First is a discussion between experts Kartikeya Singhee and Shumi (Shubhabrata Marmar), co-founders of MotorInc, on new vehicles. Each episode will discuss one vehicle in great detail, covering the experience of driving or riding it, as well as what it means for the industry. ~ CHAPTERS
00:00 Huge Expectations
01:59 What Is RS 457
07:12 Design & Finish
09:19 Exhaust Note
10:38 Made For India
12:18 Maxxed Out
13:06 Accessories
13:43 Riding Position
14:37 Easy To Ride
18:01 vs RC390 & 390 Duke
18:45 So Aprilia in Nature
19:45 Great Tyres
21:15 Forgiving
22:22 Buy Half an RS 457
23:07 Weakest Link Are Brakes
28:15 Developed In India
29:20 Good Value Or Not
29:53 Singles vs Twins
32:44 In Everyday Use
34:22 VQ: Are 400s Perfect for India
35:16 VQ: Service & Warranty
36:56 VQ: vs Yamaha R3
37:36 VQ: Wait for Updated RC390
38:27 VQ: Will Aprilia Win Fans
38:55 VQ: Racetrack Bike Ranking
40:13 VQ: Quality Levels Expected
41:59 VQ: Tuono & Tuareg 457
42:55 VQ: Are Aprilias Reliable
43:45 VQ: Better Than 390 Duke
44:05 VQ: Upgrade From R15
44:21 VQ: RS 457 as First Bike
46:09 VQ: Riding Position
46:34 VQ: Prices Not Coming Down
47:30 VQ: Accessory Availability
48:05 Quick Summary
54:33 Kartik's Next Bike?
56:00 Closing Comments
~ #MotorIncFirst #ApriliaRS457 #Sportsbike #FirstRide #FirstImpression #FirstLook #YamahaR3 #KTMRC390 #KawasakiNinja400 #YamahaR15 #KTMRC200

Vent de Fraîcheur | CJMD 96,9 FM LÉVIS | L'ALTERNATIVE RADIOPHONIQUE

Julie Tansey, Executive Director of Vox Québec (VQ), introduces us to a one-of-a-kind nonprofit! This national association consults and represents people who live with, or have lived with, a mental health challenge before various bodies, in order to promote and defend their interests. By becoming a member for free, you can contribute to the cause. Hosted on Acast. See acast.com/privacy for more information.

MotorInc First
Yamaha YZF-R3 & MT03 | MotorInc First S01E21

MotorInc First

Play Episode Listen Later Dec 27, 2023 49:52


Yamaha has launched its YZF-R3 sportsbike in India, along with its naked cousin, the MT-03. Shumi went to the Buddh International Circuit to ride them, and on MotorInc First, Kartikeya has all the questions! ~ MotorInc First is a discussion between experts Kartikeya Singhee and Shumi (Shubhabrata Marmar), co-founders of MotorInc, on new vehicles. Each episode will discuss one vehicle in great detail, covering the experience of driving or riding it, as well as what it means for the industry. ~ CHAPTERS
00:00 Brand-New Bikes
02:10 Basics & Upgrades
04:00 Handling Excellence
06:33 Tall Riders MT-03 Fit Issue
07:33 Accessories Available
07:57 Is Yamaha Serious
08:53 Electronics Suite
10:11 Baffling Strategy
12:22 Price
13:09 Design
14:12 Riding Experience
15:09 Packaging Problems
17:35 Why Is Price So High
18:21 The Silver Lining
20:15 VQ Why Not Produce Here
21:39 VQ vs Ninja 400
22:26 VQ Upgrade from CBR250R
23:29 VQ R15 & R-Family Genes
24:40 VQ Missing Slipper Clutch
25:39 VQ R15 Has More Features
27:16 VQ Import Duties Allow Less Than World-Class
29:44 VQ Paying For Japanese Reliability
30:40 VQ vs Aprilia RS 457
31:57 VQ vs KTM RC390 or 390 Duke
33:31 VQ R3 For Sport Touring
34:24 VQ Yamaha Dumping Bikes Into India
36:03 VQ Same Mistake as CB500X
36:24 VQ Yamaha Service Quality
37:21 VQ Twins vs Singles
40:28 VQ Traction Control
43:46 VQ Will Large Riders Fit
46:26 Comfort
47:15 Quick Summary
49:22 Closing Comments
~ #MotorInc #MotorIncFirst #Sportsbike #NakedBike #Yamaha #YamahaYZFR3 #YamahaR3 #YamahaMT03 #KTMRC390 #ApriliaRS457

CiscoChat Podcast
Episode 22: VQ Communications Journey and Success in Video Conferencing Solutions

CiscoChat Podcast

Play Episode Listen Later Nov 13, 2023 36:03


In this month's episode, Cisco's Kevin Adamson sits down with Mike Horsley, Chief Executive Officer; Steve Holmes, VP of Sales; and Jon English, Product Manager. VQ is renowned for its game-changing solutions in video conferencing, and Mike's entrepreneurial journey is an inspiration to many. We'll delve into how VQ started small and grew into a global leader, with leading customers across the US, Europe, the UK, and Asia/Pacific, thanks to Mike's visionary leadership and a dedicated team. Discover the keys to VQ Communications' success and their thriving partnership with Cisco. If you're curious about the future of video conferencing and the pivotal role VQ plays, this episode is a must-listen. Join us for valuable insights and inspiration in a bite-sized format!

Network Five Emergency Medicine Journal Club
Emergency Medicine Case Series: Episode 1

Network Five Emergency Medicine Journal Club

Play Episode Listen Later Jul 17, 2023 34:08


Panel: Pramod Chandru and Shreyas Iyer.
Case Summary: 61-year-old male presenting with 2 distinct episodes of shortness of breath, chest pain, and associated presyncope. Asymptomatic by the time of arrival to the emergency department. ECG and observations at triage were unremarkable. No recent travel or recent major surgeries. Initial troponin and serial troponin were 80 ng/L. D-dimer was ordered given static troponin and the nature of symptoms: 0.58. Although this D-dimer was negative when age-adjusted, a V/Q scan was pursued as the patient was not felt to fit a 'low risk' pre-test probability for PE (IV contrast shortage dictated V/Q over CTPA). Bilateral segmental pulmonary PE identified on V/Q scan with mild right heart strain evident on subsequent CTPA and TTE.
Key Discussion Points: If a case does not follow the usual 'pattern' of your initial diagnosis, consider alternate aetiologies. There are many tools available for risk-stratifying PE including PERC, age-adjusted D-dimer, and the YEARS diagnostic pathway. However, the way in which to appropriately utilize these tools is nuanced. A paper published in JAMA in December 2021 demonstrates some ways in which these tools can be used together (see first reference below). The PESI score (even prior to definitive diagnosis) can be useful to risk stratify patients with possible PE and help determine their disposition.
Take-Home Points: Pre-test probability is incredibly important, particularly in entities such as PE where only highly invasive imaging modalities are diagnostic. Having a structured approach to protect yourself from your own mistakes is extremely important (such as a hypothesis and hypothesis-testing approach).
References & Background Reading:
Effect of a Diagnostic Strategy Using an Elevated and Age-Adjusted D-Dimer Threshold on Thromboembolic Events in Emergency Department Patients With Suspected Pulmonary Embolism: A Randomized Clinical Trial. JAMA. 2021 Dec 7;326(21):2141-2149. doi: 10.1001/jama.2021.20750.
Thiruganasambandamoorthy, V., Stiell, I.G., Sivilotti, M.L. et al. Risk stratification of adult emergency department syncope patients to predict short-term serious outcomes after discharge (RiSEDS) study. BMC Emerg Med 14, 8 (2014). https://doi.org/10.1186/1471-227X-14-8.
Crane SD. Risk stratification of patients with syncope in an accident and emergency department. Emergency Medicine Journal 2002;19:23-27.
Almulhim KN. The Characteristics of Syncope-Related Emergency Department Visits: Resource Utilization and Admission Rate Patterns in Emergency Departments. Cureus. 2022 Feb 8;14(2):e22039. doi: 10.7759/cureus.22039. PMID: 35340474; PMCID: PMC8913182.
Iwuji K, Almekdash H, Nugent KM, Islam E, Hyde B, Kopel J, Opiegbe A, Appiah D. Age-Adjusted D-Dimer in the Prediction of Pulmonary Embolism: Systematic Review and Meta-analysis. J Prim Care Community Health. 2021 Jan-Dec;12:21501327211054996. doi: 10.1177/21501327211054996. PMID: 34814782; PMCID: PMC8640977.
Schouten HJ, Geersing GJ, Koek HL, et al. Diagnostic accuracy of conventional or age-adjusted D-dimer cut-off values in older patients with suspected venous thromboembolism: systematic review and meta-analysis. 2012. In: Database of Abstracts of Reviews of Effects (DARE): Quality-assessed Reviews [Internet]. York (UK): Centre for Reviews and Dissemination (UK); 1995-. Available from: https://www.ncbi.nlm.nih.gov/books/NBK133492/.
Franco-Moreno AI, Bustamante-Fermosel A, Ruiz-Giardin JM, Muñoz-Rivas N, Torres-Macho J, Brown-Lavalle D. Utility of probability scores for the diagnosis of pulmonary embolism in patients with SARS-CoV-2 infection: A systematic review. Rev Clin Esp (Barc). 2023 Jan;223(1):40-49. doi: 10.1016/j.rceng.2022.07.004. Epub 2022 Sep 22. PMID: 36241500; PMCID: PMC9492501.
Christ M, Geier F, Popp S, Singler K, Smolarsky A, Bertsch T, Müller C, Greve Y. Diagnostic and prognostic value of high-sensitivity cardiac troponin T in patients with syncope. Am J Med. 2015 Feb;128(2):161-170.e1. doi: 10.1016/j.amjmed.2014.09.021. Epub 2014 Oct 15. PMID: 25447619.
Lindner G, Pfortmueller CA, Funk GC, Leichtle AB, Fiedler GM, Exadaktylos AK. High-Sensitive Troponin Measurement in Emergency Department Patients Presenting with Syncope: A Retrospective Analysis. PLoS One. 2013 Jun 18;8(6):e66470. doi: 10.1371/journal.pone.0066470. PMID: 23823330; PMCID: PMC3688899.
Music/Sound Effects:
ENGINE by Alex-Productions | https://onsound.eu/, Music promoted by https://www.free-stock-music.com, Creative Commons / Attribution 3.0 Unported License (CC BY 3.0), https://creativecommons.org/licenses/by/3.0/deed.en_US.
Feel It by MBB feat. JV Saxx | https://soundcloud.com/mbbofficial, https://www.instagram.com/JVSAXX/, Music promoted by https://www.free-stock-music.com, Creative Commons / Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0), https://creativecommons.org/licenses/by-sa/3.0/deed.en_US.
Lakeside by Scandinavianz | https://soundcloud.com/scandinavianz, Music promoted by https://www.free-stock-music.com, Creative Commons / Attribution 3.0 Unported License (CC BY 3.0), https://creativecommons.org/licenses/by/3.0/deed.en_US.
Ocean Love by LiQWYD | https://www.liqwydmusic.com, Music promoted by https://www.free-stock-music.com, Creative Commons / Attribution 3.0 Unported License (CC BY 3.0), https://creativecommons.org/licenses/by/3.0/deed.en_US.
Nostalgic Marshmallows by Arthur Vyncke | https://soundcloud.com/arthurvost, Music promoted by https://www.free-stock-music.com, Creative Commons / Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0), https://creativecommons.org/licenses/by-sa/3.0/deed.en_US.
Sound effects from https://www.free-stock-music.com.
Promotional Video (Soundtrack): Pina Colada by Scandinavianz | https://soundcloud.com/scandinavianz, Music promoted by https://www.free-stock-music.com, Creative Commons / Attribution 3.0 Unported License (CC BY 3.0), https://creativecommons.org/licenses/by/3.0/deed.en_US.
Disclaimer: Please be advised that the individual views and opinions expressed in this recording strive to improve clinical practice, are our own, and do not represent the views of any organization or affiliated body. Therapies discussed are general and should not be a substitute for an individualized assessment from a medical professional.
Thank you for listening! Please send us an email to let us know what you thought. You can contact us at westmeadedjournalclub@gmail.com. You can also follow us on Facebook, Instagram, and Twitter! See you next time! ~
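The age-adjusted D-dimer rule discussed in this case is commonly taken as a cutoff of age x 10 µg/L FEU for patients over 50 (500 µg/L otherwise). A hedged sketch of how it applies to this 61-year-old; the function name and unit handling are our own assumptions, not from the episode.

```python
def age_adjusted_ddimer_cutoff_ug_l(age_years: int) -> int:
    """Age-adjusted D-dimer cutoff in ug/L FEU: age x 10 above age 50, else 500."""
    return age_years * 10 if age_years > 50 else 500

# This case: 61 years old, D-dimer 0.58 mg/L FEU = 580 ug/L
cutoff = age_adjusted_ddimer_cutoff_ug_l(61)
print(cutoff)           # 610
print(580 >= cutoff)    # False, i.e. "negative" when age-adjusted, as in the case
```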

F3 Omaha - Paradise Island
95: Disrupting the Average Bear - Yogi

F3 Omaha - Paradise Island

Play Episode Listen Later Jun 11, 2023 40:06


Pony and Plague interview our man Yogi. We learn how he came into F3 and his journey through Fitness, Fellowship, and Faith. Hear how he got engaged in the group early and did his VQ within six weeks of joining. Yogi also gets vulnerable and shares about a family loss and how the PAX surrounded him with love and support. We learn about the Mission Forge AO: how it was started, how it got its name, and how it has impacted many guys this past year. We're extremely grateful for this man and the impact he's had on us all! Give it a listen.

Cardionerds
306. Decompensated Right Ventricular Failure in Pulmonary Arterial Hypertension with Dr. Mardi Gomberg-Maitland and Dr. Rachel Damico

Cardionerds

Play Episode Listen Later Jun 8, 2023 60:13


The CardioNerds and Pulm PEEPs have joined forces to co-produce this important episode, delving into the management of decompensated right ventricular failure in pulmonary arterial hypertension. Joining us for this informative discussion are Pulm PEEPs co-founders, Dr. David Furfaro and Dr. Kristina Montemayor, along with Dr. Leonid Mirson (Internal Medicine Resident at Johns Hopkins Osler Medical Residency and Associate Editor of Pulm PEEPs), Dr. Bavya Varma (Internal Medicine Resident at Johns Hopkins, rising Cardiology Fellow at NYU, and CardioNerds Academy graduate), Dr. Mardi Gomberg-Maitland (Medical Director of the Pulmonary Hypertension Program at George Washington Hospital), and Dr. Rachel Damico (Pulmonologist and Associate Professor of Medicine at Johns Hopkins Hospital). Audio editing by CardioNerds Academy Intern, student doctor Adriana Mares. Enjoy this Circulation 2022 Paths to Discovery article to learn about the CardioNerds story, mission, and values.
CardioNerds Heart Success Series Page
CardioNerds Episode Page
CardioNerds Academy
CardioNerds Healy Honor Roll
CardioNerds Journal Club
Subscribe to The Heartbeat Newsletter!
Check out CardioNerds SWAG!
Become a CardioNerds Patron!
Show notes - Decompensated Right Ventricular Failure in Pulmonary Arterial Hypertension
A 21-year-old woman with a past medical history notable for congenital heart disease (primum ASD and sinus venosus with multiple surgeries) complicated by severe PAH on home oxygen, sildenafil, ambrisentan, and subcutaneous treprostinil is presenting with palpitations, chest pain, and syncope. She presented as a transfer from an outside ED where she arrived in an unknown tachyarrhythmia and had undergone DCCV due to tachycardia into the 200s and hypotension. On arrival at our hospital, she denied SOB but did endorse nausea, leg swelling, and poor medication adherence. Her initial vitals were notable for a BP of 80/50, HR 110, RR 25, and saturating 91% on 5L O2. On exam, she was uncomfortable appearing but mentating well. She had cool extremities with 1-2+ LE edema. Her JVP was 15 cm H2O. She had an RV heave and a 2/6 systolic murmur. Her lungs were clear bilaterally. Her labs were notable for Cr 2.0, an anion gap metabolic acidosis (HCO3 = 11), elevated lactate (4.1), troponin elevated to 14, and a pro-BNP of ~5000. Her CBC was unremarkable. Her EKG demonstrated 2:1 atrial flutter at a rate of 130.
Diagnosing RV failure in patients with PH: RV dysfunction and RV failure are two separate entities. RV dysfunction can be measured on echocardiography, but RV failure can be thought of as a clinical syndrome where there is evidence of RV dysfunction and elevated right-sided filling pressures. RV failure is a spectrum and can present with a range of manifestations, from evidence of right-sided volume overload and markers of organ dysfunction all the way to frank cardiogenic shock. Most patients with RV failure are not in overt shock. One of the first signs of impending shock in patients with RV failure is the development of new or worsening hypoxemia. Patients with decompensated RV failure approaching shock often do not present with symptoms classic for an LV low-flow state. Instead, hypoxia secondary to V/Q mismatching may be the first sign, and they can be otherwise well appearing. Particularly because patients with PH tend to be younger, they can often appear compensated until they rapidly decompensate.
Causes of decompensation for patients with RV dysfunction and PH: Iatrogenesis (inadvertent cessation of pulmonary vasodilators by providers, surgery if providers are not familiar with the risks of anesthesia), non-adherence to pulmonary vasodilators (either due to affordability issues or other reasons), infections, arrhythmias (particularly atrial arrhythmias), and progression of underlying disease. Patients with atrial arrhythmias (atrial flutter or atrial fibrillation) and pulmonary hypertension do not tolerate the loss of...

The NACE Clinical Highlights Show
CME/CE Podcast: Shifting Targets? Decoding and Discussing PAH Guidelines

The NACE Clinical Highlights Show

Play Episode Listen Later May 25, 2023 15:58


For more information regarding this CME/CE activity and to complete the CME/CE requirements and claim credit for this activity, visit: https://www.mycme.com/courses/decoding-and-discussing-pah-guidelines-8896
Featuring faculty Ioana Preston, MD, moderated by Corinne R. Young, MSN, FNP-C, FCCP
Summary
In this episode of NACE Clinical Highlights, Dr. Ioana Preston joins NP Corinne Young to discuss the diagnosis and treatment of pulmonary arterial hypertension (PAH). They dive into the importance of ruling out other groups of pulmonary hypertension and highlight the role of V/Q scans in diagnosing chronic clots. The significance of right heart catheterization in confirming the diagnosis and guiding treatment decisions is emphasized. The discussion also covers risk stratification and the benefits of dual upfront therapy with oral drugs for most group one PAH patients, as well as risk-stratification-based follow-up management. Finally, Dr. Preston provides insights into upcoming advancements in therapeutic options.
Learning Objectives
Upon completion of this activity, learners should be able to:
Identify patients with PAH, utilizing updated definitions and diagnostic evaluations
Initiate therapy for patients with PAH based on currently available guidelines
Transition between therapies in patients with PAH based on risk stratification and guideline- and goal-directed therapy
This activity is accredited for CME/CE Credit. Provided by the National Association for Continuing Education in partnership with the Association for Pulmonary Advanced Practice Providers. The National Association for Continuing Education is accredited by the Accreditation Council for Continuing Medical Education (ACCME) to provide continuing medical education for physicians. The National Association for Continuing Education designates this enduring material for a maximum of 0.25 AMA PRA Category 1 Credits™. Physicians should claim only the credit commensurate with the extent of their participation in the activity. National Association for Continuing Education is accredited by the American Association of Nurse Practitioners as an approved provider of nurse practitioner continuing education. Provider number: 121222. This activity is approved for 0.25 contact hours (which includes 0.25 hours of pharmacology).
Summary of Individual Disclosures
Dr. Preston has disclosed the following financial relationships:
Consultant: Aerovate, Altavant, Gossamer, Janssen, Liquidia, Merck, Respira, United Therapeutics
Advisor: Aerovate, Altavant, Gossamer, Janssen, Liquidia, Merck, Respira, United Therapeutics
Contracted Research: Janssen, Merck, United Therapeutics
All her disclosures are related to pulmonary hypertension.
Ms. Young has disclosed the following financial relationships:
Advisor: AstraZeneca (Asthma), Takeda (Alpha1)
Speaker: Grifols (AAT)
AAT: alpha-1 antitrypsin
All of the relevant financial relationships listed for these individuals have been mitigated. Faculty, planners, guest patient(s) (if applicable), and moderators for this educational activity not listed in the Summary of Individual Disclosures above have no relevant financial relationship(s) to disclose with ineligible companies whose primary business is producing, marketing, selling, re-selling, or distributing healthcare products used by or on patients.
Disclosure of Commercial Support
This presentation is supported by an educational grant from Merck Sharp & Dohme Corp. Please visit http://naceonline.com to engage in more live and on demand CME/CE content.

The Official Ears 4 You Podcast
Ears 4 You Episode 135- "Back to the Basic... Parks 101"

The Official Ears 4 You Podcast

Play Episode Listen Later May 4, 2023 59:51


On tonight's episode we are going back to the basics: Parks 101! Park reservations... VQ... Genie+... Individual Lightning Lane... We got you!
But first... Love to hear your answer to this week's Rope Drop Question... Email us at info@ears4youpodcast.com
If you like what you are hearing please like and subscribe!
Find us on Pinterest, Instagram and FB @ears4youpodca

Financial Freedom for Physicians with Dr. Christopher H. Loo, MD-PhD
#281 - The Power of Stories: Karen Gray on Embracing Your Scars to Find Your Voice and Pursue Your Passion

Financial Freedom for Physicians with Dr. Christopher H. Loo, MD-PhD

Play Episode Listen Later Apr 21, 2023 16:24


Description: Welcome to the podcast where our guest today, Coach Karen Gray, is someone that empowers women executives, business owners, and leaders to unlock their potential, passion, and prosperity. Karen is an ICF Certified Life & Business Coach who will discuss various topics with a focus on empowering women in their careers and personal lives. Listeners will explore how to Awaken Your Passion, Potential, and Prosperity. We will provide practical tips on how to identify your strengths and embrace your unique qualities to pursue your passions and achieve your goals. We will also focus on Conquering the Enemy in Your Head by overcoming insecurity and imposter syndrome. As a coach, Karen understands the challenges women face in their careers, and she will provide insightful advice on how to overcome these obstacles. Listeners will also learn about the concept of "VQ" in the third episode. Karen will explain how harnessing your Value Awareness can help you unlock your earning potential and achieve financial success. Listeners will explore the SUPER POWER of the ROCK MOVER, with Karen revealing how the secret to success lies in the bags we carry each day and how embracing our scars and stories can help us achieve our goals. Finally, we will talk about conquering your Money Mindset and getting paid what you are worth. We will discuss practical advice on how to change your mindset and overcome limiting beliefs to achieve financial success. Throughout the podcast, Karen will answer questions on various topics, including how being adopted impacted her confidence, how she coaches around fear, what inspired her to start coaching, the #1 thing impacting women in leadership today, why imposter syndrome is so prevalent, and what a Rock MOVER is. Karen will share something most people don't know about high-performing or high-achieving women in leadership or business. Her insights will shed light on the challenges these women face and how we can support them in their careers. 
Tune in to this empowering podcast to hear practical advice, inspiring stories, and expert insights that will empower you to unlock your potential and achieve your goals. To connect with Karen, visit her website: https://www.coachkarengray.com/money-mindset-mastery Disclaimer: Not advice. Educational purposes only. Not an endorsement for or against. Results not vetted. Views of the guests do not represent those of the host or show. Do your due diligence. Click here to join PodMatch (the "AirBNB" of Podcasting): https://www.joinpodmatch.com/drchrisloomdphd We couldn't do it without the support of our listeners. To help support the show: CashApp- https://cash.app/$drchrisloomdphd Venmo- https://account.venmo.com/u/Chris-Loo-4 Buy Me a Coffee- https://www.buymeacoffee.com/chrisJx Thank you to our sponsor, CityVest: https://bit.ly/37AOgkp Click here to schedule a 1-on-1 private coaching call: https://www.drchrisloomdphd.com/book-online Click here to purchase my books on Amazon: https://amzn.to/2PaQn4p Follow our YouTube channel: https://www.youtube.com/chL1357 Thank you to our advertisers on Spotify. Financial Freedom for Physicians, Copyright 2023

Cardionerds
287. Case Report: When Tumors Take Your Breath Away – University of Oklahoma College of Medicine

Cardionerds

Play Episode Listen Later Apr 14, 2023 47:09


CardioNerds join Dr. Samid Muhammad Farooqui, Dr. Hiba Hammad, and Dr. Syed Talal Hussain, from the University of Oklahoma Pulmonary and Critical Care Medicine Fellowship Program in Oklahoma City. The fellows take us through a fascinating discussion of a case of rapidly progressing dyspnea and pulmonary hypertension in a patient with metastatic breast cancer, then reveal an interesting etiology of pulmonary hypertension, where the secret was on the wedge! University of Oklahoma faculty member and expert in pulmonary hypertension and right ventricular physiology Dr. Roberto J. Bernardo provides the E-CPR for this episode. Audio editing by CardioNerds Academy Intern, Dr. Christian Faaborg-Andersen.

A septuagenarian female with a past medical history of metastatic breast adenocarcinoma presented to the hospital with worsening dyspnea over a period of 3 weeks. She was found to be in rapidly progressive hypoxic respiratory failure with an unremarkable chest x-ray, CTA chest, and V/Q scan. Transthoracic echocardiogram revealed elevated RVSP, and a subsequent right heart catheterization showed pre-capillary pulmonary hypertension with a low cardiac index. She was treated for rapidly progressive RV dysfunction with inotropic support and inhaled pulmonary vasodilators until she decided to pursue comfort measures. Wedge cytology came back positive for malignant cells, confirming a diagnosis of Pulmonary Tumoral Thrombotic Microangiopathy (PTTM).

CardioNerds is collaborating with Radcliffe Cardiology and the US Cardiology Review journal (USC) for a 'call for cases', with the intention to co-publish high-impact cardiovascular case reports, subject to double-blind peer review. Case reports that are accepted in the USC journal and published as the version of record (VOR) will also be indexed in Scopus and the Directory of Open Access Journals (DOAJ). 
CardioNerds Case Reports Page | CardioNerds Episode Page | CardioNerds Academy | CardioNerds Healy Honor Roll | CardioNerds Journal Club | Subscribe to The Heartbeat Newsletter! | Check out CardioNerds SWAG! | Become a CardioNerds Patron!

Case Media - When Tumors Take Your Breath Away - University of Oklahoma College of Medicine

Pearls - When Tumors Take Your Breath Away - University of Oklahoma College of Medicine

Pulmonary arterial hypertension (PAH) is a progressive disorder of the pulmonary vasculature, characterized by progressive obliteration and remodeling of the pulmonary circulation, resulting in increased pulmonary vascular resistance and increased right ventricular (RV) wall stress, abnormal right ventricular mechanics, and eventually RV dysfunction and death.

Pulmonary hypertension (PH) is divided into pre-capillary and post-capillary profiles. Pre-capillary PH is hemodynamically characterized by a mean pulmonary artery pressure (mPAP) > 20 mmHg, pulmonary artery wedge pressure (PAWP) ≤ 15 mmHg, and a pulmonary vascular resistance (PVR) ≥ 3 Wood units (WU). Post-capillary PH is defined as mPAP > 20 mmHg and PAWP > 15 mmHg, where PVR can be either < 3 WU (isolated post-capillary PH) or ≥ 3 WU (combined pre- and post-capillary PH). Pulmonary arterial hypertension (PAH) falls under the pre-capillary PH profile.

Dyspnea on exertion is the most common manifestation of PH, and the most common initial complaint. Other symptoms and physical findings, such as venous congestion, peripheral edema, signs of RV dysfunction, or syncope, present later in the disease course. As such, PH has to be considered in the differential diagnosis of dyspnea, especially in cases of undifferentiated or unexplained dyspnea. PAH is a chronic but progressive condition, where symptoms progress over the course of months to years. 
Subacute or rapidly progressive forms of PH (symptoms rapidly worsening over the course of weeks) should warrant consideration for alternative etiologies (i.e., pulmonary embolism or a different cardiopulmonary disorder as the main d...
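The hemodynamic definitions in the pearls above reduce to a simple decision rule. As a rough sketch (function name and example pressures are illustrative, not from the episode), using the mPAP > 20 mmHg, PAWP 15 mmHg, and PVR 3 WU cut-offs quoted above:

```python
# Hedged sketch: classify a pulmonary hypertension hemodynamic profile
# using the thresholds quoted in the episode pearls. The function name
# and the example numbers are illustrative assumptions.

def classify_ph(mpap: float, pawp: float, pvr: float) -> str:
    """Return a hemodynamic PH profile from mPAP (mmHg), PAWP (mmHg), PVR (WU)."""
    if mpap <= 20:
        return "no PH"
    if pawp <= 15:
        # Pre-capillary PH also requires an elevated PVR
        if pvr >= 3:
            return "pre-capillary PH"
        return "unclassified (pre-capillary pressures, low PVR)"
    # Elevated wedge pressure: post-capillary physiology
    if pvr >= 3:
        return "combined pre- and post-capillary PH"
    return "isolated post-capillary PH"

# Pressures consistent with the case described in this episode:
print(classify_ph(mpap=45, pawp=10, pvr=6))  # prints: pre-capillary PH
```

The point of the rule is that the wedge pressure splits the profiles (pre- vs post-capillary) and the PVR then grades the pre-capillary component.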

Tentpole Trauma

The 90s saw a resurgence in the western genre, though none of the many offerings outside of Clint Eastwood's Unforgiven were major hits. Unfortunately, that held true for Mario Van Peebles' Posse, which distinguished itself by featuring a mostly black cast and was the star/director's follow-up to his enormously successful New Jack City. Join Sebastian, Jennifer and VQ as they celebrate black representation in the old west and rustle up some posse love for this gunslinging cult classic.

Better Than New
Driveway Apocalypse Leads To Nissan Xterra Review

Better Than New

Play Episode Listen Later Jan 13, 2023 28:56


00:00 - OPEN - A Tree Falls and Smashes Our Cars, Which Leads To This 2nd Generation Nissan Xterra Review

First, an apology... sorry for the very late delivery on this episode! It's been a heck of a week here at Casa de Crenshaw, as my podcast prep time was suddenly taken up with debris removal and car insurance hassles. That's because at the beginning of the week, an 80' tree snapped off in high winds and smashed three of our cars. One is totaled, one will need paint work and the third... well, that one is just going to live on with battle scars. Originally I was planning a different episode this week, but out of the unexpected "Driveway Apocalypse" came an opportunity to review the 2nd Generation Nissan Xterra. What follows is a story about truly helpful neighbors and a great SUV, so enjoy!

00:42 - INTRO - Helpful Neighbors Help Cut Our Cars Free After a Tree Falls On Them In A Windstorm

I am so grateful to our neighbors who came to our rescue with chain saws, trucks, trailers and several hours of sweaty work to help free our cars after they were crushed by a fallen tree. What follows in the next 10 minutes is a recap of what happened. That event led to this week's podcast, because in addition to all the help cutting away the tree debris, we also had the opportunity to drive a neighbor's 2011 Xterra while we sorted out our car situation. That generosity prompted this week's episode. I was planning to review the second generation Xterra at some point, but the opportunity to drive one this week reminded me why I like Xterras so much, so I dropped my other podcast episode idea, turned lemons into lemonade (or maybe Limoncello) and created this last-minute review of the 2nd Generation Nissan Xterra. Again, sorry it took so long to get this recorded and posted, but I had to take care of our family disaster cleanup first. 
11:25 - Background On The Nissan Xterra

First Gen Xterra (2000-2004) Overview
- 1st Gen Xterra was a big hit with outdoor enthusiasts
- Named Motortrend's SUV of the Year in 2000
- Featured rugged, no-frills styling that still looks great today
- Body-on-frame design offered solid off-road capability
- Drivers had a choice of either a 4-cylinder or a 6-cylinder engine
- 2.4-liter 4-cylinder made 143 hp
- 3.3-liter 6-cylinder made 170 hp (bumped to 180 hp in 2003)
- Available in 2WD and 4WD with automatic or manual transmission

Second Gen Xterra (2005-2015) Overview
- Styling of the original Xterra was a hit, so Nissan just enhanced it in the 2nd Gen Xterra
- Rugged good looks of the original are still attractive today
- 2nd Gen Xterra is larger in every dimension & considerably more powerful than the previous generation (more room inside for people and gear)
- Still an affordable SUV targeted towards outdoor enthusiasts
- Built on Nissan's sturdy F-Alpha platform used in the Frontier and Titan pickups
- 4-cylinder option dropped; a 6-cylinder is the only option
- New engine is a 4.0-liter VQ-series 6-cylinder that makes 261 hp and 281 lb-ft of torque (the engine was rated at 265 hp & 284 lb-ft initially, but SAE revamped its power certification rating, so the hp & torque numbers were changed to 261 hp & 281 lb-ft for the 2007 model year)
- This new VQ-series engine is similar to the 3.5-liter motor in the 350Z, but it has a longer stroke for 4.0 liters of displacement, delivering more low-end power & torque better suited to an SUV

15:15 - Some Reasons Why You Might Want A 2nd Gen Nissan Xterra
- Rugged good looks are as appealing today as when new
- Unique design touches, including fender flares, locking top storage box, footholds in the rear bumper to reach the rack and...

Dr. Howard Smith Oncall
Children With Long CoVid Suffer Lasting Lung Damage

Dr. Howard Smith Oncall

Play Episode Listen Later Dec 20, 2022 1:15


  Vidcast:  https://youtu.be/ZjiIH5EF2h8   Pre-teens who contract CoVid can lose half of their lung ventilation function, as measured by the ventilation/perfusion ratio, by 1 year following infection.  A team of German radiologists and pediatricians studied 54 CoVid-infected kids 10-11 years of age, half of whom had persisting CoVid symptoms, and compared their lung functions with 9 healthy controls.  The children's V/Q ratio was measured at 6 months, 6 to 12 months, and more than 12 months after infection.  In general, kids both with and without long CoVid had a 25% reduction in lung function during the first year following infection.  Those with long CoVid had double that deficit.  Bottom line: children suffering CoVid require ongoing monitoring of their lung function.   https://pubs.rsna.org/doi/10.1148/radiol.221250   #CoVid #pulmonary #vqratio #ventilationperfusionratio #longcovid  

#MulherDeFibra
Virgínia Quaresma

#MulherDeFibra

Play Episode Listen Later Dec 14, 2022 2:31


Virgínia Quaresma was a Black lesbian and the first woman to practice journalism in Portugal. Born in 1882, the daughter of an army officer and a domestic servant descended from slaves, VQ grew up in an environment of strong republican convictions, which she herself would embrace very early on. She also became interested in social activism, specifically the defense of women's rights. Portugal was still a monarchy when Virgínia came out as homosexual, horrifying conservative society. In her youth she became one of the first women in the country to earn a degree in Letters, and soon afterward, in 1906, she became Portugal's first female journalist. In her earliest articles she wrote about racism, feminism, and social inequality in the newspapers. In 1912 she moved to Brazil, where she reported on femicides and cases of violence against women for the country's leading newspapers, while remaining connected to several Portuguese papers. In doing so, she also became a pioneer of investigative journalism. She returned to Portugal in 1917, only to have to leave again in the 1930s: because of her political positions and her well-known homosexuality, VQ was persecuted by the secret police of Portugal's Estado Novo regime. Back in Brazil, she lived with her wife, and only returned to Portugal after her partner's death, at the end of the 1960s. Throughout this period she remained active in her journalistic career. In 1973, Virgínia Quaresma died in Lisbon at the age of 90.

Step 1 Basics (USMLE)
Pulm| VQ ratio

Step 1 Basics (USMLE)

Play Episode Listen Later Dec 7, 2022 4:32


2.01 VQ ratio   Pulmonary system review for the USMLE Step 1 exam.
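The episode's subject, the ventilation/perfusion ratio, can be boiled down to a small worked example. This sketch uses the usual textbook figures (about 4 L/min alveolar ventilation against 5 L/min pulmonary blood flow, giving a normal overall V/Q of roughly 0.8); the numbers and the interpretation bands are illustrative assumptions, not content quoted from the episode:

```python
# Hedged sketch of the V/Q ratio concept: alveolar ventilation divided
# by pulmonary perfusion. Extremes correspond to shunt (V/Q -> 0) and
# dead space (V/Q -> infinity).

def vq_ratio(ventilation_l_min: float, perfusion_l_min: float) -> float:
    """Return V/Q; a perfusion of zero models pure dead space."""
    if perfusion_l_min == 0:
        return float("inf")  # ventilated but not perfused
    return ventilation_l_min / perfusion_l_min

def interpret(vq: float) -> str:
    """Map a V/Q value onto the classic physiologic extremes."""
    if vq == 0:
        return "shunt (perfused, not ventilated)"
    if vq == float("inf"):
        return "dead space (ventilated, not perfused)"
    return "normal" if 0.6 <= vq <= 1.0 else "V/Q mismatch"

r = vq_ratio(4.0, 5.0)
print(r, interpret(r))  # prints: 0.8 normal
```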

Tentpole Trauma
Blade: Trinity

Tentpole Trauma

Play Episode Listen Later Nov 21, 2022 78:48


The Blade franchise was rolling full force when the third entry, helmed by writer David S. Goyer, hit theaters in 2004. Rumors of star Wesley Snipes' unhappiness with the franchise's new direction — namely adding hot young stars Ryan Reynolds and Jessica Biel to the cast as usurping vampire hunters — plagued the production, and the resulting film flopped with fans and staked out diminished box office returns. Join Sebastian, Jennifer and Fear of a Black Movie Critic's VQ as they drag this tonally confused bloodsucker into the light and ask themselves the eternal question — is this the worst Dracula ever put on film?

Better Than New
Infiniti G35 Sports Sedan Gives BMW 3-Series a Run For Its Money

Better Than New

Play Episode Listen Later Nov 11, 2022 37:12


00:00 - Show Open

If you want to spend less, and still have something cool that's fun to drive… I've got good news, because today's Better Than New focus vehicle is a sport sedan that comes in multiple configurations to fit your lifestyle. You can get it in rear-wheel drive or all-wheel drive. You can get it with an automatic or a manual transmission. And on top of that, it handles great and comes with a 6-cylinder engine that serves up a healthy amount of power and torque. Or in other words, it's a sedan that's fun to drive.

01:20 - Welcome To Better Than New

Welcome to Better Than New and thanks for listening. I encourage you to check out past episodes on interesting used cars, including the first generation Miata, the R56 Mini Cooper S, the Ford SVT Focus, Isuzu's short-wheelbase Amigo/Rodeo Sport, BMW's diesel 335d sports sedan and more. And if you like what you hear, please subscribe and follow so you continue to get interesting tips and ideas for your next fun-to-drive used car.

02:25 - Infiniti G35 Overview

The first two generations of G-series cars - known as G20 sedans - were front-wheel drive and powered by a 2.0-liter 4-cylinder engine that made 140 horsepower. The G35 - the third generation G-series sedan from Infiniti - was completely different. It was based on Nissan's FM or “Front Midship” platform, which allowed the engine to be pushed back further toward the center of the chassis for better weight distribution and handling. The G35 was designed from the beginning to be a rear-wheel drive car with the superior handling dynamics that rear-wheel drive offers. There was also an all-wheel drive G35x version that featured a rear-biased AWD system. Finally, instead of a 4-cylinder, the G35 was fitted with Nissan's VQ-series 6-cylinder engine for substantially better performance. 
The G35 was so good, it was named Motortrend's Car of The Year for 2003, and it was also named to Car and Driver's 10-Best list for 2003 and 2004.

04:55 - You Might Want An Infiniti G35 If...
- You love rear-drive handling dynamics
- You want a fun-to-drive vehicle, but you need to save money
- You have considered a crossover, but you think it's a bad idea
- You want AWD that offers handling and driving dynamics

06:10 - Infiniti G35 Details

Transmission
- 5-speed automatic was offered initially at launch
- 6-speed manual became available later in 2003

Engine
- 2003-2004 V6 engine made 260 hp and 260 lb-ft of torque
- 2005-2006 automatic transmission G35 engines were rated at 280 hp and 270 lb-ft of torque
- 2005-2006 manual transmission G35 engines were rated at 298 hp and 258 lb-ft of torque
- 2005 and 2006 manual transmission cars had a slightly different engine versus those fitted with automatics; manual cars had engines fitted with variable valve timing on both intake and exhaust camshafts, whereas automatic cars only have VVT on the intake side. Known as the Rev-Up motor

Drivetrain
- Rear-wheel drive in the G35
- AWD available in the G35X
- AWD version drives like a rear-drive car

Suspension
- Four-wheel independent multi-link suspension
- Vehicle Dynamic Control anti-skid technology
- Can be turned off with a button on the lower dashboard

Brakes
- Four-wheel ventilated disc brakes with ABS
- Brake Assist
- Electronic Brake force Distribution

Performance
- 0-60 mph - 6.2 seconds
- Quarter mile - 14.8 @ 95 mph
- 60-0 braking distance - 115 feet....

Cork's 96fm Opinion Line
MacCurtain St Traffic & Changes

Cork's 96fm Opinion Line

Play Episode Listen Later Oct 25, 2022 8:09


Listeners are contacting us talking about the amount of traffic on MacCurtain St lately. PJ caught up with Shane Clarke, Director of Operations at VQ, who says the pain now will create a world-class area, so hang in there. Hosted on Acast. See acast.com/privacy for more information.

The Fellow on Call
Episode 031: Lung Cancer Series, Pt. 8: Surgical considerations in early stage NSCLC

The Fellow on Call

Play Episode Listen Later Sep 28, 2022


Lung cancer is one of the most commonly diagnosed cancers, so it is fitting that we start the first of our disease-specific oncology series with this diagnosis. This week, we sit down with thoracic surgeon Dr. Jane Yanagawa to discuss surgical considerations in the treatment of NSCLC.

* How do you choose what type of surgical resection to do?
- Considerations: lung anatomy, location of the nodule within the lung, lymph node involvement
- Options:
-- Pneumonectomy: removal of a whole lung
-- Lobectomy: removal of a whole lobe
-- Segmentectomy/sublobar resection: removal of part of a lobe

* What does "adequate margins" mean? And how do you know if it's enough?
- If you're removing the whole lobe, it does not matter as much
- If you're doing a segmentectomy, you want samples evaluated while in the OR, because if there are signs of more disease than initially thought, you would take it one step further and do a lobectomy
- Need to consider the patient's situation - how good their functional status is

* Why does preoperative workup matter?
- Pulmonary function tests: surgeons look at the %FEV1 and %DLCO to predict what the patient's function would be AFTER surgery. After surgery, they want to ensure the patient has a %FEV1 or %DLCO > 40%.
- Lung anatomy: in patients with COPD and endobronchial lesions, sometimes they also get V/Q scans to evaluate the ratio
- Cardiac echo: important in pneumonectomy, where removal of lung tissue will also remove a significant amount of blood vessels. Want to rule out pulmonary hypertension pre-operatively. Pulmonary hypertension can also affect someone's survival while they're ventilating with only one lung during the procedure ("single lung ventilation").
- Smoking status: smoking can increase complications by ~60%
- Pre-habilitation: encouraging patients to be fit prior to surgery with walking, nutrition, +/- pulmonary rehabilitation

* What is "VATS"?
- VATS stands for video-assisted thoracoscopic surgery; it is not, in itself, a procedure. Rather, VATS allows for minimally invasive surgery through the use of a camera.
- It involves three incisions (axilla, lowest part of the mid-axillary line, one posterior)

* In what scenario is a mediastinoscopy warranted?
- Needed after EBUS if there is still a high index of suspicion for cancer involvement in lymph nodes, even if lymph nodes are negative on EBUS

* What is "systematic lymph node sampling"?
- An organized way to sample lymph nodes, including all lymph nodes that are along the way, not just the ones that may be involved

* As a surgeon, how do you determine if a patient is okay for surgery if the mass is invading another structure?
- Need to take the anatomy into consideration - are there major blood vessels or nerves there, for instance, which can impact outcome and recovery

* When should we consider induction chemotherapy from a surgeon's perspective?
- Lots of changes coming in this sphere; lots of discrepancy between institutions when there is N2 disease
- In Dr. Yanagawa's opinion, anyone with N2 disease should get neoadjuvant therapy

* If neoadjuvant chemoradiation is given, how does that affect your surgery?
- Radiation increases scar tissue in the lung. Worse, neoadjuvant radiation may make wound healing more difficult; she does not prefer radiation pre-operatively
- Chemotherapy also adds scar tissue

* How does neoadjuvant IO therapy affect scar tissue formation?
- The hilum and lymph nodes are more swollen, but this does not translate to more complications
- She has even seen patients who got IO for another cancer and then developed lung cancer; she can still appreciate swollen nodes!

* How long after surgery is it safe to start adjuvant therapy?
- If the patient has a complication from surgery, would not start right away. It is important to discuss with the surgeon when it is okay to proceed with adjuvant therapy.
- If the patient has a good recovery without complications, okay to start about 4 weeks after
- There is no good guidance yet about when it is safe to start IO after surgery

About our guest: Jane Yanagawa, MD is an Assistant Professor of Thoracic Surgery at the UCLA David Geffen School of Medicine and the UCLA Jonsson Comprehensive Cancer Center. She completed medical school at Baylor College of Medicine, after which she went to UCLA for her surgical residency. She went on to Memorial Sloan Kettering for her Thoracic Surgery Fellowship. In addition to her practice as a thoracic surgeon at UCLA, Dr. Yanagawa also sits on the NCCN NSCLC guidelines committee! We are so grateful she was able to join us despite her very busy schedule!

Please visit our website (TheFellowOnCall.com) for more information
Twitter: @TheFellowOnCall
Instagram: @TheFellowOnCall
Listen in on: Apple Podcast, Spotify, and Google Podcast
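The %FEV1/%DLCO discussion above implies the conventional segment-counting estimate of predicted postoperative lung function. This is a rough sketch of that standard formula, not something quoted verbatim in the episode; the function name, the example values, and the 19-segment convention are assumptions for illustration:

```python
# Hedged sketch: predicted postoperative (ppo) %FEV1 or %DLCO by simple
# segment counting: ppo = preop % x (1 - segments resected / total segments).
# 19 bronchopulmonary segments is the conventional total.

TOTAL_SEGMENTS = 19

def predicted_postop(preop_percent: float, segments_resected: int,
                     total_segments: int = TOTAL_SEGMENTS) -> float:
    """Estimate %FEV1 or %DLCO remaining after resection."""
    remaining_fraction = 1 - segments_resected / total_segments
    return preop_percent * remaining_fraction

# Example: preop FEV1 of 70% predicted, right upper lobectomy (3 segments)
ppo_fev1 = predicted_postop(70, 3)
print(round(ppo_fev1, 1), "-> proceed" if ppo_fev1 > 40 else "-> high risk")
# prints: 58.9 -> proceed
```

This mirrors the >40% threshold mentioned above: a patient starting at 70% predicted clears it comfortably after a lobectomy, while a marginal patient might not after a pneumonectomy.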

Snackable CX
Is a Virtual Queue Worth it?

Snackable CX

Play Episode Listen Later Sep 1, 2022 9:17


When you factor in things like agent payroll, toll fees, tech costs, and contact center upkeep, it's clear that every operating minute significantly impacts a contact center's bottom line.The simple truth is: virtual queueing can reduce these daily operating costs and increase customer satisfaction. But rather than just spout off a bunch of numbers to prove it, I'm going to break down exactly how you can measure this ROI in your contact center and see for yourself. Still hungry?Let us know what you think on Linkedin or by emailing snack@getmindful.com.Hear more at getmindful.com/podcasts. 
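The episode promises a way to measure virtual-queue ROI rather than just quote numbers. A minimal back-of-the-envelope model along those lines might multiply call volume by hold minutes avoided and cost per minute; every number and name below is hypothetical, not a figure from the episode:

```python
# Hedged, illustrative ROI sketch: virtual queueing replaces paid hold
# time (toll fees plus per-minute operating overhead) with a callback,
# so the avoidable monthly cost is roughly volume x hold time x cost/min.

def monthly_savings(calls_per_month: int, avg_hold_minutes: float,
                    cost_per_hold_minute: float) -> float:
    """Estimate the monthly hold-time cost a virtual queue could avoid."""
    return calls_per_month * avg_hold_minutes * cost_per_hold_minute

# Hypothetical contact center: 10,000 queued calls/month, 8 minutes of
# average hold, $0.50/minute in combined toll and operating cost.
print(monthly_savings(10_000, 8.0, 0.50))  # prints: 40000.0
```

Comparing that figure against the virtual-queue platform's monthly cost gives a first-pass ROI; a fuller model would also credit reduced abandonment and improved CSAT.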

Volquest.com
Tennessee Football: The Volquest Mailbag Podcast (8.11.22)

Volquest.com

Play Episode Listen Later Aug 11, 2022 36:16


The VQ crew breaks down scrimmage one of fall camp and the latest in Tennessee recruiting as we are just three weeks away from kickoff.

F3 Omaha - Paradise Island
53: Acceleration Shortcuts - Spacebar

F3 Omaha - Paradise Island

Play Episode Listen Later Jun 19, 2022 38:12


Pony & Plague interview our man Spacebar on this week's episode. Spacebar hung out with many F3 guys at the pool before joining the gloom and quickly found what he was missing. He has continued to take on challenges to become a better man and we talk through his VQ as well as his perspective as a Site-Q of Heavy Metal. He also gets vulnerable about Faith and shares his thoughts on why F3 helps men to impact their community. This man is a true HIM - give it a listen!

Be Empactful
S3-E3. Leadership - Shaping Character, Influencing Actions

Be Empactful

Play Episode Listen Later Apr 19, 2022 23:24


M = D x P. M - Motivation; D - Dreams & Desires; P - Probability of fulfillment of that desire through the job or profession they are in. This might sound like a mathematical equation involving probability, but it is not as scary as it sounds. Welcome to Season 3 of the Be Empactful Podcast. In this episode, we have Mr. Prantik Panigrahi, CEO of Vipran Enterprises. He's India's first 3VQ Facilitator as well as a Licensed Psychogeometrics Facilitator, among the other hats he wears as a Leadership Coach training folks on various aspects of human behavior, leading to growth both personal & professional. 3VQ stands for Three Vital Questions, and Psychogeometrics is used to decode personality traits - all described on www.praantik.in. In our daily conversation, Leadership, Influence, and Convincing are all used synonymously, but their context & etymology are different. Leadership is about shaping the quality of character, both internal and external. Influence is about the unseen flow of power, and convincing is about explaining things to people - and voilà, you are a leader. That's where M = D x P comes in. A leader will tell you "What to THINK?" & not "How to THINK?" 3 Key Takeaways - Leadership is about giving, leading others by heart & leading yourself by the mind (P) - A good leader becomes a very strong brand ambassador for their company (S) - The aura that the leader emits & the kind of personality they imbibe is passed on to the people, fostering a great community (V) For our listeners who want to connect with Prantik Panigrahi, check out: website: www.praantik.in Instagram link: https://instagram.com/praantik_panigrahi?utm_medium=copy_link Facebook page link: https://www.facebook.com/PraantikPanigrahi/ LinkedIn: https://www.linkedin.com/in/praantik-panigrahi-psychogeometrics-3vq So, let us know what you think leadership is all about and how easy or difficult it is to be a leader in the business world. 
Reach out to us if you want to discuss, just share a thought or even a cup of coffee. Join our Facebook Community at: https://facebook.com/beempactful Follow us on Instagram: https://instagram.com/beempactful --- Send in a voice message: https://anchor.fm/be-empactful/message

百车全说丨当相声听的汽车电台
2022 Episode 029: Can't Find a Reason to Buy the Nissan Teana?

百车全说丨当相声听的汽车电台

Play Episode Listen Later Apr 13, 2022 44:01


※ Submissions: 418150505@qq.com ※ This piece is also published on the WeChat official account 百车全说, where it's easier to read - feel free to follow.

The first 20 minutes cover the Nissan Teana; the second half covers a test drive of the Binrui COOL at the Zhejiang circuit.

In the last two episodes I covered the Honda Accord and Toyota Camry, and many listeners guessed the next one would be the Nissan Teana. The Teana actually isn't complicated to talk about, because the mindsets of buyers and non-buyers are easy to read. Buyers think the Teana isn't priced high, carries decent discounts, has plenty of equipment and good space; they trust Japanese brands for reliability, economy and resale value, and with a limited budget they simply take it. Non-buyers either have a roomier budget and can shop the Accord or Camry without strain, or they have more specific demands - features the Teana lacks that rivals offer - or they rule it out on looks alone.

Everyone has heard the saying that the Teana is a big sofa. Applied to the new Teana, it feels a little off: the new car's seats are shorter and firmer, and the 2.0T engine's numbers look class-leading on paper, so will this sofa still be comfortable? It's a funny thing - buyers of other cars want as much power as possible, while Teana owners don't seem to want outstanding performance at all; they chase comfort and fuel economy instead. And that's fine: for every aggressive owner there's a mellow one. Neither is right or wrong - the law doesn't require you to drive with passion, does it?

The Teana has drifted off course, perhaps because Nissan as a brand has sat in its comfort zone too long. Of Nissan's 1-million-plus China sales in 2021, 510,000 were Sylphys - one model accounting for 45.22% of the brand's volume. That is both a blessing and a problem, because the whole brand's fate is staked on one car. Fortunately the Sylphy delivers: even with no platform or powertrain changes, it still ranks first or second in sales. The flip side is that other models seem to matter less. The X-Trail's switch from four cylinders to three caused an uproar, yet Nissan never treated it as a crisis, mainly because its share of sales was so small.

The "big sofa" name is famous, and Nissan really only needs to keep pressing on that point: make the cabin colors warmer and homier instead of going dark and pseudo-sporty, make the seats wide and thick, and fit leg rests, lumbar support, heating, ventilation, massage and memory as standard. That would sharpen the selling point and confirm what everyone already believes: if you want comfort and indulgence, buy a Teana - the Teana is the big sofa. No platform or powertrain updates needed; just roll out new seats every year - wider, narrower, even rear seatback angles tailored to your build. That is taking the "Teana big sofa" to its logical extreme. Then do a co-branded edition with a top global sofa brand for regular visibility - a limited co-branded edition might even sell over sticker, believe it or not.

Positioned that way, the Teana probably wouldn't be where it is today: the 2.0L only moves with discounts of 30,000-plus yuan - the 187,800-yuan Comfort trim is about 150,000 for the car and roughly 180,000 on the road - and customers still pick at its weak 156 hp and 187 N·m. Chevrolet Malibu XL owners even mock Teana buyers as suckers: same 150,000-yuan car price, but a 2.0T whose power crushes the Teana's.

The Teana's plight is genuinely sad. The "big sofa" reputation was actually earned by the Teana Royal, with its 3.5 V6 and 2.5 V6 - big naturally aspirated engines as silky as a knife through tofu, plus compliant suspension and soft seats; that is where the "Nissan big sofa" name came from. But after 2013, when Nissan's VQ-series engines left the stage and the 2.5L became a four-cylinder, the Teana's era had already closed. The Teana Royal, like the Toyota Crown of its day, was a status symbol for the wealthy, poaching customers from BBA (Mercedes, BMW, Audi). I've always thought dropping the Royal name was a pity: the Crown grew into its own model line, while the Royal was the crown on the Teana's head - take it off and the car is just another face in the crowd. But big naturally aspirated engines are a hard sell in this era, young customers are moving to EVs, and nobody knows how many true believers remain - Nissan wasn't going to gamble on that.

When the new Teana launched, it kept promoting how high-tech its 2.0T VC-Turbo was. I introduced this technology back when the Infiniti versions launched, and I questioned it even then: from an engineer's standpoint it is genuinely impressive, but from a customer's standpoint, what does it give me? Better performance, lower fuel consumption, a lower price, or cheaper upkeep? On performance, the Teana 2.0T does 0-100 km/h in 6.9 seconds - hardly different from the Malibu XL 2.0T's 7.0. But the Teana 2.0T costs 200,000-plus yuan on the road, while a base Malibu XL is on the road for about 170,000 - is the extra 40,000 my donation to VC-Turbo? On economy, neither the Teana 2.0L nor the 2.0T can touch the Camry or Accord hybrids, which genuinely return around 5 L/100 km. And on maintenance costs and resale value it has no edge over its Japanese rivals either.

Some customers do like the Teana's styling and want the 2.0T, but hesitate the moment they see the CVT - like building a PC with an i9 processor and 4 GB of RAM. Then they hear online that Nissan's CVT failure rate isn't low and that it goes into cold-weather protection in the frigid north, and they panic further. Nobody buying a Teana expected sportiness anyway: everyone knows the suspension is soft, the steering light, and that it goes wobbly once speed builds. The interior is neither luxurious nor high-tech - a 5-inch instrument cluster and an 8-inch center screen - and the lower and middle trims don't even get L2 driver assistance, which some rivals make standard. So customers who feel that leather seats, power adjustment, cruise control and keyless entry/start are enough conclude that the 2.0L Comfort trim is fine - at least it's worth the price, since its roughly 180,000-yuan on-road cost is lower than the sticker price of an Accord or Camry luxury trim. Then they look back at the Teana 2.0T and sober up: there really is no reason to buy it. So the 2.0T's chronic slow sales are understandable, and the 2.0L Comfort trim selling decently makes sense too.

I don't know whether you feel the same, but the Teana seems to exist not for itself but for the Sylphy. The Sylphy's nickname is now the "little Teana," and Sylphy buyers love hearing it - a 100,000-yuan car driven with a 200,000-yuan feeling. Teana buyers, meanwhile, face the awkward risk of being mistaken for a Sylphy, since Sylphys vastly outnumber Teanas on the street. Matryoshka styling between A-segment and B-segment cars is common - the Accord is a Civic PLUS, the Magotan a Sagitar PLUS - but the Accord has faith in Honda engineering behind it, and the Magotan enjoys VW's accumulated "premium car" halo in China. On brand strength the Teana can't beat Honda or Volkswagen; on product strength it isn't actually bad - it just fails to hit customers where they live, and the redesign misjudged its positioning badly.

The Teana now sits in an awkward spot. Young people want a car with a bit of sportiness, which the Teana can't provide. Old Nissan fans still want the old, middle-of-the-road Teana - the homey "people, car, life" style - which the Teana also can't provide. So the buyers are those who trust the Japanese reputation and have a budget of around 180,000 yuan on the road. Past a 200,000 budget, set the Teana against the Accord, Camry, Passat and Magotan, and there's no selling point left to justify it. Don't you think? If it were you, which of these would you choose?

Add WeChat 46415254 to join our community. Audio and articles are updated on the official account: 百车全说. Each episode we draw three comments, each winning a bottle of "Jiemolv" fuel additive worth 168 yuan. Subscribe for new episodes every Wednesday and Saturday. New listeners can search 百车全说2014, 百车全说2015 and 百车全说2016 for more than 300 hours of past episodes.

行動星球
[行動星球 - Xiao Xu Talks, EP106] The first-generation Infiniti G35's past and present (Part 1): the left-hand-drive sibling of the Nissan Skyline (V35)!

行動星球

Play Episode Listen Later Mar 10, 2022 17:14


Mention the famous Skyline lineage and most enthusiasts immediately think of the GT-R and Initial D - though after the R35 the GT-R officially became its own model line, so we'll set it aside here. The first-generation Infiniti G35 was essentially the first mass-produced left-hand-drive Skyline, offered in both coupe and sedan form and switched to the VQ-series V6 engine; it sold quite well in the US at launch. The G35's achievements are saved for the next episode. This episode covers how the regular Skyline evolved from the R32, R33 and R34 through to the V35 and G35 - how did the Skyline adapt under the pressures of its era, and how did the first-generation G35 appear out of nowhere?

The Vagine Queens
The VQ's #23: “I might have broken my vagina”, Whiskey Dick, and the “female version” of ED

The Vagine Queens

Play Episode Listen Later Feb 25, 2022 22:11


Back like we never left! This episode, The VQ's jump right into the female version of erectile dysfunction: vaginal dryness. Learn about why it happens, and what to do about it. Hear the story of how one of us went too hard with the vibrator and ended up in the doctor's office! Yikes. Whiskey Dick - it's a thing; listen in as it's explained. What is estrogen cream and how does it help in menopause? As usual, come back for more fun next time. Until then, have a Happy and Healthy Vagina! --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/thevaginequeens/support

The Podium
Debunking Bonking - It's More than Fueling (with Robbie Ventura)

The Podium

Play Episode Listen Later Oct 18, 2021 75:35


I sat down with Robbie Ventura to discuss what happens when you “bonk”, the complicated variables that impact it, and how you can prevent it.  This episode was recorded for The Velocity Cycling Podcast, part of the educational component of Robbie's newest venture, VQ Velocity.Robbie Ventura is the founder and owner of Vision Quest Coaching.  Robbie was a professional cyclist for 12 years. A competitive racer on the dirt, road and track since the age of 7, Ventura amassed over 70 victories during his professional career, was a member of the U.S. World Team and rode the Track World Championships in Hamar, Norway, placing 5th in the elimination.  In 2000, Robbie started Vision Quest Coaching, a uniquely positioned program for endurance athletes focusing on high quality coaching delivered with the maximum benefits of the latest technology and training methodologies. The success of his athletes has helped VQ grow to five locations and, since opening, has helped thousands of athletes reach their potential in competitive athletic events. Robbie Ventura on InstagramVQ Velocity WebsiteVision Quest Coaching WebsiteSubscribe: Apple Podcast |  Spotify| Velocity Cycling PodcastCheck us out at: Podium Sports Medicine Website | Instagram

Anaesthesia Coffee Break
Gravitational effects on the lung, West's zones, and a performance tip for the viva exam!

Anaesthesia Coffee Break

Play Episode Listen Later Oct 10, 2021 30:36


In this episode we delve into gravitational effects on the lung, including ventilation aspects, pO2, pCO2, V/Q matching and its effects, and West's zones. The diagrams are from Respiratory Physiology by John B. West.

Please support us on our Patreon: https://www.patreon.com/anaesthesia
All proceeds will go to Fund a Fellow to help train anaesthetists in developing countries, whilst acknowledging the work it takes to keep creating this educational resource.

If you enjoyed this content please like and subscribe. Please post any comments or questions below. Check out www.anaesthesiacollective.com and sign up to the ABCs of Anaesthesia Facebook group for other content. Any questions, please email lahiruandstan@gmail.com

Disclaimer: The information contained in this video/audio/graphic is for medical practitioner education only. It is not and will not be relevant for the general public. Where applicable, patients have given written informed consent to the use of their images in video/photography and are aware that it will be published online and visible by medical practitioners and the general public. This contains general information about medical conditions and treatments. The information is not advice and should not be treated as such. The medical information is provided "as is" without any representations or warranties, express or implied. The presenter makes no representations or warranties in relation to the medical information in this video. You must not rely on the information as an alternative to assessing and managing your patient with your treating team and consultant. You should seek your own advice from your medical practitioner in relation to any of the topics discussed in this episode. Medical information can change rapidly, and the authors make all reasonable attempts to provide accurate information at the time of filming. There is no guarantee that the information will be accurate at the time of viewing. The information provided is within the scope of a specialist anaesthetist (FANZCA) working in Australia. The information presented here does not represent the views of any hospital or ANZCA. These videos are solely for the training and education of medical practitioners, and are not an advertisement. They were not sponsored and offer no discounts, gifts or other inducements. This disclaimer was created based on a Contractology template available at http://www.contractology.com.

Rock N Roll Pantheon
Make It Stop: Vanilla Ice - Mind Blowin' (w/ VQ of BLOWW and Izzy Da Rosa)

Rock N Roll Pantheon

Play Episode Listen Later Feb 4, 2021 124:07


It's Black History Month, which means it's the perfect time for another round of Eviscerating White Nonsense with returning white rapper roundup participant VQ of BLOWW and Boston comedian Izzy Da Rosa. The object of our ire this time is, of course, the ultimate Great White Embarrassment, Vanilla Ice, and his 1992 follow-up to his megahit debut album, the incredibly misinformed and extremely poorly executed "Mind Blowin'". Featuring profoundly sad attempts to flex his 'gangsta rap' chops, references to smoking bales of 'hootie mac', and painfully corny sex tomes that dry up our collective vaginas quicker than old roller rink pizza under a heat lamp, this is a truly awful album. However, modern-day attempts to contextualize the infamous Ice invite us to consider: was Robert Matthew Van Winkle really all that bad, after all? Spoiler alert: yes, yes he was. Part of the Pantheon Podcast Network.

Make it Stop: A Bad Music Podcast
Vanilla Ice - Mind Blowin' (w/ VQ of BLOWW and Izzy Da Rosa)

Make it Stop: A Bad Music Podcast

Play Episode Listen Later Feb 2, 2021 124:07


It's Black History Month, which means it's the perfect time for another round of Eviscerating White Nonsense with returning white rapper roundup participant VQ of BLOWW and Boston comedian Izzy Da Rosa. The object of our ire this time is, of course, the ultimate Great White Embarrassment, Vanilla Ice, and his 1992 follow-up to his megahit debut album, the incredibly misinformed and extremely poorly executed "Mind Blowin'". Featuring profoundly sad attempts to flex his 'gangsta rap' chops, references to smoking bales of 'hootie mac', and painfully corny sex tomes that dry up our collective vaginas quicker than old roller rink pizza under a heat lamp, this is a truly awful album. However, modern-day attempts to contextualize the infamous Ice invite us to consider: was Robert Matthew Van Winkle really all that bad, after all? Spoiler alert: yes, yes he was. Part of the Pantheon Podcast Network.

SlipAngle powered by MotoIQ
Kevin Parlett from VP Racing Fuel

SlipAngle powered by MotoIQ

Play Episode Listen Later Sep 25, 2020 28:39


Episode 361 - Kevin's been around time attack for a long time, and he's working for VP Racing Fuel now. They're going to be bringing some of their fancy fuel to upcoming #GRIDLIFE events, and he's getting his G35 back into racing shape. This time around, he's ditching the turbo VQ for an LS3. --- Send in a voice message: https://anchor.fm/slipangle-show/message Support this podcast: https://anchor.fm/slipangle-show/support

Rizzy And Jet Podcasts
EP #5 Tuning with AdminTuning!

Rizzy And Jet Podcasts

Play Episode Listen Later Aug 28, 2020 61:16


Moncef, owner of AdminTuning, talks about tuning and the best VQ builds! --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app

Rock N Roll Pantheon
Make it Stop: Liz Phair - Self Titled with VQ

Rock N Roll Pantheon

Play Episode Listen Later Jun 3, 2020 110:21


Oh, Liz Phair. An indie darling and feminist rock icon who DIY-ed herself into underground superstardom with her classic 1993 album "Exile in Guyville", Phair had gone through some ups and downs, musically and personally, after her breakout album, putting out a few quietly mediocre albums in the late 90s and getting married, having a child, and divorcing. So why did, at the age of 36, Liz "Fuck and Run" Phair team up with bubblegum pop wizards The Matrix, known for their songcrafting for teen pop ingenues Avril Lavigne and Britney Spears, to write aggressively bland midtempo cougar pablum? No one knows for sure, and certainly nobody wanted it: not her loyal fans, not the young girls the Matrixified monstrosities on the album were seemingly marketed to, and certainly not rock critics, who famously derided the album, with Pitchfork giving it a legendary 0.0 rating. Liz Phair's 2003 self-titled album is a mainstay on numerous Worst Albums of All Time lists. But is it really all that terrible? Mostly, yes. Parsing through the poor decision-making is returning guest, BLOWW wrestler VQ, who once again is asked to analyze the motivations of self-absorbed white women. Give us your Hot White Cum, stoppies. Part of the Pantheon Podcast Network.

Make it Stop: A Bad Music Podcast
Liz Phair - Self Titled (w/ VQ)

Make it Stop: A Bad Music Podcast

Play Episode Listen Later May 26, 2020 110:21


Oh, Liz Phair. An indie darling and feminist rock icon who DIY-ed herself into underground superstardom with her classic 1993 album "Exile in Guyville", Phair had gone through some ups and downs, musically and personally, after her breakout album, putting out a few quietly mediocre albums in the late 90s and getting married, having a child, and divorcing. So why did, at the age of 36, Liz "Fuck and Run" Phair team up with bubblegum pop wizards The Matrix, known for their songcrafting for teen pop ingenues Avril Lavigne and Britney Spears, to write aggressively bland midtempo cougar pablum? No one knows for sure, and certainly nobody wanted it: not her loyal fans, not the young girls the Matrixified monstrosities on the album were seemingly marketed to, and certainly not rock critics, who famously derided the album, with Pitchfork giving it a legendary 0.0 rating. Liz Phair's 2003 self-titled album is a mainstay on numerous Worst Albums of All Time lists. But is it really all that terrible? Mostly, yes. Parsing through the poor decision-making is returning guest, BLOWW wrestler VQ, who once again is asked to analyze the motivations of self-absorbed white women. Give us your Hot White Cum, stoppies. Part of the Pantheon Podcast Network.