POPULARITY
First up in the news: New GIMP, Debian comes to a RISC-V tablet, Google explains why they are putting Terminal on Android, Asahi Linux loses another top dev, Plex goes for the gold – yours, meet EU OS, Kernel 6.14 is released, GNOME 48 is released, new GRUB updates, AerynOS is released with GNOME 48. In security and privacy: “MyTerms” wants to let the user dictate privacy. Then in our Wanderings: Moss plays Musical Tablets, Joe Moxes the Prox, Dale has a burpday, Majid is on holiday, and Bill is off truckin' somewhere... In our Innards section: Dale takes us through Mobile Networks. In Bodhi Corner, Moss covers new translations and work on the next version.
150 hours of community service for stabbing a teenager. She sees her boss get shot in the head. A pimp records his rap music video in prison. Crime news with Maxime Deland, journalist with the QMI agency.
Duration: 00:54:16 - Les Nuits de France Culture - by Marc Floriot - In 1974, the Brigade de Répression du Proxénétisme (BRP), tasked with fighting pimping and every form of exploitation of human beings, succeeded the old vice squad (brigade mondaine). In 2003, soliciting, even passive soliciting, became illegal. It is ticketed not by the BRP but by the municipal teams in charge of public roadways. - Directed by Virginie Mourthé
Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all all our LS supporters who helped fund the gorgeous venue and A/V production!For NeurIPS last year we did our standard conference podcast coverage interviewing selected papers (that we have now also done for ICLR and ICML), however we felt that we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap 2024 year in review from experts. As a result, we organized the first Latent Space LIVE!, our first in person miniconference, at NeurIPS 2024 in Vancouver. Today, we're proud to share Loubna's highly anticipated talk (slides here)!Synthetic DataWe called out the Synthetic Data debate at last year's NeurIPS, and no surprise that 2024 was dominated by the rise of synthetic data everywhere:* Apple's Rephrasing the Web, Microsoft's Phi 2-4 and Orca/AgentInstruct, Tencent's Billion Persona dataset, DCLM, and HuggingFace's FineWeb-Edu, and Loubna's own Cosmopedia extended the ideas of synthetic textbook and agent generation to improve raw web scrape dataset quality* This year we also talked to the IDEFICS/OBELICS team at HuggingFace who released WebSight this year, the first work on code-vs-images synthetic data.* We called Llama 3.1 the Synthetic Data Model for its extensive use (and documentation!) of synthetic data in its pipeline, as well as its permissive license. * Nemotron CC and Nemotron-4-340B also made a big splash this year for how they used 20k items of human data to synthesize over 98% of the data used for SFT/PFT.* Cohere introduced Multilingual Arbitrage: Optimizing Data Pools to Accelerate Multilingual Progress observing gains of up to 56.5% improvement in win rates comparing multiple teachers vs the single best teacher model* In post training, AI2's Tülu3 (discussed by Luca in our Open Models talk) and Loubna's Smol Talk were also notable open releases this year.This comes in the face of a lot of scrutiny and criticism, with Scale AI as one of the leading voices publishing AI models collapse when trained on recursively generated data in Nature magazine bringing mainstream concerns to the potential downsides of poor quality syndata:Part of the concerns we highlighted last year on low-background tokens are coming to bear: ChatGPT contaminated data is spiking in every possible metric:But perhaps, if Sakana's AI Scientist pans out this year, we will have mostly-AI AI researchers publishing AI research anyway so do we really care as long as the ideas can be verified to be correct?Smol ModelsMeta surprised many folks this year by not just aggressively updating Llama 3 and adding multimodality, but also adding a new series of “small” 1B and 3B “on device” models this year, even working on quantized numerics collaborations with Qualcomm, Mediatek, and Arm. It is near unbelievable that a 1B model today can qualitatively match a 13B model of last year:and the minimum size to hit a given MMLU bar has come down roughly 10x in the last year. 
We have been tracking this proxied by LMSYS Elo and inference price: The key reads this year are: * MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases * Apple Intelligence Foundation Language Models * Hymba: A Hybrid-head Architecture for Small Language Models * Loubna's SmolLM and SmolLM2: a family of state-of-the-art small models with 135M, 360M, and 1.7B parameters on the Pareto efficiency frontier * and Moondream, which we already covered in the 2024 in Vision talk. Full Talk on YouTube — please like and subscribe! Timestamps: * [00:00:05] Loubna Intro * [00:00:33] The Rise of Synthetic Data Everywhere * [00:02:57] Model Collapse * [00:05:14] Phi, FineWeb, Cosmopedia - Synthetic Textbooks * [00:12:36] DCLM, Nemotron-CC * [00:13:28] Post Training - AI2 Tulu, Smol Talk, Cohere Multilingual Arbitrage * [00:16:17] Smol Models * [00:18:24] On Device Models * [00:22:45] Smol Vision Models * [00:25:14] What's Next. Transcript: 2024 in Synthetic Data and Smol Models [00:00:00] [00:00:05] Loubna Intro [00:00:05] Speaker: I'm very happy to be here. Thank you for the invitation. So I'm going to be talking about synthetic data in 2024. And then I'm going to be talking about small on-device models. So I think the most interesting thing about synthetic data this year is that, like, now we have it everywhere in the large language models pipeline.[00:00:33] The Rise of Synthetic Data Everywhere[00:00:33] Speaker: I think initially, synthetic data was mainly used just for post-training, because naturally that's the part where we needed human annotators. And then after that, we realized that we don't really have good benchmarks to [00:01:00] measure if models follow instructions well, if they are creative enough, or if they are chatty enough, so we also started using LLMs as judges.[00:01:08] Speaker: Thank you. And I think this year and towards the end of last year, we also went to the pre-training part and we started generating synthetic data for pre-training to kind of replace some parts of the web. And the motivation behind that is that you have a lot of control over synthetic data. You can control your prompt and basically also the kind of data that you generate.[00:01:28] Speaker: So instead of just trying to filter the web, you could try to get the LLM to generate what you think the best web pages could look like and then train your models on that. So this is how we went from not having synthetic data at all in the LLM pipeline to having it everywhere. And so the cool thing is, like, today you can train an LLM with, like, an entirely synthetic pipeline.[00:01:49] Speaker: For example, you can use our Cosmopedia datasets and you can train a 1B model on, like, 150 billion tokens that are 100 percent synthetic. And those are also of good quality. And then you can [00:02:00] instruction-tune the model on a synthetic SFT dataset. You can also do DPO on a synthetic dataset. And then to evaluate if the model is good, you can use a benchmark that uses LLMs as a judge, for example MT-Bench or AlpacaEval. So I think this is, like, really mind-blowing, because just a few years ago we wouldn't think this is possible. And I think there's a lot of concerns about model collapse, and I'm going to talk about that later. But we'll see that, like, if we use synthetic data properly and we curate it carefully, that shouldn't happen.[00:02:29] Speaker: And the reason synthetic data is very popular right now is that we have really strong models, both open and closed.
It is really cheap and fast to use compared to human annotations, which cost a lot and take a lot of time. And also for open models right now, we have some really good inference frameworks.[00:02:47] Speaker: So if you have enough GPUs, it's really easy to spawn these GPUs and generate, like, a lot of synthetic data. Some examples are vLLM, TGI, and TensorRT-LLM.[00:02:57] Model Collapse[00:02:57] Speaker: Now let's talk about the elephant in the room, model [00:03:00] collapse. Is this the end? If you look at the media and, for example, some papers in Nature, it's really scary because there's a lot of synthetic data out there on the web.[00:03:09] Speaker: And naturally we train on the web. So we're going to be training on a lot of synthetic data. And if model collapse is going to happen, we should really try to take that seriously. And the other issue is that, as I said, a lot of people think the web is polluted because there's a lot of synthetic data.[00:03:24] Speaker: And for example, when we were building the FineWeb datasets here with Guilherme and Hynek, we were interested in, like, how much synthetic data is there in the web? There isn't really a method to properly measure the amount of synthetic data, or to say whether a webpage is synthetic or not. But one thing we can do is to try to look for, like, proxy words, for example expressions like "as a large language model" or words like "delve" that we know are actually generated by ChatGPT.[00:03:49] Speaker: We could try to measure the amount of these words in our datasets and compare them to the previous years. For example, here, we measured these words' ratio in different dumps of Common Crawl. [00:04:00] And we can see that the ratio really increased after ChatGPT's release. So if we were to say that the amount of synthetic data didn't change, you would expect this ratio to stay constant, which is not the case.[00:04:11] Speaker: So there's a lot of synthetic data probably on the web, but does this really make models worse? So what we did is we trained different models on these different dumps. And we then computed their performance on popular, like, NLP benchmarks, and then we computed the aggregated score. And surprisingly, you can see that the latest dumps are actually even better than the dumps that came before.[00:04:31] Speaker: So if there's some synthetic data there, at least it did not make the models worse. Yeah, which is really encouraging. So personally, I wouldn't say the web is poisoned by synthetic data. Maybe it's even making it richer. And the issue with, like, model collapse is that, for example, those studies were done at a small scale, and you would ask the model to complete, for example, a Wikipedia paragraph, and then you would train it on these new generations, and you would do that iteratively.[00:04:56] Speaker: I think if you do that approach, it's normal to [00:05:00] observe this kind of behavior, because the quality is going to be worse because the model is already small. And then if you train it just on its generations, you shouldn't expect it to become better. But what we're really doing here is that we take a model that is very large and we try to distill its knowledge into a model that is smaller.[00:05:14] Phi, FineWeb, Cosmopedia - Synthetic Textbooks[00:05:14] Speaker: And in this way, you can expect to get, like, a better performance for your small model. And using synthetic data for pre-training has become really popular.
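An editorial aside on the proxy-word measurement Loubna describes just above (counting ChatGPT-signature phrases such as "delve" or "as a large language model" across Common Crawl dumps): it is easy to sketch. This is an illustrative reconstruction, not the FineWeb team's actual code; the term list, the one-text-file-per-dump layout, and the per-million normalization are all assumptions.

```python
import re
from pathlib import Path

# Phrases disproportionately common in ChatGPT-style text. Only "delve" and
# "as a large language model" are named in the talk; the rest is illustrative.
PROXY_TERMS = ["delve", "as a large language model", "it is important to note"]

def proxy_ratio(text: str) -> float:
    """Proxy-term hits per million words in one dump's text."""
    lowered = text.lower()
    n_words = len(re.findall(r"[a-z']+", lowered))
    if n_words == 0:
        return 0.0
    hits = sum(lowered.count(term) for term in PROXY_TERMS)
    return 1e6 * hits / n_words

def ratio_by_dump(dump_dir: str) -> dict:
    """Assumes one plain-text file per Common Crawl dump, e.g. CC-MAIN-2023-14.txt."""
    return {
        path.stem: proxy_ratio(path.read_text(errors="ignore"))
        for path in sorted(Path(dump_dir).glob("*.txt"))
    }

if __name__ == "__main__":
    for dump, ratio in ratio_by_dump("dumps/").items():
        print(f"{dump}: {ratio:.1f} proxy hits per million words")
```

Plotting these ratios by dump date is what produces the post-ChatGPT spike described in the talk.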
It started with the "Textbooks Are All You Need" papers, where Microsoft basically trained a series of small models on textbooks that were generated using a large LLM.[00:05:32] Speaker: And then they found that these models were actually better than models that are much larger. So this was really interesting. It was, like, the first of its kind, but it was also met with a lot of skepticism, which is a good thing in research. It pushes you to question things, because the dataset that they trained on was not public, so people were not really sure if these models are really good or maybe there's just some data contamination.[00:05:55] Speaker: So it was really hard to check if you just have the weights of the models. [00:06:00] And at Hugging Face, because we like open source, we tried to reproduce what they did. So this is our Cosmopedia dataset. We basically tried to follow a similar approach to what they documented in the paper. And we created a synthetic dataset of textbooks and blog posts and stories that had almost 30 billion tokens.[00:06:16] Speaker: And we tried to train some models on that. And we found that, like, the key ingredient to getting a good synthetic dataset is trying as much as possible to keep it diverse. Because if you just throw the same prompt at your model, like "generate a textbook about linear algebra", even if you change the temperature, the textbooks are going to look alike.[00:06:35] Speaker: So there's no way you could scale to, like, millions of samples. And the way you do that is by creating prompts that have some seeds that make them diverse. In our case, in the prompt we would ask the model to generate a textbook, but make it related to an extract from a webpage. And we also try to frame it to stay within topic.[00:06:55] Speaker: For example, here, we put an extract about cardiovascular bioimaging, [00:07:00] and then we ask the model to generate a textbook related to medicine that is also related to this webpage. And this is a really nice approach because there are so many webpages out there. So you can be sure that your generation is going to be diverse when you change the seed example.[00:07:16] Speaker: One thing that's challenging with this is that you want the seed samples to be related to your topics. So we used, like, a search tool to go through all of the FineWeb dataset. And then we also did a lot of experiments with the type of generations we want the model to produce. For example, we ask it for textbooks for middle school students or textbooks for college.[00:07:40] Speaker: And we found that some generation styles help on some specific benchmarks, while others help on other benchmarks. For example, college textbooks are really good for MMLU, while middle school textbooks are good for benchmarks like OpenBookQA and PIQA. This is, like, a sample from our search tool.[00:07:56] Speaker: For example, you have a top category, which is a topic, and then you have some [00:08:00] subtopics, and then you have the topic hits, which are basically the web pages in FineWeb that belong to these topics. And here you can see the comparison between Cosmopedia, we had two versions, V1 and V2, in blue and red, and you can see the comparison to FineWeb, and as you can see, throughout the training, training on Cosmopedia was consistently better.[00:08:20] Speaker: So we managed to get a dataset that was actually good to train these models on.
It's of course so much smaller than FineWeb, it's only 30 billion tokens, but that's the scale that Microsoft's datasets were, so we kind of managed to reproduce a bit of what they did. And the dataset is public, so everyone can go there and check if everything is all right.[00:08:38] Speaker: And now this is a recent paper from NVIDIA, Nemotron-CC. They took things a bit further, and they generated not a few billion tokens, but 1.9 trillion tokens, which is huge. And we can see later how they did that. It's more of, like, rephrasing the web. So we can see today that there are, like, some really huge synthetic datasets out there, and they're public, so, [00:09:00] like, you can try to filter them even further if you want to get, like, higher-quality corpora.[00:09:04] Speaker: So this rephrasing-the-web approach was suggested in this paper by Pratyush, where basically they take some samples from the C4 dataset, and then they use an LLM to rewrite these samples into a better format. For example, they ask an LLM to rewrite the sample into a Wikipedia passage or into a Q&A page.[00:09:25] Speaker: And the interesting thing in this approach is that you can use a model that is small, because rewriting doesn't require knowledge. It's just rewriting a page into a different style. So the model doesn't need to have, like, extensive knowledge of what it is rewriting, compared to just asking a model to generate a new textbook without giving it, like, ground truth.[00:09:45] Speaker: So here they rewrite some samples from C4 into Q&A, into Wikipedia, and they find that doing this works better than training just on C4. And so what they did in Nemotron-CC is a similar approach. [00:10:00] They rewrite some pages from Common Crawl for two reasons. One is to, like, improve pages that are low quality, so they rewrite them into, for example, a Wikipedia page, so they look better.[00:10:11] Speaker: And another reason is to create more diverse datasets. So they have a dataset that they already heavily filtered, and then they take these pages that are already high quality, and they ask the model to rewrite them in question-and-answer format, into, like, open-ended questions or, like, multiple-choice questions.[00:10:27] Speaker: So this way they can reuse the same page multiple times without fearing, like, having multiple duplicates, because it's the same information, but it's going to be written differently. So I think that's also a really interesting approach for, like, generating synthetic data just by rephrasing the pages that you already have.[00:10:44] Speaker: There's also this approach called ProX, where they try to start from a web page and then they generate a program which finds how to rewrite that page to make it better and less noisy. For example, here you can see that there's some leftover metadata in the web page and you don't necessarily want to keep that for training [00:11:00] your model.[00:11:00] Speaker: So they train a model that can generate programs that can, like, normalize and remove lines that are extra. So I think this approach is also interesting, but it's maybe less scalable than the approaches that I presented before. So that was it for, like, rephrasing and generating new textbooks.[00:11:17] Speaker: Another approach that I think is really good and becoming really popular for using synthetic data for pre-training is basically building better classifiers for filtering the web. For example, here we released the dataset called FineWeb-Edu.
And the way we built it is by taking Llama 3 and asking it to rate the educational content of web pages from zero to five.[00:11:39] Speaker: So for example, if a page is, like, a really good textbook that could be useful in a school setting, it would get a really high score. And if a page is just, like, an advertisement or promotional material, it would get a lower score. And then after that, we take these synthetic annotations and we train a classifier on them.[00:11:57] Speaker: It's a classifier like a BERT model. [00:12:00] And then we run this classifier on all of FineWeb, which is a 15 trillion token dataset. And then we only keep the pages that have, like, a score that's higher than 3. So for example, in our case, we went from 15 trillion tokens to just 1.5 trillion tokens. Those are really highly educational.[00:12:16] Speaker: And as you can see here, FineWeb-Edu outperforms all the other public web datasets by a large margin on a couple of benchmarks. Here I show the aggregated score, and you can see that this approach is really effective for filtering web datasets to get, like, better corpora for training your LLMs.[00:12:36] DCLM, Nemotron-CC[00:12:36] Speaker: Others also tried this approach. There's, for example, the DCLM dataset, where they also train a classifier, but not to detect educational content. Instead, they trained it on the OpenHermes dataset, which is a dataset for instruction tuning, and also on the ELI5 subreddit, and then they also get a really high-quality dataset which is, like, very information-dense and can help [00:13:00] you train some really good LLMs.[00:13:01] Speaker: And then Nemotron-CC, they also did this approach, but instead of using one classifier, they used an ensemble of classifiers. So they used, for example, the DCLM classifier, and also classifiers like the ones we used in FineWeb-Edu, and then they combined these scores with an ensemble method to only retain the best high-quality pages, and they get a dataset that works even better than the ones we developed.[00:13:25] Speaker: So that was it for, like, synthetic data for pre-training.[00:13:28] Post Training - AI2 Tulu, Smol Talk, Cohere Multilingual Arbitrage[00:13:28] Speaker: Now we can go back to post-training. I think there are a lot of interesting post-training datasets out there. One that was released recently is AgentInstruct by Microsoft, where they basically try to target some specific skills and improve the performance of models on them.[00:13:43] Speaker: For example, here, you can see code, brain teasers, open-domain QA, and they managed to get a dataset such that, when fine-tuning Mistral 7B on it, it outperforms the original instruct model that was released by Mistral. And as I said, to get good synthetic data, you really [00:14:00] have to have a framework to make sure that your data is diverse.[00:14:03] Speaker: So for example, for them, they always seed the generations on either source code or raw text documents, and then they rewrite them to make sure they're easier to generate instructions from, and then they use that for their, like, instruction data generation. There's also the Tülu 3 SFT mixture, which was released recently by Allen AI.[00:14:23] Speaker: It's also really good quality and it covers a wide range of tasks. And the way they make sure that this dataset is diverse is by using personas from the PersonaHub dataset, which is basically a dataset of, like, I think over a million personas.
And for example, in the Tülu mixture, to generate, like, a new code snippet, they would give the model a persona, for example a machine learning researcher interested in neural networks, and then ask it to generate, like, a coding problem.[00:14:49] Speaker: This way you make sure that your dataset is really diverse, and then you can further filter the datasets, for example, using reward models. We also released a dataset called Smol Talk, [00:15:00] and we also tried to cover a wide range of tasks, and as you can see here, for example, when fine-tuning Mistral 7B on the dataset, we also outperformed the original Mistral instruct on a number of benchmarks, notably on mathematics and instruction following with IFEval.[00:15:18] Speaker: Another paper that's really interesting I wanted to mention is this one called Multilingual Data Arbitrage by Cohere. And basically they want to generate a dataset for post-training that is multilingual. And they have a really interesting problem. It's the fact that there isn't, like, one model that's really good at all the languages they wanted.[00:15:36] Speaker: So what they do is that, like, they use not just one teacher model, but multiple teachers. And then they have a router which basically sends the prompts they have to all these models. And then they get the completions and they have a reward model that rates all these generations and only keeps the best one.[00:15:52] Speaker: And this is, like, arbitrage in finance. So, well, I think what's interesting in this is it shows that, like, synthetic data doesn't have to come from a single model. [00:16:00] And because we have so many good models now, you could, like, pool these models together and get a dataset that's really high quality and that's diverse and that covers all your needs.[00:16:12] Speaker: I was supposed to put a meme there, but... Yeah, so that was it for, like, synthetic data.[00:16:17] Smol Models[00:16:17] Speaker: Now we can go see what's happening in the small models field in 2024. I don't know if you know, but, like, now we have some really good small models. For example, Llama 3.2 1B matches Llama 2 13B, which was released last year, on the LMSYS Arena, which is basically the default go-to leaderboard for evaluating models using human evaluation.[00:16:39] Speaker: And as you can see here, the scores of the models are really close. So I think we've made, like, huge leaps forward in terms of small models. Of course, that's just one data point, but there's more. For example, if you look at this chart from the Qwen 2.5 blog post, it shows that today we have some really good models that are only, like, 3 billion parameters [00:17:00] and 4 billion that score really high on MMLU.[00:17:03] Speaker: Which is a really popular benchmark for evaluating models. And you can see here that the blue dots have more than 65 on MMLU, and the grey ones have less. And for example, Llama 33B had less. So now we have a 3B model that outperforms a 33B model that was released earlier. So I think now people are starting to realize that, like, we shouldn't just scale and scale models, but we should try to make them more efficient.[00:17:33] Speaker: I don't know if you knew, but you can also chat with a 3B-plus model on your iPhone. For example, here, this is an app called PocketPal, where you can go and select a model from Hugging Face. It has a large choice. For example, here we loaded Phi-3.5, which is 3.8 billion parameters, on this iPhone.
And we can chat with this, and you can see that even the latency is acceptable.[00:17:57] Speaker: For example, here, I asked it to give me a joke about [00:18:00] NeurIPS. So let's see what it has to say.[00:18:06] Speaker: Okay, why did the neural network attend NeurIPS? Because it heard there would be a lot of layers and fun and it wanted to train its sense of humor. So not very funny, but at least it can run on device. Yeah, so I think now we have good small models, but we also have, like, good frameworks and tools to use these small models.[00:18:24] On Device Models[00:18:24] Speaker: So I think we're really close to having, like, really good on-edge and on-device models. And I think for a while we've had this narrative that just training larger models is better. Of course, this is supported by scaling laws. As you can see here, for example, when we scale the model size, the loss is lower and obviously you get a better model.[00:18:46] Speaker: And we can see this, for example, in the GPT family of models, how we went from just a hundred million parameters to more than a trillion parameters. And of course, we all observed the performance improvement when using the latest model. But [00:19:00] one thing that we shouldn't forget is that when we scale the model, we also scale the inference costs and time.[00:19:05] Speaker: And so the largest models are going to cost so much more. So I think now, instead of just building larger models, we should be focusing on building more efficient models. It's no longer a race for the largest models, since these models are really expensive to run and they require, like, a really good infrastructure to do that, and they cannot run on, for example, consumer hardware.[00:19:27] Speaker: And when you try to build more efficient models that match larger models, that's when you can really unlock some really interesting on-device use cases. And I think a trend that we're noticing now is the trend of training smaller models longer. For example, if you compare how long the original Llama was trained compared to Llama 3, there is a huge increase in the pre-training length.[00:19:50] Speaker: The original Llama was trained on 1 trillion tokens, but Llama 3 8B was trained on 15 trillion tokens. So Meta managed to get a model that's the same size, but [00:20:00] it performs so much better, by choosing to, like, make that sacrifice during training, because as we know, training is a one-time cost, but inference is something that's ongoing.[00:20:08] Speaker: If we want to see what the key small-model reads are in 2024, I think this MobileLLM paper by Meta is interesting. They study models that have less than 1 billion parameters and try to find which architecture makes the most sense for these models. For example, they find that depth is more important than width.[00:20:29] Speaker: So it's more important to have models that have, like, more layers than to make them wider. They also find that GQA helps, and that tying the embeddings helps. So I think it's a nice study overall for models that are just a few hundred million parameters. There's also the Apple Intelligence tech report, which is interesting.[00:20:48] Speaker: So for Apple Intelligence, they had two models, one that was, like, on server and another model that was on device. It had 3 billion parameters. And I think the interesting part is that they trained this model using [00:21:00] pruning and then distillation.
And for example, they have this table where they show that, like, using pruning and distillation works much better than training from scratch.[00:21:08] Speaker: And they also have some interesting insights about, like, how they specialize their models on specific tasks, like, for example, summarization and rewriting. There's also this paper by NVIDIA that was released recently. I think you've already had a talk about, like, hybrid models, that was also interesting.[00:21:23] Speaker: And in this model, they used, like, a hybrid architecture between state space models and transformers. And they managed to train a 1B model that's really performant without needing to train it on a lot of tokens. And regarding our work, we just recently released SmolLM2, so it's a series of three models, which are the best in class in each model size.[00:21:46] Speaker: For example, our 1.7B model outperforms Llama 1B and also Qwen 2.5. And how we managed to train this model is the following. We spent a lot of time trying to curate the pre-training datasets. We did a lot of [00:22:00] ablations, trying to find which datasets are good and also how to mix them. We also created some new math and code datasets that we're releasing soon.[00:22:08] Speaker: But we basically really spent a lot of time trying to find what's the best mixture that you can train these models on. And then we also trained these models for very long. For example, the first SmolLM was trained on only 1 trillion tokens, but this model is trained on 11 trillion tokens.[00:22:24] Speaker: And we saw that the performance kept improving. The models didn't really plateau mid-training, which I think is really interesting. It shows that you can train such small models for very long and keep getting performance gains. What's interesting about SmolLM2 is that it's fully open. We also released, like, the pre-training code base, the fine-tuning code, the datasets, and also evaluation in this repository.[00:22:45] Smol Vision Models[00:22:45] Speaker: Also there are, like, really interesting small models not just for text, but also for vision. For example, here you can see SmolVLM, which is a 2B model that's really efficient. It doesn't consume a lot of RAM, and it also has good performance. There's also Moondream 0.5B, which was released recently. It's, like, the smallest visual language model.[00:23:04] Speaker: And as you can see, there isn't, like, a big trade-off compared to Moondream 2B. So now I've shown you that we have some really good small models. We also have the tools to use them, but why should you consider using small models, and when? I think, like, small models are really interesting because of the on-device feature.[00:23:23] Speaker: Because these models are small and they can run fast, you can basically run them on your laptop, but also on your mobile phone. And this means that your dataset stays local. You don't have to send your queries to third parties. And this really enhances privacy. That was, for example, one of the big selling points for Apple Intelligence.[00:23:42] Speaker: Also, right now we have a lot of frameworks to do on-device inference. For example, there's MLX, MLC, llama.cpp, Transformers.js. So we have a lot of options, and each of them has, like, great features. So you have so many options for doing that.
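Since the talk keeps pointing at how easy local inference with small models has become, here is a minimal sketch of chatting with a small instruct model through the Hugging Face transformers library. The model ID is assumed from the SmolLM2 release and the generation settings are arbitrary; MLX, MLC, llama.cpp, and Transformers.js support the same workflow through their own APIs.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model ID from the SmolLM2 release; any small instruct model works here.
MODEL_ID = "HuggingFaceTB/SmolLM2-1.7B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)  # add device_map="auto" on a GPU box

messages = [{"role": "user", "content": "Tell me a joke about NeurIPS."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Swapping in another small model is just a matter of changing MODEL_ID; the chat-template call keeps the prompt format correct for whichever model you pick.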
Small models are also really powerful if you choose to specialize them.[00:24:00][00:24:00] Speaker: For example, here there's a startup called NuMind, which took SmolLM and then fine-tuned it on text extraction datasets. And they managed to get a model that's not very far from models that are much larger. So I think text extraction is, like, one use case where small models can be really performant and it makes sense to use them instead of just using larger models.[00:24:19] Speaker: You can also chat with these models in the browser. For example, here, you can go there, you can load the model, you can even turn off your internet and just start chatting with the model locally. Speaking of text extraction, if you don't want to fine-tune the models, there's a really good method called structured generation.[00:24:36] Speaker: You can basically force the models to follow a JSON schema that you defined. For example, here, we try to force the model to follow a schema for extracting key information from GitHub issues. So you can input free text, which is a complaint about a GitHub repository, something not working. And then you can run it there, and the model can extract anything that is relevant for your GitHub issue creation.[00:24:58] Speaker: For example, the [00:25:00] priority, for example here priority is high, the type of the issue, bug, and then a title and an estimation of how long this will take to fix. And you can just, like, do this in the browser, you can transform your text into a GitHub issue that's properly formatted.[00:25:14] What's Next[00:25:14] Speaker: So what's next for synthetic data and small models?[00:25:18] Speaker: I think that domain-specific synthetic data is going to be, it's already important, it's going to be even more important. For example, generating synthetic data for math. I think this would really help improve the reasoning of a lot of models. And a lot of people are doing it, for example Qwen 2.5 Math, everyone's trying to reproduce o1.[00:25:37] Speaker: And so I think for synthetic data, trying to specialize it on some domains is going to be really important. And then for small models, I think specializing them through fine-tuning is also going to be really important, because I think a lot of companies are just trying to use these large models because they are better.[00:25:53] Speaker: But on some tasks, I think you can already get decent performance with small models. So you don't need to pay a [00:26:00] cost that's much larger just to make your model better at your task by a few percent. And this is not just for text. I think it also applies to other modalities like vision and audio.[00:26:11] Speaker: And I think you should also watch out for on-device frameworks and applications. For example, like the app I showed, or Ollama, all these frameworks are becoming really popular and I'm pretty sure that we're going to get, like, more of them in 2025. And users really like that. Maybe as another, I should also say, hot take:[00:26:28] Speaker: I think that, like, in AI, we started with fine-tuning, for example trying to make BERT work on some specific use cases, and really struggling to do that. And then we had some models that are much larger, so we just switched to, like, prompt engineering to get the models to do what we want. And I think we're going back to fine-tuning, where we realize these models are really costly.[00:26:47] Speaker: It's better to use just a small model or try to specialize it.
So I think it's a little bit of a cycle and we're going to start to see like more fine tuning and less of just like a prompt engineering the models. So that was my talk. Thank you for following. And if you have [00:27:00] any questions, we can take them now. Get full access to Latent Space at www.latent.space/subscribe
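To make the structured-generation example at the end of the talk concrete, here is a minimal sketch of the free-text-to-GitHub-issue flow. It only prompts for JSON and validates the result afterwards with jsonschema; true structured generation constrains decoding to the schema during sampling. The schema fields and the generate_text callable are illustrative assumptions, not the demo's actual code.

```python
import json
from jsonschema import validate

# Illustrative schema matching the fields mentioned in the talk:
# priority, issue type, title, and a time estimate.
ISSUE_SCHEMA = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "type": {"type": "string", "enum": ["bug", "feature", "question"]},
        "priority": {"type": "string", "enum": ["low", "medium", "high"]},
        "estimate_hours": {"type": "number"},
    },
    "required": ["title", "type", "priority"],
}

def build_prompt(complaint: str) -> str:
    """Ask the model for JSON conforming to the schema, given free-text input."""
    return (
        "Turn the following complaint into a GitHub issue as JSON matching this schema:\n"
        + json.dumps(ISSUE_SCHEMA, indent=2)
        + "\n\nComplaint: " + complaint + "\n\nJSON:"
    )

def extract_issue(complaint: str, generate_text) -> dict:
    """generate_text is any LLM completion callable (hypothetical placeholder)."""
    raw = generate_text(build_prompt(complaint))
    issue = json.loads(raw)                        # fails loudly on non-JSON output
    validate(instance=issue, schema=ISSUE_SCHEMA)  # fails on missing or invalid fields
    return issue
```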
#newproducts JP's Product Pick of the Week 12/17/24 VCNL4200 Long Distance IR Prox/Light Sensor STEMMA QT RECAP https://www.adafruit.com/product/6064 Visit the Adafruit shop online - http://www.adafruit.com ----------------------------------------- LIVE CHAT IS HERE! http://adafru.it/discord Subscribe to Adafruit on YouTube: http://adafru.it/subscribe New tutorials on the Adafruit Learning System: http://learn.adafruit.com/ -----------------------------------------
The GST holiday is adopted. Border: Legault is reassured. Layoffs at Canada Post. It's Black Friday! Lion Électrique: 130 million at risk. Social media and pimps... Reopening of Notre-Dame de Paris. Aznavour and Callas at the movies. Moana 2 took in 14 million on its first day... a record? Is Quebec's book industry doing well? A tour of the news with Alexandre Dubé and Mario Dumont.
Duration: 00:02:49 - A case of aggravated pimping before the court in Pau
Zach is joined by Ken Gordon and co-host of the Git Rec'd podcast, Micah Henderson, to talk the graphic novel, "The Prox Transmissions." Written by Starset lead singer, Dustin Bates, "The Prox Transmissions" begins the science fiction tales told within the music of the band Starset.---------------------------------------------------Want to hear more from Thor and Micah?Check out the Git Rec'd Podcast!Follow them on Instagram!---------------------------------------------------Check out Dreampass and all their killer tracks on Spotify!---------------------------------------------------Join the Patreon to help us keep the lights on, and internet connected! https://www.patreon.com/tctwl---------------------------------------------------Listen to my other podcast!TFD: NerdcastAnd I am also part of the team over at...I Read Comic Books!---------------------------------------------------Want to try out all the sweet gigs over on Fiverr.com? Click on the link below and sign up!https://go.fiverr.com/visit/?bta=323533&brand=fiverrcpa---------------------------------------------------Follow on Instagram!The Comics That We LoveFollow on Tiktok!The Comics that We LoveFollow on Twitter!@Z_Irish_Red
Taking back her life after 10 years in the clutches of several pimps. La rencontre Maréchal-Dumont, with Isabelle Maréchal and Mario Dumont.
For several years, thousands of teenage girls placed in France's child welfare system (Aide sociale à l'enfance, ASE) have been regularly approached by pimps. These girls, some of them very young, are forced into prostitution, often in housing rented online. The pimps make contact with the girls over the internet or directly outside the group homes. ASE social workers say they are powerless, citing the lack of resources available to fight this scourge. This episode of Code source is told by the two Le Parisien journalists behind the investigation: Elsa Mari, of the society desk, and Stéphanie Forestier, of our Oise edition. Credits. Editorial direction: Pierre Chausse - Editor-in-chief: Jules Lavie - Reporter: Barbara Gouy - Production: Raphaël Pueyo, Clara Garnier-Amouroux, Barbara Gouy and Thibault Lambert - Direction and mixing: Pierre Chaffanjon - Music: François Clos, Audio Network. Hosted by Acast. Visit acast.com/privacy for more information.
According to investigators, she had amassed more than a million! Her five hotels were shut down, and the woman known as Madame Jo was sentenced by the Toulon court to 18 months in prison for running a prostitution ring. Denis Dupont recounts this archival story published in 1964 in L'Indépendant.
Do you (really) know prostitution and sex work? Whether you are for or against it, how do we endanger sex workers by trying to protect them? How is the reality far more nuanced than our clichés? Meet Cybèle Lespérance, sex worker and activist for sex workers' rights! CONTENTS 00:00 Sex worker 01:45 Disclaimer 02:31 Her beginnings 05:16 Escort and sexual-assistance work 08:15 Sex work 09:01 Minors and human trafficking 11:05 Is it legal? 13:43 Pimping 16:53 Belgium and Germany 18:53 Australia 20:08 Sexual assistant 22:52 Anecdotes 25:36 The meaning of sex work ____ ____ Want to sponsor the podcast? This way!
Built under Louis XIV, the Château de Versailles is associated with the Ancien Régime. Residence of the kings of France, the palace later became a symbolic site of Franco-German relations. In 1871, with France defeated by Prussia, the German Empire was proclaimed in the Hall of Mirrors. On June 28, 1919, it was in that same Hall of Mirrors that a defeated Germany signed the Treaty of Versailles, which set the terms of its defeat. Across the Rhine, Versailles became a byword for humiliation. But what about the Second World War? In this account, Virginie Girod plunges you into the history of the Château de Versailles during the Occupation. Topics covered: Château de Versailles, works of art, Occupation, Second World War. "Au cœur de l'histoire" is a Europe 1 Studio podcast - Presenter: Virginie Girod - Author: Sandrine Brugot - Production: Camille Bichler - Direction: Pierre Cazalot - Artistic direction: Julien Tharaud - Original music: Julien Tharaud and Sébastien Guidis - Editing and distribution: Nathan Laporte - Partnerships: Marie Corpet - Visuals: Sidonie Mangin. Sources: Blaizeau Robert, "Le château de Versailles pendant la Seconde Guerre mondiale", in Versalia. Revue de la Société des Amis de Versailles, n°19, 2016, pp. 93-108, https://www.persee.fr/doc/versa_1285-8412_2016_num_19_1_960 ; Ladoué Pierre, Et Versailles fut sauvegardé : Souvenirs d'un conservateur, 1939-1941, https://books.google.fr/books?id=G9pYDwAAQBAJ&pg=PT17&lpg=PP1&focus=viewport&hl=fr&output=html_text ; https://gallica.bnf.fr/ark:/12148/bd6t5103353s/f1.item.r=(prOx:%20%22messe%22%2050%20%22versailles%22 ) ; "Versailles occupé. Le Château dans la Seconde Guerre mondiale", https://www.youtube.com/watch?v=4T8k_luASzs ; https://www.chateauversailles.fr. Discover the "Au Coeur de l'Histoire +" subscription and get hours of programs, previously unreleased archives, episodes in preview, and a selection of episodes on major themes. Take advantage of this offer on Apple Podcasts today!
Join PJ tonight for this weeks #Lockboss Show!CONNECT WITH CLK SUPPLIESWebsite: https://www.clksupplies.com/SUBSCRIBE: https://www.youtube.com/@clksupplies?sub_confirmation=1CONNECT ON SOCIALFacebook: https://www.facebook.com/clksupplies/Instagram: https://www.instagram.com/clksupplies/Here at CLK Supplies, we believe a # Lockboss is anyone who works with locks and keys. Maybe you rekey locks, install lock hardware, or help a customer who's locked out; at the end of the day, you show up and get the job done, and you should be proud of what you do.We want to celebrate the # Lockbosses in our community. Every week at 4:00 PM PST on our YouTube Live show "# Lockboss Show & Giveaway," we give away 5 free prizes to lucky individuals. Include #Lockboss in your comment on that week's videos to be automatically entered to win one of the prizes.# Lockboss Show & Giveaway:NO PURCHASE IS NECESSARY TO ENTER. Must be 18 years orolder and U.S. Resident. Void where prohibited. This promotionis in no way sponsored, endorsed, or administered by,sanctioned, or associated with YouTube, Instagram, or Facebook. The winnerannounced Every Tuesday by 11:59pm PST. For Official Rules, click the link below https://www.clksupplies.com/pages/lockboss-giveaway#locksmith #lock #key #security #locksmithlife #locksmithing #lockpicking #locksmithtools #sparekey #rekey #lockpick #clksuppliesABOUT US:Locksmithing is what PJ knows, he grew up watching his dad locksmith. PJ started his locksmith training at age 6 by learning how to cut a key! PJ, President of CLK Supplies shows locksmith tips, does locksmith training videos as a sort of locksmith school, interviews locksmiths goes over locksmith equipment, key cutting machines, and more. If you are interested in locksmithing, want to know how to use locksmith tools, or would like to learn a few new locksmith tricks you are in the right place. Welcome!
I've sat here for a bit thinking about what I wanted to say about today's guest. Jen Prox-Weisblat, who may be better known by her business name Prox or ProxArtist, is someone I've known and admired for a very long time. I've always seen Jen as a prolific jewelry artist, obsessed with quality and detail. Whenever I see a piece she has finished, my first thought is "Man, I wonder how long that piece took to fabricate," which is then usually followed by some thought of how I would lose my mind if I had to saw out or solder all of those little details. I've had visions of her locked in her studio from sun up to sun down, only taking a break because she really has to pee. Yet in our conversation, Jen talks about the fullness of her days and life. She is not just slogging away at her bench unceasingly. She is gardening, spending time with her kids, and even taking days off back to back. In prioritizing the things in life that matter, she's found a sense of balance in her days, even when it feels a little chaotic. Equally perplexing to me is the fact that, with all the hours of work she puts into each piece, she has absolutely no attachment to it if the end product isn't something she loves. Jen just scraps it, or reworks it if she can, without the deep disappointment that I feel when this happens. And let's not forget that Jen has a huge following of loyal fans on Instagram. What's her secret? She doesn't have one. She posts when she has something to post, and she doesn't worry about the results. My friends, if you have been following Jen, I think you are going to be as surprised as I was as you listen to our conversation. Follow Jen... Instagram: @proxartist Website: www.proxartist.com If this podcast means something to you and you would like to support it, please take a moment to give it a few kind words with a written review on your favorite podcast listening platform. This helps me share the podcast with others. You can also share a favorite episode or consider joining our Slowmade Podcast Patreon community. Your support literally makes this podcast possible. Thank you so much! You can follow along or reach out to Christine on Instagram: @christinemighion or send her an email at: info@christinemighion.com
The Killer Klowns From Outer Space Game Gameplay Review. Fright Night Gaming goes over the new Killer Klowns gameplay and covers the following: Prox Chat, all escapes, all klowns, how to play as klowns and humans, gameplay strategy, female klowns, weapons, perks, abilities, and much more! We talk the new Texas Chainsaw Massacre lobby penalty, new Johnny skin, Nancy skin, and more! Enjoy! Social Media Links - YouTube - https://www.youtube.com/channel/UCHJRgXrWm-aVOzjR1ANwBHQ
This is the video in question: https://www.youtube.com/watch?v=xbl5Jls444E And this is the big processor with which the folks at SlimBook could turn out a real beast: https://www.noticias3d.com/noticia/94667/amd-apu-strix-halo-premium.html If the ProX is the one I currently have, I think the model with this processor, assuming it lives up to my performance hopes, should be called the ProHALO: https://www.noticias3d.com/noticia/93903/amd-confirma-apu-strix-halo-16-cores-zen5-40-cus-rdna3.html
You have probably followed the internet saga pitting the British-American influencer Andrew Tate against the Swedish climate activist Greta Thunberg. One bragged about owning 33 highly polluting cars; the other hit back with a scathing retort. A few days later, the former kickboxer, who became famous after appearing on Big Brother in England in 2016 and is now a masculinist influencer, was arrested by Romanian authorities for rape and human trafficking. He is accused of running a pimping network in Romania and the United Kingdom. Several women have also accused him of rape and physical violence. The self-proclaimed "king of toxic masculinity" built his network using a technique he theorized online: the "loverboy" method. "Loverboy" as in "love"? So how does the loverboy method actually work? And is it common? Listen to the rest of this episode of "Maintenant vous savez". A Bababam Originals podcast, written and produced by Antonella Francini. Also worth a listen: "Literally Anybody Else": who is this strange candidate for the US presidency? Conspiracy theory: who are the "sovereign citizens"? Is a union of the left really possible in France? Find all episodes of "Maintenant vous savez". Follow Bababam on Instagram. First broadcast: January 17, 2023.
How well do you know Prox Keys?CONNECT WITH CLK SUPPLIESWebsite: https://www.clksupplies.com/SUBSCRIBE: https://www.youtube.com/@clksupplies?sub_confirmation=1CONNECT ON SOCIALFacebook: https://www.facebook.com/clksupplies/Instagram: https://www.instagram.com/clksupplies/Here at CLK Supplies, we believe a # Lockboss is anyone who works with locks and keys. Maybe you rekey locks, install lock hardware, or help a customer who's locked out; at the end of the day, you show up and get the job done, and you should be proud of what you do.We want to celebrate the # Lockbosses in our community. Every week at 4:00 PM PST on our YouTube Live show "# Lockboss Show & Giveaway," we give away 5 free prizes to lucky individuals. Include #Lockboss in your comment on that week's videos to be automatically entered to win one of the prizes.# Lockboss Show & Giveaway:NO PURCHASE IS NECESSARY TO ENTER. Must be 18 years orolder and U.S. Resident. Void where prohibited. This promotionis in no way sponsored, endorsed, or administered by,sanctioned, or associated with YouTube, Instagram, or Facebook. The winnerannounced Every Tuesday by 11:59pm PST. For Official Rules, click the link below https://www.clksupplies.com/pages/lockboss-giveaway#locksmith #lock #key #security #locksmithlife #locksmithing #lockpicking #locksmithtools #sparekey #rekey #lockpick #clksuppliesABOUT US:Locksmithing is what PJ knows, he grew up watching his dad locksmith. PJ started his locksmith training at age 6 by learning how to cut a key! PJ, President of CLK Supplies shows locksmith tips, does locksmith training videos as a sort of locksmith school, interviews locksmiths goes over locksmith equipment, key cutting machines, and more. If you are interested in locksmithing, want to know how to use locksmith tools, or would like to learn a few new locksmith tricks you are in the right place. Welcome!
This week, we discuss the worrying phenomenon of "white lining", which involves driving extremely dangerously on the roads, a pimping-prevention program, and the prison transfer of the infamous murderer Luka Rocco Magnotta. https://www.journaldemontreal.com/actualite/faits-divers
Make a Logo on Fiverr I took the Saramonic Blink 500 ProX B2R with me to CES to get content for Geekazine. I recorded several videos with this dual-transmitter camera microphone. Here are my thoughts. What is the Saramonic Blink 500 ProX B2R? This is an over the camera wireless microphone system that uses 2.4 […] The post Unboxing & Testing the Saramonic Blink 500 ProX B2R: Worth The Hype? appeared first on Geekazine.
A repeat-offender pimp fires his lawyer in the middle of the hearing. Robert Pickton's victims are stunned to see him eligible for day parole. La rencontre Gibeault-Dutrizac, with Nicole Gibeault, retired judge.
Duration: 00:03:39 - Le Pourquoi du comment : histoire - by Gérard Noiriel - In 1929, Harry Silla, a Black pimp, and François Carbone, a Corsican pimp, wage a bloody war for "control of the sidewalk" in Marseille. There are two "readings" of this crime: one offers a "racialized" interpretation by analogy with Chicago; the other analyzes what the two men had in common: they were "seafarers".
Many of us are probably still kicking ourselves over the latest Totocalcio, with Juve presumably blowing all or nearly all of the tickets. Let's see how it went: the big contribution of the Pronostici Naturali, the slips to grumble over one last time, the recovery play on the betting side for one more round of grumbling, and then we turn the page and focus on all the predictions for the next Totocalcio, contest no. 5. Dedicated to those who bet responsibly, for fun but still satisfyingly. Grow your knowledge and your analytical skills with the Pronostici Naturali model for betting on the leagues. Read the book/manual for free with Amazon Kindle Unlimited on Amazon, or buy the e-book or paperback. Read the reviews! #pronosticinaturali #betting #scommesse #bollecalcio #totocalcio #schedina
38 tracks are in store for you this week on TTE302 with a 3-hour show. All new music from Cris Grey, Eryon Stocker, Junk Project, Prox, and Starry Major among many, many more! Choose your player
It starts with a honeymoon and promises of love for life. In love but under coercive control, very young women then find themselves thrown into prostitution. The loverboy technique is a proven method of human trafficking. Once practiced mainly in Eastern European countries, it is now claiming victims in Switzerland. In Geneva and Lausanne, these pimps of a new kind target underage teenage girls, ever younger.
Machine safety is often mistaken for a complicated topic, but safe inductive sensors from Pepperl+Fuchs make safe position monitoring easy. Tune in for a brief background on machine safety and updates on safe inductive sensors for mobile equipment.
For nearly 20 years, Madame Claude ran a major "luxury" prostitution ring whose clients included prominent figures (politicians, public personalities, and so on).
Chanel Rion of One America News Network, joins Marc to discuss the failed uprising in Russia and if there was more to it than what was being told to the public
Welcome back again! This week we're going through 95% completely new releases and dropping the best tracks from each. The mix will take you in a few different directions, but all of them, awesome :) So let's go! This is BL_K NOISE Radio... TRACKLIST: 00:00 DJ Skymall 01:20 Blawan – Panic https://blawan.bandcamp.com/album/dismantled-into-juice 04:51 Woulg - Echinoderm https://woulg.bandcamp.com/album/soap 07:56 Prox.Bleep - Fragmented https://billegalbeats.bandcamp.com/album/fragmented 12:02 Jan Jelinek - The Water Seems Changed To Mist And Vapor https://janjelinek.bandcamp.com/album/seascape-polyptych 15:07 Reid Willis - Sifting Through The Years https://reidwillis.bandcamp.com/album/sediment 18:47 DJ Skymall 22:17 James Holden - Continuous Revolution https://jamesholden.bandcamp.com/album/imagine-this-is-a-high-dimensional-space-of-all-possibilities 24:32 P1nkf1re - p011 var https://p1nkf1re.bandcamp.com/album/cart1 27:37 Jodey Kendrick - Ava Acid https://wemerecords.bandcamp.com/album/grace-weme080 31:47 Autechre - /]{- /](||) Excerpt https://www.discogs.com/release/197416-Various-All-Tomorrows-Parties-30-Autechre-Curated 35:42 Grischa Lichtenberger - 0712_24_lv_!_sc_! https://raster-raster.bandcamp.com/album/works-for-last-work 38:35 Exm- 17 06 is the (clip) https://exmat.bandcamp.com/album/treethree 45:00 DJ Skymall 47:44 Lowfish - Scarborough Brutalist https://lowfish.bandcamp.com/album/grey-with-breaks 49:46 Datassette - Polyhedron Navigator https://shop.cpurecords.net/album/kestrel-manoeuvres-in-the-dark 55:15 Oval - Zauberwort https://oval.bandcamp.com/album/romantiq 57:43 DJ Skymall 58:44 Suumhow - Krimineilzat https://n5md.bandcamp.com/album/years-failed-successfully 62:26 End. https://blknoise.bandcamp.com https://www.instagram.com/blknoise https://twitter.com/blknoisemusic https://www.instagram.com/ed_skymall
You have probably followed the internet saga pitting the British-American influencer Andrew Tate against the Swedish climate activist Greta Thunberg. One boasted of owning 33 highly polluting cars; the other shot back with a scathing retort. A few days later, the former kickboxer, who became famous after appearing on Big Brother in England in 2016 and is now a masculinist influencer, was arrested by the Romanian authorities for rape and human trafficking. He is accused of running a pimping network in Romania and the United Kingdom. Several women have also accused him of rape and physical violence. The self-proclaimed "king of toxic masculinity" built his network using a technique he theorized online: the "loverboy" method. "Loverboy" as in "love"? So how exactly does the loverboy method work? And is it common? Listen to the rest of this episode of "Maintenant vous savez". A Bababam Originals podcast, written and produced by Antonella Francini. First broadcast: January 15, 2023. Also worth a listen: Why does loneliness affect more and more teenagers? What is "stunt food", which is setting social media alight? Should water be taxed when we use too much of it?
Kevin Bailey is the Marketing Director for the Race Winning Brands Group that owns numerous motorsport performance companies including Wiseco, Rekluse, and Pro-X. Kevin talks about his history within the sport and R.W.B.'s involvement in the moto industry.
Do you (really) know prostitution and sex work? Whether you are for or against it, how do we endanger sex workers by trying to protect them? How is the reality far more nuanced than our clichés? Meet Cybèle Lespérance, sex worker and activist for sex workers' rights!
CONTENTS
00:00 Sex worker
01:45 Disclaimer
02:31 Her beginnings
05:16 Escort and sexual assistant
08:15 Sex work
09:01 Minors and human trafficking
11:05 Is it legal?
13:43 Pimping
16:53 Belgium and Germany
18:53 Australia
20:08 Sexual assistant
22:52 Anecdotes
25:36 The meaning of sex work
Gordon Prox is a YouTube creator, startup founder, food enthusiast, speaker, podcast host, and bon vivant!
The last major godfather to have been assassinated, Francis le Belge was the perfect image of what we imagine a godfather to be, living only for his aura, his influence and his personality. With "le Belge", we are in full western territory. He devoted his younger years to becoming the fastest gun in the west... of the Huveaune. Having reached the top, he had to live up to the title of "godfather". The day he died, legend has it that he spread his arms out like a cross as the bullets went through him. This episode of "Les parrains de la côte" is co-produced by Initial Studio and Comic Strip Production, adapted from the documentary series "Les parrains de la côte" produced by Comic Strip Production, written and directed by Thierry Aguila. Narrated by Olivier Marchal. Happy listening! To discover our other podcasts, follow Initial Studio on Instagram and Facebook. Podcast credits: Executive production: Initial Studio. Editorial production: Sarah Koskievic, assisted by Louise Nguyen. Editing: Victor Benhamou and Camille Legras. Artwork: Initial Studio
Duration: 00:29:05 - Les Pieds sur terre - by Sonia Kronlund - Overnight, Amadou becomes the pimp of a teenage girl. After being arrested and placed in detention, he tells us about his descent into an illegal, infernal spiral. Simon Benard-Courbon, magistrate and deputy prosecutor at the Bobigny juvenile prosecutor's office, analyzes a large-scale phenomenon. - Direction: Virginie Mourthé
RERUN - For nearly 20 years, Madame Claude headed a major "luxury" prostitution ring whose clients included prominent figures (politicians, public personalities...). Guests: Philippe Thuillier, producer of the documentary "Les confessions de Madame Claude", and Martine Monteil, at the time head of the Brigade de Répression du Proxénétisme.
In this episode, discover the most famous procuress in the world. Revolutionary in her methods, uncompromising yet maternal with her young protégées, she reigned for twenty years over the world of French and international prostitution. For while no one really knew her, she knew everyone. Her name: Fernande Grudet, known as Madame Claude. Discover her True Story. A scandalous woman who caused a great deal of ink to flow. Paris, mid-1960s. An elegant man walks along rue Boulainvilliers, in the very chic 16th arrondissement. Wearing a long black overcoat with the collar turned up to his chin, his eyes dart about. No passer-by must recognize him. He quickens his pace and finally stops at number 32. He enters and climbs the few steps separating him from his evening appointment. The anxiety subsides; no one saw him come in. Listen to the rest of this incredible story in this podcast. To discover other fascinating stories, click below: Fernand Legros and Elmyr de Hory, the duo of con artists who fooled the art world; Linda Burfield Hazzard, the doctor who starved her patients to death; Ada Lovelace, history's first computer programmer. Writing: Elie Olivennes. Direction: Célia Brondeau, Antoine Berry Roger. Voice: Andréa Brusque. Production: Bababam. If you would like to listen to episodes without interruption, go to the Bababam+ channel on Apple Podcasts: https://apple.co/3NQHV3I True Story subscription: https://apple.co/3auE6D9
Interview with Tricia Murray, a survivor of sexual exploitation and an advocate for rapid assistance for survivors. Courageous and resilient, Tricia Murray got her pimp sent to prison before her very eyes. After two years of proceedings, her tormentor is finally behind bars, a great relief for her and a message of hope for all victims of sexual exploitation.
For nearly 20 years, Madame Claude headed a major "luxury" prostitution ring whose clients included prominent figures (politicians, public personalities...). Guests: Philippe Thuillier, producer of the documentary "Les confessions de Madame Claude", and Martine Monteil, at the time head of the Brigade de Répression du Proxénétisme. Listen to L'heure du Crime with Jean-Alphonse Richard from April 8, 2022.
Maud Louvrier-Clerc develops her artistic thinking around two main axes, balance and evolution, which she frames within a single challenge: "co-creating sustainable development". Attentive to questions of living well together and of ecology, she puts the themes of identity, footprint and interdependence at the heart of her reflection and her artistic output. A visual artist and designer by training, she is also passionate about economics and biology, influenced by philosophy and astrophysics, practiced dance, guitar and theatre for several years, and still writes poetry today. From this plurality of inspirations she has been building, for about ten years now, her research on a motif of balance she calls the "carrond", a form born from the fusion of a half-square and a half-circle. Her works draw on the ecological issues specific to the "Anthropocene", the term for "the era of the Human", a new period in which the human species is seen as the main force of change on Earth, beyond geophysical forces. Through her art she thus addresses several issues at the heart of current events, such as global warming, plastic pollution and certain societal transformations. Her work is regularly shown in solo exhibitions, in which she has paid tribute to major artistic figures such as Le Corbusier in 2016 with her bench-sculpture "Le Modulaire", or Mondrian through her console-shelf Ruban. More recently, following the first lockdown of 2020, she initiated "les rencontres Proxémie", which seek to revisit and deconstruct the distance between the real world and the virtual world. With her, we talked about: apoptosis; the place of the Anthropocene in her work; her view of the ongoing ecological disaster; the disappearance of sand; the four carronds; the impact of art on energy; entrepreneurial psychology; the power of art to raise awareness; her passionariart, Madeleine Filippi. All the details, references and resources for the episodes are available at www.lespassionariarts.com
Beyerdynamic, which has already changed its official Chinese name to 「拜雅」, released in October the first two headphones built around its new-generation Stella driver: the closed-back DT700ProX and the open-back DT900ProX. In the portable audio scene, the leeks and the sickles carry on as ever, while traditional desktop gear keeps fading. At the end of 2021, with every big brand lying flat in its own way, two new products with a "Pro" suffix arrived without much fanfare, and in the once hotly contested, now quiet 2,000-yuan bracket at that; on paper, hard to get excited about. But this time was different. Chatter in the listener group kept growing: have you heard them yet, where can we audition them, whoever has listened please report back. Full-size headphone enthusiasts, a crowd worn out by dull products, boring news, price-crash contests and commercial scams, many of whom had already left the hobby, saw a new toy worth caring about for the first time in years. More surprising still, the one handing over the toy was that long-quiet industry giant which had just scared almost everyone off with its new flagship: Beyerdynamic. The two ProX headphones may not be worth your purchase, but I believe they matter enormously to Beyer. For users and manufacturer alike, what these two understated-looking headphones give off is something increasingly scarce among the German-Austrian headphone giants that have been stuck in the mud in recent years: vitality. If you like 「声波飞行员」, please support us on the 「爱发电」 platform to give the show the momentum to keep flying. Thank you.
[00:00:00] Opening ad (?)
[00:00:34] BGM#1. 靴腿 - 苏维埃计算机
[00:03:08] Main show begins; the occasion for this recording: Beyerdynamic's two new ProX-series products, the DT700ProX & DT900ProX
[00:03:56] Using MusicBee, a music playback and library-management app for Windows; an experience completely unlike foobar2000; does the Mac need an iTunes replacement?
[00:09:11] The Verum One comes up again; how the Ukrainian planar headphone became a savior of poor sources; the stock cable versus the Sommer "black reference" upgrade cable; the "8 Ω + high sensitivity" package in practice; why the V1 became an "IEM crusher"; group-buy status, and, for the sake of the Ukrainian builder's physical and mental health, please try not to order in winter (?)
[00:30:54] Beyerdynamic DT700ProX & DT900ProX; Beyerdynamic's comeback fight; the "deafness spreads from person to person" phenomenon; living with the DT900ProX; two highly similar siblings
[00:45:34] The DT700ProX as flattered by its competition; the battle at the 2,000-yuan price point; why the HD600 was brought in for comparison; a good closed-back at long last; could you wear it out on the street?
[00:57:09] BGM#2. 还潮 - 柯桥足浴中心
[00:58:41] New expectations for the Stella driver; how the tuning end-point of the DT770/880/990 has shifted; why the DT770m earned a "best at this stage" verdict
[01:07:52] On the DT1770pro and DT1990pro; the tricks hidden in the replacement ear pads
[01:15:20] The rare occasion that made us restart our Beyerdynamic product-review show (the previous installment was #046, on 2016-12-23); why this Beyerdynamic release is exciting; complacency has become the norm among the old brands; 孟获's impulsive remarks; about episode 200
[01:21:05] BGM#3. Black Box Recorder - Rock `n` Roll Suicide
[01:21:41] Closing remarks, and the #200# pilots: 地下丝贼 / vineland / 包雪龙