On October 3, 2022, medical student Hanna fails to return home after a night of partying at the club "Eiskeller". A few hours later her body is recovered from a river, 12 kilometres from her parents' house, with severe injuries and only scantily clothed. Ever since, the people of the small Bavarian town of Aschau have been asking themselves: what happened to Hanna that night? One of them, the police believe, has long known the answer. But objective evidence is lacking. In the 200th episode of "Mordlust – Verbrechen und ihre Hintergründe" we talk about a tragic case that raises countless questions. For two families from Aschau, one question has long stood above all others: when will the truth finally come to light? **Credit** Producers/hosts: Paulina Krasa, Laura Wohlers Editorial team: Paulina Krasa, Laura Wohlers, Isabel Mayer Editing: Pauline Korb Legal review: Abel und Kollegen **Sources (selection)** Background interviews, among others with Regina Rick Verdict: Landgericht Traunstein, Az. 2 KLs 402 Js 40276/22 jug SZ: https://t1p.de/w1kln ZEIT: https://t1p.de/qv3lb SZ: https://t1p.de/olkgv LTO: https://t1p.de/mvjt5 **Episode partners** Want to learn more about our advertising partners? Find all info & discounts here: https://linktr.ee/Mordlust Want to advertise on this podcast? Then learn more about advertising options with Seven.One Audio here: https://www.seven.one/portfolio/sevenone-audio
Guests from a Kansas pro bono partnership discuss how private businesses and in-house attorneys can support legal aid initiatives in their communities on Talk Justice. In 2017, Koch was interested in making pro bono work an official part of the company, but learned that Kansas rules did not allow in-house attorneys who are licensed by outside states to provide local pro bono services. Koch worked with KLS to get the rule changed, demonstrating that the demand for free legal help greatly exceeded the supply of legal aid lawyers. Ever since, Koch attorneys have engaged in an ongoing partnership with KLS to provide pro bono expungement and driver's license restoration clinics. Learn more about your ad choices. Visit megaphone.fm/adchoices
Anna's daily life is strictly regimented. Ever since her mother Kerstin fell ill and became unable to leave her bed on her own, Anna has cared for her devotedly, and entirely alone. Her devotion comes at a high price: to be there for her mother, she puts her own needs completely aside. She goes without fun, lightness and a life of her own. What's more, Kerstin is not only demanding but downright ungrateful: she regularly belittles her daughter. Anna endures all of it. Until one evening an argument makes her stop and think. And she decides to go to extremes ... Around 500,000 children and adolescents in Germany care for a sick family member. They belong to the so-called "young carers", a group of family caregivers that often stays under the radar. In this episode of "Mordlust – Verbrechen und ihre Hintergründe" we talk about the enormous burdens those affected face every day and show the fatal consequences that can follow when the load on young shoulders becomes too heavy. Expert in this episode: Dr. Inés Brock-Harder, chair of the Bundesverband für Kinder- und Jugendlichenpsychotherapie. Help for caregiving children and adolescents: web portal "Pausentaste": www.pausentaste.de Nummer gegen Kummer: 116111, available Monday to Saturday, 2 to 8 p.m., free and anonymous **Credit** Producers/hosts: Paulina Krasa, Laura Wohlers Editorial team: Paulina Krasa, Laura Wohlers, Jennifer Fahrenholz Editing: Pauline Korb Legal review: Abel und Kollegen **Sources (selection)** Landgericht Bonn, verdict of 13.09.2021, 28 KLs 8/21 BGH decision of 17.08.2023, 2 StR 138/22 Kölnische Rundschau: "Mutter mit Kissen erstickt. Öffentlichkeit von Prozess gegen Tochter ausgeschlossen": https://t1p.de/hforj Deutsches Ärzteblatt: "Junge Pflegende: Eine massive Überforderung": https://t1p.de/hebq1 Pflegeberatung.de: "Kinder und Jugendliche mit Pflegeverantwortung": https://t1p.de/5xhvm
Another solid day with the ASX 200 up 27 points to 8178 (+0.3%). NAB (+1.6%) results helped, but news of Bessent heading for a Swiss showdown with Chinese counterparts on trade popped US futures higher and commodity stocks were back on. China also cut rates. BHP up 0.9% with RIO up 0.6%. Gold miners were solid on bullion's rise, SPR up 0.9%, and VAU doing better, up 3.3%. NST fell 2.6% though, with WGX up 2.9%. Lithium stocks better, LTR up 7.8% on WA loans, PLS rallied 4.8% and MIN up 2.6%. LYC slipped as US/China relations improved. Oil and gas stocks also bounced, WDS up 1.7% and STO up 2.0%. Uranium stocks were once again in demand as shorts continue to cover; BOE updated the market and ran hard, up 12.4%. PDN up 4.4%. Banks stalled with the Big Bank Basket flat at $263.60, NAB (+1.6%) the standout and CBA (-0.5%) the problem child. Other financials doing well. MQG rose 0.9% despite news from ASIC on serious breaches. ASX up 2.5% and PNI up 4.3%. ZIP presented at the Macquarie conference and rose 13.0%. Industrials were mixed, CSL fell 3.0% with TLX down too but RMD up 0.9%. REITs were positive. Tech slipped, XRO down 0.3% and WTC off 0.4%, the index slipping 0.2%. Retail recovered from early JBH losses, PMV up 2.6% and TPW on an update rose 8.0%. In corporate news, no real bad news from the Macquarie Conference. NXL slammed 16.0% on an uncertain update; KLS ran hard, up 18.3%, on a positive briefing. On the economic front, all eyes on the Fed, and China stimulated again with more rate cuts. Asian markets slightly positive, Japan up 0.5%. 10-year yields steady at 4.29%. Dow futures up 0.5%, NASDAQ futures up 0.6%. Want to invest with Marcus Today? The Managed Strategy Portfolio is designed for investors seeking exposure to our strategy while we do the hard work for you. If you're looking for personal financial advice, our friends at Clime Investment Management can help.
Their team of licensed advisers operates across most states, offering tailored financial planning services. Why not sign up for a free trial? Gain access to expert insights, research, and analysis to become a better investor.
Questions:
01:24 – How was the day? How did the day go?
06:03 – The one concern I have is… where do those smaller cases go?
09:36 – What are you complaining about these days?
11:25 – Why from your perspective? (first year being hardest)
12:29 – What were some of the things you didn't expect when you started private practice?
18:18 – What's the hardest thing that gets you in a bind sometimes?
32:38 – What's your favorite restaurant on Long Island?
34:49 – What do you have coming up this week?
35:15 – How often are you doing full arch cases now?
46:32 – What's your favorite procedure to perform?
57:17 – What's it like working with a residency program like NUMC?
1:07:07 – Have you seen the IPS plates that KLS is doing?
1:13:32 – Have you been to Japan? Is that your next trip?
Quotes & Wisdom:
"The easiest thing you do is the surgery."
"90% of what I do today, I did not do in residency."
"Mentorship is learning your ABCs in residency. Practice is when you read books."
"Complications are your tuition for learning."
"You're not in it for one case. You're in it for the long run."
"We compete by providing the best patient care. There's enough for everyone."
"Never hesitate to ask for help. We're colleagues above competitors."
"Your first job is one of the most formative decisions you'll make when you base it upon mentorship."
"We're not just surgeons—we're educators to patients."
"You know a patient's jaws better than any specialty."
The ASX 200 pushed 9 points higher to 7934 (+0.1%) in defensive quiet trade ahead of Liberation Day. Money flowed into banks and REITs with resources under pressure on global growth fears. BHP gave back 1.5% with RIO off 1.7% and FMG down 1.6%. Lithium stocks poleaxed again, PLS down 3.1% and LTR falling 4.3%. Gold miners too under pressure as the AUD rallied and bullion held steady, profit taking in NEM down 2.1% and RMS off 3.8% with VAU down 3.3%. Uranium remains in fallout mode with PDN down 3.3% and BOE down 3.4%, with nearly 25% of the company now shorted. Coal stocks fell and oil and gas slid, WDS down 1.5% and STO off 0.4%. Meanwhile banks firmed again, the Big Bank Basket up to $250.98 (+1.0%). CBA up 1.0% and other financials also doing well, NWL up 1.9% and MPL rising 2.2%. REITs also in demand, GMG up 3.1% and CHC up 3.2%. TLS continues to creep higher, up another 1.2%, with retail better too. JBH up 1.5% and MYR rising 0.8%. Even DMP rose 2.5%. Tech better as WTC continued to find friends, up 1.1%. CPU dropped again, down 4.1%. QAN off 2.5%. In corporate news, the new CEO at KLS is set to shake things up by trying to sell tourism assets, RPL wrote down its OPT to zero! SGR has failed to secure a $750m lifeline from Salter Brothers and WBC has a new chief people officer. Nothing on the economic front. Asian markets in wait and see mode. 10-year yields steady at 4.41%.
From the VIP lounge to the courtroom – the Rolling Stones decision in a compliance check. "You can't always get what you want" – but what happens if you try anyway? This week's episode is about a fascinating case from the world of compliance – and about the Rolling Stones! The focus is on the decision of the Federal Court of Justice (BGH) of August 31, 2023. It deals with free tickets, acceptance of benefits, and the head of a public authority who demanded tickets to a Rolling Stones concert in exchange for the use of the city park. The case shows how important it is to structure such deals in a legally sound way. The BGH overturned the underlying judgment of the LG Hamburg and demanded a closer examination of whether an unlawful agreement existed between the authority head and the promoter. Dr. Rosinus explains what this means for companies and public officials, and how you can keep enjoying concerts without legal risk in the future. BGH, Urteil vom 31.08.2023 - 5 StR 447/22: https://www.juris.de/perma?d=NJRE001585832 LG Hamburg, Urteil vom 08.04.2022 - 622 KLs 4/20: https://openjur.de/u/2393920.html Episode no. 78: Case-law update on corruption – commission payments as bribery: https://criminal-compliance.podigee.io/78-rosinusonair Episode no. 169: Public-official status in corruption law: https://criminal-compliance.podigee.io/196-cr Episode no. 229: Case-law update: breach of trust in bribery payments? https://criminal-compliance.podigee.io/229-cr https://www.rosinus-on-air.com https://rosinus-partner.com
Dragan Labović, former basketball player and Serbian international, winner of gold medals at all youth-level European Championships, is the latest guest of the Jao Mile podcast. He was named championship MVP several times. At 18 he was the top scorer of the KLS. Yet his professional career was not as brilliant as many had predicted. Enjoy the conversation! Any copying and unauthorised downloading of video and/or audio recordings and posting them on other channels is prohibited. It is not permitted to use material from this channel, in whole or in segments, without licensing/payment, whether for commercial or non-commercial use. Any use without a licence, commercial or non-commercial/private, will be prosecuted. For all information about rights, and for licensing enquiries and permission to use the material, you can contact us via our official email address.
00:00:00 Start
00:08:00 Life story
00:13:27 Club
00:19:20 Growing up
00:28:04 Role models and dreams
00:41:30 FMP
00:47:27 Čačak
00:55:42 Departure/FMP
00:59:00 Aris/Germany
01:08:46 Russia
01:21:21 Finland
01:32:16 National team
01:41:32 Toughest opponent
01:45:00 Retirement
01:47:20 Advice for young players
01:49:10 Benefits
01:54:00 Top 5
Thumbnail designer: https://instagram.com/design33_mk?igshid=MzRlODBiNWFlZA==
Follow us on social media!
Instagram: https://www.instagram.com/jaomile_podcast/
Facebook: https://www.facebook.com/JAOMILEPODCAST
TikTok: https://www.tiktok.com/@jaomile_podcast
Twitter: https://twitter.com/mileilic
Guest: Dragan Labović
Date: 26 February 2025
Author and host: Mile Ilić
Location: Studio na kraju Univerzuma
Production: Infinity Lighthouse
#jaomilepodcast #draganlabovic #swishfindyourway #djokovic #crvenazvezda #kkpartizan #findyourway #nba #nikolajokic #abaliga #jokic #bogdanovic #euroleague #doncic #nikolatopic
The ASX 200 finished down around 11 points at 8241 (-0.1%) with CPI coming in as expected. Once again results were the focus; banks found buyers with the Big Bank Basket up to $253.05 (+1.1%), with CBA leading the way, up 1.3%. MQG rose 0.6% and insurers were flat. REITs were weaker, GMG falling 1.6% and SCG off 3.3% on results. Industrials mixed again, WTC rose 2.1% as White became Chair and results showed no further surprises, though the All-Tech Index still fell 1.3% with TLS under pressure, off 1.9%, and QAN down 2.3%. Retail stocks mixed, JBH off 0.6% with LOV finding friends in all the right places, up 5.7%. Travel stocks arrived jet lagged as FLT touched down 10.2% and CTD fell 3.6%. DMP lost another 3.7% with LNW better by 7.8% on results. Resources were back in the doghouse as iron ore fell, FMG dropped 6.2% as it went ex-div, BHP off 1.5% and RIO down 3.4%. Gold miners ran out of luck, NST down 2.8% and EVN off 1.7%. NEM off just 2.0%. LYC fell 1.7% after results showed the effects of low REE prices. Uranium stocks showed more fallout, with PDN results bringing no joy, and coal was down too. WDS gained 3.4% after broker comments. STO going nowhere. In corporate news, WOW fell 3.0% after cutting its dividend, PBH had an abundance of bid interest around the 106c level, up 32.5%. BAP rose 13.4% and WOR up 10.3% on results, whereas APX crashed 33.3% on disappointment. PTM also suffered as results were nasty, off 20.0%. KLS was another casualty today, maybe just a flesh wound, but down 15.2%. In economic news, the CPI came in as expected and hardly moved the dial. Asian markets mixed, Japan down 0.8% and HK up 2.8% with China up 0.6%. 10-year yields at 4.37%.
Sleet and Christmas holidays are not a good combination. Anne-Kathrin is bored at home, so the 13-year-old goes ice skating at the local rink. When her mother Inge comes to pick her daughter up again, she has to watch as the building's roof, weighing many tonnes, collapses and buries everything beneath it. This episode of "Mordlust – Verbrechen und ihre Hintergründe" is about the ice rink disaster in Bad Reichenhall. The collapse of the rink's roof on January 2, 2006 killed 15 people, twelve of them children. A case that shows how sloppiness and ignorance can lead to death, why the question of responsibility and guilt is sometimes not easy to answer, and that for the bereaved the suffering is far from over once the courts have delivered their verdict. **Credit** Producers/hosts: Paulina Krasa, Laura Wohlers Editorial team: Paulina Krasa, Laura Wohlers, Magdalena Höcherl Editing: Pauline Korb Legal review: Abel und Kollegen **Sources (selection)** Interview with the mother Verdict LG Traunstein 2 KLs 200 Js 865/06 openjur: Urteil vom 12.01.2010 - 1 StR 272/09: https://openjur.de/u/69617.html Spiegel: Eissporthalle in Bad Reichenhall: https://t1p.de/94qbd Stern: Der Eishockey-Chef aus Reichenhall: https://t1p.de/0958g Feuerwehr BR: Einsturz der Eissporthalle in Bad Reichenhall am 02.01.2006: https://t1p.de/4xcl0 Spiegel: Prozess um Eishallen-Einsturz: https://t1p.de/fznpn
Trigger warning: this episode deals with violence against children. At last, Jackie seems to have been rescued. For years the girl suffered under the severe alcohol addiction of her single mother. She was ignored, locked out and often left alone. In 2008 the little girl finds refuge with a foster family. The eight-year-old also builds a bond with her biological father. Where she recently had no real family at all, she now has two. But then Jackie dies completely unexpectedly. Only a toxicology report provides answers. What it brings to light will not only put today's Federal Chancellor Olaf Scholz on alert, but also spark a nationwide debate about the work of the youth welfare offices. The expert in this episode is family law attorney Thomas Mörsberger, former chair of the Deutsches Institut für Jugendhilfe und Familienrecht. **Credit** Producers/hosts: Paulina Krasa, Laura Wohlers Editorial team: Paulina Krasa, Laura Wohlers, Simon Garschhammer Editing: Pauline Korb Legal review: Abel und Kollegen **Shownotes** Help with addiction: "Keine Macht den Drogen": addiction counselling for children and adolescents: https://t1p.de/9cnk0 "Deutsche Hauptstelle für Suchtfragen": addiction counselling: https://t1p.de/mk8bj "Deutsches Rotes Kreuz": addiction counselling: https://t1p.de/7yl1 "Sozialplattform": addiction counselling for relatives: https://t1p.de/ejus9 Verdict 606 KLs 19/12 Abendblatt-TV: Chantal und das Drama um ihren Methadon: https://t1p.de/pxts9 Spiegel: Methadon-Tod von Pflegekind Chantal: https://t1p.de/u1dpb Spiegel: Zwei kaputte Leben: https://t1p.de/ym77b Spiegel: Chantals Vermächtnis: https://t1p.de/y6o2k Bild: Kind starb an Überdosis Methadon: https://t1p.de/2mpo8
The ASX 200 wilted after a great start to close up only 18 points at 8210, unable to hold on to earlier big gains. Triple witching and index rebalancing in the US tonight. Banks remained solid with the Big Bank Basket at $249.63 (+0.5%). MQG slid from record highs, and SUN eased 1.3%. REITs mixed, GMG down 0.6% and SGP up 0.4%. Industrials were mostly firm with WES up 1.5% and retailers doing well, JBH up 1.4% and LOV up 3.2%. Travel and pizza back in fashion (when did they ever go out of fashion?), FLT up 2.2% and DMP up 4.0%. Healthcare in casualty, CSL down 0.3% and SHL continuing to suffer, down 2.2%. Resources promised so much this morning, and yet delivered so little. Iron ore stocks gave up early gains despite futures in Singapore rising; China left rates unchanged, which disappointed some. Lithium stocks slipped into depression again, PLS down 3.1% and MIN off 2.9%. Gold miners a mixed bag of lollies, NST up 1.3% but NEM off 0.5%. Energy stocks mixed, oils doing ok, WDS up 0.7%, but uranium stocks seeing a little fallout after a solid week. In corporate news, MYR lost 0.6% on a trading and strategy update, HVN is being sued again, KLS rose 1.5% on a US contract and NAM recommended the Olam Agri bid. LTM looking at exiting Australia! Nothing on the local economic front. The BoJ and PBoC kept rates unchanged. Asian markets mixed again, Japan up 1.0%, HK up 0.9% and China off 0.6%. 10-year yields steady at 3.92%. Dow Futures off 0.1% and NASDAQ Futures down 0.2%.
The ASX 200 rose 61 points to 8076 (+0.8%) as uranium and US leads helped sentiment. Banks once again took the market higher, CBA up 1.5% and NAB up 2.2% with the Big Bank Basket up to $236.90 (+1.6%). MQG rose 1.0% with a mixed picture in insurers. NHF fell 17.7% on disappointing results. REITs were better, though GMG fell 0.5% as tech stocks came under some pressure. GPT up 1.9% and SCG up 1.8%. Industrials better, WES up 1.1%, REH up 2.6% and SVW rising 2.7%. Retail stocks found friends again, SUL up 1.6% and ARB rising 3.9%. Travel stocks remain under pressure, WEB off 0.5% and FLT down 1.6%. Resources picked up throughout the day, the Three Amigos slightly firmer, FMG up 1.9% and BHP modestly higher ahead of results. Gold miners mixed and lithium stocks firmed throughout the day. PLS reported today, no surprises, and the stock rallied 1.3%. Oil and gas mixed, STO down 1.4% and WDS up 2.1%. Uranium stocks glowed, BOE up 7.6% and NXG up 9.6% on Kazatomprom news. In corporate news, ABB produced the goods and a dividend, up 11.9%, KGN beat forecasts and rose 11.8%, whilst TYR also did well, recovering 12.2%. KLS dropped 23.8% on capex issues, EDV disappointed, falling 6.9% on news, and GOR fell 4.4% after its half-yearly report. In economic news, China left rates unchanged again. Asian markets weaker, Japan off 0.9% and China off 0.4%; HK managed a 0.8% rise. 10-year yields falling to 3.86%. The UK was closed for a public holiday today.
Trigger warning: this episode deals with sexualised violence against minors. One last time, the parents want to look at their daughter's face to say goodbye, after she was killed in an accident. But when the undertaker lifts the coffin lid, it is empty. Where has the body of the 14-year-old gone? The village quickly agrees that such devilry can only be the work of Satanists. But weeks later it is clear: instead of horns, the true Beelzebub wears the garb of a stuffy citizen from next door. It is the cold, dead bones and the lifelessness that arouse them. People who are necrophilic crave sexually what is past, and in acting out their desire they usually commit a crime. On top of that they are ostracised, because there is one thing the dead can no longer give: their consent. **Credit** Producers/hosts: Paulina Krasa, Laura Wohlers Editorial team: Paulina Krasa, Laura Wohlers Editing: Pauline Korb Legal review: Abel und Kollegen **Shownotes** Verdict: 2 KLs 107 Js 2434-2000 RP: Horror hinter idyllischer Familien-Fassade: https://t1p.de/nyval taz: Wenn der Satan von sich hören lässt: https://t1p.de/4avtk Spiegel: Mädchen-Leiche aus Sarg gestohlen: https://t1p.de/xq7to
Slobodan Klipa has devoted his coaching career to finding and developing young players, and he has been doing it successfully for almost 30 years; now he is the latest guest of the Jao Mile podcast. https://www.jaomile.com What it takes to succeed in basketball, and which talented player disappointed him the most, you will get to hear tonight.
00:00:00 Start
00:00:55 Charity
00:01:15 Slobodan Klipa
00:02:00 China
00:29:00 Daily basketball
00:38:00 Player development
01:11:40 KLS
01:25:01 Pasarela
01:38:10 Youth national teams
01:57:00 Growing up and first coaching days
02:18:40 Biggest talent
02:37:25 Basketball of the future
02:43:40 Advice for young players
02:45:20 Top 5
02:50:00 FMP, a new beginning
Guest: Slobodan Klipa
Date: 10 July 2024
Author and host: Mile Ilić
Location: Studio na kraju Univerzuma
Production: Infinity Lighthouse
#jaomilepodcast #slobodanklipa #fmp #crvenazvezda #kkpartizan #nikolajovic #nba #nikolajokic #abaliga #jokic #bogdanovic #euroleague #doncic #nikolatopic #zagreb
Hello Friends and Followers! Today we air Season 4, Episode #020. Let's sit and talk with Kayak Loading Solutions owners Jeff and Vicki Wilson about their upstart #KLS business and what led to this kayak transportation innovation... Facebook page: @Rustyhookpodcast John Rapp's YouTube channel: /@wvrapp John Rapp's Twitch channel: /wvrapp John Rapp's Twitter page: /jkrappjr DANIEL WOODCOCK Website: https://segkbf.com/ Facebook page: /southeastgeorgiakayakbassfishing Instagram page: /segeorgiakayakbassfish YouTube channel: /@segkbf Rusty Hook Kayak Fishing Podcast is affiliated with these fine businesses. If you're looking for reviews, kayak fishing gear, accessories or equipment, or that adventure resort vacation... go check out: Big Lake Bait Company Feelfree Kayaks Westbrook Supply Co. West Virginia Kayak Anglers ACE Adventure Resort YakGadget ZPRO Lithium DuBro Fishing Pelican Products We are a proud affiliate of the Paddle N' Fin Podcast Paddle Media Group. Want to create live streams like this? Check out StreamYard: https://streamyard.com/pal/d/65997418...
Zoran Krečković, former coach of IMT, Beopetrol and Crvena zvezda, is the latest guest of the Jao Mile podcast. What his playing career looked like and why he left Partizan. How he decided to take up coaching. What it was like working at KK Crvena zvezda and why he left the club. What it was like working in the Arab world.
00:00:00 Start
00:00:30 Charity
00:00:50 Zoran Krečković
00:04:30 Kuwait
00:10:00 Finals
00:26:28 KLS
00:37:30 Dubai
00:53:30 EuroLeague
01:10:45 Olympic Games
01:29:00 Dreams and role models
01:35:00 Coaching career
01:54:50 KKCZ
01:02:30 The Arab world
02:10:40 Biggest talent
02:17:30 Advice for young players
02:21:40 Top 5
Hello Friends and Followers! Today we air Season 4, Episode #020. Let's sit and talk with Kayak Loading Solutions owners Jeff and Vicki Wilson about their upstart #KLS business and what led to this kayak transportation innovation... Facebook page: @Rustyhookpodcast John Rapp's YouTube channel: https://www.youtube.com/@wvrapp John Rapp's Twitch channel: https://www.Twitch.TV/WVRapp John Rapp's Twitter page: https://www.Twitter.com/jkrappjr KAYAK LOADING SOLUTIONS: Website: https://kayakloadingsolutions.com/ Facebook page: https://www.facebook.com/KayakLoading... Instagram page: https://www.instagram.com/kayakloadin... YouTube channel: https://www.youtube.com/@KayakLoadingSolutions Rusty Hook Kayak Fishing Podcast is affiliated with these fine businesses. If you're looking for reviews, kayak fishing gear, accessories or equipment, or that adventure resort vacation... go check out: Big Lake Bait Company Feelfree Kayaks Westbrook Supply Co. West Virginia Kayak Anglers ACE Adventure Resort YakGadget ZPRO Lithium DuBro Fishing Pelican Products We are a proud affiliate of the Paddle N' Fin Podcast Paddle Media Group. Want to create live streams like this? Check out StreamYard: https://streamyard.com/pal/d/65997418...
"I've worked at three companies that went under!" Taken out of context, that sentence may sound odd or worrying. It is nothing of the sort. Antoine has had the opportunity to work at far more than three companies, and his path has been marked by choices of companies, startups and scale-ups whose businesses were hit by external events or failures. He looks back on his experiences at Bazarchic, Unilend and Pixmania. With Pixmania, we revisit the era of a company doing very well that then sees a small company called Amazon arrive in its sector. It was hard to compete at the time, and the once-thriving company went into gradual decline. From these three experiences he takes away only positives. The Kls experience, where he is the CTO, starts with Unilend. Kls was born from the purchase of Unilend's technology assets, which were the starting point of the solution. Kls is a software vendor whose goal is the management of corporate financing and investment. The SaaS platform offers a suite of collaborative, intuitive tools for banks, advisory specialists, investment funds and their ecosystem. Antoine tells us about joining as the first employee of the technical team. We retrace the steps that took him from first developer to putting on the CTO's suit. In this episode you will discover: the difficulty of learning to recruit (not getting it wrong, not overselling while staying attractive); the importance of a good relationship with the CEOs; the importance of prioritising the roadmap well and making the right choices at the right times.
Strajin Nedović, current coach of KK Zlatibor, with whom he has achieved remarkable results over the past five years, is the latest guest of the Jao Mile podcast! With Zlatibor he won the ABA 2 league trophy in fantastic style, but due to "bureaucracy" he did not get the chance to compete in the ABA league. How the club's financing works is something that could serve as an example to many clubs in Serbia.
00:00:00 Start
00:00:30 Charity
00:02:05 Strajin Nedović
00:12:05 The season
00:13:00 KLS
00:22:10 Spartak Subotica
00:23:31 Financing
00:30:05 ABA bubbles
00:34:00 Next season
00:39:10 Preparations for Skopje
00:58:25 Centres and agents
01:06:30 Kutlešić
01:11:00 Ross
01:17:25 Slovenians
01:29:40 Growing up
01:40:15 Role models
01:42:40 Playing career
01:48:45 Čitić
01:54:00 ABA next year
02:00:00 Toughest opponent
02:03:00 Hardest moment
02:06:00 F4
02:10:20 What basketball brought
02:14:15 Advice for young players
02:18:00 China
02:20:25 Top 5
Guest: Strajin Nedović
Date: 29 May 2024
Author and host: Mile Ilić
Location: Studio na kraju Univerzuma
Production: Infinity Lighthouse
#jaomilepodcast #strajinnedovic #zlatibor #crvenazvezda #kkpartizan #darkomilicic #nba #nikolajokic #abaliga #jokic #bogdanovic #euroleague #doncic #nikolatopic #kkzlatibor
Marko Boltić, coach of BKK Radnički from Crveni Krst, is the new guest of the Jao Mile podcast! At one point Marko considered giving up on pro basketball and turning to his studies, but one MAN convinced him to give it one more chance, and he wasn't wrong. Marko is an example of how you can find your ROLE on the court and be useful to your team; in three years as coach of BKK Radnički he has suffered only FOUR defeats, and this season they earned promotion to the KLS. Two and a half hours of conversation! Enjoy!
00:00:00 Start 00:00:38 Charity 00:02:15 Marko Boltić 00:05:50 Exhibition game 00:10:25 Three years 00:17:00 Axis powers 00:20:40 Hot court 00:23:15 Mentor 00:30:20 Approach to players 00:39:00 Plans 00:40:40 EL 00:53:00 ABA league 00:56:00 NBA 01:06:40 Growing up 01:17:15 Incident 01:21:00 Mladost Zemun 01:25:13 BKK Radnički 01:39:40 Vojvodina 01:48:00 Zmago Sagadin 01:58:00 Romania 02:00:00 Macedonia 02:06:45 Hungary 02:12:20 Toughest opponent 02:14:20 End of career 02:16:20 Hardest moment 02:18:15 What basketball gave him 02:19:00 The Radnički organization 02:28:00 Advice for young players 02:30:15 Top 5
Thumbnail designer: https://instagram.com/design33_mk?igs...
Follow us on social media!
Instagram / jaomile_podcast Facebook / jaomilepodcast TikTok
Kleine-Levin Syndrome, or KLS, is a condition that affects 1 in 1 million people and causes them to sleep for extended periods of time. Known as Sleeping Beauty Syndrome, those afflicted have slept for days, weeks, and even months. It's unknown what causes the illness, and there is no cure. Today, we'll discuss the illness and some of the people who suffer from it, what is being used to treat them, and what sufferers consider to be the worst part of their illness, and we answer the two questions on your mind: How do they eat, and how do they use the restroom? Follow us on YouTube, TikTok, Facebook, Instagram, Twitter, and Threads @GreetingsTAC, email us at GreetingsTAC@gmail.com, or leave us a voicemail at 915-317-6669 if you have a story to share with us. If you like the show, leave us a review, tell a friend, and subscribe!
Trigger warning: the second case involves sexual violence and violence against children. Does "life" really mean a whole life? Not quite, in Germany. Even those who receive a life sentence in court have the chance to be released after 15 years and to return to a life in freedom. But there are crimes so abhorrent that the perpetrators forfeit that chance. Goodbye Hotel Mama, hello first apartment. Moritz can hardly wait. Together with Julia, he is about to move into his own four walls for the first time. The two get plenty of helping hands with the renovations. While Julia and the others have already called it a day late on October 2, 2018, Moritz keeps working in the apartment, not knowing that he will never move in. Where are Noah and Lina? That is the question occupying the police. The 11-year-old and his sister, two years younger, did not come home from playing. A large contingent of police officers combs every square meter of their town, hoping to find the children alive. But it quickly becomes clear to the officers that they are dealing not only with a terrible crime, but also with dark fantasies. This episode of “Mordlust - Verbrechen und ihre Hintergründe” is about the “besondere Schwere der Schuld” (particular gravity of guilt), a finding that judges can attach to a life sentence, with the consequence that early release after 15 years is practically ruled out. Among other things, we explain which criteria apply to the particular gravity of guilt, how much latitude courts have in determining it, and why some legal scholars view the concept critically. Expert in this episode: Dr. Kai-Daniel Weil, criminal defense attorney at the law firm Abel & Kollegen **Chapters** 1. Case “Moritz” 2. Aha: The particular gravity of guilt 3. 
Overall assessment and figures 4. Case “Noah and Lina” 5. Aha: The particular gravity of guilt in juvenile criminal law 6. Suspension of the remaining sentence: procedure and criteria 7. Criticism and reform proposals **Credit** Producers/hosts: Paulina Krasa, Laura Wohlers Research: Paulina Krasa, Laura Wohlers, Jennifer Fahrenholz Editing: Pauline Korb Legal review: Abel und Kollegen **Show notes** *Case “Moritz”* Verdict: Landgericht Aachen, 97 KLs 401 Js 561/18 1/19 Aachener Zeitung: “Die vergebliche Suche nach innerem Frieden”: https://bitly.ws/3hJyf “Tötungsdelikt in Waldfeucht”: press conference of November 16, 2018: https://bitly.ws/3hJyh *Case “Noah and Lina”* Spiegel: “Mord an Tom und Sonja. Angeklagter schildert grausige Details”: https://bitly.ws/3hJyi Süddeutsche Zeitung: “Notizen vom Kindermordprozess. Ein Blick in den Abgrund”: https://bitly.ws/3hJyk FAZ: “Zweifacher Kindermord. Polizei fahndet nach zwei Tatverdächtigen”: https://bitly.ws/3hJyn *Discussion* Kriminologische Leitstelle: “Die Vollstreckung lebenslanger Freiheitsstrafen. Dauer und Gründe der Beendigung im Jahr 2021”: https://bitly.ws/3hJyp Monatsschrift für Kriminologie und Strafrechtsreform: “Die Umstände von Gewicht bei der Feststellung der besonderen Schwere der Schuld. Eine Analyse von Mordurteilen im Zeitraum 2000 - 2020”: https://bitly.ws/3hJyr Bundesgerichtshof: 5 StR 524/15, judgment of June 22, 2016 Strafvollzugsanstalt.org: “Begriffserklärung: Lebenslang, besondere Schwere der Schuld & Sicherheitsverwahrung”: https://bitly.ws/3hJyu Policy paper of the Strafverteidigervereinigungen: “Abschaffung der lebenslangen Freiheitsstrafe”: https://bitly.ws/3hJyx **Episode partners** Want to learn more about our advertising partners? You'll find all info & discounts here: https://linktr.ee/Mordlust
One moment, hundreds of visitors are marveling at them; the next, they are gone. We are talking about valuable artworks displayed in well-secured museums and galleries, which attract not only art lovers but sometimes criminals as well. As night watchman Josef makes his final round through Berlin's Bode Museum in March 2017, an uneasy feeling creeps over him. It is as if Josef senses that he is not alone in the museum that night. Three masked figures carrying tools have their eyes on a large, golden treasure. Knut is still a child when five valuable paintings disappear from the Schloss Friedenstein gallery in his hometown of Gotha. Everyone talks about it. No one knows that, years later, it will be Knut who strikes a risky deal to bring the artworks back. The schemes of art thieves usually fall somewhere between brilliant heist and sheer luck. In this episode of “Mordlust – Verbrechen und ihre Hintergründe” we tell those stories. But we also talk about art crime in a broader sense: cunning forgeries, for example, or art that does not belong in German museums at all. 
The interviewee in this episode is René Allonge, First Chief Detective Inspector at the Landeskriminalamt Berlin, specialist unit for art crime **Credit** Producers/hosts: Paulina Krasa, Laura Wohlers Editorial: Paulina Krasa, Laura Wohlers, Isabel Mayer Editing: Henk Heuer Legal review: Abel und Kollegen **Case gold coin** Verdict, Landgericht Berlin: (509 KLs) 233 Js 1601/17 (41/18) Spiegel TV: Im Verhör: Der Millionen-Coup mit der Goldmünze: https://bitly.ws/3fEcu Spiegel TV: Im Verhör: Die Großfamilie Rammo: https://bitly.ws/3fEcq Taz: Prozess um gestohlene Goldmünze: Diebstahl leicht gemacht: https://bitly.ws/3fEcm SZ: Münzdiebstahl in Berlin: Alles hängt am Gold: https://bitly.ws/3fEch **Case Gotha** NDR Kultur: Kunstverbrechen - True Crime meets Kultur: Der größte Kunstdiebstahl der DDR: https://bitly.ws/3fEc3 Spiegel: Rekonstruktion des größten DDR-Kunstraubs: Wie Lokführer Rudi an die alten Meister kam: https://bitly.ws/3fEbV MDR: Echt: Der Coup von Gotha: https://bitly.ws/3fEbL Podcast “Lost Art Gotha”: Der große Unbekannte: https://bitly.ws/3fEbI Ernst von Siemens Kunststiftung: Gotha-Krimi: https://bitly.ws/3fEbD **Discussion** SWR: Die Justizreporter:innen: Kunst und Recht – über Banksy, Berghain und Beltracchi: https://bitly.ws/3fEbw Deutschlandfunk: Tatort Kunst: Die zwei Kisten in Prag: https://bitly.ws/3fEbv Stern: “Artnapping”: Millionendeals mit Meisterwerken: https://bitly.ws/3fD5D ORF: Zehn Jahre Diebstahl der Saliera: https://bitly.ws/3fD5x National Geographic: Raubkunst in Deutschland: 7 Schätze, die Deutschland nicht gehören: https://bitly.ws/3fD5t **Episode partners** Want to learn more about our advertising partners? You'll find all info & discounts here: https://linktr.ee/Mordlust
The Story of STARCRUNCH RECORDS (Miami) with CHRIS LELUGAS
Chris tells the story of the Miami record label Starcrunch Records. Starcrunch was started in the mid-1990s by Chris Lelugas and the late Mari Giordani. Chris takes us through the label's early beginnings, the origins of the name, memories of each release, the local scene of the era, Churchill's, plus his time owning Yesterday & Today Records, DJing at WVUM, and so much more! [Recorded: 02/22/24]☞ Follow Florida Sound Archive on Instagram! @floridasoundarchive☞ Please SUBSCRIBE if you enjoy the content
Music:
Los Canadians - Never Can (1996)
Drug Czars - Cry Miami (1996)
Related Episodes:
The Story of Kreamy 'Lectric Santa (KLS) with Robert Price & Priya Ray - https://youtu.be/Fzo_by24Agc?si=i8LeIzWs3zo8iw6l
The Story of JEFF HODAPP (Roach Motel, Drug Czars, Trapped By Mormons, Morbid Opera, Dust Traps) - https://youtu.be/WFl1-3CzzLg?si=o6tOsgRuGbceHs7B
The Story of CHUCK LOOSE (The Crumbs, Drug Czar, The Heatseekers, Chickenhead, Iron Forge Press) - https://youtu.be/_Ieek4sSdkw?si=rAog5psFMvIbehYv
Jovan Zdravković, former Vojvodina basketball player, now a successful entrepreneur and owner of Java Coffee and the fashion brand Gary Tulip, is the new guest of the Jao Mile podcast. https://www.garytulip.com https://www.jaomile.com He began his basketball career in Novi Sad, then continued via Zastava from Kragujevac and Ergonom from Niš, followed by Portugal, Poland, the Netherlands, and Bosnia and Herzegovina.
00:00:00 Start 00:00:40 Help to grow 00:00:50 Gary Tulip 00:02:30 French bulldogs 00:05:00 Coffee-shop owner 00:18:35 Gary Tulip 00:24:30 EL 00:34:05 ABA league 00:36:15 KLS 00:40:10 NBA 00:44:40 ALL STAR banter 00:48:22 Dončić 00:52:36 USA experience 00:56:00 Growing up 01:01:10 National team 01:04:54 Vlada Vidačić 01:06:46 Voša seniors 01:13:10 Zastava KG 01:14:20 Wrong choice 01:17:23 Let's go Niiiiiiš 01:22:24 Otašević 01:24:50 Portugal 01:38:20 Poland 01:49:25 Netherlands 02:01:35 Vojvodina 02:07:30 Borac BL 02:14:45 Toughest opponent 02:16:30 Hardest moment of his career 02:18:40 End of career and plans for after 02:22:50 Tattoo 02:25:00 Advice for young players 02:27:00 Benefits of basketball 02:29:30 Top 5 teammates
Thumbnail designer: https://instagram.com/design33_mk?igs...
Follow us on social media!
Instagram / jaomile_podcast Facebook
Ivan Žigeranović, Radivoj Korać Cup winner and Polish league champion, is the new guest of the Jao Mile podcast. You can support our work with a donation via the PayPal app or with a purchase in the shop at
In 2023 we did a few Fundamentals episodes covering Benchmarks 101, Datasets 101, FlashAttention, and Transformers Math, and it turns out those were some of your evergreen favorites! So we are experimenting with more educational/survey content in the mix alongside our regular founder and event coverage. Pls request more!
We have a new calendar for events; join to be notified of upcoming things in 2024!
Today we visit the shoggoth mask factory: how do transformer models go from trawling a deeply learned latent space for next-token prediction to a helpful, honest, harmless chat assistant? Our guest “lecturer” today is Nathan Lambert; you might know him from his prolific online writing on Interconnects and Twitter, or from his previous work leading RLHF at HuggingFace and now at the Allen Institute for AI (AI2), which recently released the open source GPT3.5-class Tulu 2 model, trained with DPO. He's widely considered one of the most knowledgeable people on RLHF and RLAIF. He recently gave an “RLHF 201” lecture at Stanford, so we invited him on the show to re-record it for everyone to enjoy! You can find the full slides here, which you can use as reference through this episode.
Full video with synced slides
For audio-only listeners, this episode comes with a slide presentation alongside our discussion. 
You can find it on our YouTube (like, subscribe, tell a friend, et al).
Theoretical foundations of RLHF
The foundations and assumptions behind RLHF go back all the way to Aristotle (and you can find guidance for further research in the slide below), but there are two key concepts that will be helpful in thinking through this topic and LLMs in general:
* Von Neumann–Morgenstern utility theorem: you can dive into the math here, but the TLDR is that when humans make decisions there's usually a “maximum utility” function that measures what the best decision would be; the fact that this function exists makes it possible for RLHF to model human preferences and decision-making.
* Bradley-Terry model: given two items A and B from a population, you can model the probability that A will be preferred to B (or vice versa). In our world, A and B are usually two outputs from an LLM (or, at the lowest level, the next token).
It turns out that from this minimal set of assumptions, you can build up the mathematical foundations supporting the modern RLHF paradigm!
The RLHF loop
One important point Nathan makes is that "for many tasks we want to solve, evaluation of outcomes is easier than producing the correct behavior". For example, it might be difficult for you to write a poem, but it's really easy to say whether you like or dislike a poem someone else wrote. Going back to the Bradley-Terry model we mentioned, the core idea behind RLHF is that when given two outputs from a model, you will be able to say which of the two you prefer, and we'll then re-encode that preference into the model. An important point that Nathan mentions is that when you use these preferences to change model behavior, "it doesn't mean that the model believes these things. It's just trained to prioritize these things". 
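The Bradley-Terry model mentioned above fits in a few lines of code. This is our own illustrative sketch (not from the lecture), with made-up scores standing in for reward-model logits:

```python
import math

def bradley_terry_prob(score_a: float, score_b: float) -> float:
    """Probability that item A is preferred over item B under the
    Bradley-Terry model. With scores expressed as log-strengths
    (e.g. reward-model logits), this is a sigmoid of the difference."""
    return 1.0 / (1.0 + math.exp(-(score_a - score_b)))

# Equal scores -> no preference either way.
assert abs(bradley_terry_prob(1.0, 1.0) - 0.5) < 1e-9

# A higher-scoring output is preferred with probability > 0.5.
print(bradley_terry_prob(2.0, 0.5))  # ~0.82
```

In the RLHF setting, A and B would be two completions of the same prompt, and the scores would come from a learned reward model rather than being hand-set as here.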
When you have a preference for a model not to return instructions on how to write a computer virus, for example, you're not erasing the weights that hold that knowledge; you're simply making it hard for that information to surface by prioritizing answers that don't return it. We'll talk more about this in our future Fine Tuning 101 episode as we break down how information is stored in models and how fine-tuning affects it.
At a high level, the loop looks something like this:
For many RLHF use cases today, we can assume the model we're training is already instruction-tuned for chat or whatever behavior the model is looking to achieve. In the "Reward Model & Other Infrastructure" box we have multiple pieces:
Reward + Preference Model
The reward model is trying to signal to the model how much it should change its behavior based on the human preference, subject to a KL constraint. The preference model itself scores the pairwise preferences from the same prompt (this worked better than scalar rewards). One way to think about it is that the reward model tells the model how big of a change this new preference should make in the behavior in absolute terms, while the preference model calculates how big of a difference there is between the two outputs in relative terms. A lot of this derives from John Schulman's work on PPO. We recommend watching him talk about it in the video above, and also Nathan's pseudocode distillation of the process:
Feedback Interfaces
Unlike the "thumbs up/down" buttons in ChatGPT, data annotation from labelers is much more thorough and has many axes of judgment. At a simple level, the LLM generates two outputs, A and B, for a given human conversation. It then asks the labeler to use a Likert scale to score which one they preferred, and by how much. Through the labeling process, there are many other ways to judge a generation as well. We then use all of this data to train a model from the preference pairs we have. 
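A minimal sketch of that pairwise training signal (our illustration, not code from the episode): the standard reward-model loss is the negative log of the Bradley-Terry probability that the chosen output outscores the rejected one.

```python
import math

def pairwise_preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Negative log-likelihood of a human preference under the
    Bradley-Terry model: -log(sigmoid(r_chosen - r_rejected)),
    computed in a numerically stable way for large |margin|."""
    margin = r_chosen - r_rejected
    if margin >= 0:
        return math.log1p(math.exp(-margin))
    return -margin + math.log1p(math.exp(margin))

# A perfectly undecided reward model pays log(2) per pair...
assert abs(pairwise_preference_loss(0.0, 0.0) - math.log(2)) < 1e-12
# ...the loss shrinks as the chosen output is scored further above the rejected one...
assert pairwise_preference_loss(2.0, 0.0) < pairwise_preference_loss(0.5, 0.0)
# ...and grows past log(2) when the model prefers the "wrong" output.
assert pairwise_preference_loss(0.0, 1.0) > math.log(2)
```

In a real pipeline `r_chosen` and `r_rejected` would be the reward model's scalar outputs for the two completions, and this loss would be minimized by gradient descent over batches of labeled pairs.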
We start from the base instruction-tuned model, and then run training in which the loss of our gradient descent is the difference between the good and the bad prompt.
Constitutional AI (RLAIF, model-as-judge)
As these models have gotten more sophisticated, people have started asking whether humans are really the best judges of harmfulness, bias, etc., especially at the current price of data labeling. Anthropic's "Constitutional AI" paper uses models to judge models. This is part of a broader "RLAIF" space: Reinforcement Learning from AI Feedback. By using a "constitution" that the model has to follow, you are able to generate fine-tuning data for a new model that will be RLHF'd on the constitution's principles. The RLHF'd model will then be able to judge outputs of other models to make sure that they follow its principles:
Emerging Research
RLHF is still a nascent field, and there are a lot of different research directions teams are taking; some of the newest and most promising / hyped ones:
* Rejection sampling / Best-of-N sampling: the core idea here is that rather than just scoring pairwise generations, you generate a lot more outputs (= more inference cost), score them all with your reward model, and then pick the top N results. LLaMA2 used this approach, amongst many others.
* Process reward models: in Chain of Thought generation, scoring each step in the chain and treating it like its own state rather than just scoring the full output. This is most effective in fields like math that inherently require step-by-step reasoning.
* Direct Preference Optimization (DPO): We covered DPO in our NeurIPS Best Papers recap, and Nathan has a whole blog post on this; DPO isn't technically RLHF, as it doesn't have the RL part, but it's the “GPU Poor” version of it. Mistral-Instruct was a DPO model, as are Intel's Neural Chat and StableLM Zephyr. 
Expect to see a lot more variants in 2024 given how “easy” this was.
* Superalignment: OpenAI launched research on weak-to-strong generalization, which we briefly discuss at the 1hr mark.
Note: Nathan also followed up this post with RLHF resources from his and peers' work.
Show Notes
* Full RLHF Slides
* Interconnects
* Retort (podcast)
* von Neumann-Morgenstern utility theorem
* Bradley-Terry model (pairwise preferences model)
* Constitutional AI
* Tamer (2008 paper by Bradley Knox and Peter Stone)
* Paul Christiano et al. RLHF paper
* InstructGPT
* Eureka by Jim Fan
* ByteDance / OpenAI lawsuit
* AlpacaEval
* MTBench
* TruthfulQA (evaluation tool)
* Self-Instruct Paper
* Open Assistant
* Louis Castricato
* Nazneen Rajani
* Tulu (DPO model from the Allen Institute)
Timestamps
* [00:00:00] Introductions and background on the lecture origins
* [00:05:17] History of RL and its applications
* [00:10:09] Intellectual history of RLHF
* [00:13:47] RLHF for decision-making and pre-deep RL vs deep RL
* [00:20:19] Initial papers and intuitions around RLHF
* [00:27:57] The three phases of RLHF
* [00:31:09] Overfitting issues
* [00:34:47] How preferences get defined
* [00:40:35] Ballpark on LLaMA2 costs
* [00:42:50] Synthetic data for training
* [00:47:25] Technical deep dive in the RLHF process
* [00:54:34] Projection / best event sampling
* [00:57:49] Constitutional AI
* [01:04:13] DPO
* [01:08:54] What's the Allen Institute for AI?
* [01:13:43] Benchmarks and models comparisons
Transcript
Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI. Swyx [00:00:15]: Hey, and today we have Dr. Nathan Lambert in the house. Welcome. Nathan [00:00:18]: Thanks guys. Swyx [00:00:19]: You didn't have to come too far. You got your PhD in Berkeley, and it seems like you've lived there most of the time in recent years. 
You worked on robotics and model-based reinforcement learning on your PhD, and you also interned at FAIR and DeepMind. You bootstrapped the RLHF team at Hugging Face, and you recently joined the Allen Institute as a research scientist. So that's your quick bio. What should people know about you that maybe is not super obvious about you on New LinkedIn?Nathan [00:00:43]: I stay sane in various insane sport and ultra-endurance sport activities that I do.Swyx [00:00:50]: What's an ultra-endurance sport activity?Nathan [00:00:52]: Long-distance trail running or gravel biking. Try to unplug sometimes, although it's harder these days. Yeah.Swyx [00:00:59]: Well, you know, just the Bay Area is just really good for that stuff, right?Nathan [00:01:02]: Oh, yeah. You can't beat it. I have a trailhead like 1.2 miles from my house, which is pretty unmatchable in any other urban area.Swyx [00:01:11]: Pretty excellent. You also have an incredible blog, Interconnects, which I'm a fan of. And I also just recently discovered that you have a new podcast, Retort.Nathan [00:01:20]: Yeah, we do. I've been writing for a while, and I feel like I've finally started to write things that are understandable and fun. After a few years lost in the wilderness, if you ask some of my friends that I made read the earlier blogs, they're like, oh, this is yikes, but it's coming along. And the podcast is with my friend Tom, and we just kind of like riff on what's actually happening on AI and not really do news recaps, but just what it all means and have a more critical perspective on the things that really are kind of funny, but still very serious happening in the world of machine learning.Swyx [00:01:52]: Yeah. Awesome. So let's talk about your work. What would you highlight as your greatest hits so far on Interconnects, at least?Nathan [00:01:59]: So the ones that are most popular are timely and or opinion pieces. 
So the first real breakout piece was in April, when I just wrote down the thing that everyone in AI was feeling, which is we're all feeling stressed, that we're going to get scooped, and that we're overworked, which is behind the curtain, what it feels like to work in AI. And then a similar one, which we might touch on later in this, was about my recent job search, which wasn't the first time I wrote a job search post. People always love that stuff. It's so open. I mean, it's easy for me to do in a way that it's very on-brand, and it's very helpful. I understand that until you've done it, it's hard to share this information. And then the other popular ones are various model training techniques or fine tuning. There's an early one on RLHF, which is, this stuff is all just like when I figure it out in my brain. So I wrote an article that's like how RLHF actually works, which is just the intuitions that I had put together in the summer about RLHF, and that did pretty well. And then I opportunistically wrote about QSTAR, which I hate that you have to do it, but it is pretty funny. From a literature perspective, I'm like, OpenAI publishes on work that is very related to mathematical reasoning. So it's like, oh, you just poke a little around what they've already published, and it seems pretty reasonable. But we don't know. They probably just got like a moderate bump on one of their benchmarks, and then everyone lost their minds. It doesn't really matter. Swyx [00:03:15]: You're like, this is why Sam Altman was fired. I don't know. Anyway, we're here to talk about RLHF 101. You did a presentation, and I think you expressed some desire to rerecord it. And that's why I reached out on Twitter saying, like, why not rerecord it with us, and then we can ask questions and talk about it. Yeah, sounds good. Nathan [00:03:30]: I try to do it every six or 12 months is my estimated cadence, just to refine the ways that I say things. 
And people will see that we don't know that much more, but we have a bit of better way of saying what we don't know.Swyx [00:03:43]: Awesome. We can dive right in. I don't know if there's any other topics that we want to lay out as groundwork.Alessio [00:03:48]: No, you have some awesome slides. So for people listening on podcast only, we're going to have the slides on our show notes, and then we're going to have a YouTube version where we run through everything together.Nathan [00:03:59]: Sounds good. Yeah. I think to start skipping a lot of the, like, what is a language model stuff, everyone knows that at this point. I think the quote from the Llama 2 paper is a great kind of tidbit on RLHF becoming like a real deal. There was some uncertainty earlier in the year about whether or not RLHF was really going to be important. I think it was not that surprising that it is. I mean, with recent models still using it, the signs were there, but the Llama 2 paper essentially reads like a bunch of NLP researchers that were skeptical and surprised. So the quote from the paper was, meanwhile, reinforcement learning known for its instability seemed a somewhat shadowy field for those in the NLP research community. However, reinforcement learning proved highly effective, particularly given its cost and time effectiveness. So you don't really know exactly what the costs and time that Meta is looking at, because they have a huge team and a pretty good amount of money here to release these Llama models. This is just the kind of thing that we're seeing now. I think any major company that wasn't doing RLHF is now realizing they have to have a team around this. At the same time, we don't have a lot of that in the open and research communities at the same scale. I think seeing that converge would be great, but it's still very early days. 
And the other thing on the slide is some of Anthropic's work, but everyone knows Anthropic is kind of the masters of this, and they have some of their own techniques that we're going to talk about later on, but that's kind of where we start. Alessio [00:05:17]: Can we do just a one-second RL version? So you come from a robotics background, which RL used to be, or maybe still is, state-of-the-art. And then now you're seeing a lot of LLM plus RL, so you have Jim Fan's Eureka, you have MPU, which we had on the podcast when they started with RL. Now they're doing RL plus LLMs. Yeah. Any thoughts there on how we got here? Maybe how the pendulum will keep swinging? Nathan [00:05:46]: I really think RL is about a framing of viewing the world through trial and error learning and feedback, and really just one that's focused on thinking about decision-making and inputs in the world and how inputs have reactions. And in that, a lot of people come from a lot of different backgrounds, whether it's physics, electrical engineering, mechanical engineering. There are obviously computer scientists, but compared to other fields of CS, I do think it's a much more diverse background of people. My background was in electrical engineering and doing robotics and things like that. It really just changes the worldview. I think that reinforcement learning as it was back then, so to say, is really different. You're looking at these toy problems and the numbers are totally different, and everyone went kind of zero to one at scaling these things up, but people like Jim Fan and other people that were... You saw this transition in the decision transformer papers, when people were trying to use transformers to do decision-making for things like offline RL, and I think that was kind of like the early days. But then once language models were so proven, it's like everyone is using this tool for their research. 
I think in the long run, it will still settle out, or RL will still be a field that people work on just because of these kind of fundamental things that I talked about. It's just viewing the whole problem formulation different than predicting text, and so there needs to be that separation. And the view of RL in language models is pretty contrived already, so it's not like we're doing real RL. I think the last slide that I have here is a way to make RLHF more like what people would think of with RL, so actually running things over time, but a weird lineage of tools that happen to get us to where we are, so that's why the name takes up so much space, but it could have gone a lot of different ways. Cool.Alessio [00:07:29]: We made it one slide before going on a tangent.Nathan [00:07:31]: Yeah, I mean, it's kind of related. This is a...Swyx [00:07:35]: Yeah, so we have a history of RL.Nathan [00:07:37]: Yeah, so to give the context, this paper really started because I have this more diverse background than some computer scientists, such as trying to understand what the difference of a cost function or a reward function and a preference function would be without going into all of the details. Costs are normally things that control theorists would work with in these kind of closed domains, and then reinforcement learning has always worked with rewards that's central to the formulation that we'll see, and then the idea was like, okay, we now are at preferences, and each step along the way there's kind of different assumptions that you're making. We'll get into these, and those assumptions are built on other fields of work. So that's what this slide is going to say, it's like RLHF, while directly building on tools from RL and language models, is really implicitly impacted and built on theories and philosophies spanning tons of human history. I think we cite Aristotle in this paper, which is fun. It's like going pre-BC, it's like 2,300 years old or something like that. 
So that's the reason to do this, I think. We kind of list some things in the paper about summarizing what different presumptions of RLHF could be. I think going through these is actually kind of funny. It's fun to talk about these, because they're kind of grab bags of things that you'll see return throughout this podcast that we're talking about it. The core thing of RLHF that, in order to be a believer in this, is that RL actually works. It's like, if you have a reward function, you can optimize it in some way and get a different performance out of it, and you could do this at scale, and you could do this in really complex environments, which is, I don't know how to do that in all the domains. I don't know how to exactly make chat GPT. So it's kind of, we'll overshadow everything. And then there's, go from something kind of obvious like that, and then you read the von Neumann-Morgenstern utility theorem, which is essentially an economic theory that says you can weight different probabilities of different people, which is a theoretical piece of work that is the foundation of utilitarianism, and trying to quantify preferences is crucial to doing any sort of RLHF. And if you look into this, all of these things, there's way more you could go into if you're interested in any of these. So this is kind of like grabbing a few random things, and then kind of similar to that is the Bradley-Terry model, which is the fancy name for the pairwise preferences that everyone is doing. And then all the things that are like, that Anthropic and OpenAI figured out that you can do, which is that you can aggregate preferences from a bunch of different people and different sources. And then when you actually do RLHF, you extract things from that data, and then you train a model that works somehow. 
And we don't know, there's a lot of complex links there, but if you want to be a believer in doing this at scale, these are the sorts of things that you have to accept as preconditions for doing RLHF. Yeah.Swyx [00:10:09]: You have a nice chart of like the sort of intellectual history of RLHF that we'll send people to refer to either in your paper or in the YouTube video for this podcast. But I like the other slide that you have on like the presumptions that you need to have for RLHF to work. You already mentioned some of those. Which one's underappreciated? Like, this is the first time I've come across the VNM Utility Theorem.Nathan [00:10:29]: Yeah, I know. This is what you get from working with people like my co-host on The Retort podcast, who is a sociologist by training. So he knows all these things and like who the philosophers are who founded these different things like utilitarianism. But there's a lot that goes into this. Like essentially there's even economic theories that like there's debate whether or not preferences exist at all. And there are different types of math you can use depending on whether or not you actually can model preferences at all. So it's pretty obvious that RLHF is built on the math that thinks that you can actually model any human preference. But this is the sort of thing that's been debated for a long time. So all the work that's here is like, and people hear about in their AI classes. So like Jeremy Bentham, like hedonic calculus and all these things like these are the side of work where people assume that preferences can be measured. And this is like, I don't really know, like, this is what I kind of go on a rant and I say that in RLHF calling things a preference model is a little annoying because there's no inductive bias of what a preference is. It's like if you were to learn a robotic system and you learned a dynamics model, like hopefully that actually mirrors the world in some way of the dynamics.
But with a preference model, it's like, oh my God, I don't know what this model, like I don't know what ChatGPT encodes as any sort of preference or what I would want it to be in a fair way. Anthropic has done more work on trying to write these things down. But even like if you look at Claude's constitution, like that doesn't mean the model believes these things. It's just trained to prioritize these things. And that's kind of what the later points I'm looking at, like what RLHF is doing and if it's actually like a repeatable process in the data and in the training, that's just unknown. And we have a long way to go before we understand what this is and the link between preference data and any notion of like writing down a specific value.Alessio [00:12:05]: Does the disconnect between sociology work and computer science work already exist, or is it like a recent cross-contamination? Because when we had Tri Dao on the podcast, he said FlashAttention came to be because at Hazy they have so much overlap between systems engineers and deep learning engineers. Is it the same in this field?Nathan [00:12:26]: So I've gone to a couple of workshops for the populations of people who you'd want to include in this kind of work. I think the reason why it's not really talked about is just because the RLHF techniques that people use were built in labs like OpenAI and DeepMind where there are some of these people. These places do a pretty good job of trying to get these people in the door when you compare them to like normal startups. But like they're not bringing in academics from economics, like social choice theory. There's just too much. Like the criticism of this paper that this is based on is like, oh, you're missing these things in RL or at least this decade of RL and it's like it would literally be bigger than the Sutton and Barto book if you were to include everyone. So it's really hard to include everyone in a principled manner when you're designing this.
It's just a good way to understand and improve the communication of what RLHF is and like what is a good reward model for society. It really probably comes down to what an individual wants and it'll probably motivate models to move more in that direction and just be a little bit better about the communication, which is a recurring theme in kind of my work: I just get frustrated when people say things that don't really make sense, especially when it's going to manipulate individuals' values or manipulate the general view of AI or anything like this. So that's kind of why RLHF is so interesting. It's very vague in what it's actually doing while the problem specification is very general.Swyx [00:13:42]: Shall we go to the, I guess, the diagram here on the reinforcement learning basics? Yeah.Nathan [00:13:47]: So reinforcement learning, I kind of mentioned this, it's a trial and error type of system. The diagram in the slides is really this classic thing where you have an agent interacting with an environment. So it's kind of this agent has some input to the environment, which is called the action. The environment returns a state and a reward and that repeats over time and the agent learns based on these states and these rewards that it's seeing and it should learn a policy that makes the rewards go up. That seems pretty simple, but then if you try to mentally map what this looks like in language, the language models don't make this easy. I think with the language model, it's very hard to define what an environment is. So if the language model is the policy and it's generating, it's like the environment should be a human, but setting up the infrastructure to take tens of thousands of prompts and generate them and then show them to a human and collect the human responses and then shove that into your training architecture is very far away from working. So we don't really have an environment.
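The classic agent-environment loop described here can be sketched in a few lines. Everything in this snippet (the toy environment, the hand-written policy) is a hypothetical stand-in for illustration, not any real RL library:

```python
# Minimal sketch of the classic RL loop: the agent acts, the environment
# returns a state and a reward, and this repeats over time. Toy example only.

class ToyEnvironment:
    """Hypothetical environment: reward 1.0 whenever the state reaches 3."""
    def __init__(self):
        self.state = 0

    def step(self, action: int):
        self.state += action            # toy dynamics
        reward = 1.0 if self.state == 3 else 0.0
        return self.state, reward

def policy(state: int) -> int:
    # A fixed, hand-written policy; a real agent would learn this
    # from the (state, reward) pairs it observes.
    return 1 if state < 3 else -1

env = ToyEnvironment()
state, total_reward = 0, 0.0
for _ in range(5):                      # the loop that repeats over time
    action = policy(state)              # agent -> environment
    state, reward = env.step(action)    # environment -> agent
    total_reward += reward
```

The point being made is that RLHF mostly lacks this loop: there is no persistent environment state to iterate on, which is why the bandit framing comes up.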
We just have a reward model that returns a reward and the state doesn't really exist when you look at it like an RL problem. What happens is the state is a prompt and then you do a completion and then you throw it away and you grab a new prompt. Really, as an RL researcher, you would think of this as being like you take a state, you get some completion from it and then you look at what that is and you keep kind of iterating on it, and all of that isn't here, which is why you'll hear RLHF referred to as a bandits problem, which is kind of like you choose one action and then you watch the dynamics play out. There's many more debates that you can have in this. If you get the right RL people in the room, you can debate whether this is even RL when you zoom into what RLHF is doing.Alessio [00:15:22]: Does this change as you think about a chain of thought reasoning and things like that? Like does the state become part of the chain that you're going through?Nathan [00:15:29]: There's work that I've mentioned on one slide called process reward models that essentially rewards each step in the chain of thought reasoning. It doesn't really give the part of interaction, but it does make it a little bit more fine grained where you can think about it as at least having many states from your initial state. That formulation I don't think people have fully settled on. I think there's a bunch of great work out there, like even OpenAI is releasing a lot of this, and Let's Verify Step by Step is their pretty great paper on the matter. I think in the next year that'll probably get made more concrete by the community on whether chain of thought reasoning is more like RL; we can talk about that more later. That's a kind of a more advanced topic than we probably should spend all the time on.Swyx [00:16:13]: RLHF for decision making.
You have a slide here that compares pre-deep RL versus deep RL.Nathan [00:16:19]: This is getting into the history of things, which is showing that the work that people are using now really came from well outside of NLP and it came before deep learning was big. First up is this paper, TAMER, which is from 2008. Some names that are still really relevant in kind of human-centric RL: Bradley Knox and Peter Stone. If you have an agent take an action, you would just have a human give a score from zero to one as a reward rather than having a reward function. And then with that classifier, you can do something with a policy that learns to take actions to maximize that reward. It's a pretty simple setup. It works in simple domains. And then the reason why this is interesting is you compare it to the paper that everyone knows, which is this Paul Christiano et al. Deep Reinforcement Learning from Human Preferences paper, which is where they showed that learning from human preferences, you can solve like the basic RL tasks at the time. So various control problems and simulation and this kind of like human preferences approach had higher rewards in some environments than if you just threw RL at the environment that returned a reward. So the preferences thing was you took two trajectories. So in this case, it was like complete trajectories of the agent and the human was labeling which one is better. You can see how this kind of comes to be like the pairwise preferences that are used today that we'll talk about. And there's also a really kind of interesting nugget that is the trajectory that the humans were labeling over has a lot more information than the RL algorithm would see if you just had one state, which is kind of why people think that it's why the performance in this paper was so strong. But I still think that it's surprising that there isn't more RL work of this style happening now. This paper is in 2017.
So it's like six years later and I haven't seen things that are exactly similar, but it's a great paper to understand where stuff that's happening now kind of came from.Swyx [00:17:58]: Just on the Christiano paper, you mentioned the performance being strong. I don't remember what results should I have in mind when I think about that paper?Nathan [00:18:04]: It's mostly like if you think about an RL learning curve, which is like on the X axis, you have environment interactions on the Y axis, you have performance. You can think about different like ablation studies of between algorithms. So I think they use like A2C, which I don't even remember what that stands for as their baseline. But if you do the human preference version on a bunch of environments, like the human preference labels, the agent was able to learn faster than if it just learned from the signal from the environment, which means like it's happening because the reward model has more information than the agent would. But like the fact that it can do better, I was like, that's pretty surprising to me because RL algorithms are pretty sensitive. So I was like, okay.Swyx [00:18:41]: It's just one thing I do want to establish as a baseline for our listeners. We are updating all the weights. In some sense, the next token prediction task of training a language model is a form of reinforcement learning. Except that it's not from human feedback. It's just self-supervised learning from a general corpus. There's one distinction which I love, which is that you can actually give negative feedback. Whereas in a general sort of pre-training situation, you cannot. And maybe like the order of magnitude of feedback, like the Likert scale that you're going to talk about, that actually just gives more signal than a typical training process would do in a language model setting. 
Yeah.Nathan [00:19:15]: I don't think I'm the right person to comment exactly, but like you can make analogies that reinforcement learning is self-supervised learning as well. Like there are a lot of things that will point to that. I don't know whether or not it's a richer signal. I think that could be seen in the results. It's a good thing for people to look into more. Since reinforcement learning uses so much less compute, it is a richer signal in terms of its impact. Because if they could do what RLHF is doing at pre-training, they would, but they don't know how to have that effect in like a stable manner. Otherwise everyone would do it.Swyx [00:19:45]: On a practical basis, as someone fine-tuning models, I have often wished for negative fine-tuning, which pretty much doesn't exist in OpenAI land. And it's not the default setup in open-source land.Nathan [00:19:57]: How does this work in like diffusion models and stuff? Because you can give negative prompts to something like Stable Diffusion or whatever. It's for guidance.Swyx [00:20:04]: That's for CLIP guidance.Nathan [00:20:05]: Is that just from like how they prompt it then? I'm just wondering if we could do something similar. It's another tangent.Swyx [00:20:10]: I do want to sort of spell that out for people in case they haven't made the connection between RLHF and the rest of the training process. They might have some familiarity with it.Nathan [00:20:19]: Yeah. The upcoming slides can really dig into this, which is like, in 2018 there was a position paper from a bunch of the same authors from the Christiano paper and from the OpenAI work that everyone knows, on what a preference reward model could do to solve alignment for agents. That's kind of based on two assumptions. The first assumption is that we can learn user intentions to a sufficiently high accuracy. That doesn't land with me because I don't know what that means.
But the second one is pretty telling in the context of RLHF, which is for many tasks we want to solve, evaluation of outcomes is easier than producing the correct behavior. And this is the whole thing. It's like we can compare two poems that the model generates and it can be viewed as liking a positive example, or it could be viewed as really disliking a negative example. And that's what I think a lot of people are doing in the harm space: a harmful response to a language model, whether or not you agree with the company's definition of harms, is a really bad negative example, and they downweight them by preferring something more benign in the RLHF process, among other ways of dealing with safety. So that's a good way of saying it's like this is core, this kind of like comparison and positive or negative example is core to all of the RLHF work that has continued.Swyx [00:21:29]: People often say, I don't know what I want, but I'll know when I see it. This is that expressed in reinforcement learning tools.Nathan [00:21:35]: Yeah, it is. Yeah, it is. That's what everyone's doing in the preference modeling stage that we'll get to. Yeah. Yeah. And you can see there are more papers. This is really just to have all the links for people that go deeper. There's a Ziegler et al. paper in 2019, which shows that you can do this RLHF process on language models. This familiar diagram starts to emerge in 2019, and it's just to show that this goes really far back. I think we can kind of breeze through some of these. And then 2020 is the first OpenAI experiment that I think caught people's eyes, which is this learning to summarize experiment. It has this three-step process that we'll go into more when I kind of go into the main concepts. But this is like the first time you see this diagram that they reuse with InstructGPT, they reuse with ChatGPT.
And the types of examples that they would have, I don't think I need to read these exactly, but one that I have read a whole bunch of times is like, they took these prompts from Reddit that was like, explain like I'm five or get career advice, and people really pour their heart and soul into these. So these are like multi-paragraph pieces of writing. And then they essentially do comparisons between a vanilla language model, like I think it was either GPT-2 or GPT-3, I don't always get the exact years.Swyx [00:22:42]: 3 was early 2020. So that's about right.Nathan [00:22:45]: Yeah. So this is probably done with GPT-2. It doesn't really matter. But the language model does normal things when you do few shot, which is like it repeats itself. It doesn't have nice text. And what they did is that this was the first time where the language model would generate like pretty nice text from an output. It was restricted to the summarization domain. But I think that I guess this is where I wish I was paying attention more because I would see the paper, but I didn't know to read the language model outputs and kind of understand this qualitative sense of the models very well then. Because you look at the plots in the papers, these Learning to Summarize and InstructGPT papers have incredibly pretty plots, just like nicely separated lines with error bars, and they're like supervised fine-tuning works, the RL step works. But if you were early to see like how different the language that was written by these models was, I think you could have been early to like things like ChatGPT and knowing RLHF would matter. And now I think the good people know to chat with language models, but not even everyone does this. Like people are still looking at numbers. And I think OpenAI probably figured it out when they were doing this, how important that could be. And then they had years to kind of chisel away at that and that's why they're doing so well now.
Yeah.Swyx [00:23:56]: I mean, arguably, you know, it's well known that ChatGPT was kind of an accident that they didn't think it would be that big of a deal. Yeah.Nathan [00:24:02]: So maybe they didn't. Maybe they didn't, but they were getting the proxy that they needed.Swyx [00:24:06]: I've heard off the record from other labs that it was in the air. If OpenAI didn't do it, someone else would have done it. So you've mentioned a couple of other papers that are very seminal to this period. And I love how you say way back when in referring to 2019.Nathan [00:24:19]: It feels like it in my life.Swyx [00:24:21]: So how much should people understand the relationship between RLHF, instruction tuning, PPO, KL divergence, anything like that? Like how would you construct the level of knowledge that people should dive into? What should people know at the high level? And then if people want to dive in deeper, where do they go? Is instruct tuning important here or is that part of the overall process towards modern RLHF?Nathan [00:24:44]: I think for most people, instruction tuning is probably still more important in their day to day life. I think instruction tuning works very well. You can write samples by hand that make sense. You can get the model to learn from them. You could do this with very low compute. It's easy to do almost in like no code solutions at this point. And the loss function is really straightforward. And then if you're interested in RLHF, you can kind of learn from it from a different perspective, which is like how the instruction tuning distribution makes it easier for your RLHF model to learn. There's a lot of details depending on your preference data, if it's close to your instruction model or not, if that matters. But that's really at the RLHF stage. So I think it's nice to segment and just kind of understand what your level of investment and goals are. I think instruction tuning still can do most of what you want to do. 
And it's like, if you want to think about RLHF, at least before DPO really had taken off at all, it would be like, do you want to have a team of at least like five people if you're really thinking about doing RLHF? I think DPO makes it a little bit easier, but that's still really limited to kind of one data set that everyone's using at this point. Like everyone's using this UltraFeedback data set and it boosts AlpacaEval, MT-Bench, TruthfulQA and like the qualitative model a bit. We don't really know why. It's like, it might just be a data set combined with the method, but you've got to be ready for a bumpy ride if you're wanting to try to do RLHF. I don't really recommend most startups to do it unless it's like going to provide them a clear competitive advantage in their kind of niche, because you're not going to make your model better than OpenAI's ChatGPT or anything like that. You've got to accept that there's some exploration there and you might get a vein of benefit in your specific domain, but I'm still like, oh, be careful going into the RLHF can of worms. You probably don't need to.Swyx [00:26:27]: Okay. So there's a bit of a time skip in what you mentioned. DPO is like a couple months old, so we'll leave that towards the end. I think the main result that I think most people talk about at this stage, we're talking about September 2020 and then going into, I guess maybe last year, was Vicuña as one of the more interesting applications of instruction tuning that pushed LLaMA 1 from, let's say a GPT-3-ish model to a GPT-3.5 model in pure open source with not a lot of resources. I think, I mean, they said something like, you know, they used like under $100 to make...Nathan [00:26:58]: this. Yeah. Like instruction tuning can really go a long way. I think the claims of ChatGPT level are long overblown in most of the things in open source.
I think it's not to say, like Vicuña was a huge step and it's just kind of showing that instruction tuning with the right data will completely change what it feels like to talk with your model. Yeah.Swyx [00:27:19]: From text completion to actually chatting back and forth. Yeah. Yeah.Nathan [00:27:23]: Instruction tuning can be multi-turn. Just having a little bit of data that's like a couple of turns can go a really long way. That was like the story of the whole first part of the year is like people would be surprised by how far you can take instruction tuning on a small model. I think the things that people see now is like the small models don't really handle nuance as well and they could be more repetitive even if they have really good instruction tuning. But if you take that kind of 7 to 70 billion parameter jump, the instruction tuning at the bigger model gives you robustness; little things make more sense. So that's still just with instruction tuning and scale more than anything else.Swyx [00:27:56]: Excellent. Shall we go to technical overview?Nathan [00:27:58]: Yeah. This is kind of where we go through my own version of this like three phase process. You can talk about instruction tuning, which we've talked about a lot. It's funny because all these things, instruction tuning has the fewest slides, even though it's the most practical thing for most people. We could save the debate for like if the big labs still do instruction tuning for later, but that's a coming wave for people. And then like preference data and training and then kind of like what does reinforcement learning optimization actually mean? We talk about these sequentially because you really have to be able to do each of them to be able to do the next one. You need to be able to have a model that's chatty or helpful instruction following. Every company has their own word that they like to assign to what instructions mean.
And then once you have that, you can collect preference data and do some sort of optimization.Swyx [00:28:39]: When you say word, you mean like angle bracket inst or do you mean something else?Nathan [00:28:42]: Oh, I don't even know what inst means, but just saying like they use their adjective that they like. I think for Anthropic, steerable is another one.Swyx [00:28:51]: Just the way they describe it. Yeah.Nathan [00:28:53]: So like instruction tuning, we've covered most of this is really about like you should try to adapt your models to specific needs. It makes models that were only okay, extremely comprehensible. A lot of the times it's where you start to get things like chat templates. So if you want to do system prompts, if you want to ask your model, like act like a pirate, that's one of the ones I always do, which is always funny, but like whatever you like act like a chef, like anything, this is where those types of things that people really know in language models start to get applied. So it's good as a kind of starting point because this chat template is used in RLHF and all of these things down the line, but it's a basic pointer. It's like, once you see this with instruction tuning, you really know it, which is like you take things like Stack Overflow where you have a question and an answer. You format that data really nicely. There's much more tricky things that people do, but I still think the vast majority of it is question answer. Please explain this topic to me, generate this thing for me. That hasn't changed that much this year. I think people have just gotten better at scaling up the data that they need. Yeah, this is where this talk will kind of take a whole left turn into more technical detail land. I put a slide with the RLHF objective, which I think is good for people to know. I've started going back to this more, just kind of understand what is trying to happen here and what type of math people could do.
I think because of this algorithm, we've mentioned this, it's in the air, direct preference optimization, but everything kind of comes from an equation of trying to learn a policy that maximizes the reward. The reward is some learned metric. A lot can be said about what the reward should be subject to some constraint. The most popular constraint is the KL constraint, which is just a distributional distance. Essentially in language models, that means if you have a completion from your instruction or RLHF model, you can compare that completion to a base model. And looking at the log probs from the model, which are essentially how likely each token is, you can see a rough calculation of the distance between these two models, just as a scalar number. I think what that actually looks like in code, you can look at it. It'd be like a sum of log probs that you get right from the model. It'll look much simpler than it sounds, but it is just to make the optimization kind of stay on track. Make sure it doesn't overfit to the RLHF data. Because we have so little data in RLHF, overfitting is really something that could happen. I think it'll fit to specific features that labelers like to see, that the model likes to generate, punctuation, weird tokens like calculator tokens. It could overfit to anything if it's in the data a lot and it happens to be in a specific format. And the KL constraint prevents that. There's not that much documented work on that, but there's a lot of people that know if you take that away, it just doesn't work at all. I think it's something that people don't focus on too much. But the objective, as I said, it's just kind of, you optimize the reward. The reward is where the human part of this comes in. We'll talk about that next. And then subject to a constraint, don't change the model too much. The real questions are, how do you implement the reward? And then how do you make the reward go up in a meaningful way?
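As a rough illustration of that objective, here is a minimal sketch of the KL term computed as a sum of per-token log-prob differences between the RLHF policy and the base model. The function names, the plain-list log-probs, and the beta weight are all hypothetical choices for illustration:

```python
# Sketch of the RLHF objective's constraint term: the reward is penalized by
# beta times an estimate of KL(policy || reference), computed from the
# per-token log probabilities each model assigns to the same completion.

def kl_penalty(policy_logprobs, ref_logprobs, beta=0.1):
    """Sum of per-token log-prob differences, scaled by beta."""
    kl_estimate = sum(p - r for p, r in zip(policy_logprobs, ref_logprobs))
    return beta * kl_estimate

def penalized_reward(reward, policy_logprobs, ref_logprobs, beta=0.1):
    # "Optimize the reward, subject to: don't change the model too much."
    return reward - kl_penalty(policy_logprobs, ref_logprobs, beta)
```

If the policy assigns its completion much higher log-probs than the base model does, the penalty grows and pulls the effective reward down, which is what keeps the optimization from drifting too far from the base model.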
So like a preference model, the task is kind of to design a human reward. I think the equation that most of the stuff is based on right now is something called a Bradley-Terry model, which is like a pairwise preference model where you compare two completions and you say which one you like better. I'll show an interface that Anthropic uses here. And the Bradley-Terry model is really a fancy probability between two selections. And what's happening in the math is that you're looking at the probability that the chosen completion, the one you like better, is actually the better completion over the rejected completion. And what these preference models do is they assume this probability is correlated to reward. So if you just sample from this probability, it'll give you a scalar. And then you use that reward later on to signify what piece of text is better. I'm kind of inclined to breeze through the math stuff because otherwise, it's going to be not as good to listen to.Alessio [00:32:49]: I think people want to hear it. I think there's a lot of higher level explanations out there. Yeah.Nathan [00:32:55]: So the real thing is you need to assign a scalar reward of how good a response is. And that's not necessarily that easy to understand. Because if we take back to one of the first works, I mentioned this TAMER thing for decision making. People tried that with language models, which is if you have a prompt and a completion and you just have someone rate it from 0 to 10, could you then train a reward model on all of these completions in 0 to 10 ratings and see if you can get ChatGPT with that? And the answer is really kind of no. Like a lot of people tried that. It didn't really work. And then that's why they tried this pairwise preference thing. And it happened to work. And this Bradley-Terry model comes from the 1950s. It's from these fields that I was mentioning earlier. And it's wild how much this happens. I mean, this screenshot I have in the slides is from the DPO paper.
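The Bradley-Terry setup described here can be sketched directly. This is a minimal, from-scratch version of the pairwise loss a reward model could be trained with, assuming the model already produces scalar rewards; the function names are made up for illustration:

```python
import math

def bradley_terry_prob(r_chosen: float, r_rejected: float) -> float:
    """P(chosen beats rejected) = sigmoid of the reward difference."""
    return 1.0 / (1.0 + math.exp(-(r_chosen - r_rejected)))

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    # Training maximizes the probability of the human-chosen completion,
    # i.e. minimizes its negative log-likelihood.
    return -math.log(bradley_terry_prob(r_chosen, r_rejected))
```

Equal rewards give probability 0.5 (loss log 2); the more the reward model favors the chosen completion, the smaller the loss.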
I think that screenshot might be from the appendix. But it's still really around in the literature of what people are doing for RLHF.Alessio [00:33:45]: Yeah.Nathan [00:33:45]: So it's a fun one to know.Swyx [00:33:46]: I'll point out one presumption that this heavily relies on. You mentioned this as part of your six presumptions that we covered earlier, which is that you can aggregate these preferences. This is not exactly true among all humans, right? I have a preference for one thing. You have a preference for a different thing. And actually coming from economics, you mentioned economics earlier. There's a theorem or a name for this called Arrow's impossibility theorem, which I'm sure you've come across.Nathan [00:34:07]: It's one of the many kind of things we throw around in the paper.Swyx [00:34:10]: Right. Do we just ignore it?Nathan [00:34:14]: We just, yeah, just aggregate. Yeah. I think the reason this really is done on a deep level is that you're not actually trying to model any contestable preference in this. You're not trying to go into things that are controversial or anything. It's really the notion of preference is trying to stay around correctness and style rather than any meaningful notion of preference. Because otherwise these companies, they don't want to do this at all. I think that's just how it is. And it's like, if you look at what people actually do. So I have a bunch of slides on the feedback interface. And they all publish this.Swyx [00:34:43]: It's always at the appendices of every paper.Nathan [00:34:47]: There's something later on in this talk, which is like, but it's good to mention. And this is when you're doing this preference collection, you write out a very long document of instructions to people that are collecting this data. And it's like, this is the hierarchy of what we want to prioritize. Things like factuality, helpfulness, honesty, harmlessness. These are all different things.
Every company will rank these in different ways, provide extensive examples. It's like, if you see these two answers, you should select this one and why. And all of this stuff. And then my kind of like head scratching is like, why don't we check if the models actually do these things that we tell the data annotators to collect? But I think it's because it's hard to make that attribution. And it's hard to test if a model is honest and stuff. It would just be nice to understand the kind of causal mechanisms as a researcher or like if our goals are met. But at a simple level, what it boils down to, I have a lot more images than I need. It's like you're having a conversation with an AI, something like ChatGPT. You get shown two responses or more in some papers, and then you have to choose which one is better. I think something you'll hear a lot in this space is something called a Likert scale. Likert is a name; it comes from survey research in psychology. But essentially, it's a type of scale where if you have integers from like one to eight, the middle numbers will represent something close to a tie. And the smallest numbers will represent one model being way better than the other. And the biggest numbers will represent the other model being better. So in the case of one to eight, if you're comparing models A to B, you return a one if you really liked option A, you return an eight if you really liked B, and then like a four or five if they were close. There's other ways to collect this data. This one's become really popular. We played with it a bit at Hugging Face. It's hard to use. Filling out this preference data is really hard. You have to read like multiple paragraphs. It's not for me. Some people really like it, I hear. I'm like, I can't imagine sitting there and reading AI-generated text and like having to do that for my job. But a lot of these early papers in RLHF have good examples of what was done.
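The one-to-eight scheme just described can be made concrete with a tiny helper. The function name and the (preferred, strength) output format are made up for illustration:

```python
# Sketch of turning a 1-8 Likert rating comparing models A and B into a
# pairwise preference label plus a strength. 1 = strongly prefer A,
# 8 = strongly prefer B, 4 and 5 = close to a tie.

def likert_to_preference(rating: int):
    if not 1 <= rating <= 8:
        raise ValueError("rating must be between 1 and 8")
    if rating <= 4:
        return "A", 5 - rating      # strength 4 (strong) down to 1 (weak)
    return "B", rating - 4          # strength 1 (weak) up to 4 (strong)
```

Note that collapsing these ratings to plain pairwise labels throws away the strength information, which is part of what makes this collection format debatable.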
The one I have here is from Anthropic's collection demo because it was from slides that I did with Anthropic. But you can look up these in the various papers. It looks like ChatGPT with two responses, and then you have an option to say which one is better. It's nothing crazy. The infrastructure is almost exactly the same, but they just log which one you think is better. I think places like Scale are also really big in this, where a lot of the labeler companies will help control, like, who's doing how many samples, you have multiple people go over the same sample, and what happens if there's disagreement. I don't really think this disagreement data is used for anything, but it's good to know, like, what the distribution of prompts is, who's doing it, how many samples you have, controlling the workforce. All of this is very hard. A last thing to add is that a lot of these companies do collect optional metadata. I think the Anthropic example shows a rating of, like, how good was the prompt or the conversation, from good to bad, because these things matter. Like, there's kind of a quadrant of preference data in my mind, which is: you're comparing a good answer to a good answer, which is really interesting signal. And then there's the option of comparing a bad answer to a bad answer, which is like, you don't want to train your model on two different issues. This is like, we did this at Hugging Face and it was like, our data was like, we don't know if we can use this, because a lot of it was just bad answer to bad answer, because you're rushing to try to do this for a real contract. And then there's also good answer to bad answer, which I think is probably pretty reasonable to include. You just prefer the good one and move on with your life. But those are very different scenarios. I think the OpenAIs of the world are all in good answer-good answer, and have learned to eliminate everything else.
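The good/bad quadrant idea can be expressed as a tiny filter over labeled pairs. The field names and quality labels here are hypothetical, just to make the idea concrete:

```python
def keep_pair(pair: dict) -> bool:
    """Keep comparisons where at least one answer is good:
    good-good pairs carry the most interesting signal, good-bad pairs
    are easy wins, and bad-bad pairs are dropped because preferring
    one failure mode over another is poor training signal."""
    return "good" in (pair["a_quality"], pair["b_quality"])

pairs = [
    {"a_quality": "good", "b_quality": "good"},
    {"a_quality": "good", "b_quality": "bad"},
    {"a_quality": "bad",  "b_quality": "bad"},
]
filtered = [p for p in pairs if keep_pair(p)]  # drops the bad-bad pair
```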
But when people try to do this in open source, it's probably like what Open Assistant saw: there's just a lot of bad answers in your preference data. And you're like, what do I do with this? Metadata flags can help. I threw in the InstructGPT metadata. You can see how much they collect here, everything from the model fails to actually complete the task, hallucinations, different types of offensive or dangerous content, moral judgment, expresses opinion. Like, I don't know exactly if they're doing this now, but you can kind of see why doing RLHF at scale and prioritizing a lot of different endpoints would be hard, because these are all things I'd be interested in if I was scaling up a big team to do RLHF, and, like, what is going into the preference data. You do an experiment and you're like, okay, we're going to remove all the data where they said the model hallucinates, just that, and then retrain everything. Like, what does that do?

Swyx [00:38:59]: Yeah, so hallucination is big, but some of these other metadata categories, and I've seen this in a lot of papers, it's like, does it contain sexual content? Does it express a moral judgment? Does it denigrate a protected class? That kind of stuff, very binary. Should people try to adjust for this at the RLHF layer, or should they put it as a pipeline where they have a classifier as a separate model that grades the model output?

Nathan [00:39:20]: Do you mean for training or, like, deployment? Deployment. I do think that people are doing it at deployment. I think we've seen safety and other things in the RLHF pipeline. Like, Llama 2 is famous for kind of having these helpfulness and safety reward models. Deep in the Gemini report is something saying that Gemini has, like, four things, which is, like, helpfulness, factuality, maybe safety, maybe something else. But places like Anthropic and ChatGPT and Bard almost surely have a classifier after, which is like: is this text good? Is this text bad?
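The ablation experiment described above (drop everything flagged as a hallucination, then retrain) could be sketched like this. The flag names echo the InstructGPT-style metadata mentioned in the talk, but the record schema itself is hypothetical:

```python
def ablate(dataset: list[dict], flag: str = "hallucination") -> list[dict]:
    """Remove every comparison carrying the given metadata flag,
    so a retrained model isolates the effect of that data slice."""
    return [ex for ex in dataset if flag not in ex.get("flags", [])]

data = [
    {"prompt": "p1", "flags": []},
    {"prompt": "p2", "flags": ["hallucination"]},
    {"prompt": "p3", "flags": ["moral_judgment"]},
]
clean = ablate(data)  # keeps p1 and p3
```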
That's not that surprising, I think, because you could use, like, a hundred times smaller language model and do much better at filtering than RLHF. But I do think it's still so deeply intertwined with the motivation of RLHF to be for safety that some of these categories still persist. I think that's something that'll kind of settle out, I think.

Swyx [00:40:11]: I'm just wondering if it's worth collecting this data for the RLHF purpose, if you're not going to use it in any way, separate model to-

Nathan [00:40:18]: Yeah, I don't think OpenAI will collect all of this anymore, but from a research perspective it's very insightful to know. But it's also expensive. So essentially your preference data cost scales with how many minutes it takes for you to do each task, and every button, it scales pretty linearly. So it's not cheap stuff.

Swyx [00:40:35]: Can we, since you mentioned expensiveness, I think you may have joined one of our spaces back when Llama 2 was released. We had an estimate from you that was something on the order of Llama 2 cost $3 to $6 million to train GPU-wise, and then it was something like $20 to $30 million in preference data. Is that something that's still in the ballpark? I don't need precise numbers.

Nathan [00:40:56]: I think it's still a ballpark. I know that the 20 million was off by a factor of four because I was converting from a prompt number to a total data point. So essentially, when you do this, if you have a multi-turn setting, each turn will be one data point, and the Llama 2 paper reports like 1.5 million data points, which could be like 400,000 prompts. So I would say like $6 to $8 million is safe to say that they're spending, if not more. They're probably also buying other types of data and/or throwing out data that they don't like, but it's very comparable to compute costs. But the compute costs listed in the paper always are way lower because all they have to say is, like, what does one run cost?
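The prompt-versus-data-point conversion above is just arithmetic, and a back-of-envelope version can be written down. The turns-per-prompt ratio is implied by the numbers in the conversation (~1.5M data points vs. ~400k prompts), while the per-data-point cost is purely an assumption chosen to land in the quoted range:

```python
def preference_cost(n_datapoints: int, turns_per_prompt: float = 3.75,
                    cost_per_datapoint: float = 5.0) -> tuple[float, float]:
    """Back-of-envelope preference data cost. Multi-turn conversations
    mean each prompt yields several data points, so prompt counts and
    data-point counts differ by the turns-per-prompt factor.
    Both default parameters are assumptions, not reported figures."""
    n_prompts = n_datapoints / turns_per_prompt
    total_cost = n_datapoints * cost_per_datapoint
    return n_prompts, total_cost

# Llama 2's reported ~1.5M preference data points:
prompts, cost = preference_cost(1_500_000)  # 400000.0 prompts, $7.5M total
```

At these assumed rates the total lands in the $6-8 million ballpark discussed above, comparable to a single training run's compute.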
But they're running tens or hundreds of runs. So it's like, okay, like... Yeah, it's just kind of a meaningless number. Yeah, the data number would be more interesting.

Alessio [00:41:42]: What's the depreciation of this data?

Nathan [00:41:46]: It depends on the method. Like, some methods, people think that it's more sensitive to, this is what I was saying, it was like, does the type of instruction tuning you do matter for RLHF? So, like, depending on the method, some people are trying to figure out if you need to have what is called, this is very confusing, it's called, like, on-policy data, which is like your RLHF data is from your instruction model. I really think people in open source and academics are going to figure out how to use any preference data on any model just because they're scrappy. But there's been an intuition that to do, like, PPO well and keep improving the model over time, and do, like, what Meta did and what people think that OpenAI does, is that you need to collect new preference data to kind of edge the distribution of capabilities forward. So there's a depreciation where, like, the first batch of data you collect isn't really useful for training the model when you have the fifth batch. We don't really know, but it's a good question. And I do think that if we had all the Llama data, we wouldn't know what to do with all of it. Like, probably like 20 to 40% would be pretty useful for people, but not the whole data set. Like, a lot of it's probably kind of gibberish because they had a lot of data in there.

Alessio [00:42:51]: So do you think, like, the open source community should spend more time figuring out how to reuse the data that we have, or, like, generate more data? I think that's one of the-

Nathan [00:43:02]: I think people are kind of locked into using synthetic data. People also think that synthetic data, like GPT-4, is more accurate than humans at labeling preferences.
So if you look at these diagrams, like, humans are about 60 to 70% agreement. And we're like, that's what the models get to. And if humans are about 70% agreement or accuracy, like, GPT-4 is like 80%. So it is a bit better, which is one way of saying it.

Swyx [00:43:24]: Humans don't even agree with humans 50% of the time.

Nathan [00:43:27]: Yeah, so like that's the thing. It's like the human disagreement or the lack of accuracy should be like a signal, but how do you incorporate that? It's really tricky to actually do that. I think that people just keep using GPT-4 because it's really cheap. It's one of my go-tos, like, I just say this over and over again: GPT-4 for data generation, all terms and conditions aside, because we know OpenAI has this stuff, is very cheap for getting pretty good data compared to compute or salary of any engineer or anything. So it's like, tell people to go crazy generating GPT-4 data if you're willing to take the organizational, like, cloud of "should we be doing this?" But I think most people have accepted that you kind of do this, especially individuals. Like, they're not gonna come after individuals. I do think more companies should think twice before doing tons of OpenAI outputs, also just because the data contamination and what it does to your workflow is probably hard to control at scale.

Swyx [00:44:21]: And we should just mention, at the time of recording, we've seen the first example of OpenAI enforcing their terms of service. ByteDance was caught, reported to be training on GPT-4 data, and they got their access to OpenAI revoked. So that was one example.

Nathan [00:44:36]: Yeah, I don't expect OpenAI to go too crazy on this 'cause they're just gonna, there's gonna be so much backlash against them. And, like, everyone's gonna do it anyways.

Swyx [00:44:46]: And what's at stake here, to spell it out, is like, okay, it costs like $10 to collect one data point from a human.
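The agreement numbers quoted above (humans at roughly 60-70%, GPT-4 around 80% against some reference labels) boil down to a simple match rate between two sets of preference labels. A minimal sketch, with made-up labels for illustration:

```python
def agreement_rate(labels_a: list[str], labels_b: list[str]) -> float:
    """Fraction of comparisons on which two labelers chose the same winner."""
    assert labels_a and len(labels_a) == len(labels_b)
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

human_1 = ["A", "B", "A", "B", "A"]
human_2 = ["A", "B", "B", "B", "B"]
rate = agreement_rate(human_1, human_2)  # 3 of 5 match -> 0.6
```

Raw match rate ignores chance agreement; with two options, two random labelers already agree half the time, which is the point of the "humans don't even agree 50% of the time" quip.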
It's gonna cost you like a 10th of a cent with OpenAI, right? So, like, it's just orders of magnitude cheaper. And therefore people-

Nathan [00:44:58]: Yeah, and it's like the signal you get from humans for preferences isn't that high. The signal that you get from humans for instructions is pretty high, but it is also very expensive. So, like, the human instructions are definitely by far and away the best ones out there compared to the synthetic data. But I think, like, the synthetic preferences are just so much easier to get some sort of signal running with, and you can work in other, I think people will start working in other goals there between safety and whatever. That's something that's taking off, and we'll kind of see that. I think in 2024, at some point, people will start doing things like constitutional AI for preferences, which will be pretty interesting. I think we saw how long it took RLHF to get started in open source. Instruction tuning was, like, the only thing that was really happening until maybe, like, August, really. I think Zephyr was the first model that showed success with RLHF in the public, but that's a long time from everyone knowing that it was something that people are interested in to having any, like, check mark. So I accept that and think the same will happen with constitutional AI. But once people show that you can do it once, they continue to explore.

Alessio [00:46:01]: Excellent.

Swyx [00:46:01]: Just in the domain of human preference data suppliers, Scale.ai very happily will tell you that they supplied all that data for Llama 2. The other one is probably interesting, LMSYS from Berkeley. What they're running with Chatbot Arena is perhaps a good store of human preference data.

Nathan [00:46:17]: Yeah, they released some toxicity data. They, I think, are generally worried about releasing data because they have to process it and make sure everything is safe, and they're a really lightweight operation.
I think they're trying to release the preference data. I have, if we make it to evaluation, I'd pretty much say that Chatbot Arena is the best limited evaluation that people have to learn how to use language models. And, like, it's very valuable data. They also may share some data with people that they host models from. So, like, if your model is hosted there and you pay for the hosting, you can get the prompts, because you're pointing the endpoint at it and that gets pinged to you, and any real LLM inference stack saves the prompts tha
Marko Dimitrijević, trener KK Sloga Kraljevo, novi je gost Jao Mile podcast. Iskoristili smo priliku da čujemo sa kojim poteškoćama se suočava KK SlogaKako je trenutno stanje u KLS i možemo li očekivati bolje u bliskoj budućnosti. Prisetili smo se nekih imena koja su obeležila košarku u srpskoj "Atini" i anegdota koje ih prate. Uživajte! Mozete nas podrzati kupovinom https://www.jaomile.com00:00:00 Pocetak00:01:26 Humanitarni deo00:02:00 Predstavljanje gosta00:04:45 3x3 i KLS00:23:00 Rani odlazak za BG00:34:30 Liga bez poente00:41:10 Anegdote iz Kraljeva00:50:25 Sloga i dugovi01:00:45 Odrastanje01:17:40 Seniorska kosarka01:31:40 Saigrači 01:36:00 INO karijera01:39:50 Sećanja 01:46:15 Najteži protivnik 01:50:50 Trenerski poziv01:56:55 Reprezentacija02:01:15 Topić Nikola02:11:38 Najteži momenat02:13:13 Savet za mlade 02:19:50 Humanitarni radJoin this channel to get access to perks:https://www.youtube.com/channel/UCtcmRUuJVYYbyfi__I_JBnA/joinThumbnail designer:https://instagram.com/design33_mk?igshid=MzRlODBiNWFlZA==Pratite nas na društvenim mrežama!Instagramhttps://www.instagram.com/jaomile_podcast/Facebook https://www.facebook.com/JAOMILEPODCASTTikTokhttps://www.tiktok.com/@jaomile_podcastTwitter https://twitter.com/mileilicGost: Marko DimitrijevićDatum: 22. novembar 2023.Autor i domaćin: Mile IlićLokacija: Studio na kraju UniverzumaProdukcija: Infinity Lighthouse#jaomilepodcast #kksloga #markodimitrijevic #nikolajokic #dimitrijević #crvenazvezda #nba #kosarka #abaliga #jokic #bogdanovic #grobari #doncic #nikolatopic #obradovic
The Story of Kreamy 'Lectric Santa (KLS) with Robert Price & Priya RayRobert Price and Priya Ray are founding members of Kreamy 'Lectric Santa (KLS). Originating out of Miami, FL, In the early 90's, KLS started out of a band called The Prom Sluts and blended punk, psych experimental and electronic music. The KLS collective had more than 50 members over the years and were a staple of of the Miami scene. [Episode: 66 - Recorded 11/05/2023]☞ Follow Florida Sound Archive on Instagram! @floridasoundarchiveMusic:KLS - Holdin yerself (2009)KLS - Messin With My Head (1995)Follow KLS:Instagram - @kreamy_kectric_santa_ / Facebook - https://www.facebook.com/kreamyls/Bandcamp - https://kreamylectricsanta.bandcamp.com/ / Website - http://www.kreamy.org/kls_html/home.html***A message from Robert & Priya***We were so caffeinated that we completely forgot to mention long time member Jan 9, loyal drummer Ricky Pollo, experimental projects Monstadt 3 with Adris Hoyos, Bank o Christ with Marlon Kassberg, and Roberts time playing junk on the streets of Miami, and then SF with Stevie Ray Funyon of the Funyons. Anyone else omitted we're so dam sorry. Hard to consolidate 30 years into a couple of hours.Related Inteviews:Jeff Hodapp (Morbid Opera/Drug Czars) - https://www.youtube.com/watch?v=WFl1-3CzzLgChuck Loose (Chickenhead/Drug Czars) - https://youtu.be/_Ieek4sSdkw?si=5iZLWpPdd1MJg8GBCharlie Pickett - https://youtu.be/-geZ4dTJEG8?si=dgDjJgqF18ZPQuAa
Vuk Popović, najstariji DEBITANT u KLS a ujedno i vlasnik KK Čačak 94 Quantox, novi je gost Jao Mile podcast-a! Entuzijasta koji svoje preduzetničke sposobnosti iz ljubavi prema košarci prenosi na NJIHOV klub. Ima MASTER plan za košarku u Čačku! Takmičar velikog srca! ❤️Uživajte u razgovoru! 00:00:00 Početak 00:01:06 Subscribe 00:01:15 Predstavljanje gosta00:02:15 Klub je nešto naše 00:04:05 Čačak sa PET klubova00:08:50 Ljubav i takmičenje 00:10:57 Uspesi kluba00:12:45 Organizacija kluba00:17:35 Nameštanje u KLS00:21:55 Brendiran BUS00:25:00 Nema termina u Čačku00:45:50 Odrastanje00:52:33 Posao 00:58:00 Radim ovo za svoju dušu 00:59:50 Proces stvaranja 01:07:00 Savet za mlade 01:10:00 Put na NBA finale01:17:40 SP Manila01:18:45 Veliko ❤️ HVALA! Join this channel to get access to perks:https://www.youtube.com/channel/UCtcmRUuJVYYbyfi__I_JBnA/joinThumbnail designer:https://instagram.com/design33_mk?igshid=MzRlODBiNWFlZA==Pratite nas na društvenim mrežama!Instagramhttps://www.instagram.com/jaomile_podcast/Facebook https://www.facebook.com/JAOMILEPODCASTTikTokhttps://www.tiktok.com/@jaomile_podcastTwitter https://twitter.com/mileilicGost: Vuk PopovićDatum: 18. oktobar 2023.Autor i domaćin: Mile IlićLokacija: Studio na kraju UniverzumaProdukcija: Infinity Lighthouse#jaomilepodcast #cacak94 #quantox #nikolajokic #vukpopovic #popović #crvenazvezda #srbija #nba #kosarka #abaliga #jokic #bogdanovic #partizan #cacak #klsrb
Triggerwarnung: In der gesamten Folge geht es um übergriffiges Verhalten. “Ja, ich will” sind drei kleine Worte, die signalisieren, dass alles in Ordnung ist. Aber wenn jemand nicht will, können aus bestimmten Handlungen schnell Straftaten werden. Und darum geht's in dieser Folge “Mordlust - Verbrechen und ihre Hintergründe”. Mit dem freundlichen Motorradfahrer Roman hat Iris eine neue Liebe gefunden. Doch als sie ins Krankenhaus muss, fällt sie von Wolke 7 auf den harten Boden der Tatsachen: Die 39-Jährige ist unfreiwillig zu einer Figur in einem lebensgefährlichen Spiel geworden. Und nicht nur sie kann dabei alles verlieren. Jacoba wünscht sich sehnlichst Geschwister. Ein DNA-Test lässt den großen Traum der 34-Jährigen aus Indiana wahr werden. Doch der wird zum Albtraum, als Jacoba durch das Testergebnis auch ein Familiengeheimnis zutage fördert, das das Leben von weit über 100 Menschen erschüttern wird. Dass Consent, also Zustimmung, lange kein Thema war, das Gesellschaft und Justiz ernst genommen haben, zeigt ein Blick ins Strafgesetzbuch: Einige Taten sind erst seit Kurzem strafbar. Dass dafür oft gar keine Gewalt nötig ist, wie quälend lang zehn Sekunden sein können und was geschieht, wenn eine Person beim Sex heimlich das Kondom abzieht, erfahrt ihr in dieser Episode. Interviewpartnerin in dieser Folge: Sexualstrafrechtlerin Prof. Dr. Tatjana Hörnle **Kapitel** 1. Fall “Iris“ 2. Aha: Infektion mit Krankheiten 3. Fall Jacoba 4. Aha: Illegale Insemination 5. Upskirting & Downblousing 6. Consent-Taktiken vor Gericht 7. Erzwungenes Outing von trans* Menschen 8. Stealthing 9. 
“Nein heißt nein” **Credit** Produzentinnen/ Hosts: Paulina Krasa, Laura Wohlers Recherche: Paulina Krasa, Laura Wohlers, Magdalena Höcherl Schnitt: Pauline Korb Rechtliche Abnahme: Abel und Kollegen **Shownotes** Podcast-Empfehlung: Moreno+1: https://bitly.ws/VIxE *Fall “Iris”* Urteil 10 KLs 1/10: https://bitly.ws/VIxH Shz: “Er hat mich absichtlich angesteckt”: https://bitly.ws/VIxQ Shz: “Sex war der Lebenszweck”: https://bitly.ws/VIxU *Fall Jacoba* “Our father” Netflix-Dokumentation, 2022 Donald Cline: The fertility doctor accused of fraud: https://bitly.ws/VIy2 Welt: Arzt befruchtete 50 Frauen mit eigenem Sperma: https://bitly.ws/VIy8 *Diskussion* ZDF heute: “Zehn-Sekunden-Grapsch-Urteil” empört Italien: https://bitly.ws/VIys Spiegel: Zwei Jahre auf Bewährung für Nadja Benaissa: https://bitly.ws/VIyA Jurist: UK to impose harsher sentences in domestic homicide cases: https://bitly.ws/VIyD SZ Jetzt: Straftatbestand: Transsexualität verschwiegen: https://bitly.ws/VIyI BGH zu Stealthing: https://bitly.ws/VIyN Statistik sexuelle Belästigung: Geschlechterspezifisch: https://bitly.ws/VIyU **Partner der Episode** Du möchtest mehr über unsere Werbepartner erfahren? Hier findest du alle Infos & Rabatte: https://linktr.ee/Mordlust
Zlatko Bolić, rekorder domaćih takmičenja sa najviše postignutih poena svih vremena, novi je gost Jao Mile podcasta.Razgovarali smo o radu KSS i šta ih sve očekuje ovo leto.Kako bi trebalo da izgleda KLS.Kako je izgledalo njegovo detinjstvo.Kako je napustio fudbal i zavoleo košarku.Ulazak u seniorsku košarku.Kako su izgledali nastupi za KKCZ.Mnogo dobrih priča i anegdota očekuje vas u preko dva ipo sata razgovora.Ukoliko želite da podržite i pomognete naš rad možete to učiniti preko PayPal-a u linku ispodPayPalhttps://www.paypal.com/paypalme/jaomi...Pratite nas na društvenim mrežama!Instagramhttps://www.instagram.com/jaomile_pod...Facebook https://www.facebook.com/JAOMILEPODCASTTikTokhttps://www.tiktok.com/@jaomile_podcastTwitter https://twitter.com/mileilicGosti: Zlatko BolićDatum: 7. jun 2023.Autor i domaćin: Mile IlićLokacija: Studio na kraju UniverzumaProdukcija: Infinity Lighthouse#jaomilepodcast #zlatkobolic #srbija #crvenazvezda #jaomile #podcast #basketball #kkpartizan #zlatko #bolić #nikolajokic #abaliga #vojvodina #kls #kosarka
Im 2. Fall geht es um sexualisierte Gewalt. Gewalt darf man mit Gewalt begegnen. So steht's im Gesetz. Doch wie immer bestätigen Ausnahmen die Regel. In dieser Folge “Mordlust - Verbrechen und ihre Hintergründe” geht es um die Notwehr und ihre Grenzen. Ihr Laden ist ihr Leben: Juwelier Arthur Koller und seine Frau Elke führen das Geschäft für Schmuck und Antiquitäten in der Celler Innenstadt schon seit Jahrzehnten. Als sie 1998 überfallen werden, rät ihnen ein Polizist, sich zu bewaffnen. Doch als der Laden der Kollers 22 Jahre später wieder zum Ziel eines Raubüberfalls wird, endet dieser in einer menschlichen Tragödie. Martina ist eine gute Seele: Seit Jahren versorgt die Hausfrau aus dem Harz ihren kranken Bekannten Richard. Sie putzt, wäscht und kocht für den Rentner, sorgt dafür, dass sein Leben weiterläuft, auch als es schon beendet ist. Bis sich Jahre später Richards Rentenversicherung bei der Polizei meldet. Deren Ermittlungen fördern wortwörtlich ein Geheimnis zutage, das die Justiz vor eine große Herausforderung stellt. Einen Angriff darf man abwehren, aber nicht bei Volltrunkenen und Kindern und auch nicht mit allen Mitteln. Wann das Pfefferspray und wann “krasse” Fäuste eingesetzt werden dürfen und was passiert, wenn die Person, die zuerst angegriffen hat bei der Abwehr stirbt, hört ihr in dieser Episode! **Kapitel** 1. Fall “Arthur & Elke” 2. Aha: Mittel für Notwehr 3. Fall “Martina” 4. Aha: Wann ist Notwehr erlaubt? 5. Darf man sein Hab und Gut mit Waffengewalt verteidigen? 6. Wo liegen die Grenzen der Notwehr? 
**Credit** Produzentinnen/Hosts: Paulina Krasa, Laura Wohlers Recherche: Paulina Krasa, Laura Wohlers, Magdalena Höcherl Schnitt: Pauline Korb **Shownotes** *Fall “Arthur & Elke”* Urteil: LG Lüneburg, 111 KLs 3/21 Doku: Tätern auf der Spur - Notwehr: https://bit.ly/3KdNE4H CZ: Celler Juwelier nach Urteil weiter in Angst: https://bit.ly/3FVmAol CZ: Als wäre es erst gestern passiert: https://bit.ly/3FWKgZm Fall “Martina” Urteil: LG Magdeburg, 21 Ks 853 Js 85153/16 (4/18) Spiegel: Frau betoniert Nachbarn ein: https://bit.ly/40C78G9 Spiegel: Frau betoniert Nachbarn ein und kassiert 20 Jahre lang die Rente: https://bit.ly/3nnLS8a Spiegel: Spiegel TV über Deutschlands längsten Rentenbetrug: https://bit.ly/40m5zff **Partner der Episode** Du möchtest mehr über unsere Werbepartner erfahren? Hier findest du alle Infos & Rabatte: https://linktr.ee/Mordlust
Come on! Snerrede Lars Løkke, da han fik spørgsmål om Jon Stephensen på et Ukraine-pressemøde. Det skal nok virke. For danskerne elsker politikere, der giver journalister tørt på. Men det betyder jo ikke, at sagen går væk af den grund. Kan mandaternes logik redde Stephensen? Det er et af de store spørgsmål i en uge, der også bød på en politianmeldelse af Alex Vanopslagh. Vi ser også på de store linjer i dansk politik med den store Ukriane-pakke til syv milliarder kroner og med KLs topmøde i Aalborg, hvor fremtidens velfærdssamfund er på dagsordenen. Thomas Qvortrup stiller spørgsmålene. Politisk kommentator på Ekstra Bladet, Henrik Qvortrup har svarene. Borgen Unplugged produceres i denne uge i samarbejde med ida.dk/om-ida/danmarks-bedste-fagforening
Branislav Bane Ratkovica, bivši igrač Partizana, Cibone, Mege i Beopetrola gost je podkasta ove srede.Trenutno se bavi trenerskim pozivom i lider je KLS lige sa ekipom Metalca iz Valjeva.Kaže da je KLS liga iz godine u godinu sve bolja, da ima kvalitet ali i telentovane igrače.Koliko Teodosić utiče i pomaže košarkaškom klubu i koje su njegove zasluge za trenutni uspeh.Pričali smo o njegovom odrastanju i košarkaški počecima.Preneo nam je kako izgleda trenirati kod Moke Slavnića i podelio par anegdota sa njim.U Nemačkoj je biciklom išao na trening zbog oduzimanja vozačke dozvole a bio je i umalo izbačen iz naselja zbog loše sortirano smeća.
Obwohl Schönheit bekanntlich im Auge des Betrachters liegt, haben die meisten eine ganz bestimmte Vorstellung davon - vor allem, wenn es um Frauen geht. Um Ideale zu erreichen, unterziehen sich jedes Jahr mehr Menschen kleineren und größeren Eingriffen, die mit Risiken und teilweise auch mit Schmerzen verbunden sind. Denn wer schön sein will, “muss leiden” und das haben die Opfer in dieser Folge von „Mordlust – Verbrechen und ihre Hintergründe“ ganz besonders. Als sie zu ihrem neuen Freund “Paul” nach Hamburg zieht, freut sich Carolin auf das gemeinsame Großstadtleben. Doch bald schon wird das Geld knapp, also fängt das Paar an, gemeinsam Porno-Filme zu drehen und sie online zu verkaufen. Während Paul vor allem hinter der Kamera steht, zieht sich „Sexy Cora“ aus und hat vor der Kamera Sex mit fremden Männern. Mehrmals lässt sie sich für ihren neuen Job die Brüste machen. Auch dann noch als ihr Ärzt*innen davon abraten. Doch Carolin will Körbchengröße G - koste es, was es wolle. Rajee ist im Körper eines Jungen geboren, obwohl sie eine Frau ist – das macht ihr das Leben in den 70er Jahren in den USA nicht gerade leicht. Um ihren Körper ihrer Identität anzugleichen, nimmt sie schließlich Hormone ein, doch die Veränderung reicht ihr nicht. Da hört sie von einer Frau, die ihr auf dem Schwarzmarkt mit günstigen Silikon-Fillern weichere Gesichtszüge, größere Brüste und weibliche Kurven verschaffen kann. Rajee vertraut der Fremden. Doch statt sich endlich wohl im eigenen Körper zu fühlen, erwacht Rajee bald in einem Albtraum, der ihr Äußeres für immer verändert. Die Zahl der Schönheitsoperationen und -eingriffe steigt in den letzten Jahren stetig an. Häufig fehlt Mediziner*innen wichtiges Fachwissen, was die Risiken eines Eingriffs stark erhöht und manchmal sogar zum Tod führt. In dieser Folge sprechen wir über fehlende Regulierungen in der Branche, absurde Schönheitsideale und warum man eigentlich von anderen als attraktiv wahrgenommen werden will. 
Interviewpartner in dieser Folge: Dr. Ludger Meyer, Facharzt für Plastische und Ästhetische Chirurgie, Chefarzt Klinik Villa Bella in München und Dr. Steffen Handstein, Facharzt für Plastische und Ästhetische Chirurgie in der Praxis Königspark in Dresden, ehemaliger Präsident der VDÄPC – Vereinigung der Deutschen Ästhetisch-Plastischen Chirurgen **Kapitel** 1. Fall Carolin 2. Aha: Verantwortung von plastischen Chirurg*innen 3. Fall Rajee 4. Aha: Problematik mit der Berufsbezeichnung „Schönheitschirurg*in“ 5. Zahlen und Trends 6. Brazilian Butt Lift 7. Wie wichtig ist uns Schönheit **Credit** Produzentinnen/Hosts: Paulina Krasa, Laura Wohlers Recherche: Paulina Krasa, Laura Wohlers, Isabel Mayer Schnitt: Pauline Korb **Shownotes** Neuer Merch: https://shop.partnerincrime.de *Fall Carolin* Urteil Landgericht Hamburg 632 KLs 6/12, 632 KLs 6/12 – 7200 Js 9/11 SZ: Porträt einer Pornodarstellerin: Der Tag, als „Sexy Cora“ starb: https://bit.ly/3tdHgkG SZ: Verstorbene Pornodarstellerin „Sexy Cora“: Tod durch Größenwahn: https://bit.ly/3EbqOYv ZDF: Die Geheimnisse der Toten: Der Fund im See: https://bit.ly/3EdCjP2 Bild: Sie starb bei ihrer 6. Busen-OP: So landete Cora im Rotlicht-Milieu: https://bit.ly/3UErSJL *Fall Rajee* Rajee Rajindra Narinesingh: "Beyond Face Value. A Journey to true Beauty. A Memoir." Rajee Narinesingh auf YouTube: https://bit.ly/3DSVmg2 FAY WHAT?! Trans Life Rajee Narinesingh: https://bit.ly/3EgKzxX The Sydney Morning Herald: Fake plastic surgeon gets 10 years for manslaughter after injecting patient's buttocks with ‚cement‘: https://bit.ly/3G1Oq31 *Diskussion* Urteil Landgericht Düsseldorf 1 Ks 24/20 VDÄPC-Statistik 2022: Zahlen, Fakten, Entwicklungen in der Ästhetisch-Plastischen Chirurgie: https://bit.ly/3UGxH9p Zervakis & Opdenhövel. Live.: Lebensgefahr durch Beauty-Wahn: Teenager jagen Schönheitsideale: https://bi **Partner der Episode** Du möchtest mehr über unsere Werbepartner erfahren? 
Hier findest du alle Infos & Rabatte: https://linktr.ee/Mordlust
Annick de Souzenelle, une femme qui exerce une influence intellectuelle et spirituelle majeure dans notre société, vient de fêter son centenaire. Quelques jours avant son anniversaire, qui est le 4 novembre, elle nous a reçu pour cet épisode exceptionnel diffusé à partir du 6 novembre. C'est dans sa maison de Rochefort-sur-Loire que nous sommes allé la retrouver, quelques mois après sa première participation à Zeteo. Ici, une nouvelle rencontre étonnante, avec une femme à la pensée aussi profonde que rayonnante, nourrie par une connaissance de l'Écriture Sainte lumineuse : Parce qu'elle est éclairée par une compréhension intérieure et exceptionnelle du sens et des symboles de la parole de Dieu. Chez Annick de Souzenelle, l'enracinement dans la tradition de tout ce qui est sacré depuis nos origines, se prolonge dans les ramifications d'une vision chrétienne renouvelée, particulièrement adaptée aux crispations de notre époque. Pour elle, nous sommes arrivés à une étape de transformation du monde, où les religions et les sagesses sont bousculées par la venue d'un temps nouveau. Les inquiétudes légitimes qui nous oppressent contiennent déjà les germes d'une immense espérance. Le spirituel redevient essentiel, il redevient accessible à l'homme d'aujourd'hui. Et c'est, ici, en méditant sur la signification universelle de la lutte de Jacob avec l'ange, qu'elle nous oriente à aller chercher des réponses qui sont toutes contenues dans notre vie intérieure. C'est dans cette vie intérieure que le Christ nous attend, qu'il nous libère et qu'il fait de nous des hommes nouveaux. En nous donnant, avec l'aide des saints et des anges, toute la force pour surmonter les épreuves et pour vivre intensément son amour et sa paix. Au cours de cet épisode, Annick de Souzenelle réitère son appel à reconstruire, d'urgence, nos églises intérieures, reprenant ainsi la parole de Jésus à la Samaritaine (Jean 4, 20-26). 
Ce 182ème épisode de Zeteo contient un lien très fort avec la méditation de la semaine dernière de Frédérique Lemarchand. Pour son centenaire, l'artiste de Vézelay a offert à Annick de Souzenelle une sculpture magnifique, La danse de Jacob avec l'ange, que nous avons eu la joie de lui apporter. Pour écouter les paroles souvent décapantes, parfois inattendues et tellement réconfortantes d'Annick de Souzenelle - qui dit être une femme « émerveillée continuellement » - il suffit de cliquer sur le bouton en haut de ce texte, ou de cliquer ici pour l'écouter sur le site de Zeteo, ou encore de cliquer ici pour l'écouter sur Spotify, Deezer et toutes les bonnes applications de podcasts. Pour découvrir l'oeuvre d'Annick de Souzenelle, cliquer ici. Pour (ré)écouter L'humanité est l'épouse divine de Dieu, la première participation d'Annick de Souzenelle sur Zeteo, cliquer ici. Pour (ré)écouter Naître à l'Âme Or, la méditation de Frédérique Lemarchand sur Zeteo, cliquer ici. -------------- En réelle communionNous avons de plus en plus le sentiment d'être reliés avec tous ceux d'entre vous qui écoutez nos podcasts. Nous apprenons parfois à nous connaître, à échanger, à confier nos intentions de prière, et à prier les uns pour les autres. Nous découvrons ainsi que la Communion des Saints n'est pas une fiction ou une réalité pour l'au-delà ! Elle est déjà à l'oeuvre. Elle nous touche tous, pour peu que nous nous aimions les uns les autres, et que nous nous portions des intentions bienveillantes et des prières. Elle est un avant-goût de l'infini éternel, parce qu'elle rassemble tous les saints, tous les héros et les martyrs du passé et du présent. Et aussi tous les défunts. Cette Communion des Saints est gardée par la plus belle, la plus douce et la plus efficace des armées au monde : celle des anges. Qu'en ces temps si agités, vous gardiez toujours en vous la confiance, la paix et la joie, telle est notre prière pour vous. 
Especially for those who suffer, who doubt, who wait. More and more, we feel that we share with you this marvelous bond of prayer. We pray for you, and in return we ask you, if you are willing, to pray for us. It is what we need most. We are having a magnificent end to 2022, with powerful episodes, radiant guests, record audiences. More and more hearts are being touched: we give thanks to the Lord who makes this adventure possible. If it is his will, he will guide it again tomorrow and the day after, raising up among you the donors who can support us and provide the only manna on which we materially depend. All is well. Fraternally, Guillaume Devoud. Those of you who wish to make a donation (tax-deductible up to 66%) can do so by clicking here: Make a donation. Those who prefer to pay by check can send it to Association Telio, 116 boulevard Suchet – 75016 Paris. Bank transfer: contact us. -------------- BETHESDA "Do you want to be healed?" For a few days now, the 43rd episode of Bethesda has been online, with Clémentine: a young woman who had a happy childhood: a loving family, joyful siblings, everything she needed to live her youth happily. Yet she increasingly experienced negative feelings such as jealousy, envy, and a destructive need to compare herself with others, particularly her younger sister. The result was a growing sadness, verging on morbidity, which little by little came to dominate her life. Clémentine speaks of the important encounters that transformed her life as a young woman: Benjamin, who became her husband; and a prayer group of the Chemin Neuf Community, where she experienced a spiritual healing. In Clémentine, we meet a young woman of astonishing depth, for whom prayer has become the driving force of her life. 
A radiant Christian with whom many young people today will be able to identify, and whose apparently "ordinary" life journey is studded with extraordinary stars. To listen to La prière m'a libérée du poids de ma tristesse, Clémentine's testimony, click here. -------------- CANOPÉE: The podcast for a better world. For a few days now, the 4th episode of our new podcast has been online, with Audrey Cattoz, the founder of KLS (Konnexion Lunette Solidaires): a highly innovative eyewear company that combines a vision of sustainable development, short supply chains, and a human ecology joined to the ecology of nature. With an ardent Christian faith, Audrey Cattoz strives to answer a call of the Gospel that invites her above all to serve others: from customers to industrial partners, including employees, associates, and the local artisans to whom she opens her stores in and around Lyon. To listen to Avec le Seigneur comme associé, pour une entreprise juste et équitable, Audrey Cattoz's testimony, click here. -------------- To learn more about Zeteo, click here. To learn more about Bethesda, click here. To learn more about Telio, click here. To learn more about Canopée, click here. To read our listeners' messages, click here. Contact us: contact@zeteo.fr To propose your testimony or that of a loved one: temoignage@zeteo.fr
Frédérique Lemarchand once again opens her soul to Zeteo's listeners, a few weeks after a widely heard first appearance. Here she offers a very personal meditation on the meaning of life and death. A special episode, released just days before All Saints' Day and the Day of the Dead that follows it. Frédérique Lemarchand is a painter and sculptor. Her work is inspired by the very strong relationship she has had with eternity since her earliest childhood. Was she not condemned from birth to live only a few years, because of a serious malformation of the heart and lungs? That was to reckon without God and his angels, who came very early to fill her life with the great Love, and her dreams with true Beauty. Ten years ago, Frédérique Lemarchand received a double heart-lung transplant. She experienced physical death and the coma she passed through for 40 days. Since then, through her work, she has radiated this irradiating love and beauty, which illuminate all who meet her. Naître à l'Âme d'Or is a meditation filled with wonder. To listen to this 181st episode with Frédérique Lemarchand, simply click the button at the top of this text, or click here to listen on the Zeteo website, or click here to listen on Spotify, Deezer and all good podcast apps. To discover Frédérique Lemarchand's work, click here. To listen (again) to Ma nuit s'est éclairée jusqu'à son embrasement, Frédérique Lemarchand's first appearance on Zeteo, click here. For those who would like the written version of Naître à l'Âme d'Or, Frédérique Lemarchand's meditation, click here. Cantique du Coeur, the book written by Frédérique Lemarchand, is available once again: click here. 
The musical excerpt at the end of the episode, chosen by Frédérique Lemarchand, is the opening of the adagio of Anton Bruckner's 7th symphony, in the 1942 performance conducted by Wilhelm Furtwängler. To see illustrations of the work discussed in this episode, visit the Zeteo website by clicking here. -------------- HAPPY ALL SAINTS' DAY! We regularly receive messages from some of you describing what our podcasts bring you personally. We want to thank you here, infinitely, for these testimonies. They fill us, move us and surpass us all at once. I was going to write that they oblige us to do ever better, but is it really a matter of obligation? For what is at work here, above all, is the grace of the Lord. He touches hearts, sometimes by unexpected means, like podcasts, and through even more unexpected workers, like your servant. It is an immense joy to try to contribute to this grace! 
May this All Saints' Day 2022, in such troubled times, be for you a time of renewal, peace and joy, especially for those among you who mourn the death of a loved one, and for those who suffer, who doubt, who wait… Fraternally, Guillaume Devoud. ps: We have regularized the situation for the donors who had not yet received their tax receipt. If yours has not arrived, we would be very grateful if you let us know at contact@zeteo.fr. Thank you!
Evil spirits, demons and the devil are beings that normally have nothing to do with earthly crimes. Fittingly for Halloween, this episode of "Mordlust - Verbrechen und ihre Hintergründe" is, for once, different - at least, those involved in these cases are convinced that something evil has taken possession of them. Cherry and two other relatives, each with two children, set out from South Korea for Germany and a new life. But hardly has the group settled into a semi-detached house near Frankfurt when an uneasy feeling creeps over Cherry. Her two sons toss and turn in their sleep, and Cherry thinks she hears voices. Her cousin has only one explanation: demons are at work in the house, and the devil himself has set his sights on them. Despite several attempts to keep him away, according to the cousin, one member of the group after another falls victim to him. For the group, a battle against Lucifer himself begins - or is it merely a collective delusion making them see ghosts? Arne can hardly wait to move into a new home with his girlfriend Debbie. But when they arrive, their dream turns into a nightmare. Debbie's brother David claims to have seen an old man in the house who does not want Arne and Debbie to move in. At first the young couple pays little attention to the 11-year-old's talk, but then they believe they are witnessing paranormal events themselves. It goes so far that Arne believes the devil has entered him and ordered him to kill a man. When Arne stands trial for it, his defense attorney's strategy is, in fact: not guilty by reason of demonic possession. To this day, several exorcisms are performed in Germany every day. Driving out the devil is also in high demand in other countries and religions. 
In this episode we discuss the worldly explanations for "possession" and how the justice system deals with violent exorcisms. We also share paranormal sightings that listeners have described to us. So: listen at your own risk. Interview partners: Dr. habil. med. Nadine Metzger, who works at the Institute for the History and Ethics of Medicine at Friedrich-Alexander-Universität Erlangen-Nürnberg, with a research focus on mental illness and belief in demons, and attorney Helmut Linck **Chapters** 1. The "Cherry" case 2. Aha: Faith & exorcism in South Korea 3. Exorcisms worldwide 4. The "Arne" case 5. Aha: Not criminally responsible due to possession 6. Legal consequences for the exorcists 7. Explanations for the symptoms 8. Paranormal experiences **Shownotes** *The "Cherry" case* Partial verdict 5/3 KLs 12/16 Spiegel: Tod in Zimmer 433: https://bit.ly/3gtkoue SZ: Exorzisten-Prozess: Dämonen in Zimmer 433: https://bit.ly/3Stug4t *The "Arne" case* A Haunting Sea: Where Demons Dwell: https://bit.ly/3TCAtN8 Brittle, Gerald: The Devil in Connecticut: From the Terrifying Case File that Inspired the Film "The Conjuring: The Devil Made Me Do It" Washington Post: By Demons Possessed: https://wapo.st/3FeyFpc *Discussion* SWR2: Exorzismus in Polen: https://bit.ly/3FfDNJK Podcast: Echte Verbrechen, episode "Exorzismus - Opfer des Wahns" Spiegel: Aberglaube an Dschinn-Geister: https://bit.ly/3ssBpI0 Zeit: Ungewöhnliche Berufe: https://bit.ly/3TriUiR Stern: Das Dämonenhaus von Indiana: https://bit.ly/3SyRGFP **Credit** Producers/Hosts: Paulina Krasa, Laura Wohlers Research: Paulina Krasa, Laura Wohlers, Vera Grün Editing: Pauline Korb **Episode partners** Want to learn more about our advertising partners? Find all info & discounts here: https://linktr.ee/Mordlust
Isabelle d'Ornano is, with her husband Hubert, the founder of the great perfume and cosmetics brand Sisley. She has devoted her life to excellence and beauty, with an intense Christian faith. With Isabelle d'Ornano, we enter a world where elegance, femininity, talent and rarity strive toward perfection. A woman who might seem unapproachable and intimidating. Instead, a woman of astonishing personality and a very open mind, who draws from her Christian faith, her Polish roots and her difficult childhood what makes up her strength, her humility and her depth. Isabelle d'Ornano embodies the best of our society, made manifest in Sisley's worldwide success. Her life is punctuated by encounters with the greatest of this world, as well as with the poorest, whom she accompanies each year, notably at the World Days of the Poor, initiated by Pope Francis and inspired by Étienne Villemain, the founder of Fratello. Isabelle d'Ornano is a woman whose intense personal story is interwoven with history at large, to which her aristocratic Polish family has been closely tied for many centuries. A woman who testifies here to her trust and her faith in Christ, and to the importance of family and of the couple. This encounter with one of the most inspiring women of her time will comfort those who do not give up, those who give everything of themselves so that the world may be better and more beautiful. The better and the beautiful are not only possible. They are promised, here and beyond, whatever happens, to those who place everything in the hands of Christ. For all is well in his hands. You can also listen to this 180th episode with Isabelle d'Ornano on the Zeteo website by clicking here. 
The full text of the prayer said at the end of the episode by Isabelle d'Ornano: I have placed everything in your hands: what burdens me and what grieves me, what troubles me and what hinders me, and the worry of tomorrow. I have placed everything in your hands. I have placed everything in your hands: the heavy burden once dragged along, what I mourn, what I hope for, and the why of my destiny. I have placed everything in your hands. I have placed everything in your hands: whether it be joy or sadness, poverty or riches, and all that to this day I have feared. I have placed everything in your hands. I have placed everything in your hands: whether it be death or life, health or sickness, the beginning or the end. I have placed everything in your hands, for all is well in your hands. -------------- OUR PRAYER: Dear friends, dear listeners, our podcasts are distributed free of charge; they depend exclusively on the donations of those listeners who appreciate them and are able to contribute. Hence the regular appeals for donations we have been making for a little over two years now, as Zeteo completes its third year on the air. At the beginning of our activity, these appeals were anxious, because the percentage of donors among our listeners was very low. It was Father Matthieu Dauchez who, last year, opened us to another dimension: that of trust. The adventure of Zeteo, and of Bethesda, Canopée and Telio, our other podcasts, if it is willed by the Lord, will find the donors he himself will raise up. Every day we renew our trust in him, and we pray that he will show us the way. Our prayer to you, dear friends, dear listeners, is that you join us in this prayer. It is that you pray for us, that we may remain at every moment workers of the harvest with our podcasts, if such is the Lord's will. 
We carry you in our prayer, especially those who suffer, those who doubt, those who send us often overwhelming messages, and those who, when they can, contribute to our effort. The communion of saints begins on earth. It continues in Heaven with all our loved ones who have joined the Father's house, with the angels and the saints. Those of you who wish to make a donation (tax-deductible up to 66%) can do so by clicking this link: Make a donation. In communion, Guillaume Devoud. Those who prefer to pay by check can send it to Association Telio, 116 boulevard Suchet – 75016 Paris. Bank transfer: contact us. ps: Slowly but surely, we are catching up on sending tax receipts for some donations from recent weeks. We apologize to all the donors concerned, and will not forget any of them! -------------- BETHESDA "Do you want to be healed?" For a few days now, the 42nd episode of Bethesda has been online, with Marie, who speaks of the family tragedy that struck her family four years ago: the suicide of her son Adrien. A blast that left her in a long state of shock, before she slowly began to try to understand. The years have passed. Marie prayed a great deal; she also cried out. Then she found peace and consolation. What allowed her to pass from darkness to light: her Christian faith, which has always been at the heart of her life, and the memory of Adrien, who was a passionate, intense young man with many talents. Today, Marie conveys a message overflowing with joy, hope and energy, addressed especially to families facing the same trial as hers. To listen to Après l'hiver vient toujours le printemps, Marie's testimony, click here. 
Trigger warning: The second case involves sexualized violence. The state is obliged to compensate victims of violent crime when it failed to protect them. Yet in many cases it refuses to do exactly that. This episode of "Mordlust - Verbrechen und ihre Hintergründe" is about what happens after the crime. There is celebrating, toasting, drinking to someone's health. Daniel has been invited to a friend's birthday party. But a little later he disappears into the crowd and does not resurface. Nothing unusual for him; he often slips away without saying goodbye. Then, shortly afterwards, the 23-year-old is found badly injured on the autobahn. How he got there and what happened to him are questions he cannot answer himself. Tanja loves moving from one pub to the next. On this Friday evening, the 32-year-old meets a young man in a bar on Schönleinstraße in Berlin. Glasses are raised and one drink after another goes down. Tanja registers less and less of what is happening around her - including the fact that, around 5 a.m., she leaves the pub with the stranger. But it does not take long before her head clears: suddenly Tanja finds herself under the weight of the man, whose hands are closing around her neck. The two cases show that there is another fight "afterwards": against the authorities, and for recognition of what happened to those affected. In this episode we show what it is like for victims to be pushed into that role a second time, by the state. **Chapters** 00:05:47 - The "Daniel" case 00:34:07 - Aha: Who is entitled to compensation? 00:39:53 - Shock damages to third parties 00:43:56 - The "Tanja" case 01:11:58 - Aha: Benefits & degree of damage consequences 01:15:32 - What does the OEG procedure do to the victims? 
01:17:51 - Rejection rate 01:22:15 - Der Weiße Ring **Shownotes** *Counseling services & information for applications* BMAS: Opferentschädigungsrecht: https://bit.ly/3d5eO01 Hilfetelefon: https://bit.ly/2zpno1C Verkehrsopferhilfe: https://bit.ly/3d82iwy BMJ: Übersicht Traumaambulanzen: https://bit.ly/3S2dhXs ODABS: Wurden Sie Opfer einer Straftat?: https://bit.ly/3U37HWp *The "Daniel" case* This case was produced in collaboration with journalist Aylin Mercan, who is in close contact with Daniel's mother. Express: Bonner Kriminalfall bei "Aktenzeichen XY": https://bit.ly/3S0W4O4 TZ: Schwer verletzt auf Autobahn entdeckt: https://bit.ly/3DlTrCm *Aha: Who is entitled?* JuraForum: OEG: https://bit.ly/3dausHe Rechtsindex: Opferentschädigung: https://bit.ly/3QGHVo1 Soziales Entschädigungsrecht: Opferentschädigungsgesetz: https://bit.ly/3dfXi9b Bayer.Recht: Opferentschädigung: https://bit.ly/3RVeJeB Rechtsportal: Opferentschädigungsansprüche nach dem Amoklauf im Olympia-Einkaufszentrum München: https://bit.ly/3L7zOQg *The "Tanja" case* Verdict: (504 KLs) 284 Js 1011/16 (25/16) *Discussion* Weisser Ring: Das neue Entschädigungsrecht: https://bit.ly/3xjMKgl Bürgerratgeber: OEG: https://bit.ly/3RGyfeA T. Soliman: Ohne Erbarmen: https://bit.ly/3xikHO0 BMAS: Entschädigung für Opfer von Gewalttaten: https://bit.ly/3eK5W06 BMAS: Hilfe für Opfer von Gewalttaten: https://bit.ly/3S0wSao Weisser Ring: staatliche Opferentschädigung in Deutschland im Jahr 2021: https://bit.ly/3qA1nIu Panorama: Ohne Erbarmen: https://bit.ly/3U5ljR2 Panorama: Opferentschädigung: https://bit.ly/3xjeb9V **Credit** Producers/Hosts: Paulina Krasa, Laura Wohlers Research: Paulina Krasa, Laura Wohlers, Aylin Mercan Editing: Pauline Korb **Episode partners** Want to learn more about our advertising partners? Find all info & discounts here: https://linktr.ee/Mordlust
End of season 3. A recap of the ABA League playoff finals. On the players' performances this season and KK Partizan's decision not to take part in the KLS finals. In football, we finally have a coach! A welcome to Ilija Stolica. Crk, Lemmy, Gazza and Sleaford Mods. Running time: 93 minutes. We have launched a PATREON, so anyone who wants to can now treat us to a beer, extra content and/or technical upgrades to the podcast: www.patreon.com/pfchisterical And there is also: paypal.me/partizanhisterical HISTI RADIO MIX no. 66: A song for the summer break... and for the new coach! https://youtube.com/playlist?list=PLwhKCZrlUHqhV8lGDfnExcZoE7z1MR1-c Twitter: www.twitter.com/pfchisterical Instagram: www.instagram.com/pfchisterical/ Youtube: www.youtube.com/c/PartizanHistericalP... podcast.rs/show/histerical/ podcasts.apple.com/us/podcast/part…al/id1494766516 open.spotify.com/show/0HKmzzodsui…013451d2f4814212 NAPRED PARTIZAN! NOTE: This is a "do it yourself" podcast in which Partizan fans talk about what is happening in their beloved club. The opinions and views expressed are personal and do not represent the views of any supporters' group, faction, part of the management, or any work or other organization. We are not insiders, experts or sports officials. Absolute loyalty, not absolute truth. We try to keep our language refined and literary - the kind you can hear in every stadium and every sports hall. Cover photo: KK Partizan Sound: Rihard MeCrk Production: Slavko Tatić, Sleaford Mods and Histerikal
Trigger warning: Both cases involve violence against children. Lust for killing, treachery and greed - these elements characterize a murder, and so does cruelty: our topic in this episode. It describes HOW the murder was committed. And what a cruel act can look like is what this episode of "Mordlust - Verbrechen und ihre Hintergründe" shows you. Lea is head over heels in love - with Ümit. With him she can imagine a future together: moving in together, marriage, children. And the relationship is going well; they have been together for several months when Lea unexpectedly becomes pregnant. She wants to keep the child. Unlike Ümit, who feels anything but ready. Lea is supposed to have an abortion. But she refuses, and Ümit begins to forge a cruel plan. When Janine becomes pregnant, she is overjoyed. Now she can do better - better than her own parents, who were addicted to alcohol and eventually gave her up to foster families. But after Janine's daughter is born, the maternal bliss does not last long. The constant care of the little one overwhelms the 21-year-old. The young mother finds her outlet in going out partying, again and again - until, caught up in the rush of celebrating, she forgets that she has a little baby at home. Burning someone alive, or letting them slowly starve to death, can rightly be called especially cruel. Our courts see it the same way. People who torment their victims like this go to prison for life. But how severe must the suffering be to count as "cruel"? That is what we explore in this episode. 
**Chapters** 00:06:55 - The Lea case 00:34:02 - Aha: Cruelty by burning 00:41:40 - The Mila case 01:10:15 - Aha: Cruelty by starvation 01:17:32 - The murder element of cruelty **Shownotes** *The Lea case* Verdict of the Landgericht Berlin Zeit: Im Blutrausch: https://bit.ly/3zmphNl *Aha: Cruelty by burning* BGH decision: 5 StR 390/16 The New York Times: Falling Bodies, a 9/11 Image Etched in Pain: https://nyti.ms/3axxTWJ Sky News: 9/11 anniversary: 'I was burning alive': https://bit.ly/3Q4l2vF *The Mila case* Verdict of the Landgericht Arnsberg 2 Ks-412 Js 457/13-13/14 openjur.de: Verdict of the Landgericht Arnsberg 2 Ks-412 Js 457/13-13/14: https://bit.ly/3NXO1PX Spiegel: Mutter nach Hungertod ihres Babys zu lebenslanger Haft verurteilt: https://bit.ly/3tirB3U Die Glocke: Baby verhungert - Mutter lebenslang in Haft: https://bit.ly/3Q4ag8y Soester-Anzeiger: Wie aus Wunschkind Fee eine Belastung wurde: https://bit.ly/3NuLkW4 *Aha: Cruelty by starvation* hrr-strafrecht.de: "Fall Dennis", decision of March 13, 2007: https://bit.ly/3zhfcBb n-tv: Dennis' Eltern weinen über Urteil: https://bit.ly/3xegf22 wunderweib: Das passiert in deinem Körper, wenn du verdurstest: https://bit.ly/3GVnC2O *Discussion* strafrecht-online.org: Auslegung des Mordmerkmals "grausam": https://bit.ly/3Morhre openjur.de: LG Bonn, judgment of 04.10.2007 - 8 Kls 16/07: https://bit.ly/3ayuY04 guelpen.de: Mittäterschafts- und Teilnahmekonstellationen beim Mord: https://bit.ly/3NtHUD1 Welt: Der „Schleifer von Hameln“ will einen neuen Prozess: https://bit.ly/3axXWgP **Credit** Producers/Hosts: Paulina Krasa, Laura Wohlers Research: Paulina Krasa, Laura Wohlers, Maike Frye Editing: Pauline Korb **Episode partners** Want to learn more about our advertising partners? Find all info & discounts here: https://linktr.ee/Mordlust
Trigger warning: The first case involves sexualized violence against children. They have their whole lives ahead of them, yet have already turned down the wrong path. In this episode of "Mordlust - Verbrechen und ihre Hintergründe" we look at children, adolescents and young adults who went astray early on. After ten-year-old Nico first disappeared and was later found dead, and now another child is missing, Leipzig is stunned. What no one knows yet: someone has been hanging around schoolyards and playgrounds for some time, lying in wait for children - and the reason this person never attracted attention before is that they are themselves almost still a child. "I'm in a field by a barn and my friend is lying here. His throat has been cut somehow. You have to come quickly," Tobias shouts into his phone. His best friend Leon lies in front of him - dead. The two have known each other for over 14 years - Tobias is something like the fifth member of Leon's family. The two even shared dreams for the future, dreams that will now never come true. Phone bans and juvenile detention - the spectrum of "punishments" for juveniles is wide. First and foremost, the point is education, not punishment. But when children or adolescents kill someone, that is not enough. What happens when 14-year-olds become serious criminals, and when 19-year-olds who kill are still "too" juvenile for adult criminal law, is what you will hear in this episode. We also reveal whether today's youth really are as bad as everyone keeps saying. Interview partner in this episode: Vice President of the Bundesrechtsanwaltskammer Ulrike Paul **Chapters** 00:03:50 - The Nico case 00:27:47 - Aha: Children who commit crimes 00:33:01 - The Leon case 01:00:51 - Aha: Young adults in court 01:10:42 - Facts & figures 01:15:35 - Juvenile criminal law 01:23:18 - Debate **Shownotes** *The Nico case* S. Harbort: „Wenn Kinder töten” Verdict: Landgericht Leipzig 3 KS 33 JS 2059/92 + 12 VRs 303 Js 2059/92 Spiegel: Je mehr Gewalt, desto schöner: https://bit.ly/38dMBlg *Aha: Children who commit crimes* koerperverletzung.com: Wann liegt verminderte Schuldfähigkeit laut Strafrecht vor?: https://bit.ly/3sopMBW Focus: 13-jähriges Kind gesteht Schlag auf Fabian S.: https://bit.ly/3l26SwA DAHAG: Deliktsfähigkeit: Definition und Rechtsgrundlage: https://bit.ly/3FuKXrD *The Leon case* Verdict: Landgericht Paderborn, 01 KLs 3/16 Verdict: Bundesgerichtshof 4 StR 223/15 WDR: „Der Junge, der seinen besten Freund erschlug“: https://bit.ly/38bPnYd NW: Urteil im Scheunenmord: https://bit.ly/39Rxq1x NW: Paderborner Scheunenmord wird neu aufgerollt: https://bit.ly/3wf3iVb *Aha: Young adults in court* ZJJ: https://bit.ly/3MUVgHU N. C. Habermann: Die gutachterliche Praxis im Strafverfahren mit jugendlichen und heranwachsenden Angeklagten: https://bit.ly/38bMvdW *Discussion* Deutsches Jugendinstitut: Zahlen - Daten - Fakten - Jugendgewalt: https://bit.ly/3958umI Deutschlandfunk Kultur: Das Ende der Unschuld: https://bit.ly/3M2pc4O **Credit** Producers/Hosts: Paulina Krasa, Laura Wohlers Research: Paulina Krasa, Laura Wohlers, Johann Subklew, Selina Hare, Denise Friemann Editing: Pauline Korb **Episode partners** Want to learn more about our advertising partners? Find all info & discounts here: https://linktr.ee/Mordlust