"There's almost no story of the future going well that doesn't have a part that's like '…and no evil person steals the AI weights and goes and does evil stuff.' So it has highlighted the importance of information security: 'You're training a powerful AI system; you should make it hard for someone to steal' has popped out to me as a thing that just keeps coming up in these stories, keeps being present. It's hard to tell a story where it's not a factor. It's easy to tell a story where it is a factor." — Holden Karnofsky

What happens when a USB cable can secretly control your system? Are we hurtling toward a security nightmare as critical infrastructure connects to the internet? Is it possible to secure AI model weights from sophisticated attackers? And could AI actually make computer security better rather than worse?

With AI security concerns becoming increasingly urgent, we bring you insights from 15 top experts across information security, AI safety, and governance, examining the challenges of protecting our most powerful AI models and digital infrastructure — including a sneak peek from a not-yet-released episode with Tom Davidson, where he explains why we should be more worried about “secret loyalties” in AI agents.
You'll hear:
Holden Karnofsky on why every good future relies on strong infosec, and how hard it's been to hire security experts (from episode #158)
Tantum Collins on why infosec might be the rare issue everyone agrees on (episode #166)
Nick Joseph on whether AI companies can develop frontier models safely with the current state of information security (episode #197)
Sella Nevo on why AI model weights are so valuable to steal, the weaknesses of air-gapped networks, and the risks of USBs (episode #195)
Kevin Esvelt on what cryptographers can teach biosecurity experts (episode #164)
Lennart Heim on Rob's computer security nightmares (episode #155)
Zvi Mowshowitz on the insane lack of security mindset at some AI companies (episode #184)
Nova DasSarma on the best current defences against well-funded adversaries, politically motivated cyberattacks, and exciting progress in infosecurity (episode #132)
Bruce Schneier on whether AI could eliminate software bugs for good, and why it's bad to hook everything up to the internet (episode #64)
Nita Farahany on the dystopian risks of hacked neurotech (episode #174)
Vitalik Buterin on how cybersecurity is the key to defence-dominant futures (episode #194)
Nathan Labenz on how even internal teams at AI companies may not know what they're building (episode #176)
Allan Dafoe on backdooring your own AI to prevent theft (episode #212)
Tom Davidson on how dangerous “secret loyalties” in AI models could be (episode to be released!)
Carl Shulman on the challenge of trusting foreign AI models (episode #191, part 2)
Plus lots of concrete advice on how to get into this field and find your fit

Check out the full transcript on the 80,000 Hours website.

Chapters:
Cold open (00:00:00)
Rob's intro (00:00:49)
Holden Karnofsky on why infosec could be the issue on which the future of humanity pivots (00:03:21)
Tantum Collins on why infosec is a rare AI issue that unifies everyone (00:12:39)
Nick Joseph on whether the current state of information security makes it impossible to responsibly train AGI (00:16:23)
Nova DasSarma on the best available defences against well-funded adversaries (00:22:10)
Sella Nevo on why AI model weights are so valuable to steal (00:28:56)
Kevin Esvelt on what cryptographers can teach biosecurity experts (00:32:24)
Lennart Heim on the possibility of an autonomously replicating AI computer worm (00:34:56)
Zvi Mowshowitz on the absurd lack of security mindset at some AI companies (00:48:22)
Sella Nevo on the weaknesses of air-gapped networks and the risks of USB devices (00:49:54)
Bruce Schneier on why it's bad to hook everything up to the internet (00:55:54)
Nita Farahany on the possibility of hacking neural implants (01:04:47)
Vitalik Buterin on how cybersecurity is the key to defence-dominant futures (01:10:48)
Nova DasSarma on exciting progress in information security (01:19:28)
Nathan Labenz on how even internal teams at AI companies may not know what they're building (01:30:47)
Allan Dafoe on backdooring your own AI to prevent someone else from stealing it (01:33:51)
Tom Davidson on how dangerous “secret loyalties” in AI models could get (01:35:57)
Carl Shulman on whether we should be worried about backdoors as governments adopt AI technology (01:52:45)
Nova DasSarma on politically motivated cyberattacks (02:03:44)
Bruce Schneier on the day-to-day benefits of improved security and recognising that there's never zero risk (02:07:27)
Holden Karnofsky on why it's so hard to hire security people despite the massive need (02:13:59)
Nova DasSarma on practical steps to getting into this field (02:16:37)
Bruce Schneier on finding your personal fit in a range of security careers (02:24:42)
Rob's outro (02:34:46)

Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Katy Moore and Milo McGuire
Transcriptions and web: Katy Moore
Will LLMs soon be made into autonomous agents? Will they lead to job losses? Is AI misinformation overblown? Will it prove easy or hard to create AGI? And how likely is it that it will feel like something to be a superhuman AGI?

With AGI back in the headlines, we bring you 15 opinionated highlights from the show addressing those and other questions, intermixed with opinions from hosts Luisa Rodriguez and Rob Wiblin recorded back in 2023.

Check out the full transcript on the 80,000 Hours website. You can decide whether the views we and our guests expressed then have held up over these last two busy years.

You'll hear:
Ajeya Cotra on overrated AGI worries
Holden Karnofsky on the dangers of aligned AI, why unaligned AI might not kill us, and the power that comes from just making models bigger
Ian Morris on why the future must be radically different from the present
Nick Joseph on whether his company's internal safety policies are enough
Richard Ngo on what everyone gets wrong about how ML models work
Tom Davidson on why he believes crazy-sounding explosive growth stories… and Michael Webb on why he doesn't
Carl Shulman on why you'll prefer robot nannies over human ones
Zvi Mowshowitz on why he's against working at AI companies except in some safety roles
Hugo Mercier on why even superhuman AGI won't be that persuasive
Rob Long on the case for and against digital sentience
Anil Seth on why he thinks consciousness is probably biological
Lewis Bollard on whether AI advances will help or hurt nonhuman animals
Rohin Shah on whether humanity's work ends at the point it creates AGI

And of course, Rob and Luisa also regularly chime in on what they agree and disagree with.

Chapters:
Cold open (00:00:00)
Rob's intro (00:00:58)
Rob & Luisa: Bowerbirds compiling the AI story (00:03:28)
Ajeya Cotra on the misalignment stories she doesn't buy (00:09:16)
Rob & Luisa: Agentic AI and designing machine people (00:24:06)
Holden Karnofsky on the dangers of even aligned AI, and how we probably won't all die from misaligned AI (00:39:20)
Ian Morris on why we won't end up living like The Jetsons (00:47:03)
Rob & Luisa: It's not hard for nonexperts to understand we're playing with fire here (00:52:21)
Nick Joseph on whether AI companies' internal safety policies will be enough (00:55:43)
Richard Ngo on the most important misconception in how ML models work (01:03:10)
Rob & Luisa: Issues Rob is less worried about now (01:07:22)
Tom Davidson on why he buys the explosive economic growth story, despite it sounding totally crazy (01:14:08)
Michael Webb on why he's sceptical about explosive economic growth (01:20:50)
Carl Shulman on why people will prefer robot nannies over humans (01:28:25)
Rob & Luisa: Should we expect AI-related job loss? (01:36:19)
Zvi Mowshowitz on why he thinks it's a bad idea to work on improving capabilities at cutting-edge AI companies (01:40:06)
Holden Karnofsky on the power that comes from just making models bigger (01:45:21)
Rob & Luisa: Are risks of AI-related misinformation overblown? (01:49:49)
Hugo Mercier on how AI won't cause misinformation pandemonium (01:58:29)
Rob & Luisa: How hard will it actually be to create intelligence? (02:09:08)
Robert Long on whether digital sentience is possible (02:15:09)
Anil Seth on why he believes in the biological basis of consciousness (02:27:21)
Lewis Bollard on whether AI will be good or bad for animal welfare (02:40:52)
Rob & Luisa: The most interesting new argument Rob's heard this year (02:50:37)
Rohin Shah on whether AGI will be the last thing humanity ever does (02:57:35)
Rob's outro (03:11:02)

Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Transcriptions and additional content editing: Katy Moore
Three members of Citi's global real estate research team—Nick Joseph in the United States, Aaron Guy in the U.K., and Howard Penny in Australia—joined the latest episode of the Nareit REIT Report podcast to share their thoughts on regional outlooks and sector performance.

Macro fundamentals, interest rates, and geopolitical sentiment are mixed globally, resulting in stock preferences that are heavily driven by regional teams' micro analysis, Joseph said. The key driver behind Citi's list of most preferred stocks in 2025 is the presence of positive rental growth, with rental growth weakness and high valuations the key reasons for the least preferred subsectors. Sector preferences are not globally consistent, Joseph stressed, highlighting the presence of significant local supply and demand drivers.

Regionally, Citi is most positive on the U.S., Australia, Europe, the Philippines, and Indonesia, and more cautious on China, Latin America, India, Japan, Singapore, Hong Kong, and Thailand.
"A shameless recycling of existing content to drive additional audience engagement on the cheap… or the single best, most valuable, and most insight-dense episode we put out in the entire year, depending on how you want to look at it." — Rob Wiblin

It's that magical time of year once again — highlightapalooza! Stick around for one top bit from each episode, including:

How to use the microphone on someone's mobile phone to figure out what password they're typing into their laptop
Why mercilessly driving the New World screwworm to extinction could be the most compassionate thing humanity has ever done
Why evolutionary psychology doesn't support a cynical view of human nature but actually explains why so many of us are intensely sensitive to the harms we cause to others
How superforecasters and domain experts seem to disagree so much about AI risk, but when you zoom in it's mostly a disagreement about timing
Why the sceptics are wrong and you will want to use robot nannies to take care of your kids — and also why, despite having big worries about the development of AGI, Carl Shulman is strongly against efforts to pause AI research today
How much of the gender pay gap is due to direct pay discrimination vs other factors
How cleaner wrasse fish blow the mirror test out of the water
Why effective altruism may be too big a tent to work well
How we could best motivate pharma companies to test existing drugs to see if they help cure other diseases — something they currently have no reason to bother with

…as well as 27 other top observations and arguments from the past year of the show.

Check out the full transcript and episode links on the 80,000 Hours website.

Remember that all of these clips come from the 20-minute highlight reels we make for every episode, which are released on our sister feed, 80k After Hours.
So if you're struggling to keep up with our regularly scheduled entertainment, you can still get the best parts of our conversations there.

It has been a hell of a year, and we can only imagine next year is going to be even weirder — but Luisa and Rob will be here to keep you company as Earth hurtles through the galaxy to a fate as yet unknown.

Enjoy, and we look forward to speaking with you in 2025!

Chapters:
Rob's intro (00:00:00)
Randy Nesse on the origins of morality and the problem of simplistic selfish-gene thinking (00:02:11)
Hugo Mercier on the evolutionary argument against humans being gullible (00:07:17)
Meghan Barrett on the likelihood of insect sentience (00:11:26)
Sébastien Moro on the mirror test triumph of cleaner wrasses (00:14:47)
Sella Nevo on side-channel attacks (00:19:32)
Zvi Mowshowitz on AI sleeper agents (00:22:59)
Zach Weinersmith on why space settlement (probably) won't make us rich (00:29:11)
Rachel Glennerster on pull mechanisms to incentivise repurposing of generic drugs (00:35:23)
Emily Oster on the impact of kids on women's careers (00:40:29)
Carl Shulman on robot nannies (00:45:19)
Nathan Labenz on kids and artificial friends (00:50:12)
Nathan Calvin on why it's not too early for AI policies (00:54:13)
Rose Chan Loui on how control of OpenAI is independently incredibly valuable and requires compensation (00:58:08)
Nick Joseph on why he's a big fan of the responsible scaling policy approach (01:03:11)
Sihao Huang on how the US and UK might coordinate with China (01:06:09)
Nathan Labenz on better transparency about predicted capabilities (01:10:18)
Ezra Karger on what explains forecasters' disagreements about AI risks (01:15:22)
Carl Shulman on why he doesn't support enforced pauses on AI research (01:18:58)
Matt Clancy on the omnipresent frictions that might prevent explosive economic growth (01:25:24)
Vitalik Buterin on defensive acceleration (01:29:43)
Annie Jacobsen on the war games that suggest escalation is inevitable (01:34:59)
Nate Silver on whether effective altruism is too big to succeed (01:38:42)
Kevin Esvelt on why killing every screwworm would be the best thing humanity ever did (01:42:27)
Lewis Bollard on how factory farming is philosophically indefensible (01:46:28)
Bob Fischer on how to think about moral weights if you're not a hedonist (01:49:27)
Elizabeth Cox on the empirical evidence of the impact of storytelling (01:57:43)
Anil Seth on how our brain interprets reality (02:01:03)
Eric Schwitzgebel on whether consciousness can be nested (02:04:53)
Jonathan Birch on our overconfidence around disorders of consciousness (02:10:23)
Peter Godfrey-Smith on uploads of ourselves (02:14:34)
Laura Deming on surprising things that make mice live longer (02:21:17)
Venki Ramakrishnan on freezing cells, organs, and bodies (02:24:46)
Ken Goldberg on why low fault tolerance makes some skills extra hard to automate in robots (02:29:12)
Sarah Eustis-Guthrie on the ups and downs of founding an organisation (02:34:04)
Dean Spears on the cost effectiveness of kangaroo mother care (02:38:26)
Cameron Meyer Shorb on vaccines for wild animals (02:42:53)
Spencer Greenberg on personal principles (02:46:08)

Producing and editing: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video editing: Simon Monsour
Transcriptions: Katy Moore
In this crosspost from the 80,000 Hours podcast, host Rob Wiblin interviews Nick Joseph, Head of Training at Anthropic, about the company's responsible scaling policy for AI development. The episode delves into Anthropic's approach to AI safety, the growing trend of voluntary commitments from top AI labs, and the need for public scrutiny of frontier model development. The conversation also covers AI safety career advice, with a reminder that 80,000 Hours offers free career advising sessions for listeners. Join us for an insightful discussion on the future of AI and its societal implications.

Apply to join over 400 Founders and Execs in the Turpentine Network: https://www.turpentinenetwork.co/

SPONSORS:

WorkOS: Building an enterprise-ready SaaS app? WorkOS has got you covered with easy-to-integrate APIs for SAML, SCIM, and more. Join top startups like Vercel, Perplexity, Jasper & Webflow in powering your app with WorkOS. Enjoy a free tier for up to 1M users! Start now at https://bit.ly/WorkOS-Turpentine-Network

Weights & Biases Weave: Weights & Biases Weave is a lightweight AI developer toolkit designed to simplify your LLM app development. With Weave, you can trace and debug input, metadata, and output with just 2 lines of code. Make real progress on your LLM development and visit the following link to get started with Weave today: https://wandb.me/cr

80,000 Hours: 80,000 Hours offers free one-on-one career advising for Cognitive Revolution listeners aiming to tackle global challenges, especially in AI. They connect high-potential individuals with experts, opportunities, and personalized career plans to maximize positive impact. Apply for a free call at https://80000hours.org/cognitiverevolution to accelerate your career and contribute to solving pressing AI-related issues.

Omneky: Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off: https://www.omneky.com/

RECOMMENDED PODCAST: This Won't Last - Eavesdrop on Keith Rabois, Kevin Ryan, Logan Bartlett, and Zach Weinberg's monthly backchannel featuring their hottest takes on the future of tech, business, and venture capital. Spotify: https://open.spotify.com/show/2HwSNeVLL1MXy0RjFPyOSz

CHAPTERS:
(00:00:00) About the Show
(00:00:22) Sponsors: WorkOS
(00:01:22) About the Episode
(00:04:31) Intro and Nick's background
(00:08:37) Model training and scaling laws
(00:13:10) Nick's role at Anthropic
(00:16:49) Responsible Scaling Policies overview (Part 1)
(00:18:00) Sponsors: Weights & Biases Weave | 80,000 Hours
(00:20:39) Responsible Scaling Policies overview (Part 2)
(00:25:24) AI Safety Levels framework
(00:30:33) Benefits of RSPs (Part 1)
(00:33:15) Sponsors: Omneky
(00:33:38) Benefits of RSPs (Part 2)
(00:36:32) Concerns about RSPs
(00:47:33) Sandbagging and evaluation challenges
(00:54:46) Critiques of RSPs
(01:03:11) Trust and accountability
(01:12:03) Conservative vs. aggressive approaches
(01:17:43) Capabilities vs. safety research
(01:23:47) Working at Anthropic
(01:35:14) Nick's career journey
(01:45:12) Hiring at Anthropic
(01:52:06) Concerns about AI capabilities work
(02:03:38) Anthropic office locations
(02:08:46) Pressure and stakes at Anthropic
(02:18:09) Overrated and underrated AI applications
(02:35:57) Closing remarks
(02:38:33) Sponsors: Outro
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Organization Updates: September 2024, published by Toby Tremlett on September 19, 2024 on The Effective Altruism Forum.

If you would like to see EA Organization Updates as soon as they come out, consider subscribing to this tag. Some of the opportunities and job listings we feature in this update have (very) pressing deadlines (see AI Alignment Teaching Fellow opportunities at BlueDot Impact, September 22, and Institutional Foodservice Fellow at the Good Food Institute, September 18).

You can see previous updates on the "EA Organization Updates (monthly series)" topic page, or in our repository of past newsletters. Note that there's also an "org update" tag, where you can find more news and updates that are not part of this consolidated series.

These monthly posts originated as the "Updates" section of the monthly EA Newsletter. Organizations submit their own updates, which we edit for clarity. (If you'd like to share your updates and jobs via this series, please apply here.)

Opportunities and jobs

Opportunities

Consider also checking opportunities listed on the EA Opportunity Board and the Opportunities to Take Action tag.

ALLFED published a new database containing numerous research projects that prospective volunteers can assist with. Explore the database and apply here.

Apply to the upcoming AI Safety Fundamentals: Alignment course by October 6 to learn about the risks from AI and how you can contribute to the field.

The Animal Advocacy Careers Introduction to Animal Advocacy Course has been revamped. The course is for those wishing to kickstart a career in animal advocacy.

Giv Effektivt (DK) needs ~110 EU citizens to become members before the new year in order to offer tax deductions of around 450,000 DKK ($66,000) for 2024-25 donations. Become a member now for 50 DKK ($7).
An existing donor will give 100 DKK for each new member until the organization reaches 300 members.

Anima International's Animal Advocacy Training Center released a new online course, Fundraising Essentials. It's a free, self-paced resource with over two hours of video content for people new to the subject.

Job listings

Consider also exploring jobs listed on the Job listing (open) tag. For even more roles, check the 80,000 Hours Job Board.

BlueDot Impact
AI Alignment Teaching Fellow (Remote, £4.9K-£9.6K, apply by September 22nd)

Centre for Effective Altruism
Head of Operations (Remote, £107.4K / $179.9K, apply by October 7th)

Cooperative AI Foundation
Communications Officer (Remote, £35K-£40K, apply by September 29th)

GiveWell
Senior Researcher (Remote, $200K-$220.6K)

Giving What We Can
Global CEO (Remote, $130K+, apply by September 30th)

Open Philanthropy
Operations Coordinator/Associate (San Francisco, Washington, DC, $99.6K-$122.6K)
If you're interested in working at Open Philanthropy but don't see an open role that matches your skillset, express your interest.

Epoch AI
Question Writer, Math Benchmark (Contractor Position) (Remote, $2K monthly + $100-$1K performance-based bonus)
Senior Researcher, ML Distributed Systems (Remote, $150K-$180K)

The Good Food Institute
Managing Director, GFI India (Hybrid (Mumbai, Delhi, Hyderabad, or Bangalore), ₹4.5M, apply by October 2nd)
Institutional Foodservice Fellow (Independent Contractor) (Remote in US, $3.6K biweekly, apply by September 18th)

Organization updates

The organization updates are in alphabetical order (0-A-Z).

80,000 Hours

There is one month left to win $5,000 career grants by referring your friends or colleagues to 80,000 Hours' free career advising. Also, the organization released a blog post about the recent updates to their AI-related content, as well as a post about pandemic preparedness in relation to mpox and H5N1.
On the 80,000 Hours Podcast, Rob interviewed: Nick Joseph on whether Anthropic's AI safety policy is up to the task...
This is a selection of highlights from episode #197 of The 80,000 Hours Podcast. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode: Nick Joseph on whether Anthropic's AI safety policy is up to the task

And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.

Highlights:
Rob's intro (00:00:00)
What Anthropic's responsible scaling policy commits the company to doing (00:00:17)
Why Nick is a big fan of the RSP approach (00:02:13)
Are RSPs still valuable if the people using them aren't bought in? (00:05:07)
Nick's biggest reservations about the RSP approach (00:08:01)
Should Anthropic's RSP have wider safety buffers? (00:11:17)
Alternatives to RSPs (00:14:57)
Should concerned people be willing to take capabilities roles? (00:19:22)

Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong
The three biggest AI companies — Anthropic, OpenAI, and DeepMind — have now all released policies designed to make their AI models less likely to go rogue or cause catastrophic damage as they approach, and eventually exceed, human capabilities. Are they good enough?

That's what host Rob Wiblin tries to hash out in this interview (recorded May 30) with Nick Joseph — one of the original cofounders of Anthropic, its current head of training, and a big fan of Anthropic's “responsible scaling policy” (or “RSP”). Anthropic is the most safety-focused of the AI companies, known for a culture that treats the risks of its work as deadly serious.

Links to learn more, highlights, video, and full transcript.

As Nick explains, these scaling policies commit companies to dig into what new dangerous things a model can do — after it's trained, but before it's in wide use. The companies then promise to put in place safeguards they think are sufficient to tackle those capabilities before availability is extended further. For instance, if a model could significantly help design a deadly bioweapon, then its weights need to be properly secured so they can't be stolen by terrorists interested in using it that way.

As capabilities grow further — for example, if testing shows that a model could exfiltrate itself and spread autonomously in the wild — then new measures would need to be put in place to make that impossible, or to demonstrate that such a goal can never arise.

Nick points out what he sees as the biggest virtues of the RSP approach, and then Rob pushes him on some of the best objections he's found to RSPs being up to the task of keeping AI safe and beneficial.
The two also discuss whether it's essential to eventually hand over operation of responsible scaling policies to external auditors or regulatory bodies, if those policies are going to be able to hold up against the intense commercial pressures that might end up arrayed against them.

In addition to all of that, Nick and Rob talk about:

What Nick thinks are the current bottlenecks in AI progress: people and time (rather than data or compute)
What it's like working in AI safety research at the leading edge, and whether pushing forward capabilities (even in the name of safety) is a good idea
What it's like working at Anthropic, and how to get the skills needed to help with the safe development of AI

And as a reminder, if you want to let us know your reaction to this interview, or send any other feedback, our inbox is always open at podcast@80000hours.org.

Chapters:
Cold open (00:00:00)
Rob's intro (00:01:00)
The interview begins (00:03:44)
Scaling laws (00:04:12)
Bottlenecks to further progress in making AIs helpful (00:08:36)
Anthropic's responsible scaling policies (00:14:21)
Pros and cons of the RSP approach for AI safety (00:34:09)
Alternatives to RSPs (00:46:44)
Is an internal audit really the best approach? (00:51:56)
Making promises about things that are currently technically impossible (01:07:54)
Nick's biggest reservations about the RSP approach (01:16:05)
Communicating “acceptable” risk (01:19:27)
Should Anthropic's RSP have wider safety buffers? (01:26:13)
Other impacts on society and future work on RSPs (01:34:01)
Working at Anthropic (01:36:28)
Engineering vs research (01:41:04)
AI safety roles at Anthropic (01:48:31)
Should concerned people be willing to take capabilities roles? (01:58:20)
Recent safety work at Anthropic (02:10:05)
Anthropic culture (02:14:35)
Overrated and underrated AI applications (02:22:06)
Rob's outro (02:26:36)

Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video engineering: Simon Monsour
Transcriptions: Katy Moore
Enjoy this week's episode with LA SANTA, head honcho of Redolent Music (along with CHUS), and a DJ & producer influenced by Classical Music, Jazz, Bossa Nova, Soul, and World Music. This amalgamation of cultures allowed her to blend them into a unique sound. She creates a unique and extraordinary sense of belonging, enhanced through an inner journey. Her DJ sets are filled with sensitivity, harmony, high doses of groove, drums & ethnic roots.

She has shared the DJ booth with the best international electronic DJs of the moment, such as The Martinez Brothers, Seth Troxler, Blond:ish, CHUS, Dennis Ferrer, Deborah De Luca, Oscar L, Audio Fly, and Birds Of Mind, to name a few. La Santa spreads her energy & grooves all over the world with her continuous plays in Ibiza, Tulum, Cairo, Guatemala, Panama, Bali, India, Morocco...

Her style, influenced by English, Dutch & American underground sounds, converged into house music inspired by Soul, Tribal, Latin, and World Music, giving birth to a very versatile DJ whose sets can range from Minimal Deep Tech to Melodic sounds, passing through Afro Tech & Latin beats. La Santa's strength resides in her charisma: she creates an extraordinary setting and atmosphere when she plays, where her energy can be felt.

La Santa's productions are highly acclaimed in the scene, and she has worked with high-profile artists such as FKA Mash, Jinadu, Sparrow & Barbossa, G.Zamora, Coco, Blueheist, and D-Formation, to name a few. To give an example: the track “Cumanayagua” by La Santa & Sparrow & Barbossa reached number 11 in the top 100 best-selling Afro House tracks of 2021 on Beatport. Labels such as Stereo Productions, Nervous, Madoras In Da House, and Redolent are witnesses to her wide range of music styles and unlimited love for different genres.

Redolent Music is her most recent project: a new independent boutique record label, event producer, management agency, and lifestyle concept, created alongside renowned artist, DJ, and producer CHUS, whose purpose is to cater to the evolving music industry and develop emerging talent. La Santa & Chus recently launched Slave To The Rhythm, an event curated by Stereo Productions & Redolent that debuted on the Island of Gods, Bali, and has already been to Miami, Tulum, and Ibiza, with dates coming in Bali, Australia, Cyprus, Pakistan, Morocco, Dubai…

Enjoy this Summer Tribal Afro journey with LA SANTA, including her latest release Mi Vida with Peter Guzman & ANDREATENS & lots of bombs on Redolent!

01. LUCERO - Matafiyi (Radio Edit) (Redolent)
02. La Santa, Peter Guzman, ANDREATENS - Mi Vida (Redolent)
03. Stefano Ranieri - Taste Of The Future
04. diephuis & angelos - dam
05. Marlon D - Kiwi (AfroPunk Mix)
06. Da Le (Havana) - Caurí (Redolent)
07. Caiiro - Drummotions (The Mike Dunn Movement Mix)
08. Nick Joseph & Mizbee - On Me
09. Fiin, Pezlo (MD), Vikina - Mi Tierra (La Santa, G. Zamora Remix) (Redolent)
10. Pauza, Jalal Ramdani - Wataki (Redolent)
11. Calussa, Augusto Yepes - Bayamo (Redolent)
12. PolyRhythm, Sheleah Monea, Norty Cotto - Cherish The Day (Norty Cotto Deepside Mix)

This show is syndicated & distributed exclusively by Syndicast. If you are a radio station interested in airing the show, or would like to distribute your podcast / radio show, please register here: https://syndicast.co.uk/distribution/registration
Three members of Citi's global real estate research team—Nick Joseph in the U.S., Aaron Guy in the U.K., and Howard Penny in Australia—were guests on the latest episode of the Nareit REIT Report podcast.

Amid the market crosscurrents of slower global economic growth, offset to some extent by anticipated lower interest rates, “we are generally constructive on commercial real estate in 2024,” Joseph said. At the same time, Citi's 2024 global real estate outlook also stresses the importance of stock picking, rather than broader sector and subsector allocations.

For the U.S., Citi is projecting total returns for REITs of 10% to 15% in 2024, with adjusted FFO growth of about 3%. “Growth is being driven by slowing but still solid operating results, development and the redevelopment benefit that is coming on, and the retention of free cash flow,” Joseph said.
Comedian and entrepreneur Nick Joseph (better known as Nick Nack Pattiwhack) stopped by to talk about finding viral success, new business ventures, and a possible return to comedy. (Note: This episode was recorded on January 28, 2023.)
Today, Ben and Anthony are joined by the newest member of the Kinjaz crew, Nick Joseph, who talks about his dance career so far, choreographing for BTS, and creating his own clothing line.

00:00: Intro
02:00: Origin story
07:00: Dance as a career
12:00: Working with BTS
16:00: Crafting an individual style
21:00: Joining the Kinjaz crew
28:00: Getting into fashion and launching unto thee
31:00: Taking time for yourself
35:00: Defining success
37:00: Lightning round
42:00: Mastering patience
43:00: Outro

You can follow Nick on Instagram @nickjxseph and YouTube here.

Learn more about your ad choices. Visit podcastchoices.com/adchoices
The potential for rolling country-level recessions and the impact of higher interest rates on transactions and capital flows are among the main themes influencing global real estate investment in 2023. The result, according to Citi's global real estate research team, is a set of opportunities mixed in with the broader uncertainty facing investors. "I think there's a lot more uncertainty entering 2023 than we've seen at least in the past few years, putting aside COVID. I think that's going to be the biggest challenge and opportunity depending on how that turns out," Nick Joseph, Citi's global head of real estate and head of the U.S. real estate and lodging research team, told Nareit's REIT Report podcast. Global sector-specific trends that Citi is monitoring include: the future of the office; e-commerce and its impact on brick and mortar; lodging demand and recovery; health care coming out of the pandemic; housing trends; and digital transformation and its impact on infrastructure.
With Paul on holiday (special), James flew (Han) solo to interview Nick Joseph, who played Arhul Hextrophon, a Major in the Rebellion, in A New Hope.
Nick Joseph has choreographed for some of the world's most renowned Korean music artists and groups, such as BTS, TXT, Enhypen, SuperM, and more. It's wild to think that Nick didn't even want to be a choreographer at first; he just wanted to learn from the dancers he looked up to! If you want to take online dance classes from Nick, click this link → https://steezy.co/podcast3nickaudio
Bowman's Friends is a podcast created to connect and inform UK students of issues, events, and cool stuff on campus and the Lexington area. It is hosted by UK students, for the UK community. Our goal is to amplify student voices through advocating for equity, inclusion, and representation of all. On this episode, Spencer Neichter and Sophia Didier sat down with Nick Joseph, the Overall Chair of DanceBlue, and Cliff York, the Mini Marathons Chair of DanceBlue, to discuss the organization, the DanceBlue marathon and what leads up to it, how anyone both at UK and not can help, and more!
LEXINGTON, Ky. (February 24, 2022) – DanceBlue is a University of Kentucky student-run organization that fundraises year-round and culminates in a 24-hour no sitting, no sleeping dance marathon. The money raised through DanceBlue is donated to the Golden Matrix Fund, established to support the kids of the DanceBlue Kentucky Children's Hospital Hematology/Oncology Clinic both today and well into the future through an endowment. The Golden Matrix Fund was created to benefit the clinic's patients and families, by providing care and support for the kids through giving them and their families comfort. DanceBlue funds also support the year-long fundraising engine and operations that underpin the mission, as well as providing funds to support research at the UK Markey Cancer Center. Through more than 15 years of DanceBlue, more than $18 million has been raised to support this cause. On this episode of Behind the Blue, DanceBlue Overall Chair Nick Joseph and Marketing Chair Jennifer Derk talk about their personal experiences with DanceBlue, the impact they see the event has on the UK community, and more. "Behind the Blue" is available on iTunes, Google Play, Stitcher and Spotify. Become a subscriber to receive new episodes of “Behind the Blue” each week. UK's latest medical breakthroughs, research, artists and writers will be featured, along with the most important news impacting the university. For questions or comments about this or any other episode of "Behind the Blue," email BehindTheBlue@uky.edu or tweet your question with #BehindTheBlue. Transcripts for this or other episodes of Behind the Blue can be downloaded from the show's blog page. To discover what's wildly possible at the University of Kentucky, click here.
Oh, no baby what is you doing?! We KNOW you weren't about to skip this episode, were you?? Hi Creators! This week we are joined by Nick Joseph, otherwise known as Nick Nack PattiWhack, to discuss:
- New Orleans upbringing, linking with DanRue, and going viral
- Biggest lessons and experiences between the Vine era, Wild 'N Out, the BET Hip Hop Awards, Netflix's Insta Famous, T-Pain, and more!
- Building his liquor business, Amistad Premium Liquor, and his travel company, We Live Adventures
- Advice for modern-day influencers, serial entrepreneurs, and what's next for Nick
We also play a game of "Where is Da Milk" where Nick comments on who would win in a Bounce off! Hosts Akilah and Eboné get a 1:1 lesson on how to dance New Orleans style. Tune in now!
Keep up with us!
@TheBlackCreatorsClub
@AkilahFfriend
@EboneChatman12
And our guest @NicknPattiWhack
Support us by buying our merch featured in the episode! https://www.theblackcreatorsclub.com/shop
Thank you for supporting our independently-funded content!
Patreon: https://www.patreon.com/theblackcreatorsclub
Paypal: https://paypal.me/blackcreatorsclub?locale.x=en_US
Venmo: https://account.venmo.com/u/theblackcreatorsclub
We're finally back in the studio as Mark and Dave talk Star Wars as the days roll down to the arrival of The Mandalorian and The Rise of Skywalker. In this special extra-length episode, Martin Keeler heads to D23 Expo to catch up with Vader Immortal's Ben Snow, Matt Martin, and Jose Perez III; Paul Naylor reports from HooFest with A New Hope actor Nick Joseph; and Mark travels to New York Comic Con, where he walks the booth with Christine from PopMinded by Hallmark, chats with the team at Arcade1Up, talks Star Wars: The Ultimate Pop-Up Galaxy with author Matthew Reinhart and artist Kevin Wilson, and catches up with Anne Neumann from Rancho Obi-Wan. Get comfy, this is longer than the Kessel Run at sublight speed on the latest episode of Making Tracks. You can find us on Apple Podcasts, Google Podcasts, Stitcher, Android, Spotify, Soundcloud, TuneIn, iHeartRadio, and Spreaker. You can find us on Smart Speakers such as Amazon Alexa, Apple HomePod, Google Home, and Sonos. We are also available in your car with Apple CarPlay and Android Auto, on your gaming console, and television; and of course you can find us on Fantha Tracks TV on YouTube, or the Fantha Tracks App. For all the details on how to listen in and subscribe, check out our dedicated page for Fantha Tracks Radio. You can contact any of our shows by emailing radio@fanthatracks.com or comment on our social media feeds: https://www.youtube.com/channel/UCZ7LZotr3rQhVJwpO3b2ELw http://instagram.com/fanthatracks http://www.facebook.com/FanthaTracks https://twitter.com/FanthaTracks https://www.pinterest.co.uk/fanthatracks/ https://fanthatracks.tumblr.com/ https://www.tiktok.com/@fanthatracks
By definition, brain drain is the emigration of highly trained or intelligent people from a particular country. India and its surrounding countries experience the brain drain epidemic, with their societies' top talent seeking prosperous opportunities abroad in countries such as the USA or Canada. Why does this happen? What do countries "abroad" offer that makes the move across the globe worth it? What is the impact? Join the Carolina Desis as they pretend to be geo-politicians. Thank you, Nick Joseph, for this month's episode topic! And as always, thank you Yash Mistry for editing this month's episode!
This week, more from The Capitol 3 convention: three actors who've made regular appearances in the classic series of Dr Who: Prentis Hancock, Cy Town, and Nick Joseph. You can see my photos of the event here: https://www.flickr.com/photos/tdrury/albums/72157695567390494 End Theme: Dr Who (8-Bit Version) by Finn Talisker. The show is now on Facebook; please join the group for exclusive behind-the-scenes insights and of course also to discuss and feed back on the show: https://www.facebook.com/groups/187162411486307/ If you want to send me comments or feedback you can email them to tdrury2003@yahoo.co.uk or contact me on twitter where I'm @tdrury or send me a friend request and your comments to facebook where I'm Tim Drury and look like this http://www.flickr.com/photos/tdrury/3711029536/in/set-72157621161239599/ in case you were wondering.
In our 41st episode, two scruffy-looking nerf herders discuss the latest goings-on Around the Rim. Interview with special guest Nick Joseph, who portrayed Major Arhul Hextrophon during the medal ceremony in Star Wars: Episode IV A New Hope.
Episode VIII trailer date confirmed
The return of Forces of Destiny
Ron Howard releases new photos from the set of SOLO
Desperate and dangerous times in the galaxy
Emilia Clarke filming complete
John Boyega presents a new look at Star Wars Battlefront II
Don't forget these great events:
Coventry Comic Con (1st October 2017) https://www.coventrycomiccon.uk
SW40 (9th December 2017) https://www.coventrycomiccon.uk/fargo/
Give a LIKE, SUBSCRIBE and COMMENT BELOW
You can find other great podcasts on the network, as well as comic and entertainment news, by jumping on Twitter and following @taylornetwork
Nick Joseph talks to us about working with 3 Doctors on Doctor Who & meddling with the Stars of Star Wars. Find Nick here http://nickjoseph1.weebly.com/ Whovian Round-up & Round-up Reviews are by http://indiemacuser.com/ Gallifrey Stands can be found on Twitter @DoctorSquee, by email GallifreyStandsPodcast@gmail.com, on Stitcher, iTunes, The Tangent-Bound Network, Satchel Player & http://gallifreystandspodcast.podbean.com & on Facebook https://www.facebook.com/groups/1481026762176392/ You can buy the Gallifrey Stands lipbalm @ https://www.etsy.com/uk/listing/209093664/gallifrey-stands-geek-stix-inspired-by?ref=shop_home_active_12 Please support our Pod-Pals too: DisAfterDark http://disafterdark.blogspot.co.uk/ Just give me a few minutes http://justgivemeafewminutes.podomatic.com/ AMAudioMedia http://amaudiomedia.com/ TangentBoundNetwork http://TangentBoundNetwork.com/ Drinking in the Park http://Neilandjohnny.com EMC Network http://www.electronicmediacollective.com/ WhoNews http://www.who-news.com/
This month, lots of great news regarding #GoRogue with a breakdown of the hilarious Chapter 1 fanfilm. A great interview with Nick Joseph, The Medal Bearer from SW: A New Hope, as well as DeAgostini Helmet reviews. You'll have to catch my review of the Magazines on www.dasweallycool.blogspot.co.uk. Remember, sharing's caring, email me at dasweallycool@gmail.com, Follow and Retweet my discounts, sightings and more on @ScavengerUK
Nick and Jake joined Donna Frost and me for an afternoon of their excellent music. Based in New York City and now nationally touring musicians, they rocked the house. Little did we know at the time that this would be our landmark 200th episode.