On this episode of The Kara Goldin Show, we're joined by Jasmina Aganovic, Founder and CEO of Future Society—a revolutionary fragrance brand using biotechnology to bring extinct flower scents back to life. With a background in chemical and biological engineering from MIT, Jasmina is merging science and storytelling to create a completely new genre in fragrance.

During our conversation, Jasmina shares the inspiration behind Future Society, how she and her team are sequencing DNA to resurrect lost aromas, and why she believes biology will shape the future of beauty. We dive into the creative process behind crafting emotional, science-backed scents, the sustainability angle, and the powerful intersection of memory, technology, and design.

Whether you're into beauty, biotech, or bold new business models, this episode is packed with insights you won't want to miss. Now on The Kara Goldin Show.

Are you interested in sponsoring and advertising on The Kara Goldin Show, which is now in the Top 1% of Entrepreneur podcasts in the world? Let me know by contacting me at karagoldin@gmail.com. You can also find me @KaraGoldin on all networks.

To learn more about Jasmina Aganovic and Future Society:
https://www.instagram.com/wearefuturesociety
https://www.youtube.com/watch?v=-bBn9VfEULs
https://hypebae.com/2024/10/future-society-jasmina-aganovic-interview-extinct-flower-perfume-where-to-buy
https://wearefuturesociety.com

Sponsored By:
Square - Get up to $200 off Square hardware when you sign up at square.com/go/karagoldin
Range Rover Sport - The Range Rover Sport is your perfect ride. Visit LandRoverUSA.com and check it out.

Check out our website to view this episode's show notes: https://karagoldin.com/podcast/680
This podcast kicks off the next series about the Endgame of Digital Transformation. The next four podcasts will postulate what the future may look like when technology is "done" transforming four areas of our world. This podcast kicks off the predictions with the impacts of technology on society in general. While no one, not even Scott, can predict the future with 100% accuracy, even being able to imagine the future with some degree of clarity is helpful for knowledge and decisions today. Listen in and see if you agree with the picture Scott paints about what society might look like when our grandkids are old!
Welcome to Stanford Engineering's The Future of Everything, the podcast that delves into groundbreaking research and innovations that are shaping the world and inventing the future. The University has a long history of doing work to positively impact the world and it's a joy to share about the people who are doing this work, what motivates them, and how their work is creating a better future for everybody. Join us every Friday for new episodes featuring insightful conversations with Stanford faculty and to discover how Stanford's research is transforming tomorrow's world.

Connect With Us:
Episode Transcripts >>> The Future of Everything Website
Connect with Russ >>> Threads / Bluesky / Mastodon
Connect with School of Engineering >>> Twitter/X / Instagram / LinkedIn / Facebook
Beginning in the first Trump presidency and expanded under Joe Biden, the US has taken a strategy of technologically containing China through restricting its access to cutting-edge semiconductors. As Chinese Whispers has looked at before, these chips form the backbone of rapid advances in AI, telecoms, smartphones, weaponry and more. Washington's aim was clear: to widen the technological gap between the two powers. But has this strategy worked? Lately this has become a hot topic of debate as Chinese tech companies such as Huawei and DeepSeek have nevertheless made technical strides. Some even argue that the export controls have spurred on Chinese innovation and self-reliance. In this episode of Chinese Whispers, two very informed and smart guests debate this issue. Ryan Fedasiuk is U.S. Director of The Future Society, an independent nonprofit organization focused on AI governance, and former Advisor for U.S.-China Bilateral Affairs at the US State Department. Steve Hsu is Professor of Theoretical Physics at Michigan State University and a start-up founder. He also hosts the podcast Manifold. Produced by Cindy Yu and Joe Bedell-Brill.
A draft declaration of the Paris AI Action Summit has raised concerns for The Future Society, a nonprofit tasked by organisers to provide recommendations for protecting civil society from the risks of artificial intelligence.
Summary: It's been a while since I last put serious thought into where to donate. Well, I'm putting thought into it this year and I'm changing my mind on some things. I now put more priority on existential risk (especially AI risk), and less on animal welfare and global priorities research. I believe I previously gave too little consideration to x-risk for emotional reasons, and I've managed to reason myself out of those emotions. Within x-risk: AI is the most important source of risk. There is a disturbingly high probability that alignment research won't solve alignment by the time superintelligent AI arrives. Policy work seems more promising. Specifically, I am most optimistic about policy advocacy for government regulation to pause/slow down AI development. In the rest of this post, I will explain: Why I prioritize x-risk over animal-focused [...]

Outline:
(00:04) Summary
(01:30) I don't like donating to x-risk
(03:56) Cause prioritization
(04:00) S-risk research and animal-focused longtermism
(05:52) X-risk vs. global priorities research
(07:01) Prioritization within x-risk
(08:08) AI safety technical research vs. policy
(11:36) Quantitative model on research vs. policy
(14:20) Man versus man conflicts within AI policy
(15:13) Parallel safety/capabilities vs. slowing AI
(22:56) Freedom vs. regulation
(24:24) Slow nuanced regulation vs. fast coarse regulation
(27:02) Working with vs. against AI companies
(32:49) Political diplomacy vs. advocacy
(33:38) Conflicts that aren't man vs. man but nonetheless require an answer
(33:55) Pause vs. Responsible Scaling Policy (RSP)
(35:28) Policy research vs. policy advocacy
(36:42) Advocacy directed at policy-makers vs. the general public
(37:32) Organizations
(39:36) Important disclaimers
(40:56) AI Policy Institute
(42:03) AI Safety and Governance Fund
(43:29) AI Standards Lab
(43:59) Campaign for AI Safety
(44:30) Centre for Enabling EA Learning and Research (CEEALAR)
(45:13) Center for AI Policy
(47:27) Center for AI Safety
(49:06) Center for Human-Compatible AI
(49:32) Center for Long-Term Resilience
(55:52) Center for Security and Emerging Technology (CSET)
(57:33) Centre for Long-Term Policy
(58:12) Centre for the Governance of AI
(59:07) CivAI
(01:00:05) Control AI
(01:02:08) Existential Risk Observatory
(01:03:33) Future of Life Institute (FLI)
(01:03:50) Future Society
(01:06:27) Horizon Institute for Public Service
(01:09:36) Institute for AI Policy and Strategy
(01:11:00) Lightcone Infrastructure
(01:12:30) Machine Intelligence Research Institute (MIRI)
(01:15:22) Manifund
(01:16:28) Model Evaluation and Threat Research (METR)
(01:17:45) Palisade Research
(01:19:10) PauseAI Global
(01:21:59) PauseAI US
(01:23:09) Sentinel rapid emergency response team
(01:24:52) Simon Institute for Longterm Governance
(01:25:44) Stop AI
(01:27:42) Where I'm donating
(01:28:57) Prioritization within my top five
(01:32:17) Where I'm donating (this is the section in which I actually say where I'm donating)

The original text contained 58 footnotes which were omitted from this narration. The original text contained 1 image which was described by AI.

First published: November 19th, 2024
Source: https://forum.effectivealtruism.org/posts/jAfhxWSzsw4pLypRt/where-i-am-donating-in-2024
Narrated by TYPE III AUDIO.

Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Host April Franzino is joined by Jasmina Aganovic, Founder of Arcaea, who shares her unique journey from studying engineering at MIT to becoming a beauty brand founder. She discusses her passion for innovation in the beauty industry, the creation of her company Arcaea, and the launch of her fragrance brand Future Society, which focuses on reviving scents from extinct flowers. Jasmina emphasizes the importance of biology in beauty, the challenges of entrepreneurship, and the significance of having a lab for product development. She also shares insights on retail strategy, lessons learned from customer feedback, and future plans for her brand.

Instagram: https://www.instagram.com/beautybizshow/
See omnystudio.com/listener for privacy information.
A leading expert in AI is urging lawmakers to rapidly pass new legislation to regulate Agentic AI, a groundbreaking new form of AI capable of completing complex goal-oriented tasks without the need for a human prompt. President of the European Responsible AI Office (EURAIO) Nell Watson made the call today during her keynote speech at Learnovation 2024, the annual summit on the future of work and learning, which took place at the Aviva Stadium. Alongside her role as head of EURAIO, Nell is an executive consultant for Apple and a senior scientific advisor to The Future Society, an independent non-profit focused on developing and implementing governance mechanisms for AI. She has also been a senior fellow at The Atlantic Council, a US think tank. The theme of Learnovation 2024 is 'Navigating the Future of Learning', with speakers and workshops focused on preparing learners for the challenges of the 21st century and the future of work. Learnovation is the annual summit organised by The Learnovate Centre, a leading global future of work and learning research hub funded by Enterprise Ireland and IDA Ireland and based at Trinity College Dublin. Nell told Learnovation 2024 this morning that autonomous Agentic AI represents a significant upgrade on traditional AI and generative AI models, which require human prompts. In response, industry must prepare workers for the introduction of the technology with new workplace training and upskilling programmes that teach innovative and independent thinking. She also told the conference that, while applications of Agentic AI will have positive effects for many, especially those with disabilities, lawmakers and officials must still move quickly to pass new laws and regulations to defend workers' rights from bad corporate actors. They must also, she says, legislate to ensure developers impose limits on the AI itself so that it does not breach the law while working to achieve its goals.
President of the European Responsible AI Office Nell Watson says: "Agentic AI promises to dramatically change the world of work and learning. It's vital that we start preparing people for that change through education, training and upskilling, and new laws and regulations to ensure that the rights of people are not sacrificed in the pursuit of corporate profit. "This technology has vast potential. Applications in virtual reality will allow people to learn new skills in low-stakes virtual environments. It will make learning more inclusive with applications for people with reduced hearing or sight loss, or those with speech issues. Applying Agentic AI to learning will have life-changing effects for many people. However, it remains hugely important that officials take action now to regulate this technology and protect workers' rights before Agentic AI becomes ubiquitous, rather than spend valuable time playing catch-up later." Director of The Learnovate Centre, Nessa McEniff, said: "Learnovation 2024 is looking at some extremely interesting and topical issues including how to recognise and remove barriers for innovation in learning and how to embed AI in education in ways that are both effective and responsible. We will also explore AI's role in learning technology and focus on corporate learning in the 21st century and how to prepare for the challenges of the future workplace." The event's host is Dr. Mary Kelly, Academic Dean at Hibernia College, who also gave the opening address. Other speakers at Learnovation 2024 include: Richard Culatta, CEO at the International Society for Technology in Education (ISTE) and the Association for Supervision and Curriculum Development (ASCD), non-profit global education organisations based in Washington that are focused on accelerating innovation in education and elevating learning to meet the needs of all students. Dr Nigel Paine, Global Thought Leader and ex-CLO of the BBC.
Nigel has more than 25 years of experience in corporate learning and is a regular speaker, writer and b...
If Jurassic Park recreated dinosaurs, could we do the same for flowers? Scientist and founder Jasmina Aganovic thinks so, framing her work as "science possibility, not science fiction." Her brand, Future Society, is bringing extinct flowers back to life—as fragrances. They've partnered with some of the world's best perfumers, who used DNA sequencing data to let their imaginations run wild. Just like that, their Scent Surrection Collection was born—six fragrances, each inspired by flowers that vanished from around the world, including fan-favorites Solar Canopy and Floating Forest.

In this episode, Jasmina sits down with Scentbird's Marianne Mychaskiw. She takes us inside her creative process, opens up about the challenges of merging science and art, and reminds us that, just like these flowers, every human is capable of reinvention.

Highlights:
• Intro to Jasmina
• Merging biology and beauty
• The inspiration for Future Society
• Invisible Woods
• Solar Canopy
• "Our fragrances are polarizing"
• Did the flowers actually smell like this!?
• The science behind the process
• Grassland Opera
• Reclaimed Flame
• Floating Forest
• Science vs. nature
• "All things are interconnected"
• Learning scent development
• Scent Connection, Lost Beauties Edition
• Rapid-fire round: scents inspired by extinct animals
• "We can invent our tomorrow"

Featured Fragrances:
Solar Canopy by Future Society
Floating Forest by Future Society
Haunted Rose by Future Society
Discovery Set by Future Society

Soak in all of our audio and video content at https://podcast.scentbird.com.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A new interview series with people doing impactful work, published by Probably Good on August 21, 2024 on The Effective Altruism Forum. Probably Good recently launched a new article series, Career Journeys: Interviews with people working to make a difference. In this series, we interview people from a range of fields and career paths to bring a more personal perspective to career planning. Each conversation explores how people got to where they are today, what their day-to-day work entails, and what advice they'd give to others pursuing a similar path. As several "write about your job" posts have noted, jobs are still mysterious both in terms of how people get them and what they actually look like to do. These interviews provide some insight into how varied - and often surprising - lived career experiences can be. We hope they can be a catalyst for our readers to consider new paths and apply any advice they find relevant for their path. Check out our first few interviews, now live on the site: Astronaut ambitions, leaving clinical medicine, and eliminating lead exposure: After a varied career of exploration and changing course, Bal Dhital currently works as a program manager for the Lead Exposure Elimination Project, a charity that aims to end the use of lead-based paint and products around the world. Navigating academia & researching morality: Matti Wilks is currently a lecturer (assistant professor) in Psychology at the University of Edinburgh. Matti's research uses social and developmental psychological approaches to study our moral motivations and actions.
From construction engineering to non-profit operations: After several years in construction engineering and a five-year bicycle journey in sub-Saharan Africa, Bell Arden currently runs the Operations team at The Future Society - a non-profit organization focused on improving AI governance. Do you have someone in mind (including yourself) that might make for a good interview? Or have a suggestion for a specific career path you'd like to learn more about? Feel free to contact us or email us at annabeth@probablygood.org with your suggestions. We can't promise we'll interview every recommended interviewee, but we'd love to hear about anyone who has a particularly interesting career journey that might be helpful for others to read about. Thanks! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
This is a special re-airing of an episode Samo Burja and Erik Torenberg taped for Venture Stories (Village Global's podcast) in 2019. Their conversation covers the following:
- The distinction between invention vs. adoption of technology.
- What a truly egalitarian society might look like, and whether such a society can exist.
- How individuals' personal theories of history have influenced outcomes in the past, from the USSR's espionage of the Manhattan Project to Christianity & the Roman elites.
- The U.S. government as an institution under Great Founder Theory, and its foreign policy in the 20th and 21st centuries.
- Lee Kuan Yew and Singapore, and Paul Kagame and Rwanda.
- Patronage as a social technology used by the Chinese Communist Party.
- To what extent China's social stability is dependent upon the promise of growth.
- Bureaucratic legibility as a key mechanism by which technology is a centralizing force.

This show is produced by Turpentine: a network of podcasts, newsletters, and more, covering technology, business, and culture — all from the perspective of industry insiders and experts. We're launching new shows every week, and we're looking for industry-leading sponsors — if you think that might be you and your company, email us at erik@turpentine.co.

REFERENCED ARTICLES:
"Functional Institutions are the Exception" https://samoburja.com/functional-institutions-are-the-exception/
"How Roman Emperors Handled the Succession Problem" https://samoburja.com/how-roman-emperors-handled-the-succession-problem/
"Intellectual Dark Matter" https://www.samoburja.com/intellectual-dark-matter/

SPONSOR: HARMONIC
From robotcrimeblog.com
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hiring a CEO & EU Tech Policy Lead to launch an AI policy career org in Europe, published by Cillian on December 6, 2023 on The Effective Altruism Forum.

Summary: We are hiring for an Executive Director and an EU Tech Policy Lead to launch Talos Institute[1], a new organisation focused on EU AI policy careers. Talos is spinning out of Training for Good and will launch in 2024 with the EU Tech Policy Fellowship as its flagship programme. We envision Talos expanding its activities and quickly growing into a key organisation in the AI governance landscape. Apply here by December 27th.

Key Details:
Closing: 27 December, 11:59PM GMT
Start date: We would ideally like a candidate to begin as soon as possible after receiving an offer, but we are willing to wait if the best candidate can only start later. Ability to attend our upcoming Brussels Summit (February 26th - March 1st) would also be beneficial, though not required.
Hours: 40/week (flexible)
Location: Brussels (preferred) / Remote
Compensation: Executive Director: 70,000 - 90,000; EU Tech Policy Lead: 55,000 - 75,000. We are committed to attracting top talent and are willing to offer a higher salary for the right candidate.
How to apply: Please fill in this short application form
Contact: cillian@trainingforgood.com

About Talos Institute: The EU Tech Policy Fellowship is Talos Institute's flagship programme. It is a 7-month programme enabling ambitious graduates to launch European policy careers reducing risks from artificial intelligence. From 2024, it will run twice per year.
It includes:
- 8-week training that explores the intricacies of AI governance in Europe
- A week-long policymaking summit in Brussels to connect with others working in the space
- 6-month placement at a prominent think tank working on AI policy (e.g. The Centre for European Policy Studies, The Future Society)

Success to date: The EU Tech Policy Fellowship appears to have had a significant impact to date. Since 2021, we've supported ~30 EU Tech Policy Fellows and successfully transitioned a significant number to work on AI governance in Europe. For example:
- Several work at key think tanks (e.g. The Future Society, the International Center for Future Generations, and the Centre for European Policy Studies)
- One has co-founded an AI think tank working directly with the UN and co-authored a piece for The Economist with Gary Marcus
- Others are advising MEPs and key institutions on the EU AI Act and related legislation

We're conducting an external evaluation and expect to publish the results in early 2024. Initial indicators suggest that the programme has been highly effective to date. As a result, we have decided to double the programme's size by running two cohorts per year. We now expect to support 30+ fellows in 2024 alone.

Future directions: We can imagine Talos Institute growing in a number of ways. Future activities could include things like:
- Creating career advice resources tailored to careers in European policy (especially for those interested in AI & biosecurity careers), similar to what Horizon has done in the US.
- Community-building activities for those working in AI governance in Europe (e.g. retreats to facilitate connections, help create shared priorities, identify needs in the space, and incubate new projects).
- Hosting events in Brussels educating established policymakers on risks from advanced AI.
- Activities that help grow the number of people interested in considering policy careers focused on risks from advanced AI, e.g. workshops like this
- Expanding beyond AI governance to run similar placement programmes for other problems in Europe (e.g. biosecurity).
- Establishing the organisation as a credible think tank in Eu...
We cover it all in this Black Friday episode (that doesn't really include anything for BF). We talk Dusita, Trudon, and whether or not Future Society is the Theranos of fragrance. We also deal with a crying baby, annoying smoke detectors, and share a couple of delicious dirty martinis. Finally, we try to answer a great listener question but mostly end up talking about buyer's remorse from a recent purchase. Listen in for all the above and an "oud-y fruity" version of The Game!

Special thanks to Tiago @scentiment and Lisa. Brand spotlight of the episode - La Curie at https://www.la-curie.com

(01:39) - Thoughts on Dusita
(10:28) - Thoughts on Trudon
(15:36) - Listener Question / Buyer's Remorse
(21:00) - The Online Fragrance Community
(27:40) - The PR Push of Future Society Fragrances
(31:06) - Scents We've Been Wearing (aka Scents of the Week)
(50:25) - The Game

Fragrances mentioned in this episode: Rosarine, Melodie de L'Amour, Moonlight in Chiangmai, Oudh Infini, Le Sillage Blanc, Issara, La Rhapsodie Noire by Dusita / Baccarat 540 by Maison Francis Kurkdjian / Nudiflorum by Nasomatto / Muscs Koublai Khan by Serge Lutens / Aphélie, Elae, Mortel, and Brume by Trudon / 20 MARS 2022 by Rundholz / L'Eau d'Hiver and Portrait of a Lady by Frederic Malle / Tempo by Diptyque / Muguet Fleuri by Oriza L. Legrand / Oud Palao by Diptyque / Épices and Pistil by Miskeo Parfums / Hexensalbe by Stora Skuggan / Flaming Creature by Marissa Zappas / Rimbaud, Black Tie, Reptile, Eau de Californie, Cologne Francaise, and Parade by Celine / Aventus by Creed / Ani by Nishane / Gold Leaves by Regime des Fleurs / 1992 Purple Night and 1885 Bains Sulfureux by Les Bains Guerbois / Geist, Incendo, and Cyllene by La Curie / Incense Kyoto by Comme des Garçons / Oud for Greatness by Initio / Lune Feline by Atelier des Ors

The Game: Oud Laque by Les Bains Guerbois / Faunus by La Curie / Into the Oud by Astrophil & Stella / Rouge Sarây by Atelier des Ors / Oud Vendôme by Ex Nihilo / 1979 New Wave by Les Bains Guerbois

Please feel free to email us at hello@fragraphilia.com - Send us questions, comments, or recommendations. We can be found on TikTok and Instagram @fragraphilia
You know naturals; you know synthetics; but do you know *bio-engineered materials?* Through her biotech fragrance brand, Future Society, Jasmina Aganovic (Founder & CEO) is resurrecting extinct smells through DNA sequencing, and introducing this novel 3rd category of fragrance materials into the conversation. FRAGS MENTIONED: MAISON d'ETTO: Rotano, Noisette, Canaan; Robert Piguet Fracas, Giorgio Beverly Hills, Mugler Alien, MAISON d'ETTO Macanudo, Future Society: Solar Canopy, Floating Forest, Haunted Rose, Grassland Opera; Flamingo Estate Roma Heirloom Tomato FOLLOW: @futuresociety GET SMELL CLUB TIX: https://shorturl.at/bioTZ (*Future Society fragrances were gifted in PR.)
Boomer Living Tv - Podcast For Baby Boomers, Their Families & Professionals In Senior Living
Join Hanh Brown, host of Boomer Living Broadcast, as she dives deep into the world of web safety for older adults, AI reliability, and digital advancements with esteemed guest, Dr. Srijan Kumar. This episode will take you through the maze of misinformation, online manipulation, and the challenges and promises that AI holds. From understanding the nuances of network modeling to exploring the latest in multimodal learning, Mrs. Brown and Dr. Kumar shed light on how the digital realm is evolving and what it means for our future. They also discuss the implications of these advancements on mental health, the role of peer correction, and the importance of trustworthiness in AI systems. Whether you're a tech enthusiast or just curious about the digital world, this episode is packed with insights that you won't want to miss.

Delve deep into the dynamic world of web safety and AI robustness in our latest coverage:
- From spotting misinformation to understanding its impact on mental health.
- Unraveling techniques to detect online manipulation, hate speech, and fake reviews.
- Ensuring the trustworthiness and reliability of AI systems, while safeguarding them from adversaries.
- Explore cutting-edge advancements in network/graph modeling and the intricacies of large-scale network predictions.
- Discover the latest in multimodal learning, focusing on robust vision-and-language integrations.
- Lastly, glimpse the horizon with our insights into future research directions and the evolving landscape of web safety.

Join us as we navigate these crucial realms, paving the way for a safer, more reliable digital future.

You can find Srijan on LinkedIn: https://www.linkedin.com/in/srijankr/
In this enlightening episode, we delve into the transformative potential of AI, Data Science, Big Data, and NoSQL on senior care with our esteemed guest, Akmal Chaudhri, Esq., a titan in database technology. We explore the impact of these tools on health prediction, personalizing care, and enhancing the independence of older adults.

We journey through Akmal's expansive career, reflecting on the evolution of database technology. The discussion extends into how AI and Data Science anticipate and address the needs of our aging population, and the role of Big Data in improving senior living conditions.

This podcast illuminates the balance between embracing new technologies and ensuring the best care for our aging society. We conclude with an optimistic look at a future where technology empowers our seniors to lead dignified and fulfilling lives. This thought-provoking conversation promises to inspire all listeners with its unique blend of technology and compassion.

You can join Akmal Chaudhri, Esq. on Wednesday, 2 August 2023, at this exclusive event, "AI: Using Generative Pre-trained Transformers (GPT) Without Hallucinations": https://www.eventbrite.co.uk/e/ai-using-generative-pre-trained-transformers-gpt-without-hallucinations-registration-672983360347

You can find Akmal on LinkedIn: https://www.linkedin.com/in/akmalchaudhri/
The New World Order, Agenda 2030, Agenda 2050, The Great Reset and Rise of The 4IR
Show Notes: Top WEF-associated economist envisions CBDCs under the skin in order to operate in future society. A Great Reset of the financial infrastructure of the world has commenced. All donations and program research support to be sent to: $aigner2019 (Cash App) or https://www.paypal.me/Aigner2019 or Zelle (1-617-821-3168)
Don't miss our captivating event, "The Future is Now: AI, Automation, and Innovation in Healthcare and Business." Join Dr. Douglas Dew, CEO of Automated Clinical Guidelines, and Conor Grennan, Dean at NYU Stern School of Business, for an exploration of cutting-edge AI technologies.

Discover how AI is revolutionizing healthcare and business across various sectors. Learn about Dr. Dew's groundbreaking innovations and his unique system enabling machines to comprehend medicine through two-way conversations. Conor Grennan's initiatives at NYU Stern will be highlighted, fostering generative AI fluency.

Don't miss this thought-provoking event as industry pioneers share their experiences and inspiring stories of innovation and entrepreneurship. Explore the challenges and opportunities in AI regulations. Attendees will gain valuable insights into the transformative role of AI in healthcare and business.

Join us for an enriching event that expands your knowledge of AI, automation, and innovation in healthcare and business.

Find Dr. Douglas Dew on LinkedIn: https://www.linkedin.com/in/dkdew/
Find Conor Grennan on LinkedIn: https://www.linkedin.com/in/conorgrennan/
Join us for a captivating and thought-provoking debate, "AI: Is ChatGPT Overhyped and Overrated or Underhyped and Underestimated?", as we explore the captivating world of ChatGPT, OpenAI's groundbreaking generative AI platform. Since its introduction in late 2022, ChatGPT has sparked fervent discussions, captivating the attention of experts and enthusiasts alike. It has become the focal point for the exploration of the future of artificial intelligence and its profound impact on society.

In this engaging debate, we will dive deep into a pivotal question: Does ChatGPT truly suffer from an excess of hype and unwarranted acclaim, or is it being unjustly underestimated? Our esteemed panel of thought leaders will meticulously dissect the conflicting opinions and perspectives surrounding ChatGPT's capabilities and potential. Through their insightful analysis, we aim to provide a comprehensive exploration of this subject matter.

This debate promises to be an intellectual journey, shedding light on the various aspects that contribute to the ongoing discourse surrounding ChatGPT. From the skeptics who question its ability to replicate human intelligence to the visionaries who envision a future transformed by its possibilities, we will explore the entire spectrum of perspectives.
This week we welcome New View Oklahoma's Ashley Howard, VP of Communications, and Mark Ivy, Development Manager, to talk about their Blackout Banquet on June 24th. https://nvoklahoma.org/ Our second segment features Amber Hanaken, Director of Marketing and Communications at SoonerCon, and Matthew Cavanaugh, Treasurer of The Future Society of Oklahoma. https://soonercon.com/ https://www.fscok.org/ See omnystudio.com/listener for privacy information.
In this video, I review the science fiction novel "Project Hail Mary" by Andy Weir. As a fan of the genre, I was excited to dive into this action-packed and thought-provoking book that takes a critical look at some of the global issues we face today. I discuss the strengths and shortcomings of the book, as well as its potential pleasures and delights for science fiction enthusiasts. Join me as I explore the captivating story of Ryland Grace, a science teacher who wakes up alone in a spacecraft and must use his knowledge and skills to save the earth from an alien threat. Don't forget to leave a comment with your thoughts on the book and whether you would recommend it to others.
Marques is a cyborg anthropologist, working at the nexus of Ancestral Wisdom, Modern Technology and Future Society. Seeking to be globally impactful, he created the World Education Foundation in 2009, traveling to over 83 countries while earning his Master's through the Adult Learning and Global Change program at Linköping University in Sweden. In 2017, Marques was selected to participate in the Global Solutions Program hosted by Singularity University and NASA, exploring the utilization of satellite technology, unique datasets and machine learning to quantify the financial, social and climate impact of the built environment. Marques is an expert in the field of Cyborg Anthropology, where he has become a leading voice in synthesizing the intra-connections between humans, innovation and living systems. Currently, Marques is co-founder of Ism.Earth and a consultant for Ernst & Young's Luminary Network. He's considered a global thought leader, consultant and keynote speaker discussing topics spanning education, space, innovation, technology, climate change, blockchain, future of work, equity & racial justice, along with personal and organizational transformation. Most recently, Marques and his dev team won the Qualcomm Snapdragon track at the 2023 MIT Reality Hack for creating a human-centered platform that utilizes cutting-edge technologies such as AR, Mixed Reality, GIS data and Machine Learning to help the unhoused conceptualize, customize, and co-create their very own home. Website: MarquesAnderson.com Linkedin- https://www.linkedin.com/in/marquesanderson/ Twitter- @AncestralCyborg IG: @GlobalPoppaSmurf More information about The Bruin Promise: alumni.ucla.edu/bruin-promise/
Hub Culture presents: The Chronicle Discussions, Episode 84: CoCreating the Future Society - From Burning Man to Davos. Stan Stalnaker of Hub Culture and Jenn Sander, Global Innovation for Burning Man and Founder at Play Atélier, host live from the Hub Culture Pavilion in Davos, Switzerland, during World Economic Forum 2023. Produced by: New Angel Productions
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply now for the EU Tech Policy Fellowship 2023, published by Jan-Willem van Putten on November 11, 2022 on The Effective Altruism Forum. Announcing the EU Tech Policy Fellowship 2023, an 8-month programme to catapult ambitious graduates into high-impact career paths in EU policy, mainly working on the topic of AI Governance. Summary Training for Good is excited to announce the second edition of the EU Tech Policy Fellowship. This programme enables promising EU citizens to launch careers focused on regulating high-priority emerging technologies, mainly AI. Apply here by December 11th. This fellowship consists of three components: Remote study group (July - August, 4 hours a week): A 6-week part-time study group covering AI governance & technology policy fundamentals. 2 x Policy training in Brussels (June 26-30 and September 3-8, exact dates TBC): Two intensive week-long bootcamps in Brussels featuring workshops, guest lectures from relevant experts and networking events. Fellows will then participate in one of two tracks depending on their interests. Track 1 (September - February, full time): Fellows will be matched with a host organisation working on European tech regulation for a ~5 month placement between September 2023 and February 2024. Host organisations include The Future Society, Centre for European Policy Studies, and German Marshall Fund (among others). Track 2 (September): Fellows will receive job application support and guidance to pursue a career in the European Commission, party politics or related policy jobs in Europe. This may include career workshops, feedback on CVs, interview training and mentorship from experienced policy professionals.
Other important points: If you have any questions or would like to learn more about the programme and whether or not it's the right fit for you, Training for Good will be hosting an informal information session on Thursday November 24 (5.30pm CET); please subscribe here for that session. This fellowship is only open to EU citizens. Modest stipends are available to cover living and relocation costs. We expect most stipends to be between €1,750 and €2,250 per month. For track 1, stipends are available for up to 6 months while participating in placements. For track 2, stipends are available for up to 1 month while exploring and applying for policy roles. Apply here by December 11th. The Programme The programme spans a maximum of 8 months from June 2023 to February 2024, is fully cost-covered, and where needed, participants can avail of stipends to cover living costs. It consists of 4 major parts: Policy training in Brussels (June 26-30, dates TBC): An intensive week-long bootcamp in Brussels featuring workshops, guest lectures from relevant experts and networking events. Main focus: understanding the Brussels bubble (including networking) and creating your own goals for the Fellowship. All accommodation, food & travel costs will be fully covered by Training for Good. Remote study group (July - August): A 7-week study group covering AI governance & technology policy fundamentals. Every week consists of ~4 hours of readings, a 1-hour discussion and a guest lecture. Policy training in Brussels (September 3-8, dates TBC): An intensive week-long bootcamp in Brussels featuring workshops, guest lectures from relevant experts and networking events. The goal of this week is to come up with a policy proposal, inspired by the latest insights from governance research covered in the 7-week reading group. All accommodation, food & travel costs will be fully covered by Training for Good. Fellows will then participate in one of two tracks depending on their interests.
Track 1 (September - February): Fellows will be matched with a host organisation working on European tech regulation for a ~5 month placement between September 2023 and February 202...
In April 2021, the European Commission proposed a draft regulation for Artificial Intelligence (AI) systems used in the EU single market, dubbed the AI Act. This opened one of the most politicized regulatory debates since GDPR, and the AI Act promises to be just as wide-ranging. In this conversation, we take stock of the current thinking about governing AI in the EU, 1.5 years into the debate. Find out more about this event on our website: https://bit.ly/3UOUJLH Interested in watching our webinars live, or taking part in the production of our research? Join our community at: https://bit.ly/3sXPpb5 Nicolas Moës is an economist by training, focused on the impact of Artificial Intelligence on geopolitics, the economy and industry. He is the Director for European AI Governance at The Future Society, where he studies and monitors European developments in the legislative framework surrounding AI. His current focus is on the EU AI Act and the various governance mechanisms needed to enforce it cost-effectively. Nicolas is also involved in AI standardisation efforts, as a Belgian representative on the International Organization for Standardization's SC42 and CEN-CENELEC JTC 21 committees on Artificial Intelligence. Nicolas is also an expert at the OECD.AI Policy Observatory in the Working Group on Classification & Risk. Prior to The Future Society, he worked at the Brussels-based economic policy think-tank Bruegel on EU technology, AI and innovation policy. His publications have focused on the impact of AI & automation, though he has also carried out research on global trade & investments, EU-China relations and transatlantic partnerships. Nicolas completed his Master's degree (M.Phil.) in Economics at the University of Oxford with a thesis on institutional engineering for resolving the tragedy of the commons in global contexts. He is a native French speaker, fluent in English, and persists in learning Dutch and Mandarin.
What is the Cosmopolis 2045 project? And how did it start? How can we pay attention to the "invisible in-between" by re-evaluating whether or not our language is achieving our goals? What does it mean to engage in both personal and social evolution?...Today, Abbie discusses the importance of "cosmopolitan communication" and how we can be profoundly open to and respectful of others without giving up our own beliefs. Abbie explains how, before we can embody CMM, we first have to imagine what that looks like....Check out the Cosmopolis 2045 website here....Stories Lived. Stories Told. is created, produced & hosted by Abbie VanMeter. Stories Lived. Stories Told. is an initiative of the CMM Institute for Personal and Social Evolution. Music for Stories Lived. Stories Told. is created by Liv Hukkleberg....Explore all things Stories Lived. Stories Told. Email me! storieslived.storiestold@gmail.com Follow me on Instagram. Subscribe on YouTube. Check out my website. Learn more about the CMM Institute. Learn more about CMM. Learn more about Cosmopolis 2045. Learn more about CosmoKidz. Learn more about the CosmoTeenz Fellows' work on Instagram.
0:00:10 - Minecraft on Mac Talk 0:10:00 - 2022 Ford Maverick 0:44:00 - Future Society Predictions 0:58:00 - Stray 1:03:00 - Trophies 1:20:00 - Last of Us Part 1 Remake PS5
MONEY MAN OOOG AKA O-DAWG's OPEN MINDED PODCAST (FACTS VS FICTION VS IMAGINATION)
Lost Generation --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app --- Send in a voice message: https://anchor.fm/moneymanigohard/message
The internet was not intended for its current use. Web3 and the utilization of blockchain offer an opportunity to right those wrongs. Join me as I sit down with Justin of Meta Money to discuss the effect of the new internet on society, especially when it comes to children and the vulnerable.
We start off this week talking with representatives from SoonerCon, a collaborative, 100% volunteer-organized effort backed by the 501(c)(3) nonprofit Future Society of Central Oklahoma. SoonerCon is a pop culture convention that takes place in Norman, OK. Our second segment features the Oklahoma City YWCA, which helps survivors of domestic violence and sexual assault. https://www.ywcaokc.org/ See omnystudio.com/listener for privacy information.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the EU Tech Policy Fellowship, published by Jan-Willem van Putten on March 30, 2022 on The Effective Altruism Forum. Announcing the EU Tech Policy Fellowship, a cost-covered programme to catapult relevant graduates into high-impact career paths in European policy and Tech Policy roles. Summary Training for Good is excited to announce the EU Technology Policy Fellowship. This programme enables promising EU citizens to launch careers focused on regulating high-priority emerging technologies, especially AI and cybersecurity. This fellowship consists of three components: Remote study group (July - August, 4 hours a week): A 6-week part-time study group covering AI governance & technology policy fundamentals. Policy training in Brussels (late August / early September, exact date TBD): An intensive week-long bootcamp in Brussels featuring workshops, guest lectures from relevant experts and networking events. Tracks: Fellows will then participate in one of two tracks depending on their interests. Track 1 (September - December, full time): Fellows will be matched with a host organisation working on European tech regulation for a ~3 month placement between September and December 2022. Host organisations include The Future Society, Centre for European Policy Studies, and NL AI Coalition (among others). Track 2 (September): Fellows will receive job application support and guidance to pursue a career in the European Commission, party politics or related policy jobs in Europe. This will include career workshops, feedback on CVs, interview training and mentorship from experienced policy professionals.
Other important points: If you have any questions or would like to learn more about the programme and whether or not it's the right fit for you, Training for Good will be hosting an informal information session on Thursday April 7 (5.30pm CET); please subscribe here for that session. This fellowship is only open to EU citizens. Modest stipends are available to cover living and relocation costs. We expect most stipends to be between €1,250 and €1,750 per month (though we will take individual circumstances into consideration). For track 1, stipends are available for up to 4 months while participating in placements. For track 2, stipends are available for up to 1 month while exploring and applying for policy roles. Apply here by April 19th. The Programme The programme spans ~5 months from July to December, is fully cost-covered, and where needed, participants can avail of stipends to cover living costs. It consists of 3 major parts: Remote study group (July - August): A 6-week study group covering AI governance & technology policy fundamentals (draft curriculum). This study group will run from July 18 - August 28, including ~4 hours of readings and a 1-hour discussion each week. Policy training in Brussels (exact date TBD): An intensive week-long bootcamp in Brussels featuring workshops, guest lectures from relevant experts and networking events. This training will run either the week commencing August 29 or September 5 (exact date to be decided). All accommodation, food & travel costs will be fully covered by Training for Good. Tracks: Fellows will then participate in one of two tracks depending on their interests. Track 1 (September - December): Fellows will be matched with a host organisation working on European tech regulation for a ~3 month placement between September and December 2022. Host organisations include The Future Society, Centre for European Policy Studies, and NL AI Coalition (among others).
Modest stipends are available to fellows during these placements to cover living and relocation costs for up to 4 months. Track 2 (September): Fellows will receive job application support and guidance to pursue a career in the European Commission, party politics or related policy ...
Today I have the pleasure of speaking with Nell Watson, a tech ethicist, machine intelligence researcher and AI faculty member at Singularity University. A longtime friend of the podcast, Nell's interdisciplinary research into emerging technologies such as machine vision and AI ethics has attracted audiences from all over the world and inspired leaders to work towards a brighter future at venues such as The World Bank, The United Nations General Assembly, and The Royal Society. A Senior Advisor to The Future Society at Harvard, Nell also serves as an advisory technologist to several accelerators, venture capital funds and startups, including The Lifeboat Foundation, which aims to protect humanity from existential risks that could end civilisation as we know it, such as asteroid collisions or rogue artificial intelligence.
Hey guys! Welcome to GrowGetters – the future skills podcast. On our second-to-last episode for Season 4, this week we take you in a more techy-focused direction by hosting an unmissable masterclass with the excellent Nell Watson. Eleanor 'Nell' Watson is a Machine Intelligence researcher who helped to pioneer Deep Machine Vision through her company QuantaCorp, which enables fast and accurate body measurement from just two photos. In sharing her knowledge as an AI Faculty member at Singularity University and authoring Machine Intelligence courseware for O'Reilly Media, she realised the importance of protecting human rights and putting ethics, safety, and the values of the human spirit into A.I. She chairs EthicsNet.org, a community teaching pro-social behaviours to machines, plus many other organisations. Nell serves as a Senior Scientific Advisor to The Future Society at Harvard, and holds Fellowships from the British Computing Society and Royal Statistical Society, among others. Her public speaking has inspired audiences globally to work towards brighter futures at venues such as MIT, The World Bank, The United Nations General Assembly, and the list goes on. Her work is regularly covered by Wired, BBC, The Guardian, Forbes, Vice, and, and, and... She's our favourite female technologist and we love that she believes that technology can (and should) be leveraged to free us from the saddest aspects of the human condition. Such an important topic as our world becomes more virtual. In this episode you will learn: What is the difference between machine learning and AI. How AI and machine learning affect our lives today.
How algorithms shape opinions and influence free speech right now. Why it is important for companies and technologists to be aware of the ethical effects AI can have when developing their products. Tech trends we should follow or stay abreast of right now… And how you can start using AI or machine learning in your job or business today. Stay in touch with Nell: Twitter: https://twitter.com/nellwatson LinkedIn: https://www.linkedin.com/in/nellwatson/ Website: https://www.nellwatson.com/ PLUS We have been building something super exciting... and that is the brand new GrowGetters Club. Are you looking to create something with real impact that aligns your talent with your passion? Do you want to turn your hobby or side hustle into an innovative and thriving business? Are you a side hustler or solopreneur who needs the right skills and advice to take your business to the next level? Do you want to elevate your personal brand to thought leader and monetise your unique skillset? Or do you simply want to design a multi-faceted, multi-passionate, multi-income career that works for you? When you join GrowGetters Club, you'll apply the latest skills from the innovation space to grow your side hustle or business - and start monetising your skills and knowledge to create diverse income streams. Come and join an international circle of like-minded GrowGetters and be part of a movement of womxn ready to rise up, skill up, and be future-ready! You'll be supported by, learn from, and grow within a tight-knit group of 'business friends'. There are 20 spots available so get in now whilst you still can. If you enjoy listening to the pod there are a few ways you would absolutely make our day (and week, and year!!)
and help support us so we can continue to create kickass content just for you! The quickest way is to make sure you click that FOLLOW button on Spotify, and hit SUBSCRIBE on Apple Podcasts (or wherever you get your poddies) to make sure you never miss an episode! And if you are an Apple Podcasts user, we would be thrilled if you can take one minute to leave us a 5-star rating and a glowing review so even more of you fabulous GrowGetters can find us! If you're more of a social media maven, then you can follow us on LinkedIn and Instagram at @growgetters.io where we post a whole swag of tips, tools, advice, and hacks on future-proofing your career! And finally, don't forget to subscribe to our GrowGetters Growth Hacks newsletter on growgetters.io for a fortnightly fix of the very latest hacks, tools, models, trends, and recommended reads to help you stay in demand and in the know!
Lords and ladies this evening we transport ourselves into the future of society and find our power in the new psionics frontier! Discover secrets of the future! Tonight on Planet Vrilock! KEEP THE MAGICK HIGH! - Herr Doktor von Vrilock Get insiders only content to boost your mental powers and mind control abilities! Join the club! Club
Nell Watson: How To Teach AI Human Values [Audio] Nell Watson is an interdisciplinary researcher in emerging technologies such as machine vision and A.I. ethics. Her work primarily focuses on protecting human rights and putting ethics, safety, and the values of the human spirit into technologies such as Artificial Intelligence. Nell serves as Chair & Vice-Chair respectively of the IEEE's ECPAIS Transparency Experts Focus Group and P7001 Transparency of Autonomous Systems committee on A.I. Ethics & Safety, engineering credit-score-like mechanisms into A.I. to help safeguard algorithmic trust. She serves as an Executive Consultant on philosophical matters for Apple, as well as Senior Scientific Advisor to The Future Society and Senior Fellow to The Atlantic Council. She also holds Fellowships with the British Computing Society and Royal Statistical Society, among others. Her public speaking has inspired audiences to work towards a brighter future at venues such as The World Bank, The United Nations General Assembly, and The Royal Society. Episode Links: Nell Watson's LinkedIn: https://www.linkedin.com/in/nellwatson/ Nell Watson's Twitter: https://twitter.com/NellWatson Nell Watson's Website: https://www.nellwatson.com/ Podcast Details: Podcast website: https://www.humainpodcast.com Apple Podcasts: https://podcasts.apple.com/us/podcast/humain-podcast-artificial-intelligence-data-science/id1452117009 Spotify: https://open.spotify.com/show/6tXysq5TzHXvttWtJhmRpS RSS: https://feeds.redcircle.com/99113f24-2bd1-4332-8cd0-32e0556c8bc9 YouTube Full Episodes: https://www.youtube.com/channel/UCxvclFvpPvFM9_RxcNg1rag YouTube Clips: https://www.youtube.com/channel/UCxvclFvpPvFM9_RxcNg1rag/videos Support and Social Media: – Check out the sponsors above, it's the best way to support this podcast – Support on Patreon: https://www.patreon.com/humain/creators – Twitter:
https://twitter.com/dyakobovitch – Instagram: https://www.instagram.com/humainpodcast/ – LinkedIn: https://www.linkedin.com/in/davidyakobovitch/ – Facebook: https://www.facebook.com/HumainPodcast/ – HumAIn Website Articles: https://www.humainpodcast.com/blog/ Outline: Here are the timestamps for the episode: (2:57) - Even though the science of forensics and police work has changed so much in those last two centuries, principles are great, but it's very important that we create something actionable out of that: criteria with defined metrics, so that we can know whether we are achieving those principles and to what degree. (3:25) - With that in mind, I've been working with teams at the IEEE Standards Association to create standards for transparency, which are a little bit traditional (a big, deep document upfront), working on many different levels for many different use cases and different people, for example investigators or managers of organizations, etcetera. (9:04) - Transparency is really the foundation of all other aspects of AI and ethics. We need to understand how an incident occurred, or how a system performs a function, in order to analyze how it might be biased, where there might be some malfunction, what might occur in a certain situation or scenario, or indeed who might be responsible for something having gone wrong. It is really the most basic element of protecting ourselves, our privacy and our autonomy from these kinds of advanced algorithmic systems; there are many different elements that might influence these kinds of systems. (26:35) - We're really coming to a Sputnik moment in AI. We've gotten used to the idea of talking to our embodied smart speakers and asking them about sports results or what tomorrow's weather is going to be.
But they're not truly conversational. (32:43) - Fundamentally, technology in a humane society is about putting the human first: putting human needs first, adapting systems to serve those needs and truly better the human condition, not sacrificing everything for the sake of efficiency, leaving a bit of slack, and ensuring that the costs to society and to the environment of a new innovation are properly taken into account. Advertising Inquiries: https://redcircle.com/brands Privacy & Opt-Out: https://redcircle.com/privacy
Hey GrowGetters! We are beyond excited to let you know that Season 4 of the GrowGetters Podcast will be dropping on WEDNESDAY SEPTEMBER 15th! We want to thank you for waiting patiently whilst we've been cooking up some podcast goodness and doing a bunch of research in prep for this brand new season! We will be dropping an episode every week covering the very latest trending Future Skills topics in tech, business, and career growth! Plus we have got some KILLER MASTERCLASS episodes in store with some brilliant business babes and future-thinkers from across the globe. From Brooke Vulinovich (Instagram specialist, keynote speaker and creator of the global Social Club Membership) and Nell Watson (software engineer, Senior Scientific Advisor to The Future Society at Harvard and faculty member at Singularity University), to Steph Taylor (Digital Product Launch Strategist - and one of the best we know) - Season 4 is jam-packed full of GrowGetters goodness! So grab your ear buds and get ready to learn and be inspired!! We cannot wait to share all the insights with you!! See you next week! PLUS We have been building something super exciting... and that is the brand new GrowGetters Club. Are you looking to learn the most in-demand skills of the future? Do you want to join a circle of like-minded GrowGetters and be part of a movement of women ready to rise up, skill up, and be future-fit? Then if your answer is heck yes - join the waitlist today as the Club officially launches out of BETA in Q4 2021. If you enjoy listening to the pod there are a few ways you would absolutely make our day (and week, and year!!)
and help support us so we can continue to create kickass content just for you! The quickest way is to make sure you click that FOLLOW button on Spotify, and hit SUBSCRIBE on Apple Podcasts (or wherever you get your poddies) to make sure you never miss an episode! And if you are an Apple Podcasts user, we would be thrilled if you can take one minute to leave us a 5-star rating and a glowing review so even more of you fabulous GrowGetters can find us! If you're more of a social media maven, then you can follow us on LinkedIn and Instagram at @growgetters.io where we post a whole swag of tips, tools, advice, and hacks on future-proofing your career! And finally, don't forget to subscribe to our GrowGetters Growth Hacks newsletter on growgetters.io for a fortnightly fix of the very latest hacks, tools, models, trends, and recommended reads to help you stay in demand and in the know!
Welcome to this new episode of The Flares' podcast "la prospective"; this is Gaëtan. I'm joined by Nicolas Moës to talk about the governance of AI, or in English, AI Policy. Nicolas is an economist by training, focused on the impact of Artificial Intelligence on geopolitics, the economy and industry. He is the Brussels representative of The Future Society, whose goal is to advance the responsible adoption of artificial intelligence and other emerging technologies for the benefit of humanity. Nicolas follows European developments in the legislative framework surrounding AI. 0:00:00 Introduction 0:01:33 Introducing Nicolas 0:04:20 What is AI governance? 0:06:56 What current problems caused by AI affect society and geopolitics? 0:27:40 Are legislators aware of long-term considerations such as AGI and super AI? 0:40:10 AI governance at the UN? 0:51:00 If we do not go extinct, is the creation of general AI and superintelligence inevitable? 1:01:35 How do private companies collaborate on these legislative questions? 1:11:40 What approaches could give legislation international reach? 1:19:30 Isn't the establishment of standards a path worth pursuing? 1:25:36 Why does current AI policy matter for long-term considerations about artificial intelligence (AGI, super AI)? 1:28:54 Can the regulations and measures taken to limit the risks of nuclear, chemical and biological weapons serve in the field of AI? 1:36:30 Shouldn't we invest in cultivating longtermist thinking among politicians? 1:41:14 Can we draw on what did or did not work in international coordination during the covid-19 pandemic for solutions to AI governance?
1:46:53 What are the potential benefits of AI if we meet the challenges raised by governance? 1:50:23 What could be the advantages of delegating political decisions to machine intelligence (algocracy)? This is a long conversation, so you may want to listen at an increased playback speed. As a reminder, for your convenience you can listen to the podcast in the background on your computer or tablet, or download it directly to your phone so you can listen anywhere. Just search for "The Flares" on iTunes or Podcast Addict (or any other podcast app). Take the opportunity to subscribe so you don't miss upcoming podcasts. You will also find the other podcast series we launched in collaboration with the Technoprog association, titled "Humain, Demain". If you have any suggestions, remarks or questions about this episode, feel free to post them in the comments below. Enjoy the episode.
Welcome to the Policy People Podcast. In this conversation, I discuss AI policy in the post-pandemic paradigm with Sacha Alanoca. We discuss how the pandemic has accelerated AI adoption, the key players of the AI policy space, whole-of-society approaches to AI policy, the importance of human agency, emerging AI governance mechanisms and the Global Partnership on Artificial Intelligence, how national data laws impede AI solutions for global health, the social acceptability of anti-vaccine sentiment analysis tools, why AI is the perfect field for inter-disciplinary thinkers, learning to code as a policy researcher and many more topics. You can listen to the episode right away in the audio player embedded above, or right below it you can click “Listen in podcast app” — which will connect you to the show’s feed. Alternatively, you can click the icons below to listen to it on Apple Podcasts or Spotify. If you enjoy this conversation and would like to help the show, leaving us a 5-star rating and review on Apple Podcasts is the easiest way to do so. Thank you to Jaime Christiansen for leaving this review this week… To give us a review, just go to Policy People on Apple Podcasts and hit ‘Write a Review’. Sacha Alanoca is Senior AI Policy Researcher & Head of Community Development at The Future Society. Before joining The Future Society, Sacha worked across several think tanks in South America as well as at the OECD in Paris. Her work revolves around international initiatives for trustworthy AI adoption, AI ethical guidelines, the development of independent AI audits and civic empowerment platforms. Her most recent work, a report titled “Responsible AI in Pandemic Response”, is the most comprehensive study of COVID-era AI initiatives to date and was done in partnership with the Global Partnership on AI, the world’s leading multilateral forum on AI governance.
You can discover The Future Society and its work at thefuturesociety.org or follow the think tank on Twitter at the handle @thefuturesoc. You can also connect with Sacha on LinkedIn or follow her on Twitter at the handle @SachaAlanoca. Subscribe at policypeople.substack.com
It's episode 238 of the Okie Geek Podcast. We are talking this week with friends of the show, Matt Cavanaugh, Amber Hanneken and Aislinn Burrows, about fundraising for Oklahoma's longest-running pop culture convention, SoonerCon. Sadly, SoonerCon had to be delayed again for one more year, but it is coming back in 2022. To make that possible, organizers are holding a series of fundraisers to help keep the lights on and pay for much-needed costs in preparation for the big party next year. From June 21st through the 27th there will be an online silent auction to raise money for the Future Society of Central Oklahoma. You can find out more about SoonerCon on https://www.facebook.com/SoonerConSciFiExpo (Facebook), https://twitter.com/soonercon?lang=en (Twitter), https://www.instagram.com/soonercon/?hl=en (Instagram) and on its https://soonercon.com/ (website). Support this podcast
In December 1938, a frustrated nuclear physicist named Leo Szilard wrote a letter to the British Admiralty telling them that he had given up on his greatest invention — the nuclear chain reaction. "The idea of a nuclear chain reaction won’t work. There’s no need to keep this patent secret, and indeed there’s no need to keep this patent too. It won’t work." — Leo Szilard What Szilard didn’t know when he licked the envelope was that, on that very same day, a research team in Berlin had just split the uranium atom for the very first time. Within a year, the Manhattan Project would begin, and by 1945, the first atomic bomb was dropped on the Japanese city of Hiroshima. It was only four years later — barely a decade after Szilard had written off the idea as impossible — that Russia successfully tested its first atomic weapon, kicking off a global nuclear arms race that continues in various forms to this day. It’s a surprisingly short jump from cutting-edge technology to global-scale risk. But although the nuclear story is a high-profile example of this kind of leap, it’s far from the only one. Today, many see artificial intelligence as a class of technology whose development will lead to global risks — and as a result, as a technology that needs to be managed globally. In much the same way that international treaties have allowed us to reduce the risk of nuclear war, we may need global coordination around AI to mitigate its potential negative impacts. One of the world’s leading experts on AI’s global coordination problem is Nicolas Miailhe. Nicolas is the co-founder of The Future Society, a global nonprofit whose primary focus is encouraging responsible adoption of AI, and ensuring that countries around the world come to a common understanding of the risks associated with it.
Nicolas is a veteran of the prestigious Harvard Kennedy School of Government, an appointed expert to the Global Partnership on AI, and an adviser to cities, governments, and international organizations on AI policy.
A new episode of the series "The AI Deal of Trust" on the unique AI channel of trust, "Exponential Trust Times", by AI Exponential Thinker. Our guest is the futurist and XPRIZE judge Nell Watson. Nell Watson is the co-founder of QuantaCorp. She serves as a Senior Scientific Advisor to The Future Society at Harvard and as a futurist and professor at Singularity University. She is the Chair of EthicsNet and Vice-Chair of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Dr. Karoui is pleased to welcome Nell Watson in this new podcast episode. Dr. Lobna Karoui is an Executive AI Strategy Growth Advisor and Exponential Digital Transformer for Fortune 500 & CAC40 companies, with two decades of experience in building AI products and services for millions of users. She is the president of AI Exponential Thinker, which aims to inspire and empower one million young boys and girls by 2025 about trust technologies and AI opportunities. She is an international speaker and interviewer recognized as an AI expert by Forbes, Bloomberg, and MIT. Follow and subscribe to AI Exponential Thinker on LinkedIn, Facebook, and Instagram, or reach out via contact@aiexponentialthinker.com to interact with our guests and meet great speakers and mentors from organizations such as Amazon, WEF, Harvard, and more.
Nokia Bell Labs researcher Sean Kennedy helps de-mystify Artificial Intelligence and Machine Learning, utilizing lessons from behavioral psychologist Daniel Kahneman to frame a responsible approach to innovation.
Hub Culture presents: The Chronicle Discussions, Episode 25 - CoCreating the Future Society Now with Brittany Kaiser, Co-founder of Own Your Data; Jenn Sander, Global Innovation for Burning Man / Founder at Play Atélier; and Bill Tai, venture capitalist. Stan Stalnaker hosts virtually from Hub Culture Emerald City. July 17, 2020.
Governments play an essential role in regulating the development and use of artificial intelligence. Without the necessary AI policies, the technology might be misused and backfire: increasing inequality, enabling the rise of totalitarian powers, reinforcing racial biases, and more. But if our politicians have no clue how the technology works, how can sound policies be implemented? This is exactly what The Future Society is about. The “think-and-do-tank” has an extraordinary mission: advancing the responsible adoption of artificial intelligence and other emerging technologies for the benefit of humanity. How does the organization shape the global AI policy framework? Can governments find a balance between caution and foresight on one hand, and sufficiently rapid action on the other? How is AI being used to eradicate modern slavery? Dive with us into this thought-provoking conversation with Adriana Bora, AI Policy Researcher at The Future Society.
In this installment of the Future Grind podcast, host Ryan O'Shea speaks with Nell Watson, an entrepreneur and machine intelligence researcher whose work primarily focuses on protecting human rights and creating ethical AI. She currently serves as AI Faculty at Singularity University and works with the IEEE on AI initiatives. She also chairs EthicsNet.org, which crowdsources datasets to teach pro-social behaviors to machines, and CulturalPeace.org, which seeks to craft Geneva Conventions-style rules for cultural conflict. Nell serves as Senior Scientific Advisor to The Future Society at Harvard, and holds Fellowships from the British Computer Society and the Royal Statistical Society, among others. They discuss AI value alignment, the role for humans in AI, rules of engagement for the culture wars, the COVID-19 pandemic, and much more. Both Nell and Ryan are going to be speaking at the Humanity Plus Post-Pandemic Summit on July 7th and 8th, 2020. This free digital event will be themed around A Future Free of Disease and Destruction, and they'll be joined by some of the leading figures in futurism and transhumanism, including Dr. Ben Goertzel, Dr. Max More, Dr. Natasha Vita-More, Dr. Anders Sandberg, and more. You can register here. Show Notes: https://futuregrind.org Subscribe on iTunes: https://itunes.apple.com/us/podcast/future-grind-podcast-science-technology-business-politics/id1020231514 Support: https://futuregrind.org/support Follow along - Twitter - https://twitter.com/Ryan0Shea Instagram - https://www.instagram.com/ryan_0shea/ Facebook - https://www.facebook.com/RyanOSheaOfficial/
Welcome to the podcast, Nell Watson, Founder of QuantaCorp, a pioneer in machine vision. Graham and Nell explore the fascinating world of AI, philosophy, and ethics. Graham graduated in Artificial Intelligence back in 1995, but a lot has changed in the last 25 years! Nell explains how AI has evolved and how it will continue to evolve from here. Nell serves as Senior Scientific Advisor to The Future Society at Harvard, and holds Fellowships from the British Computer Society and the Royal Statistical Society. She also chairs EthicsNet.org, a community teaching pro-social behaviours to machines.
What does the future of humanity look like? --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/garrett0785/support
In this episode, Andrew Vaziri speaks with Nicolas Economou, CEO of the eDiscovery company H5 and co-founder and chair of the Science, Law and Society Initiative at The Future Society, a 501(c)(3) think tank incubated at the Harvard Kennedy School of Government. Economou discusses how AI is applied in the legal system, as well as some of the key points from the recent “Global Governance of AI Roundtable”. The roundtable, hosted by the government of the UAE, brought together a diverse group of leaders from tech companies, governments, and academia to discuss the societal implications of AI.
Cutting Through the Matrix with Alan Watt Podcast (.xml Format)
Fortress Britain, Armoured Vehicles, Body Cavity Searches - Acceptance of "Normal". Advanced Technology - RE-Search - UFO Movement, Rockefeller Foundation - DARPA, NSA - 3 Levels of Science. Star Trek series - Future Society done in Allegorical Form in Space - Starship Enterprise, Alien Races (Foreign Countries) - Dehumanization of Enemy. Computer "Programming", Common Culture. Maitreya, Krishnamurti - Mind Manipulation - "Benevolent Dictator" - MKULTRA, Psychic Driving. Young Men, Hormones, Acceptance by Peer Group, Gangs - Tribal Leaders - Uniforms, Emblems, Rewards - Soldier (Dies for Sun). ID Cards, Tracking by Cell-Phone Towers - Brain Chipping, Interface with Nervous System - NASA, Chipped Astronauts - Total Information Network. Flu Shots, Viruses "Evolving" - AIDS - Vaccines taken on Faith. Politicians, CEOs, Psychopathic Types - "Old Boys Network" - Pyramid Structure - Knighting. *Dialogue Copyrighted Alan Watt - Nov. 15, 2007 (Exempting Music, Literary Quotes, and Callers' Comments)