In this special episode of QueIssoAssim, Brunão and Baconzitos welcome Miotti for a very laid-back chat with a legend of Brazilian dubbing: Mário Jorge! Known for many landmark dubbing roles, including John Travolta, Steve Guttenberg, Mater from Cars, and Donkey from Shrek, among others, today the main focus is Eddie Murphy's return as Axel Foley in Um Tira da Pesada 4: Axel Foley (Beverly Hills Cop: Axel F), available on Netflix! In this episode, hear classic stories behind famous characters, get a closer look at the world of dubbing, and learn a lot from this thoroughly good-natured guy! Special thanks to the dear Mônica Rossi! Mentioned on the show: QueIssoAssim 26 - Loucademia de Polícia, QueIssoAssim 45 - Três Solteirões e Um Bebê, QueIssoAssim 135 - Um Tira da Pesada, QueIssoAssim 155 - Ghost do Outro Lado da Vida
What are the most effective sales techniques for selling products and services to PHYSICIANS? What does a physician weigh when deciding whether to buy a piece of equipment for their clinic? How do we run a needs assessment correctly? In this episode, we dive into the complexities of this unique sales approach, given the highly specialized environment in which physicians operate. Selling to healthcare professionals requires not only sharp negotiation skills but also a deep understanding of the dynamics and challenges physicians face in their daily practice. We cover specific strategies for building effective relationships with physicians, from the initial approach through closing the sale. Trust is fundamental, and understanding physicians' concerns and goals is the key to establishing solid connections. We also discuss the importance of tailoring your sales message to the unique needs of the medical sector, including clearly demonstrating how your products or services can improve operational efficiency, improve patient care, or meet specific regulatory requirements. With all of this in mind, in today's episode Leandro Munhoz (@le_munhoz) talks with Fabíola Miotti (LinkedIn) about how to sell high-value-added products to the medical industry.
This week, Erica sits down with Integrated Marketing Strategist Sarah Miotti (prev. Senior Brand Manager of Partnerships for Beachwaver Co. + Director of Marketing for Cloth & Flame). In this episode, Sarah lets us in on her multi-channel secrets to success in building community and increasing conversions, including how to actually drive sales through affiliate marketing programs, how collaborating with creators (and other brands!) on product collections can expose your brand to new, engaged audiences, how to best connect with your customers over text (SMS), and more!

Here's a peek at what we cover in this episode:

[00:05:12] - Sarah shares a look at her background in media and marketing, with both agency and in-house roles across a wide variety of verticals, including food and beverage and pharmaceuticals. She then walks us through her time at Beachwaver Co., working hand in hand with Founder and Celebrity Hairstylist Sarah Potempa to build the brand's DTC (direct-to-consumer) marketing initiatives, like their affiliate, influencer, text (SMS), email, partnerships, and events programs. She also offers her best advice for listeners who dream of holding a "Director of Marketing" title one day.

[00:12:30] - Sarah shares how she and her team at Beachwaver Co. created personalized, high-touch affiliate programs, born out of a desire to get to know the consumer directly. She uncovers all of the elements required for a successful affiliate program, including the technical setup, commission and bonus structures, recruiting process, monthly programming, and unique ways of activating the community. She also dives into how to actually drive sales from affiliates and the additional KPIs to consider in gauging the success of an affiliate program.

[00:23:59] - Sarah talks through Beachwaver Co.'s incredible creator and brand product collaborations and how they expose the brand to new, engaged audiences.

[00:28:30] - Sarah shares her best tips for an effective text (SMS) strategy and how affiliate and influencer programs can feed it. She also gets into her perspective on building a team and how being vulnerable and approachable is the key to effective leadership.

Grab a drink and listen in to this week's Marketing Happy Hour conversation!

____

Other episodes you'll enjoy if you enjoyed Sarah's episode:

Leaning in to Your Brand's Community | Kennedy Crichlow + Mary Ralph Lawson Bradley of Daily Drills

Experiential / Event Marketing 101 (+ a Conversation on Thoughtful Leadership) | Amy Gaston (prev. Magnolia)

Product Marketing 101: Your Go-To-Market Toolkit | Jaylen Adams of Rare Beauty

____

Say hi! DM us on Instagram and share your favorite moments from this episode - we can't wait to hear from you!

Join our MHH Insiders group to connect with Millennial and Gen Z marketing professionals around the world!

Get the latest from MHH, straight to your inbox: Join our email list!

Connect with Sarah: LinkedIn | Instagram

Follow MHH on Social: Instagram | LinkedIn | Twitter | TikTok

Subscribe to our LinkedIn newsletter, Marketing Happy Hour Weekly: https://www.linkedin.com/newsletters/marketing-happy-hour-weekly-6950530577867427840/

---

Support this podcast: https://podcasters.spotify.com/pod/show/marketinghappyhour/support
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Priorities for the UK Foundation Models Taskforce, published by Andrea Miotti on July 21, 2023 on The AI Alignment Forum.

The UK government recently established the Foundation Models Taskforce, focused on AI safety, modelled on the Vaccine Taskforce, and backed by £100M in funding. Founder, investor and AI expert Ian Hogarth leads the new organization. The establishment of the Taskforce shows the UK's intention to be a leading player in the greatest governance challenge of our times: keeping humanity in control of a future with increasingly powerful AIs. This is no small feat, and will require very ambitious policies that anticipate the rapid developments in the AI field, rather than just reacting to them. Here are some recommendations on what the Taskforce should do. The recommendations fall into three categories: Communication and Education about AI Risk, International Coordination, and Regulation and Monitoring.

Communication and Education about AI Risk

The Taskforce is uniquely positioned to educate and communicate about AI development and risks. Here is how it could do it:

Private education

The Taskforce should organize private education sessions for UK Members of Parliament, Lords, and high-ranking civil servants, in the form of presentations, workshops, and closed-door Q&As with Taskforce experts. These would help bridge the information gap between policymakers and the fast-moving AI field.

A new platform: ai.gov.uk

The Taskforce should take a proactive role in disseminating knowledge about AI progress, the state of the AI field, and the Taskforce's own actions:

- The Taskforce should publish bi-weekly or monthly bulletins and reports on the state of AI progress and AI risk on an official government website. It can start doing this right away on the UK government's research and statistics portal.
- The Taskforce should set up ai.gov.uk, an online platform modeled after the UK's COVID-19 dashboard. The platform's main page should be a regularly updated dashboard showing key information about AI progress and the Taskforce's progress in achieving its goals, with a progress bar trending towards 100% for each of the Taskforce's key objectives.
- ai.gov.uk should also include a "Safety Plans of AI Companies" monthly report, with key insights visualized on the dashboard. To compile this report, the Taskforce should send an official questionnaire to each frontier AI company, asking about the company's estimated risk of human extinction caused by the development of its AIs, its timelines until the existence of powerful and autonomous AI systems, and its safety plans for the development and deployment of frontier AI models. There is no need to make the questionnaire mandatory: for companies that don't respond or respond only to some questions, the relevant information on the dashboard should be left blank, or filled in with a "best guess" or "most relevant public information" curated by Taskforce experts.

Public-facing communications

Taskforce members should utilize press conferences, official posts on the Taskforce's website, and editorials, in addition to ai.gov.uk, to educate the public about AI development and risks. Key topics to cover in these public-facing communications include:

- Frontier AI development is focused on developing autonomous, superhuman, general agents, not just better chatbots or the automation of individual tasks. These are and will increasingly be AIs capable of making their own plans and taking action in the real world.
- No one fully understands how these systems function, their capabilities or limits, or how to control or restrict them. All of these remain unsolved technical challenges.
- Consensus on the societal-scale risk from AI is growing, and the gov...
In today's episode of QueIssoAssim, Brunão, Baconzitos, and Plínio Perrú bring Artur Boni in from a parallel dimension to talk about the new Flash movie. And since the podcast is about a DC film, we also called in our dear Miotti. Or rather, the Miotti of the Multiverse: Miotto. So, does the movie work? What are its biggest problems? Where does it get things right? We discuss it all here, in an episode packed with spoilers.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Conjecture: A standing offer for public debates on AI, published by Andrea Miotti on June 16, 2023 on LessWrong.

Tl;dr: If you want to publicly debate AI risk with us, send us an email at hello@conjecture.dev with information about you, suggested topics, and the suggested platform.

Public debates strengthen society and public discourse. They spread truth by testing ideas and filtering out weaker arguments. Moreover, debating ideas publicly forces people to be coherent over time, or to adjust their beliefs when faced with new evidence. This is why we need more public debates on AI development, as AI will fundamentally transform our world, for better or worse. Most of us at Conjecture expect advanced AI to be catastrophic by default, and that the only path to a good future goes through solving some very hard technical and social challenges. However, many others inside and outside of the AI field have very different expectations! Some think very powerful AI systems are coming soon, but that it will be easy to control them. Others think very powerful AI systems are very far away, and that there's no reason to worry yet. Open debate about AI should start now, to discuss these and many more issues.

As Conjecture, we have a standing offer to publicly debate AI risk and progress in good faith. If you want to publicly debate AI risk with us, send us an email at hello@conjecture.dev with information about you, suggested topics, and the suggested platform. By default, we prefer the debate to be a live discussion streamed on YouTube or Twitch. Given our limited time, we won't be able to accept all requests, but we'll explain our reasoning in cases where we decline. As a rule of thumb, we will give priority to people with more reach and/or prominence.

Some relevant topics include:

- What are reasons for and against expecting that the default outcome of developing powerful AI systems is human extinction?
- Is open source development of powerful AI systems a good or bad idea?
- How far are we from existentially dangerous AI systems?
- Should we stop development of more powerful AI, or continue development towards powerful general AI and superintelligence?
- Is a global moratorium on development of superintelligence feasible?
- How easy or hard is it going to be to control powerful AI systems?

Here's a recent debate between Connor Leahy (Conjecture CEO) and Joseph Jacks (open source software investor) on whether AGI is an existential risk, and a debate between Robin Hanson (Prof. of Economics at GMU) and Jaan Tallinn (Skype co-founder, AI investor) on whether we should pause AI research. To see some of our stances on these topics, you can find some recent public appearances from Connor (CEO) here and here. An overview of our main research agenda is available here and here.

We ran a debate initiative in the past, but it was focused on quite technical discussions with people already deep in the field of AI alignment. As AI risk gets into the mainstream, the conversation should become much broader. Two discussions that we published from that initiative:

- Christiano (ARC) and GA (Conjecture) Discuss Alignment Cruxes
- Shah (DeepMind) and Leahy (Conjecture) Discuss Alignment Cruxes

If the linked page doesn't load in your browser, try CMD + Shift + R on Mac or CTRL + F5 on Windows to hard reload the page. Thanks for listening.
To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Dr. Lisiane Miotti joins me for a discussion on the safety of FDA-approved hormone replacement therapy // Join us on our latest podcast episode as we delve into the topic of FDA-approved hormone replacement therapy (HRT) and its safety. Hormone replacement therapy has been widely used to address various health concerns, such as menopause symptoms, hormonal imbalances, and osteoporosis. In this episode, we'll explore the approval process that the U.S. Food and Drug Administration (FDA) uses to ensure the safety and efficacy of HRT. We'll also discuss the potential benefits and risks associated with FDA-approved HRT, with insights from medical experts and studies conducted in recent years. Whether you're considering HRT or simply interested in understanding its safety profile, this episode will give you valuable information to make informed decisions about your health. Tune in to learn more about the science, regulations, and ongoing research surrounding FDA-approved hormone replacement therapy.

Find Dr. Miotti here on IG: https://www.instagram.com/dra.lisianemiotti/

MORE ON DR. HIRSCH BELOW!
BOOK A VISIT: https://heatherhirschmd.com/bookings/
PRE-ORDER MY BOOK! https://www.amazon.com/Unlock-Your-Menopause-Type-Personalized/dp/1250850827
GET MY FREE MENOPAUSE HEALTH GUIDE: https://view.flodesk.com/pages/5f787bdd57796e835ea84e10
AMAZON PRODUCTS: https://www.amazon.com/shop/hormone.health.doc

---

Support this podcast: https://podcasters.spotify.com/pod/show/heather-hirsch/support
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Christiano (ARC) and GA (Conjecture) Discuss Alignment Cruxes - Transcript, published by Andrea Miotti on February 24, 2023 on LessWrong.

The following is the transcript of a discussion between Paul Christiano (ARC) and Gabriel Alfour, hereafter GA (Conjecture), which took place on December 11, 2022 on Slack. It was held as part of a series of discussions between Conjecture and people from other organizations in the AGI and alignment field. See our retrospective on the Discussions for more information about the project and the format. You can read a summary of the discussion here. Note that this transcript has been lightly edited for readability.

Introduction

[GA] let's start?

[Christiano] sounds good

[GA] Cool, just copy-pasting our two selections of topics [editor's note: from an email exchange before the discussion]:

"[Topics sent by Christiano] Probability of deceptive alignment and catastrophic reward hacking. How likely various concrete mitigations are to work (esp. interpretability, iterated amplification, adversarial training, theory work). How are labs likely to behave: how much will they invest in alignment, how much will they (or regulators) slow AI development. Feasibility of measuring and establishing consensus about risk. Takeoff speeds, and practicality of delegating alignment to AI systems. Other sources of risk beyond those in Christiano's normal model. Probably better for GA to offer some pointers here."

"[Topics sent by GA] How much will reinforcement learning with human feedback and other related approaches (e.g., debate) lead to progress on prosaic alignment? (similar to Christiano's point number 2 above) How much can we rely on unaligned AIs to bootstrap aligned ones? (in the general category of "use relatively unaligned AI to align AI", and matching Christiano's second part of point number 5 above) At the current pace of capabilities progress vis-a-vis prosaic alignment progress, will we be able to solve alignment in time? General discussions on the likelihood of a sharp left turn, what it will look like, and how to address it. (related to "takeoff speeds" in point number 5 above) AGI timelines / AGI doom probability"

[Christiano] I would guess that you know my view on these questions better than I know your view. I have a vague sense that you have a very pessimistic outlook, but don't really know anything about why you are pessimistic (other than guessing it is similar to the reasons that other people are pessimistic)

[GA] Then I guess I am more interested in "How likely various concrete mitigations are to work (esp. interpretability, iterated amplification, adversarial training, theory work)" and "How are labs likely to behave: how much will they invest in alignment, how much will they (or regulators) slow AI development", as these are where most of my pessimism is coming from

[GA] > [Christiano]: "(other than guessing it is similar to the reasons that other people are pessimistic)"
I guess I could start with this

[Christiano] it seems reasonable to either talk about particular mitigations and whether they are likely to work, or to try to talk about some underlying reason that nothing is likely to work

Alignment Difficulty

[GA] I think the mainline for my pessimism is: There is an AGI race to the bottom. Alignment is hard in specific ways that we are bad at dealing with (for instance: we are bad at predicting phase shifts). We don't have a lot of time to get better, given the pace of the race.

[Christiano] (though I'd also guess there is a lot of disagreement about what happens by default without anything that is explicitly labelled as an alignment solution)

[GA] > [Christiano] "(though I'd also guess there is a lot of disagreement about what happens by default without anything that is explicitly labelled as an alignment solution)"
We can also explore this...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Retrospective on the 2022 Conjecture AI Discussions, published by Andrea Miotti on February 24, 2023 on LessWrong.

At the end of 2022, following the success of the 2021 MIRI Conversations, Conjecture started a project to host discussions about AGI and alignment with key people in the field. The goal was simple: surface positions and disagreements, identify cruxes, and make these debates public whenever possible for collective benefit. Given that people and organizations will have to coordinate to best navigate AI's increasing effects, this is the first, minimum-viable coordination step to start from. Coordination is impossible without at least common knowledge of various relevant actors' positions and models.

People sharing their beliefs, discussing them, and making as much of that as possible public is strongly positive for a series of reasons. First, beliefs expressed in public discussions count as micro-commitments or micro-predictions, and help keep the field honest and truth-seeking. When things are only discussed privately, humans tend to weasel around and take inconsistent positions over time, be it intentionally or involuntarily. Second, commenters help debates progress faster by pointing out mistakes. Third, public debates compound: knowledge shared publicly leads to the next generation of arguments being more refined, and to progress in public discourse.

We circulated a document about the project to various groups in the field, and invited people from OpenAI, DeepMind, Anthropic, Open Philanthropy, FTX Future Fund, ARC, and MIRI, as well as some independent researchers, to participate in the discussions. We prioritized speaking to people at AGI labs, given that they are focused on building AGI capabilities.

The format of discussions was as follows:

- A brief initial exchange with the participants to decide on the topics of discussion. By default, the discussion topic was "How hard is Alignment?", since we've found we disagree with most people about this, and the reasons for it touch on many core cruxes about AI.
- We held the discussion synchronously for ~120 minutes, in writing, each on a dedicated, private Slack channel.
- We involved a moderator when possible. The moderator's role was to help participants identify and address their cruxes, move the conversation forward, and summarize points of contention.
- We planned to publish cleaned-up versions of the transcripts and summaries to Astral Codex Ten, LessWrong, and the EA Forum. Participants were given the opportunity to clarify positions and redact information they considered infohazards or PR risks, as well as to veto publishing altogether. We included this clause specifically to address the concerns expressed by people at AI labs, who expected heavy scrutiny by leadership and communications teams on what they can state publicly.

People from ARC, DeepMind, and OpenAI, as well as one independent researcher, agreed to participate. The two discussions with Paul Christiano and John Wentworth will be published shortly. One discussion with a person working at DeepMind is pending approval before publication. After a discussion with an OpenAI researcher took place, OpenAI strongly recommended against publishing, so we will not publish it. Most people we were in touch with were very interested in participating.
However, after checking with their own organizations, many returned saying their organizations would not approve them sharing their positions publicly. This was in spite of the extensive provisions we made to reduce downsides for them: making it possible to edit the transcript, veto publishing, strict comment moderation, and so on. We think organizations discouraging their employees from speaking openly about their views on AI risk is harmful, and we want to encourage more openness. We are pausing the project for...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Christiano (ARC) and GA (Conjecture) Discuss AI Alignment Cruxes - Summary, published by Andrea Miotti on February 24, 2023 on LessWrong.

The following is a summary of a discussion between Paul Christiano (ARC) and Gabriel Alfour, hereafter GA (Conjecture), which took place on December 11, 2022 on Slack. It was held as part of a series of discussions between Conjecture and people from other organizations in the AGI and alignment field. See our retrospective on the Discussions for more information about the project and the format. You can read the full transcript of this discussion here (note that it has been lightly edited for readability).

Introduction

GA is pessimistic about alignment being solved because he thinks that (1) there is an AGI race to the bottom, (2) alignment is hard in ways that we are bad at dealing with, and (3) we don't have a lot of time to get better, given the pace of the race.

Christiano clarifies: does GA expect a race to the bottom because investment in alignment will be low, because people won't be willing to slow development/deployment if needed, or something else? He predicts alignment investment will be 5-50% of total investment, depending on how severe the risk appears. If the risks look significant-but-kind-of-subtle, he expects getting 3-6 months of delay based on concern. In his median doomy case, he expects 1-2 years of delay. GA expects lower investment (1-5%). More crucially, though, GA expects it to be hard to turn funding and time into effective research, given alignment's difficulty.

Alignment Difficulty, Feedback Loops, & Phase Shifts

GA's main argument for alignment difficulty is that getting feedback on our research progress is difficult, because:

- Core concepts and desiderata in alignment are complex and abstract.
- We are bad at factoring complex, abstract concepts into smaller, more tractable systems without having a lot of quantitative feedback.
- We are bad at building feedback loops when working on abstract concepts.
- We are bad at coming to agreement on abstract concepts.

All this will make it difficult to predict when phase shifts – e.g., qualitative changes to how systems are representing information, which might break our interpretability methods – will occur. Such phase shifts seem likely to occur when we shift from in vitro to in vivo, which makes it particularly likely that the alignment techniques we build in vitro won't be robust to them. Despite theorists arguing that connecting AI systems to e.g. the internet is dangerous for this reason, labs will do it, because the path from current systems to future danger is complex and we may not see legibly catastrophic failures until it is too late. So even getting better at predicting may not help.

Christiano disagrees that building feedback loops is hard in alignment. We can almost certainly study reward hacking in vitro in advance, together with clear measurements of whether we are succeeding at mitigating the problem in a way that should be expected to generalize to an AI coup. Conditioned on deceptive alignment being a problem that emerges, there's a >50% chance that we can study it in the same sense. Furthermore, Christiano argues most plausible approaches to AI alignment have much richer feedback loops than the general version of either of these problems. For example, if you have an approach that requires building a kind of understanding of the internals of your model, then you can test whether you can build that kind of understanding in not-yet-catastrophic models. If you have an approach that requires your model being unable to distinguish adversarial examples from deployment cases, you can test whether your models can make that distinction. You can generally seek methods that don't have particular reasons to break at the same time that things become catastrophic. GA is ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI in sight: our look at the game board, published by Andrea Miotti on February 18, 2023 on The AI Alignment Forum.

From our point of view, we are now in the end-game for AGI, and we (humans) are losing. When we share this with other people, they reliably get surprised. That's why we believe it is worth writing down our beliefs on this.

1. AGI is happening soon. Significant probability of it happening in less than 5 years.

Five years ago, there were many obstacles on what we considered to be the path to AGI. But in the last few years, we've gotten:

- Powerful Agents (Agent57, GATO, Dreamer V3)
- Reliably good Multimodal Models (StableDiffusion, Whisper, Clip)
- Just about every language task (GPT3, ChatGPT, Bing Chat)
- Human and Social Manipulation
- Robots (Boston Dynamics, Day Dreamer, VideoDex, RT-1: Robotics Transformer)
- AIs that are superhuman at just about any task for which we can (or simply bother to) define a benchmark

We can't think of any remaining obstacle that we expect to take more than 6 months to overcome once efforts are invested in taking it down. Forget about what the social consensus is. If you have technical understanding of current AIs, do you truly believe there are any major obstacles left? The kind of problems that AGI companies could reliably not tear down with their resources? If you do, state so in the comments, but please do not state what those obstacles are.

2. We haven't solved AI Safety, and we don't have much time left.

We are very close to AGI. But how good are we at safety right now? Well:

- No one knows how to get LLMs to be truthful. LLMs make things up, constantly. It is really hard to get them not to do this, and we don't know how to do this at scale.
- Optimizers quite often break their setup in unexpected ways. There have been quite a few examples of this. But in brief, the lessons we have learned are: optimizers can yield unexpected results; those results can be very weird (like breaking the simulation environment); yet very few extrapolate from this and see these as worrying signs.
- No one understands how large models make their decisions. Interpretability is extremely nascent, and mostly empirical. In practice, we are still completely in the dark about nearly all decisions taken by large models.
- RLHF and Fine-Tuning have not worked well so far. Models are often unhelpful, untruthful, and inconsistent, in many ways that had been theorized in the past. We also witness goal misspecification, misalignment, etc. Worse than this, as models become more powerful, we expect more egregious instances of misalignment, as more optimization will push for more and more extreme edge cases and pseudo-adversarial examples.
- No one knows how to predict AI capabilities. No one predicted the many capabilities of GPT3. We only discovered them after the fact, while playing with the models. In some ways, we keep discovering capabilities now, thanks to better interfaces and more optimization pressure by users, more than two years in. We're seeing the same phenomenon happen with ChatGPT and the model behind Bing Chat. We are uncertain about the true extent of the capabilities of the models we're training, and we'll be even more clueless about upcoming larger, more complex, more opaque models coming out of training. This has been true for a couple of years by now.

3. Racing towards AGI: Worst game of chicken ever.

The Race for powerful AGIs has already started. There already are general AIs. They just are not powerful enough yet to count as True AGIs.

Actors

Regardless of why people are doing it, they are racing for AGI. Everyone has their theses, their own beliefs about AGIs and their motivations. For instance, consider: AdeptAI is working on giving AIs access to everything. In their introduction post, one can read "True general intelligence requires models that can not only read and write, but act in a way that is helpful to users. ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don't accelerate problems you're trying to solve, published by Andrea Miotti on February 15, 2023 on The AI Alignment Forum. If one believes that unaligned AGI is a significant problem (>10% chance of leading to catastrophe), speeding up public progress towards AGI is obviously bad. Though it is obviously bad, there may be circumstances which require it. However, accelerating AGI should require a much higher bar of evidence and much more extreme circumstances than is commonly assumed. There are a few categories of arguments that claim intentionally advancing AI capabilities can be helpful for alignment, which do not meet this bar. Two cases of this argument are as follows It doesn't matter much to do work that pushes capabilities if others are likely to do the same or similar work shortly after. We should avoid capability overhangs, so that people are not surprised. To do so, we should extract as many capabilities as possible from existing AI systems. We address these two arguments directly, arguing that the downsides are much higher than they may appear, and touch on why we believe that merely plausible arguments for advancing AI capabilities aren't enough. Dangerous argument 1: It doesn't matter much to do work that pushes capabilities if others are likely to do the same or similar work shortly after. For a specific instance of this, see Paul Christiano's “Thoughts on the impact of RLHF research”: RLHF is just not that important to the bottom line right now. Imitation learning works nearly as well, other hacky techniques can do quite a lot to fix obvious problems [.] RLHF is increasingly important as time goes on, but it also becomes increasingly overdetermined that people would have done it. In general I think your expectation should be that incidental capabilities progress from safety research is a small part of total progress [.] Markets aren't efficient, they only approach efficiency under heavy competition when people with relevant information put effort into making them efficient. This is true for machine learning, as there aren't that many machine learning researchers at the cutting edge, and before ChatGPT there wasn't a ton of market pressure on them. Perhaps something as low hanging as RLHF or something similar would have happened eventually, but this isn't generally true. Don't assume that something seemingly obvious to you is obvious to everyone. But even if something like RLHF or imitation learning would have happened eventually, getting small steps of progress slightly earlier can have large downstream effects. Progress often follows an s-curve, which appears exponential until the current research direction is exploited and tapers off. Moving an exponential up, even a little, early on can have large downstream consequences: The red line indicates when the first “lethal” AGI is deployed, and thus a hard deadline for us to solve alignment. A slight increase in progress now can lead to catastrophe significantly earlier! Pushing us up the early progress exponential has really bad downstream effects! And this is dangerous decision theory too: if every alignment researcher took a similar stance, their marginal accelerations would quickly add up. Dangerous Argument 2: We should avoid capability overhangs, so that people are not surprised. To do so, we should extract as many capabilities as possible from existing AI systems. 
Again, from Paul: "Avoiding RLHF at best introduces an important overhang: people will implicitly underestimate the capabilities of AI systems for longer, slowing progress now but leading to faster and more abrupt change later as people realize they've been wrong." But there is no clear distinction between eliminating capability overhangs and discovering new capabilities. Eliminating capability overhangs is discovering AI capabilities faste...
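The compounding point in the post can be made concrete with a toy model. What follows is a minimal sketch, not anything from the post itself: the growth rate, threshold, and size of the boost are all made-up illustrative values. It shows that under pure exponential growth, a one-off fractional boost to current progress pulls the threshold-crossing date earlier by ln(1 + boost) / rate years no matter when the boost happens, which is why many individually marginal accelerations add up.

```python
import math

# All numbers below are illustrative assumptions, not figures from the post.
GROWTH_RATE = 0.5    # assumed exponential rate: progress ~ P0 * exp(0.5 * t)
THRESHOLD = 1000.0   # assumed progress level at which the "red line" is hit
P0 = 1.0             # current progress level

def years_to_threshold(initial_progress: float) -> float:
    """Years until initial_progress * exp(GROWTH_RATE * t) reaches THRESHOLD."""
    return math.log(THRESHOLD / initial_progress) / GROWTH_RATE

baseline = years_to_threshold(P0)
boosted = years_to_threshold(P0 * 1.10)  # a one-off 10% boost to progress today

print(f"baseline deadline: {baseline:.2f} years")
print(f"after +10% boost:  {boosted:.2f} years")
print(f"deadline moved up: {baseline - boosted:.2f} years")
# ln(1.10) / 0.5 ≈ 0.19 years (~10 weeks) less time to solve alignment, and
# N researchers each adding a small boost shift the date by roughly N times that.
```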
Doors are for people who can't walk through walls! Brunão and Plínio welcome illustrious guests Alexandre Nerdmaster and Leonardo Miotti to discuss, dissect, and gush over DC's newest film: Adão Negro (Black Adam).
On today's podcast, a great friend returns for a special appearance. In this episode, Brunão and Baconzitos invite Nerdmaster to celebrate Inimigo Meu (Enemy Mine) by Wolfgang Petersen, who recently left this earthly plane. But for the team to be complete we needed the legend, the myth, the man who speaks the best English: Miotti! Learn with us how to make visual effects worse than an earlier film's, throw an entire movie away because it was garbage, relocate the production to your own backyard, get romantically involved with your greatest enemy, and have a child with him!
Being healthy is not just caring for your physical body; it is also caring for your mind and your thoughts, your emotions, and your spirit or soul, whatever you prefer to call it. So even if it sounds lofty, we share it in a simple way so you can discover one more option to consider along your healing path. We are joined by Ana Miotti, who holds a degree in Psychology from the Universidad Nacional de Córdoba, Argentina, and has lived in Mexico City since 2014. With more than 28 years in the profession, she is a transpersonal clinical psychologist, an international business consultant, a Reiki practitioner and teacher, and a certified Akashic Records reader with 14 years of experience. She is also a flower-essence therapist with more than 15 years of experience, certified in neuro-linguistic programming and Ericksonian hypnosis, and specializes in Systemic Therapies, Family Constellations, and family-tree (genealogy) work. www.anamiotti.com Facebook: https://www.facebook.com/mexicoregistrosakashicos/ Escuela Akáshica on Facebook: https://www.facebook.com/Ana-Miotti-Escuela-Akáshica-para-la-Evolución-del-Ser-109676280929874/ Instagram: https://www.instagram.com/lic.ana.miotti?r=nametag YouTube: https://youtube.com/channel/UCoBrVkGWSO9z39Ftk_SSINA Subscribe to support the podcast and share your favorite episodes: https://www.instagram.com/aha.mx/?hl=es https://www.youtube.com/channel/UCv8U1AvWPzorcjRnTn0xB9g https://m.facebook.com/ahamomentsmx/posts https://www.instagram.com/mindbodypau/?hl=es https://www.instagram.com/valeriabenavidesb/?hl=es
On Ivoox you can find only some of Mindalia's audio recordings. To listen to the 4 recordings we publish every day, go to https://www.mindaliatelevision.com. If you want to watch the video this audio belongs to, click here: https://youtu.be/3RVhLZcCDT0 In this talk Ana explains what it means to create your own reality, and the creative power of our word. We will talk about the Akashic Records and the power of creation. Ana Miotti holds a degree in Psychology and specializes in Reiki, Akashic Records, flower-essence therapy, NLP, Ericksonian hypnosis, and various systemic therapies such as family constellations and family-tree work. Find out about the full program at: http://television.mindalia.com/catego... ***WITH QUESTIONS AT THE END OF THE TALK TO RESOLVE YOUR DOUBTS*** If you find it interesting... SHARE IT!! :-) DURATION: approximately 45m -----------ABOUT MINDALIA---------- Mindalia.com is an international non-profit NGO. Our mission is the universal dissemination of content to improve spiritual, mental, and physical consciousness. -Support us with a donation via PayPal: https://www.mindaliatelevision.com/ha... -Help the world by subscribing to this channel, leaving a positive-energy comment on our videos, and sharing them. That way, this knowledge will reach many more people. - Website: https://www.mindalia.com - Facebook: https://www.facebook.com/mindalia.ayuda/ - Twitter: http://twitter.com/mindaliacom - Instagram: https://www.instagram.com/mindalia_com/ - Twitch: https://www.twitch.tv/mindaliacom - Vaughn: https://vaughn.live/mindalia - VK: https://vk.com/mindaliacom - Odysee: https://odysee.com/@Mindalia.com *Mindalia.com is not responsible for the opinions expressed in this video, nor does it necessarily share them. *Mindalia.com is not responsible for the reliability of the information in this video, whatever its origin. *This video is for informational purposes only.
➡️ Like The Podcast? Leave A Rating: https://ratethispodcast.com/successstory ➡️ About The Guest Daniel Kwak first immigrated to the United States with his family at the age of 5. Due to a financially disadvantaged upbringing, at the age of 20 he had a bank balance of negative $187.65. Motivated by continued financial hardship throughout his life, he started learning about Real Estate Investing. For the first two years, he learned everything he could, and at the age of 22, he did his first deal. By age 23 he had 83 rental units, along with having raised millions of dollars in capital and having done a variety of different deals and strategies. At age 26, Daniel founded Miotti Partners Capital, a core-satellite fund that introduced the equities fund-management model into the Real Estate space for the first time. He has also traveled across the country training and mentoring thousands of aspiring real estate investors. Daniel and his brother currently run an online financial education company, along with a YouTube channel (The Kwak Brothers) that currently has around 200k subscribers. Overall, Daniel aspires to be a great husband, leader, and friend by being more aware of God's love today than the day before. ➡️ Show Links https://www.instagram.com/thekwakbrothers/ https://www.linkedin.com/in/dnlkwak/ https://thekwakbrothers.com/ ➡️ Podcast Sponsors HUBSPOT - https://hubspot.com/ SWAG - https://swag.com/success (Promo Code: Success10) ➡️ Talking Points 00:00 - Intro 03:19 - Daniel Kwak's origin story 13:19 - How does Daniel use his childhood story to put what he learns into action? 17:24 - What are some of the action steps Daniel Kwak takes to grow his business? 20:20 - What did Daniel Kwak learn from having 87 doors? 24:10 - Did religion help Daniel Kwak grow his business? 27:26 - Why does Daniel Kwak think it is important to speak about religion? 29:46 - Is money regarded as good or evil in religion, and how does Daniel Kwak wrestle with that? 35:53 - How does Daniel Kwak ground himself? 37:35 - Does Daniel Kwak find more value in giving profits to charity or reinvesting them in the business? 40:21 - How did Daniel Kwak raise money for his book? 44:44 - What is the reason for people pivoting into real estate? 46:40 - What is a real estate perspective on hedging against inflation? 51:52 - What differentiates Daniel Kwak from other entrepreneurs working in real estate investment? 1:01:33 - How does Daniel Kwak protect himself in the business deals he is doing? 1:02:34 - Why did Daniel Kwak build his own personal brand? 1:04:41 - Building a virtual relationship or community building; which one is more important? 1:08:10 - Closing thoughts from Daniel Kwak 1:10:16 - What was the biggest challenge in Daniel Kwak's personal life, and how did he overcome it? 1:10:32 - Who has been a mentor to Daniel Kwak? 1:10:58 - A book or a podcast recommended by Daniel Kwak 1:11:24 - What would Daniel Kwak tell his 20-year-old self? 1:11:39 - What does success mean to Daniel Kwak? Learn more about your ad choices. Visit podcastchoices.com/adchoices
Just days before the first round of France's presidential election, purchasing power has become the crucial issue of the campaign. Well ahead of immigration or security, the French public's top concern is their ability to afford goods and services, peaking in the face of inevitable inflation driven by the pandemic and now the war in Ukraine. To listen to the report, click the play icon above the photo. Why is inflation, which had all but disappeared even from social demands, now bursting back into French conversations? "The explanation is fairly simple," says economist Luis Miotti. "We are coming out of a fight against the pandemic that involved extremely sharp falls in production and, at the same time, an extremely strong effort by governments to sustain activity even when there was no activity." He explains that the French state paid 80% of the wages of people who could not work and covered the rent of establishments such as restaurants, cinemas, and theaters even while they were closed. Besides sustaining wages and subsidies, it had to head off a wave of bankruptcies, above all among small and medium-sized businesses. "France, which is not among the countries that spent the most during the pandemic, spent 25% of GDP. Germany spent between 30 and 40%, Japan 45%, the United States almost 40%. In one year, these countries doubled their social spending, which in these societies is already quite high." There are two ways to finance such extraordinary social spending: borrowing and printing money. As a result, two years after the pandemic was declared, France finds itself with debt and with money chasing goods that were never produced. "We don't want more promises, we want higher wages." On March 17, tens of thousands of people voiced their social anger in the streets of Paris. The main demand: a wage increase. While the purchasing power of French wage earners has not yet collapsed, there is a widespread perception that it is about to. Inflation as a reality waiting around the corner is an anticipation, Miotti explains. According to the economist, certain prices weigh heavily on the household basket, energy in particular. "The moment the world economy restarted, energy prices soared. That surge in one very particular market hits people's budgets. So there is a perception that the purchasing power of wages has suffered greatly. But beware: that perception sits on top of a long trend in which the growth of wages' purchasing power has been extremely low." The breakdown of the 1960s social pact. To understand why French purchasing power has been eroding, a bit of history is needed, going back to the breakdown of the welfare states that began with the crisis of the mid-1970s and was sealed by the neoliberal model of the Thatcher and Reagan era. "Between 1950 and 1975, the famous trente glorieuses, productivity multiplied by three and a half, that is, 350%, and wages rose 350% at the same time. It is as if today's minimum wage of roughly 1,400 euros were, thirty years later, 7,000 euros," Miotti explains.
The so-called baby-boom generation (after the Second World War) enjoyed a long horizon of extremely low inflation and rising wages, and that produced the modern countries that France, Germany, and Great Britain are today. "Countries with large middle classes and very low poverty, accompanied by an indirect wage: unemployment insurance, pensions, family subsidies. All of that allowed a reallocation of resources and a very equitable distribution of income." The economist explains that today's problem, the current social fracture, stems from the fact that this model broke down: that generation are today's retirees, who end up richer than their children, and the children end up asking for or drawing on some of their parents' or grandparents' savings. "In reality all of that was destroyed in the mid-1970s, and from then on there was an extremely strong disconnect between wages and the amount of work put in by workers." That disconnect meant wages rose between 30 and 40% over thirty years, barely 1% a year, while productivity rose by as much as 300%. Companies kept increasing their earnings, but the gains did not go to workers' wages; they went to corporate profits. With the globalization process that took hold from the 1980s, the profits companies earned from higher output flowed to financial markets, which imposed a policy of indebtedness on wage earners under the logic of "you don't have a growing income to consume with, but you can borrow from us": mass consumption sustained by mass indebtedness. Executives and workers: a scandalous pay gap. Between the 1960s and the early 1980s, a company executive in France earned twenty times the average salary. By the late 1980s and early 1990s, an executive's pay was three to four hundred times the average salary. "Over the same period, the average worker's salary grew barely 1% a year. For a minimum salary of 15,000 euros a year, that is barely 200 euros, which is almost nothing. Meanwhile executives' pay rose spectacularly, from 400,000 euros to six million," Miotti notes, pointing to a chart of the wage gap in France. That wage gap blew up the social compromise of the 1960s, the famous social pact. It also explains the fracture of French society and the emergence of the "gilets jaunes" (yellow vests): the mass of workers whose wages have barely risen and who have lost social mobility in the face of the opulence of other classes earning enormous amounts of money, Miotti argues, adding that the pandemic made the situation worse because it brought an inflationary process with it. "What that will generate is very strong social demand and tension. Given that outlook, wages have to be negotiated, along with a sharing of costs between the state, companies, and the people." It's not about lowering the price of the baguette; wages must rise. The axis of the negotiation between these three actors, the specialist explains, is how far the minimum wage can be raised, bearing in mind that the minimum is the benchmark for all other salaries: when it rises, the rest automatically rise too.
Since the pandemic, some sectors of French production have proposed wage increases, notably hotels and restaurants: although they want to reopen, people do not want to work as waiters, chefs, and so on. "Don't forget that France is a world tourism power and that the sector receives 80 million people a year. And that sector is proposing a wage increase of between 20 and 30 percent." For the expert, whoever wins this presidential election will have to negotiate a wage increase. "The key is to prevent that increase from triggering an inflationary process, even though there will inevitably be imported inflation from goods arriving in the country at very high prices." All parties, whatever political line they wave, know that once in power they will have to negotiate wage increases. But on the campaign trail they deliver a polarized message and promise the impossible. "The left promises beyond what is feasible. So does the right. And that drives people away from politics, because they know they are being told things that cannot be done. The state is exhausted. Where will more money come from? Raising taxes means strangling the business sector and with it the capacity to create jobs and invest. Transfers are complicated because the layer of the ultra-rich is very small and does not go far when redistributed. Nor is it about lowering the price of the baguette; that is eye-catching but useless. You have to negotiate with companies over how much of a wage increase they will accept, with the state in return covering part of that increase through a reduction in taxes or social charges. It is a matter of arbitrating among those three big actors over how to confront the inflationary process and the social pressure."
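The wage figures Miotti cites are internally consistent, as a quick compounding check shows. The short sketch below uses only numbers quoted above (the 1% annual growth, the thirty-year window, and the trente glorieuses multiplication by 3.5 over 25 years); it is an illustration of the arithmetic, not part of the interview.

```python
# Compounding check of the figures quoted in the interview above.

annual_wage_growth = 0.01   # "barely 1% a year"
years = 30                  # "over thirty years"

cumulative = (1 + annual_wage_growth) ** years - 1
print(f"1%/yr compounded over {years} years: +{cumulative:.1%}")  # +34.8%,
# squarely inside the "between 30 and 40%" range Miotti quotes.

# The 1950-1975 benchmark: productivity (and wages) multiplied by 3.5 in 25 years.
implied_rate = 3.5 ** (1 / 25) - 1
print(f"implied annual growth, 1950-1975: {implied_rate:.1%}")    # ~5.1%/yr
```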
Charlene Martins Miotti (UFJF) studied Letters and came to Classical Studies through Latin. In this episode, Charlene Miotti delights us with her projects in teaching methodology for classical languages and in literature. She describes Active Learning Methodologies and how she used Team-Based Learning (TBL) in the courses Fundamental Studies of Latin Literature and Greek Literature. Miotti stresses how active practices enable meaningful learning. On literature, she highlights the importance of not shying away from difficult passages and of discussing them in class with frankness and critical rigor. Finally, Charlene Miotti tells us about her work with SBEC and with Classica, a heritage of Classical Studies in Brazil. In the second segment of the Archai Podcast, Charlene Miotti introduces the figure of Quintilian, his great work Institutio Oratoria, and explains the designation Pseudo-Quintilian and the Major Declamations. Quintilian was a rhetor, Latin grammarian, and teacher of the 1st century CE (30-96 CE). Miotti shows how the declamations are a new, hybrid discursive genre that involves literature and rhetoric in equal measure and also includes performance. Charlene reflects on the importance of Portuguese-speaking classicists working toward an open-access library of sources translated into Portuguese.
Hello, taradinhes (you naughty lot), how are you all? In this episode we brought in gynecologist and breast specialist Lisiane Miotti, from the Primaveras podcast, to talk with us about perimenopause and menopause, two important phases of life that we often fail to prepare for. This is our 2nd episode of 2022 taking part in the campaign O Podcast [...] The post Climatério e Menopausa - Entrevista com Lisiane Miotti appeared first on Sexo Explícito.
Here is this week's selection of news: - Peace, summits, mountaineering: some food for thought - Pellegrinon celebrates 50 years of "Nuovi sentieri" and floats the idea of a historical archive of the Dolomites in Falcade - In issue 115 of Meridiani Montagne, Popi Miotti recounts the Cengalo and his route Cacao Meravigliao; a Cengalo, however, that is best kept at a distance these days... - MonteRosa edizioni announces a new series for women writers only. Texts, images, and links to explore further at fattidimontagna.it
On Ivoox you can find only some of Mindalia's audio recordings. To listen to the 4 recordings we publish every day, go to https://www.mindaliatelevision.com. If you want to watch the video this audio belongs to, click here: https://youtu.be/335sMQxVwGo Ana Miotti takes part in the congress "Vive saludableMENTE", organized by Mindalia.com and broadcast live worldwide through our channels on YouTube ("Mindalia Plus"), Facebook, Twitch, Vaughn Live, Twitter, Odysee, and VK. More information: https://www.mindaliacongresos.com The Akashic Records are a tool for growth and evolution. How can they help us change our mind and our life when we live from the purpose of our soul and our being? How can we move through life with more awareness and harmony? Find out in this talk. Ana Miotti holds a degree in Psychology and specializes in Reiki, Akashic Records, flower-essence therapy, NLP, Ericksonian hypnosis, and various systemic therapies such as family constellations and family-tree work. https://bloganamiotti.wixsite.com/web... https://www.instagram.com/lic.ana.mio... https://www.facebook.com/Ana-Miotti-E... https://twitter.com/AnaMiotti1?s=09 Find out about the full program at: http://television.mindalia.com/catego... ***WITH QUESTIONS AT THE END OF THE TALK TO RESOLVE YOUR DOUBTS*** If you find it interesting... SHARE IT!! :-) DURATION: approximately 45m
On Ivoox you can find only some of Mindalia's audio recordings. To listen to the 4 recordings we publish every day, go to https://www.mindaliatelevision.com. If you want to watch the video this audio belongs to, click here: https://youtu.be/lxkK3qxuhII The Akashic Records are a tool for growth and evolution. How can they help us change our mind and our life when we live from the purpose of our soul and our being? How can we move through life with more awareness and harmony? Learn this and much more in this interesting interview. Ana Miotti holds a degree in Psychology and specializes in Reiki, Akashic Records, flower-essence therapy, NLP, Ericksonian hypnosis, and various systemic therapies such as family constellations and family-tree work. https://bloganamiotti.wixsite.com/web... https://www.instagram.com/lic.ana.mio... https://www.facebook.com/Ana-Miotti-E... https://twitter.com/AnaMiotti1?s=09 Find out about the full program at: http://television.mindalia.com/catego... ***WITH QUESTIONS AT THE END OF THE TALK TO RESOLVE YOUR DOUBTS*** If you find it interesting... SHARE IT!! :-) DURATION: approximately 45m
In this Reflix special, we compiled more than five hours of Pós Créditos into one huge program about HBO's Band of Brothers. Brunão, Miotti, Artur, and Baconzitos talk not only about the miniseries but also about the biographical book it is based on, the historical facts, and even their own life experiences. Band of Brothers is a 2001 HBO miniseries, based on the book by Stephen E. Ambrose, about the story of Easy Company, 2nd Battalion, 506th Parachute Infantry Regiment of the 101st Airborne Division. The show follows the story from training in the US to the end of the Second World War. Produced by Steven Spielberg, Tom Hanks, Preston Smith, Erik Jendresen, and Stephen E. Ambrose, and starring Damian Lewis, Donnie Wahlberg, Ron Livingston, Matthew Settle, and Neal McDonough.
A few days before Los Pumas' first match of 2021, our specialists analyzed the fly-halves called up to Los Pumas by Mario Ledesma for the July window: Nicolás Sánchez, currently at Stade Français with Gonzalo Quesada as head coach, and Domingo Miotti, in his first professional stint at Western Force in Australia. Their contrasting seasons, the variables Ledesma is weighing, and all the details of the squad list, in this unmissable episode.
Salimos a la Cancha ("We take the field"), plus an exclusive interview with Domingo Miotti, fly-half for Western Force and Argentina's national rugby team.
In a new episode of ESPN Scrum, we traveled virtually to Australia to talk with Tucumán native Domingo Miotti, who currently plays for Western Force: the fly-half not only spoke about his successful run in Super Rugby, where he shares a squad with fellow Argentines Tomás Cubelli and Santiago Medrano, and about his future at Glasgow Warriors from next season; he also took time to recall his stint with Jaguares and the pain of no longer seeing the team in Super Rugby, as well as his debut for Los Pumas in a historic match, the dream of playing the 2023 World Cup in France, and his admiration for Nico Sánchez. Don't miss it.
Getting everyone ready for the launch of Disney+ in Brazil, Portal Refil releases a "compact" of more than two hours on the first season of The Mandalorian. It took several recording sessions with the whole crew debating what's going on in the best series of 2019. If you haven't watched it yet, it hits Disney+ Brazil next week; if you have, it's a big recap to prepare you for season two. The Mandalorian is an American space-opera television series on Disney+. Set between the fall of the Empire and the rise of the First Order, the series follows a lone gunslinger on the outer reaches of the galaxy, far from the authority of the New Republic. Starring Pedro Pascal, Carl Weathers, Gina Carano, Werner Herzog, Nick Nolte, and Taika Waititi. Written and produced by Jon Favreau and Dave Filoni, with directors Jon Favreau, Dave Filoni, Taika Waititi, Bryce Dallas Howard, Rick Famuyiwa, and Deborah Chow.
In this week's Reflix, Brunão, Miotti, and Baconzitos discuss the final episode of Swamp Thing (Monstro do Pântano), a series from Atomic Monster, DC Universe, and Warner Bros. Television starring Andy Bean, Derek Mears, Crystal Reed, Maria Sten, Jeryl Prescott, Virginia Madsen, Will Patton, Henderson Wade, and Kevin Durand.
In this week's Reflix, Brunão, Miotti, and Baconzitos discuss the ninth episode of Swamp Thing (Monstro do Pântano), a series from Atomic Monster, DC Universe, and Warner Bros. Television starring Andy Bean, Derek Mears, Crystal Reed, Maria Sten, Jeryl Prescott, Virginia Madsen, Will Patton, Henderson Wade, and Kevin Durand.
Young army veteran Atticus Black joins his friend Letitia and his uncle George on a road trip across 1950s America in search of his missing father. Along the way, they come up against racist terrors and supernatural forces.
In this week's Reflix, Brunão, Miotti, and Baconzitos discuss the seventh episode of Swamp Thing (Monstro do Pântano), a series from Atomic Monster, DC Universe, and Warner Bros. Television starring Andy Bean, Derek Mears, Crystal Reed, Maria Sten, Jeryl Prescott, Virginia Madsen, Will Patton, Henderson Wade, and Kevin Durand.