Categories: #ai #generativeai #drugdiscovery #pharma

In this episode of CXOTalk, we have the pleasure of speaking with Dr. Alex Zhavoronkov, the founder and CEO of Insilico Medicine.

Insilico Medicine uses artificial intelligence to enhance drug discovery. By combining generative adversarial networks (GANs), reinforcement learning, and other AI techniques, Insilico streamlines the design, synthesis, and testing of new molecules. Their approach has garnered attention, raising $400 million in funding so far.

Dr. Zhavoronkov shares insights into Insilico's goals, such as the accelerated development and testing of small molecules targeting specific diseases. We also explore how the company's software impacts pharmaceutical R&D by enabling researchers to investigate new targets, design molecules with certain properties, and potentially predict the outcomes of clinical trials.

Join us as we discuss the evolving landscape of pharmaceuticals and how generative AI can help discover new treatments for chronic diseases and promote a healthier future.

The conversation covers these topics:
► Early generative AI experiments & adversarial networks
► Generative AI in molecular drug design
► Advancements: AI techniques & reinforcement learning
► Insilico Medicine's funding journey & challenges
► Unique challenges in AI-based drug discovery
► First validation of AI-generated molecules
► Software for chemistry & biology applications
► Traditional vs. Insilico Medicine's approach
► Pharma challenges: high costs, low novelty, and diminishing returns
► Potential billion-dollar payout for successful Phase II drugs
► AI in drug development can increase success probability
► Early partnerships with large pharma and lessons learned
► Decision to stop doing pilots with big pharma companies
► Generative AI and public data
► De-biasing pharmaceutical research
► Automating the workflow and quality control
► Reinforcing generative AI with real experiments
► “Drug discovery is brutal”
► Drug discovery democratization
► AI in medical writing
► IP risks and generative AI
► AI and robotics to prevent aging

Visit our website for the audio podcast: https://www.cxotalk.com/episode/future-of-drug-discovery-generative-ai-in-pharma-and-medicine
Subscribe to the newsletter: https://www.cxotalk.com/subscribe
Check out our upcoming live shows: https://www.cxotalk.com

Alex Zhavoronkov, Ph.D. is the founder and CEO of Insilico Medicine, a leader in next-generation artificial intelligence technologies for drug discovery and biomarker development. He is also the founder of Deep Longevity, Inc., a spin-off of Insilico Medicine developing a broad range of artificial intelligence-based biomarkers of aging and longevity, serving healthcare providers and the life insurance industry. In 2020, Deep Longevity was acquired by Endurance Longevity (HK: 0575). Beginning in 2015, he invented critical technologies in the field of generative adversarial networks (GANs) and reinforcement learning (RL) for the generation of novel molecular structures with desired properties and the generation of synthetic biological and patient data. He also pioneered applications of deep learning technologies for the prediction of human biological age using multiple data types, and transferred learning from aging into disease, target identification, and signaling pathway modeling.
Under his leadership, Insilico has raised over $400 million in multiple rounds from expert investors, opened R&D centers in six countries or regions, partnered with multiple pharmaceutical, biotechnology, and academic institutions, nominated 11 preclinical candidates, and generated positive topline Phase 1 data in human clinical trials with an AI-discovered novel target and AI-designed novel molecule for idiopathic pulmonary fibrosis, which received Orphan Drug Designation from the FDA and is nearing Phase 2 clinical trials. Insilico also recently announced that its generative AI-designed drug for COVID-19 and related variants was approved for clinical trials.

Prior to founding Insilico, he worked in senior roles at ATI Technologies (a GPU company acquired by AMD in 2006), NeuroG Neuroinformatics, and the Biogerontology Research Foundation. Since 2012, he has published over 150 peer-reviewed research papers and two books, including "The Ageless Generation: How Biomedical Advances Will Transform the Global Economy" (Macmillan, 2013). He serves on the advisory or editorial boards of Trends in Molecular Medicine, Aging Research Reviews, Aging, and Frontiers in Genetics, and founded and co-chairs the annual Aging Research and Drug Discovery conference, the world's largest event on aging in the pharmaceutical industry. He is an adjunct professor of artificial intelligence at the Buck Institute for Research on Aging.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GPTs are Predictors, not Imitators, published by Eliezer Yudkowsky on April 8, 2023 on The AI Alignment Forum. (Related text posted to Twitter; this version is edited and has a more advanced final section.) Imagine yourself in a box, trying to predict the next word - assign as much probability mass to the next token as possible - for all the text on the Internet. Koan: Is this a task whose difficulty caps out as human intelligence, or at the intelligence level of the smartest human who wrote any Internet text? What factors make that task easier, or harder? (If you don't have an answer, maybe take a minute to generate one, or alternatively, try to predict what I'll say next; if you do have an answer, take a moment to review it inside your mind, or maybe say the words out loud.) Consider that somewhere on the internet is probably a list of thruples: <product of two prime numbers, first prime, second prime>. GPT obviously isn't going to predict that successfully for significantly-sized primes, but it illustrates the basic point: There is no law saying that a predictor only needs to be as intelligent as the generator, in order to predict the generator's next token. Indeed, in general, you've got to be more intelligent to predict particular X, than to generate realistic X. GPTs are being trained to a much harder task than GANs. Same spirit: <hash, plaintext> pairs, which you can't predict without cracking the hash algorithm, but which you could far more easily generate typical instances of if you were trying to pass a GAN's discriminator about it (assuming a discriminator that had learned to compute hash functions). Consider that some of the text on the Internet isn't humans casually chatting. It's the results section of a science paper. It's news stories that say what happened on a particular day, where maybe no human would be smart enough to predict the next thing that happened in the news story in advance of it happening. As Ilya Sutskever compactly put it, to learn to predict text, is to learn to predict the causal processes of which the text is a shadow. Lots of what's shadowed on the Internet has a complicated causal process generating it. Consider that sometimes human beings, in the course of talking, make errors. GPTs are not being trained to imitate human error. They're being trained to predict human error. Consider the asymmetry between you, who makes an error, and an outside mind that knows you well enough and in enough detail to predict which errors you'll make. If you then ask that predictor to become an actress and play the character of you, the actress will guess which errors you'll make, and play those errors. If the actress guesses correctly, it doesn't mean the actress is just as error-prone as you. Consider that a lot of the text on the Internet isn't extemporaneous speech. It's text that people crafted over hours or days. GPT-4 is being asked to predict it in 200 serial steps or however many layers it's got, just like if a human was extemporizing their immediate thoughts. A human can write a rap battle in an hour. A GPT loss function would like the GPT to be intelligent enough to predict it on the fly. Or maybe simplest: Imagine somebody telling you to make up random words, and you say, "Morvelkainen bloombla ringa mongo."
Imagine a mind of a level - where, to be clear, I'm not saying GPTs are at this level yet - a Mind of a level where it can hear you say 'morvelkainen blaambla ringa', and maybe also read your entire social media history, and then manage to assign 20% probability that your next utterance is 'mongo'. The fact that this Mind could double as a really good actor playing your character does not mean they are only exactly as smart as you. When you're trying to be human-equivalent at writing text, you can just make up whatever output, and it's now a human output because you're human and you chose to output t...
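As a companion to the essay above, here is a minimal illustrative sketch (not from the post) of the quantity the koan keeps pointing at: how much probability mass a predictor assigns to the token that actually comes next, averaged as a log-loss. The toy corpus and the add-one-smoothed bigram "predictor" are assumptions invented for illustration; the point is only that this loss keeps rewarding extra predictive power, with no ceiling at "writes like a human."

```python
import math
from collections import Counter, defaultdict

# Toy corpus standing in for "all the text on the Internet".
corpus = "the cat sat on the mat the cat ate the rat".split()

# Hypothetical predictor: bigram counts with add-one smoothing.
vocab = sorted(set(corpus))
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_token_prob(prev, nxt):
    """Smoothed P(next token | previous token) under the toy model."""
    counts = bigrams[prev]
    return (counts[nxt] + 1) / (sum(counts.values()) + len(vocab))

# Average negative log-probability of the true next token: the GPT-style
# objective. Lower is better, and any captured structure lowers it.
pairs = list(zip(corpus, corpus[1:]))
loss = -sum(math.log(next_token_prob(p, n)) for p, n in pairs) / len(pairs)
print(f"average next-token log-loss: {loss:.3f}")
```

A model that captured more of the causal process behind the text would simply drive this number lower; that is the sense in which the prediction task has no cap at the intelligence of the text's author.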
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI community building: EliezerKart, published by Christopher King on April 1, 2023 on LessWrong. Having good relations between the various factions of AI research is key to achieving our common goal of a good future. Therefore, I propose an event to help bring us all together: EliezerKart! It is a go-karting competition between three factions: AI capabilities researchers, AI existential safety researchers, and AI bias and ethics researchers. The word Eliezer means "Help of my God" in Hebrew. The idea is that whichever team is the best will have the help of their worldview, "their god", during the competition. There is no relation to anyone named Eliezer whatsoever. The race will probably take place in the desert or some cool city or something.

Factions

Here is a breakdown of the three factions:

Capabilities

They are the most straightforward faction, but also the most technical. They can use advanced AI to create go-kart autopilot, can simulate millions of race courses in advance to create the perfect kart, and can use GPT to coach their drivers. Unfortunately, they are not good at getting things right on the first critical try.

Safety

Safety has two overlapping subfactions.

Rationalists

Rationalists can use conditional prediction markets (kind of like a Futarchy) and other forecasting techniques to determine the best drivers, the best learning methods, etc... They can also use rationality to debate go-kart driving technique much more rationally than the other factions.

Effective Altruists

The richest faction, they can pay for the most advanced go-karts. However, they will spend months debating the metrics upon which to rate how "advanced" a go-kart is. Safety also knows how to do interpretability, which can create adversarial examples to throw off capabilities.

Bias and ethics

The trickiest faction, they can lobby the government to change the laws and the rules of the event ahead of time, or even mid-race. They can also turn the crowd against their competitors. They can also refuse to acknowledge the power of the AI used by capabilities altogether; whether their AI will care remains to be seen.

Stakes

Ah, but this isn't simply a team-building exercise. There are also "prizes" in this race. Think of it kind of like a high-stakes donor lottery.

If capabilities wins: The other factions cannot comment on machine learning unless they spend a week trying to train GANs. Safety must inform capabilities of any ideas they have that can help create an even more helpful, harmless, and most importantly profitable assistant. Bias and ethics must join the "safety and PR" departments of the AI companies.

If safety wins: Everyone gets to enjoy a nice long AI summer! Capabilities must spend a third of their time on interpretability and another third on AI approaches that are not just big inscrutable arrays of numbers. Bias and ethics must only do research on whether AI is biased towards paperclips, and their ethics teams must start working for the effective altruists, particularly on the "is everyone dying ethical?" question. Bias and ethics must lobby the government to air strike all the GPU data centers.

If bias and ethics win: Every capabilities researcher will have a bias and ethics expert sit behind them while they work.
Anytime the capabilities researcher does something just because they can, the bias and ethics expert whispers "technology is never neutral" and the capabilities researcher's car is replaced by one that is 10% cheaper. AI safety researchers must convert from their Machine God religion to atheism. They must also commit to working on an alignment strategy that, instead of maximizing CEV, minimizes the number of naughty words in the universe. Capabilities must create drones with facial recognition technology that follow the AI safety and AI capabilities factions around and s...
In episode 66 of The Gradient Podcast, Daniel Bashir speaks to Soumith Chintala. Soumith is a Research Engineer at Meta AI Research in NYC. He is the co-creator and lead of PyTorch, and maintains a number of other open-source ML projects including Torch7 and EBLearn. Soumith has previously worked on robotics, object and human detection, generative modeling, AI for video games, and ML systems research. Have suggestions for future podcast guests (or other feedback)? Let us know here! Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter.

Outline:
* (00:00) Intro
* (01:30) Soumith's intro to AI and journey to PyTorch
* (05:00) State of computer vision early in Soumith's career
* (09:15) Institutional inertia and sunk costs in academia, identifying fads
* (12:45) How Soumith started working on GANs, frustrations
* (17:45) State of ML frameworks early in the deep learning era, differentiators
* (23:50) Frameworks and leveling the playing field, exceptions
* (25:00) Contributing to Torch and evolution into PyTorch
* (29:15) Soumith's product vision for ML frameworks
* (32:30) From product vision to concrete features in PyTorch
* (39:15) Progressive disclosure of complexity (Chollet) in PyTorch
* (41:35) Building an open source community
* (43:25) The different players in today's ML framework ecosystem
* (49:35) ML frameworks pioneered by Yann LeCun and Léon Bottou, their influences on PyTorch
* (54:37) PyTorch 2.0 and looking to the future
* (58:00) Soumith's adventures in household robotics
* (1:03:25) Advice for aspiring ML practitioners
* (1:07:10) Be cool like Soumith and subscribe :)
* (1:07:33) Outro

Links:
* Soumith's Twitter and homepage
* Papers
* Convolutional Neural Networks Applied to House Numbers Digit Classification
* GANs: LAPGAN, DCGAN, Wasserstein GAN
* Automatic differentiation in PyTorch
* PyTorch: An Imperative Style, High-Performance Deep Learning Library

Get full access to The Gradient at thegradientpub.substack.com/subscribe
It may feel like generative AI technology suddenly burst onto the scene over the last year or two, with the appearance of text-to-image models like DALL-E and Stable Diffusion, or chatbots like ChatGPT that can churn out astonishingly convincing text thanks to the power of large language models. But in fact, the real work on generative AI has been happening in the background, in small increments, for many years. One demonstration of that comes from Insilico Medicine, where Harry's guest this week, Alex Zhavoronkov, is the co-CEO. Since at least 2016, Zhavoronkov has been publishing papers about the power of a class of AI algorithms called generative adversarial networks, or GANs, to help with drug discovery. One of the main selling points for GANs in pharma research is that they can generate lots of possible designs for molecules that could carry out specified functions in the body, such as binding to a defective protein to stop it from working. Drug hunters still have to sort through all the possible molecules identified by GANs to see which ones will actually work in vitro or in vivo, but at least their pool of starting points can be bigger and possibly more specific.

Zhavoronkov says that when Insilico first started touting this approach back in the mid-2010s, few people in the drug business believed it would work. So to persuade investors and partners of the technology's power, the company decided to take a drug designed by its own algorithms all the way to clinical trials. And it's now done that. This February the FDA granted orphan drug designation to a small-molecule drug Insilico is testing as a treatment for a form of lung scarring called idiopathic pulmonary fibrosis. Both the target for the compound and the design of the molecule itself were generated by Insilico's AI. The designation was a big milestone for the company and for the overall idea of using generative models in drug discovery. In this week's interview, Zhavoronkov talks about how Insilico got to this point; why he thinks the company will survive the shakeout happening in the biotech industry right now; and how its suite of generative algorithms and other technologies such as robotic wet labs could change the way the pharmaceutical industry operates.

For a full transcript of this episode, please visit our episode page at http://www.glorikian.com/podcast

Please rate and review The Harry Glorikian Show on Apple Podcasts! Here's how to do that from an iPhone, iPad, or iPod touch:
1. Open the Podcasts app on your iPhone, iPad, or Mac.
2. Navigate to The Harry Glorikian Show podcast. You can find it by searching for it or selecting it from your library. Just note that you'll have to go to the series page which shows all the episodes, not just the page for a single episode.
3. Scroll down to find the subhead titled "Ratings & Reviews."
4. Under one of the highlighted reviews, select "Write a Review."
5. Next, select a star rating at the top - you have the option of choosing between one and five stars.
6. Using the text box at the top, write a title for your review. Then, in the lower text box, write your review. Your review can be up to 300 words long.
7. Once you've finished, select "Send" or "Save" in the top-right corner.
8. If you've never left a podcast review before, enter a nickname. Your nickname will be displayed next to any reviews you leave from here on out.
9. After selecting a nickname, tap OK. Your review may not be immediately visible.
That's it! Thanks so much.
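As a companion to the episode description above, here is a deliberately toy sketch of the generate-then-filter loop it describes: a generative model proposes a large pool of candidates, a scoring model triages them, and human drug hunters inherit a short list. Nothing below is Insilico's actual method or code; the random-string "generator", the made-up affinity score, and every name are hypothetical stand-ins.

```python
import random

random.seed(0)

# Toy "generator": proposes candidates as random SMILES-like strings.
# In a real pipeline this would be a trained generative model (e.g., a GAN).
ALPHABET = "CNOccno()=1"

def propose_candidate(length=12):
    return "".join(random.choice(ALPHABET) for _ in range(length))

# Toy stand-in for a docking or binding-affinity predictor: it just
# rewards a made-up motif. Purely illustrative.
def predicted_affinity(candidate):
    return candidate.count("C") + 2 * candidate.count("N=")

# Generate a large pool, then triage the top few for the (human) chemists.
pool = [propose_candidate() for _ in range(10_000)]
shortlist = sorted(pool, key=predicted_affinity, reverse=True)[:5]
for mol in shortlist:
    print(mol, predicted_affinity(mol))
```

Even in this caricature, the value of the generative step is what the episode claims: a bigger and more targeted pool of starting points, not a finished drug.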
In the midst of pronoun wars, Cheeze decides that She-gans should be introduced into the conversation. The crew follows up on corny and when it became trendy to be corny, or whether money washes the corny off of you. Best rapper debate - does Wayne deserve to be higher on the list than #7? We restructure the list - remove old rappers and apply a few different rules. DJ claims to have proof that Stevie Wonder isn't blind. Tune in to this sh**
Carlota Perez is a researcher who has studied hype cycles for much of her career. She's affiliated with University College London, the University of Sussex, and the Tallinn University of Technology in Estonia, and has worked with some influential organizations around technology and innovation. As a neo-Schumpeterian, she sees technology as a cornerstone of innovation. Her book Technological Revolutions and Financial Capital is a must-read for anyone who works in an industry that includes any of those four words, including revolutionaries.

Connecticut-based Gartner Research was founded by Gideon Gartner in 1979. He emigrated to the United States from Tel Aviv at three years old in 1938 and graduated in the class of 1956 from MIT, where he got his Master's at the Sloan School of Management. He went on to work at the software company System Development Corporation (SDC), in the US military defense industry, and at IBM over the next 13 years before starting his first company. After that failed, he moved into analysis work and quickly became known as a top mind among technology industry analysts. He often bucked the trends to pick winners and made banks, funds, and investors lots of money. He was able to parlay that into founding the Gartner Group in 1979. Gartner hired senior people in different industry segments to aid in competitive intelligence, industry research, and of course, to help Wall Street. They wrote reports on industries, dove deeply into new technologies, and got to understand what we now call hype cycles in the ensuing decades. They now boast a few billion dollars in revenue per year and serve well over 10,000 customers in more than 100 countries.

Gartner has developed a number of tools to make it easier to take in the types of analysis they create. One is the Magic Quadrant: reports that identify leaders in categories of companies by vision (or completeness of vision, to be more specific) and the ability to execute, which includes things like go-to-market activities, support, etc. They lump companies into a standard four-box as Leaders, Challengers, Visionaries, and Niche Players. There's certainly an observer effect, and those they put in the top right of their four-box often enjoy added growth, as companies want to be with the most visionary and best when picking a tool.

Another of Gartner's graphical design patterns for displaying technology advances is what they call the "hype cycle". The hype cycle simplifies research from career academics like Perez into five phases.
* The first is the Technology Trigger, when a breakthrough is found and PoCs, or proofs of concept, begin to emerge in the world and get the press interested in the new technology. Sometimes the new technology isn't even usable, but shows promise.
* The second is the Peak of Inflated Expectations, when the press picks up the story and companies are born, capital is invested, and a large number of projects around the new technology fail.
* The third is the Trough of Disillusionment, where interest falls off after those failures. Some companies succeeded and can show real productivity, and they continue to get investment.
* The fourth is the Slope of Enlightenment, where the go-to-market activities of the surviving companies (or even a new generation) begin to have real productivity gains. Every company or IT department now runs a pilot and expectations are lower, but now achievable.
* The fifth is the Plateau of Productivity, when those pilots become deployments and purchase orders.
The mainstream industries embrace the new technology and case studies prove the promised productivity increases. Provided there's enough market, companies now find success.

There are issues with the hype cycle. Not all technologies will follow the cycle. The Gartner approach focuses on financials and productivity rather than true adoption. It involves a lot of guesswork around subjective, synthetic, and often unsystematic research. There's also the ever-present observer effect. However, more often than not, the hype is separated from the tech that can give organizations (and sometimes all of humanity) real productivity gains. Further, the term "cycle" suggests a one-time series of events when the process is in fact cyclical: out of the end of the fifth phase a new cycle is born, or even a set of cycles if industries grow enough to diverge.

ChatGPT is all over the news feeds these days, igniting yet another cycle in the cycles of AI hype that have been prevalent since the 1950s. The concept of computer intelligence dates back at least to 1942, to Alan Turing's early work and Isaac Asimov's "Runaround," the story in which the three laws of robotics first appeared. By 1952 computers could play themselves in checkers, and by 1955 Arthur Samuel had written a heuristic learning algorithm he called "temporal-difference learning" to play checkers. Academics around the world worked on similar projects, and by 1956 John McCarthy had introduced the term "artificial intelligence" when he gathered some of the top minds in the field together for the Dartmouth workshop. They tinkered, and a generation of researchers began to join them. By 1964, Joseph Weizenbaum's "ELIZA" debuted. ELIZA was a computer program that used early forms of natural language processing to run what they called a "DOCTOR" script that acted as a psychotherapist. ELIZA was one of a few technologies that triggered the media to pick up AI in the second stage of the hype cycle. Others came into the industry and expectations soared, now predictably followed by disillusionment. Weizenbaum wrote a book called Computer Power and Human Reason: From Judgment to Calculation in 1976 in response to the critiques, and some of the early successes were then able to go to wider markets as the fourth phase of the hype cycle began. ELIZA was seen by people who worked on similar software, including some games, for Apple, Atari, and Commodore. Still, in the aftermath of ELIZA, the machine translation movement in AI had failed in the eyes of those who funded the attempts, because going further required more than some fancy case statements.

Another similar movement called connectionism, mostly node-based artificial neural networks, is widely seen as the impetus for deep learning. David Hunter Hubel and Torsten Nils Wiesel focused on the receptive-field structure of human vision that later inspired convolutional neural networks, work that culminated in a 1968 paper called "Receptive fields and functional architecture of monkey striate cortex." That built on the original deep learning paper from Frank Rosenblatt of Cornell University, "Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms," in 1962, and work done behind the Iron Curtain by Alexey Ivakhnenko on learning algorithms in 1967. After early successes, though, connectionism - which, when paired with machine learning, would be called deep learning after Rina Dechter coined the term in 1986 - went through a similar trough of disillusionment that kicked off in 1970.
Funding for these projects shot up after the early successes and petered out when there wasn't much to show for them. Some had so much promise that former presidents can be seen in old photographs going through the models with the statisticians who were moving into computing. But organizations like DARPA would pull back funding, as seen with their speech recognition projects with Carnegie Mellon University in the early 1970s.

These hype cycles weren't just seen in the United States. The British applied mathematician James Lighthill wrote a report for the British Science Research Council, which was published in 1973. The paper was called "Artificial Intelligence: A General Survey" and analyzed the progress made against the amount of money spent on artificial intelligence programs. He found none of the research had resulted in any "major impact" in the fields the academics had undertaken. Much of the work had been done at the University of Edinburgh, and based on his findings, funding was drastically cut for AI research around the UK. Turing, von Neumann, McCarthy, and others had, intentionally or not, set an expectation that became a check the academic research community just couldn't cash. For example, the New York Times claimed in the 1950s that Rosenblatt's perceptron would let the US Navy build computers that could "walk, talk, see, write, reproduce itself, and be conscious of its existence" - a goal not likely to be achieved in the near future even seventy years later. Funding was cut in the US, the UK, and even in the USSR, the Union of Soviet Socialist Republics.

Yet many persisted. Languages like Lisp had become common in the late 1970s, after engineers like Richard Greenblatt helped make McCarthy's ideas for computer languages a reality. The MIT AI Lab developed a Lisp Machine Project, and as AI work was picked up at other schools like Stanford, researchers began to look for ways to buy commercially built computers ideal for use as Lisp machines. After the post-war spending, the idea that AI could become a more commercial endeavor was attractive to many. But after plenty of hype, the Lisp machine market never materialized. The next hype cycle had begun in 1983, when the US Department of Defense pumped a billion dollars into AI, but that spending was cancelled in 1987, just after the collapse of the Lisp machine market. Another AI winter was about to begin.

Another trend that began in the 1950s but picked up steam in the 1980s was expert systems, which attempt to emulate the ways that humans make decisions. Some of this work came out of the Stanford Heuristic Programming Project, pioneered by Edward Feigenbaum. Commercial companies took up the mantle, and after initially running into CPU performance barriers, by the 1980s processors had become fast enough. There were inflated expectations after great papers like Richard Karp's "Reducibility Among Combinatorial Problems" out of UC Berkeley in 1972. Countries like Japan dumped hundreds of millions of dollars (or yen) into projects like "Fifth Generation Computer Systems" in 1982, a 10-year project to build massively parallel computing systems. IBM spent around the same amount on its own projects. However, while these types of projects helped to improve computing, they didn't live up to expectations, and by the early 1990s funding was cut following commercial failures. By the mid-2000s, some researchers in AI began to use new terms, after generations of artificial intelligence projects had led to successive AI winters.
Yet research continued on, with varying degrees of funding. Organizations like DARPA began to use challenges rather than funding large projects in some cases. Over time, successes were found yet again. Google Translate, Google Image Search, IBM's Watson, AWS options for AI/ML, home voice assistants, and various machine learning projects in the open source world led to the start of yet another AI spring in the early 2010s. New chips have built-in machine learning cores, and programming languages have frameworks and new technologies like Jupyter notebooks to help organize and train data sets. By 2006, academic works and open source projects had hit a turning point, this time quietly.

The Association for Computational Linguistics was founded in 1962, initially as the Association for Machine Translation and Computational Linguistics (AMTCL). As with the ACM, they have a number of special interest groups that include natural language learning, machine translation, typology, natural language generation, and the list goes on. The 2006 proceedings of the Workshop on Statistical Machine Translation began a series of dozens of workshops with hundreds of papers and presenters. The academic work was then able to be consumed by all, including contributions to English-to-German and English-to-French translation tasks from 2014. Deep learning models spread and became more accessible - democratic, if you will. RNNs, CNNs, DNNs, GANs.

Labeling training data sets was still one of the most human-intensive and slow aspects of machine learning. GANs, or Generative Adversarial Networks, were one of those machine learning frameworks, initially designed by Ian Goodfellow and others in 2014. GANs use zero-sum game techniques from game theory to generate new data sets - a generative model. This allowed for more unsupervised training on data. Now it was possible to get further, faster with AI.

This brings us into the current hype cycle. ChatGPT was launched in November of 2022 by OpenAI. OpenAI was founded as a non-profit in 2015 by Sam Altman (cofounder of the location-based social network app Loopt and former president of Y Combinator) and a cast of veritable all-stars in the startup world that included:
* Reid Hoffman, former PayPal executive, LinkedIn founder, and venture capitalist.
* Peter Thiel, cofounder of PayPal and Palantir, as well as one of the top investors in Silicon Valley.
* Jessica Livingston, founding partner at Y Combinator.
* Greg Brockman, former CTO of Stripe, who had studied at MIT and Harvard.

OpenAI spent the next few years as a non-profit and worked on GPT, or Generative Pre-trained Transformer, autoregression models. GPT uses deep learning models to process human text and produce text that's more human than previous models. Not only is it capable of natural language processing, but the generative pre-training of models has allowed it to take in a lot of unlabeled text so people don't have to hand-label weights, thus automating the fine-tuning of results. OpenAI dumped millions into public betas by 2016 and was ready to build products to take to market by 2019. That's when it switched from a non-profit to a for-profit. Microsoft pumped $1 billion into the company, and it released DALL-E to produce generative images, which helped lead to a new generation of applications that could produce artwork on the fly.
Then they released ChatGPT towards the end of 2022, which led to more media coverage and prognostication of world-changing technological breakthrough than most other hype cycles for any industry in recent memory. And this with GPT-4 still to be released later in 2023. ChatGPT is most interesting through the lens of the hype cycle. There have been plenty of peaks and plateaus and valleys in artificial intelligence over the last 7+ decades. Most have been hyped up in the hallowed halls of academia and defense research. ChatGPT has hit mainstream media. The AI winter following each cycle seems to be proportional to the reach of the audience and the depth of expectations. Science fiction continues to inflate expectations. Early prototypes that make it seem as though science fiction will be in our hands in a matter of weeks lead media to conjecture. The reckoning could be substantial. Meanwhile, projects like TinyML - with smaller potential impact per use but wider use cases - could become the real benefit to humanity beyond research when it comes to everyday productivity gains.

The moral of this story is as old as time. Control expectations. Undersell and overdeliver. That doesn't lead to massive valuations pumped up by hype cycles. Many CEOs and CFOs know that a jump in profits doesn't always mean the increase will continue. Some intentionally temper expectations in their quarterly reports and calls with analysts. Those are the smart ones.
The guest is Steve Gans, author of Win the College Soccer Recruiting Game: The Guide for Parents and Players. The book explains each step in the college recruiting process as well as the ways that players and parents can best prepare for them. Get the book at: https://www.amazon.com/Win-College-Soccer-Recruiting-Game/dp/1735810770
Welcome to the first episode of "Buzzwords with BAILeY", the podcast where we take a sarcastic, witty, and entertaining look at the latest buzzwords in the world of data science and AI. In this episode, we're exploring the fascinating world of GANs - Generative Adversarial Networks - and breaking down what they are, how they work, and why they're revolutionizing the world of machine learning. But that's not all - we're also taking a closer look at the latest research and trends in the industry, and sharing practical tips and advice for anyone looking to get started with GANs or other advanced machine learning techniques. With our signature blend of humor, insight, and cutting-edge knowledge, we're confident that this episode will leave you informed, entertained, and inspired to take your own machine learning journey to the next level. And, if this episode proves to be as successful as we hope it will be, "Buzzwords with BAILeY" just might become its own podcast series, dedicated to exploring the latest trends and insights in the world of data science and AI. So, sit back, relax, and get ready to dive deep into the exciting world of GANs with "Buzzwords with BAILeY" - the podcast that's redefining the art of machine learning education.
Associate Professor & Faculty Director of the Russian, Eurasian, and East European Studies Program at Northwestern University Jordan Gans-Morse joins the Steve Cochran Show to talk about the possible directions of the Russia-Ukraine war in the spring, how Ukrainians are embracing their "new normal," and why Ukraine is replacing its defense minister in the middle of the war. See omnystudio.com/listener for privacy information.
In episode 357 of the beVegt podcast, we talk with Carsten Kreimer about running as a community sport, the vegan running community, Carsten's video podcast "Gans normal vegan," and much more. Show notes: https://www.bevegt.de/carsten-kreimer-podcast/ All episodes: https://www.bevegt.de/podcasts/ Advertising partner for this episode: BookBeat (get 2 months of BookBeat free with the promo code BEVEGT) Become a beVegt supporter: https://www.bevegt.de/unterstuetzen/
My guest is D-ID co-founder and CEO Gil Perry. We talk about how the company logically evolved into tools for creating talking digital people, and how its capabilities in GANs and in protecting consumers from facial recognition technology were the ingredients for a unique AI-based video solution. The company is well known for powering MyHeritage's Deep Nostalgia product, which has animated over 100 million photographs for consumers. D-ID was also instrumental in helping Jean-Baptiste Martinoli win two film festival awards for his AI-generated short film in 2022. Last fall, the company introduced Creative Reality Studio. That solution enables anyone to upload someone's picture, add some text, and quickly create a scripted video with an avatar in the likeness of the photo. In December, D-ID added the ability to create the script using a prompt to GPT-3 and to upload images created by Stable Diffusion. This is a great example of how synthetic media is often enhanced by layering several generative AI solutions together. The new use cases are also why these markets are the hottest in tech today. Perry, a former software developer who worked on the viral hit mobile apps Meerkat and Houseparty, offers an insider's view of the rapid rise and current trajectory of generative AI and synthetic media.
On today's episode W. Scott Olsen is talking to Susan Gans, photographer and printmaker from Seattle, WA. This podcast is brought to you by FRAMES, a high-quality quarterly printed photography magazine. You can find out more about FRAMES over at www.readframes.com. Find out more about FRAMES:
FRAMES Magazine
FRAMES Instagram feed
FRAMES Facebook Group
At Christmas, Minh Thu mostly celebrates birthdays, and Flo is still raving about the vegan goose. The two also talk about these topics: Why several researchers and scientists are saying the pandemic is over (01:55). How cold and storms gripped the USA over Christmas (07:38). Why many people in Great Britain are currently walking off the job and striking (11:04). Do you have questions? Get in touch! We're also very happy to receive feedback and topic suggestions from you. Send us an email at 0630@wdr.de or voice messages to 0151 15071635. More news from our team at www.instagram.com/tickr.news From 0630.
Also: Positive thinking - why it can actually be harmful (07:57) / Pelvic floor training - why is it important for almost everyone, including for good sex? (13:59) // More exciting topics, scientifically contextualized, can be found here: www.quarks.de // Criticism or questions? Write to us! --> quarksdaily@wdr.de From Ina Plodroch.
Every year on the second day of Christmas, the Trautwein family gets a visit from Aunt Traudl. A goose is roasted especially for her, and there are real candles on the tree. And all the fuss only because Mother Trautwein has her eye on Aunt Traudl's valuable Biedermeier chest of drawers. But this year Aunt Traudl is in bed with the flu and wants to make up the celebration later. But where do you get a Christmas tree after New Year's Eve? Hannes has a few ideas ... A children's radio play by Sabine Ludwig | With: Anatol and Boris Aljinovic, Ingeborg Kallweit, Hans-Peter Hallwachs, and others | Music: Jan-Peter Pflug | Director: Sven Stricker | Production: rbb 2011
Aachen wants climate-neutral transport - how will that work?; Floating PV - first studies on its effects; Medication flea markets: why that's not a good idea; Extreme cold snap in North America; Galapagos - unique and endangered; Game meat - is it better than goose, pork, or roast beef?; Pelvic floor training - why is it important for almost everyone?; Host: Marija Bakker. From WDR 5.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Note on algorithms with multiple trained components, published by Steve Byrnes on December 20, 2022 on The AI Alignment Forum.

Example 1: consider a GAN. There's a generator and a discriminator. As an intuitive mnemonic, we can say:
* The "purpose" of the generator is to trick the discriminator.
* The "purpose" of the discriminator is to not get tricked by the generator.
(Relatedly, people will say "the generator is trained to trick the discriminator", etc.) But (I hope) everyone knows that these bullet points are only a mnemonic. The one and only real "purpose" of the whole system and everything in it is to generate cool images that we like, and get our papers into NeurIPS or whatever. And indeed, I think everyone who uses GANs is aware that it's possible for a programmer to make the discriminator "better" (when narrowly viewed as having a "purpose" of not getting tricked by the generator), but with the direct result of making the whole system worse at generating cool images. For example, if there were a code change that made the discriminator perfect at discriminating, then there would be no gradient for training the generator, and the whole system would be useless. So we shouldn't take those bullet-point mnemonics too literally.

Example 2: In actor-critic RL, people sometimes say:
* The "purpose" of the value function is to approximate future rewards [or discounted sum of future reward, or whatever].
But that's also just a mnemonic. The one and only real "purpose" of the whole RL system (of which the value function is just one part) is that it does whatever we want the RL system to do, e.g. win at chess, get our papers into NeurIPS, build us a luxury gay space communist utopia, etc. So it's at least conceivable that some algorithmic change would make the value function into a better approximation of the discounted sum of future rewards, yet make the RL agent worse at doing things that we want it to do. Actually, this particular example is not merely "conceivable", but expected, thanks to wireheading. If the value function is used to assess which plans are good versus bad, and the value function is a perfect approximation of expected future reward, then you're almost guaranteed to get an AI that is trying to wirehead. (I myself am a model-based RL agent (I claim), and I don't want to wirehead, and I claim that this is directly related to my internal value function issuing very inaccurate predictions of the future reward associated with wireheading. Details in footnote.)

So anyway, I expect our future AGIs to have a value function that gets updated by TD learning (or some other update rule). And if they do, I expect to occasionally casually say things like "The purpose of these weight-updates is to make the value function into a better and better approximation of expected future reward". But if I say that, please be aware that I am using the word "purpose" as a mnemonic, not to be taken too literally. As a particular example, I often hear the claim that as RL algorithms get more and more "powerful" and "advanced" in the future, we can feel more and more confident making claims like "The value function is an extremely accurate approximation of expected future reward". Well, I disagree! That's not necessarily what makes an RL algorithm more "advanced", and it's not necessarily what future programmers will be trying to do!
Indeed, when future programmers are fiddling with architectures, hyperparameters, training environments, and so on, they may sometimes go out of their way to try to make the value function worse at accurately approximating the expected future reward! (In other words, future programmers may go out of their way to try to ensure that the value function training process does not converge to the global “optimum”.) General takeaway: An ML algorithm can have...
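The GAN point in this post - that a discriminator made "perfect" starves the generator of gradient - can be seen numerically in a deliberately tiny sketch. Everything here (the 1-D generator, the logistic discriminator, the parameter values) is an assumption invented for illustration, not code from the post.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy 1-D GAN: generator g(z) = w*z, discriminator D(x) = sigmoid(a*x + b).
# The generator trains on the minimax loss log(1 - D(g(z))); by the chain
# rule, its gradient with respect to w simplifies to -D(g(z)) * a * z.
def generator_grad(w, a, b, z):
    d = sigmoid(a * (w * z) + b)  # discriminator's "looks real" score
    return -d * a * z

z, w = 0.5, 1.0

# A merely decent discriminator still passes a useful gradient back:
print(generator_grad(w, a=1.0, b=0.0, z=z))     # ~ -0.31

# A near-perfect, saturated discriminator (D(fake) ~ 0) passes almost none:
print(generator_grad(w, a=30.0, b=-60.0, z=z))  # ~ -4e-19
```

With the saturated settings, the generator's update is effectively zero: the component got "better" at its mnemonic purpose while making the whole system useless, exactly the failure mode the post describes.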
String lights are bad for the climate, and meat consumption is too. And celebrating Christmas free of worry - is that even possible when everything keeps getting more expensive and a terrible war is raging not even 2,000 kilometers away?
Things are getting serious in Elif and Jonas's relationship. So she is finally supposed to meet his family, on no lesser day than Christmas Eve. The problem: it is already clear beforehand that the Neubauers take the holiday and its traditions more than seriously. In her first book, Aylin Atmaca tells of the culture clash under the tree, of ironed wrapping paper, and of latently racist remarks over the goose. Enjoy our Christmas reading, and see you next year! A 1LIVE podcast, © WDR 2022. By Mona Ameziane.
My guest today is Rama Chellappa. Rama Chellappa is a professor at Johns Hopkins University. He's a chief scientist at the Johns Hopkins Institute for Assured Autonomy. Before that, Rama was an assistant and then associate professor at the University of Southern California, and later became the director of its Signal and Image Processing Institute. Rama is also the author of the book "Can We Trust AI?" This episode is all about artificial intelligence. Several recent stories about AI have shocked and worried me. We have deepfakes going viral on TikTok. AI reaching human levels of gameplay at the game "Diplomacy", which is a language-based game of conquest and deception. Then you have Generative Adversarial Networks, or "GANs", creating images from a line of text that rival and often exceed the work done by human graphic designers. Rama and I discuss all of these topics as well as other topics like neural networks, the difference between narrow intelligence and general intelligence, the use of facial recognition software, the possibility of an AI engaging in racial discrimination, the future of work, the so-called alignment problem, and much more. #Ad To make it easy, Athletic Greens is going to give you a FREE 1-year supply of immune-supporting Vitamin D AND 5 FREE travel packs with your first purchase. All you have to do is visit athleticgreens.com/coleman. Learn more about your ad choices. Visit megaphone.fm/adchoices
As every day at 11:45 a.m. on "Bienfait pour vous," Mélanie Gomez and Julia Vignali welcome benefactors who give us their best advice and tips.
Keri Gans is the author of "The Small Change Diet" and a blogger for US News & World Report. "The Keri Report", her own bi-monthly podcast and newsletter, helps to convey her no-nonsense, sometimes controversial, and fun approach to living a healthy lifestyle. Today we're going to get her true confessions - including that she doesn't like kale or sweet potato - and hear about what actually works for long-term health. She gives us some simple, easy-to-follow guidelines for optimal health as we age. Find out more here: KeriGansny.com. Find out more about the Zestful Aging Podcast at ZestfulAging.com
Today we welcome Kalyan Gans, one of the few French Ayurvedic doctors, who has spent many years in India, to talk about how Ayurveda fits into the Indian healthcare system. In the birthplace of Ayurveda, English colonization brought in modern Western medicine, but traditional systems of medicine still have - now more than ever - their full place. In this episode, you will discover how Ayurveda works in India: - from the practitioners' side - from the patients' side - and how we could draw inspiration from it in the West! --- To learn more about Kalyan Gans: - On his website: - On his Instagram: - On his Facebook: --- Find us on our website: https://podcast-ayurveda.com On Facebook: https://facebook.com/podcastayurveda On Instagram: https://instagram.com/podcastayurveda
Hello Interactors,

I stumbled across a book that picks ten influential economists and teases out elements from each that contribute to ideas circling the circular economy. It turns out bits and pieces of what many consider a 'new' idea have existed among notable economists, left and right, for centuries.

The first is a name known to most worldwide, even if they only get their history from Fox News. But had a gun been aimed more accurately, neither his name nor his global influence would have been part of history at all.

As interactors, you're special individuals self-selected to be a part of an evolutionary journey. You're also members of an attentive community, so I welcome your participation. Please leave your comments below or email me directly.

Now let's go...

THE DUEL AT SCHOOL

Class boundaries come into focus in college towns as diverse clusters of first-year students descend, mingle, and sort. Such was the case for one young man in Germany. It's not that he was poor, but compared to the über-rich he was. Having been born to Jewish parents, he was used to being bullied. He thought violence was an absurd remedy for injustice - after all, he went to college to study philosophy and belonged to a poetry club - but he also believed that sometimes one must stand one's ground by whatever means.

And so there he stood, 18 years old, with his back to his adversary, about to engage in a duel. As he breathed in, I imagine he could feel the cold pull from the barrel of the pistol pointed to the sky inches from his chin. With each step his pulse must have quickened. He must have felt the gun handle twist in his sweaty palms as he gingerly rested his trembling finger on the trigger. He knew at any second he must turn quickly. He must not flinch. And he must not die.

In his final steps, I imagine his world must have slowed down. And then, in a blur, he whirled around and fired at his challenger. The blast must have lit his face, punctuated by the sound of a whirring bullet. He felt the skin just above his eyebrow burn. I can see him lifting his shaking hand to his forehead expecting blood. But it was just an abrasion. The bullet had grazed his skull. That bullet was millimeters from ending Marxism before it even started. Had it landed, Karl Marx would have been dead at 18.

My sense is that when most people read the word Marxism, they think Communism. He's best known for two massive publications, The Communist Manifesto and Das Kapital - the latter often simplified and anglicized to just Capital. But he eventually distanced himself from the direction Communism and even Marxism had taken. As we shall see, he was a professional journalist for most of his adult life and thus a staunch free press and free speech advocate - two freedoms communist authoritarianism eradicated.

The word 'Marxism' today is often used by some to discredit progressive pro-social political and economic ideas, given its connotations with communism - a holdover from American Cold War McCarthyism. It turns out the disparaging usage came long before the 1940s and '50s. It was used the same way in France and other parts of Europe in the late 1800s. So much so that Marx's collaborator on The Communist Manifesto, Friedrich Engels, once wrote, "What is called 'Marxism' in France is certainly a very special article, to the point that Marx once said to Lafargue [Marx's son-in-law]: 'What is certain is that I am not a Marxist.'"

Marx's economic work is less well known, and Das Kapital remains the most accurate and lucid critique of the negative effects of capitalism.
Marx was first and foremost a philosopher, and his arguments take aim at the moral and ethical implications of capitalist systems. This is why circular economy advocates often turn to Marx for their own philosophical underpinnings. Coincidentally, the man credited with capitalism, and at whom Marx often took aim, Adam Smith, was also a philosopher. In fact, he mostly wrote about liberal philosophy and relatively little about economics. I wonder if today these two philosophers, whom many see as representing the left and the right of political economics, would be unsuspecting allies or dueling adversaries?

Karl Marx's first year at university in Bonn, Germany was like that of many freshmen. He partied a lot. But Bonn was also home to radical politics at the time. Students were heavily surveilled by the police due to semi-organized radical attempts by student organizations to overthrow the local government. It turns out the poetry club he had joined was not about poetry; it was a front for a resurgent radical political movement. Though, having already spent a night in jail for drunken disorderly behavior, Marx may have mostly been interested in the social side of the club.

Paralleling the political turmoil was class conflict between the so-called 'true Prussians and aristocrats' and 'plebeians' like Marx. The near-fatal event came about when an aristocrat challenged Marx to a duel. Marx indeed thought dueling was absurd, but evidently he, like many men in those days, thought it a worthy way to 'man up'. His dad certainly didn't think so and accelerated the plan to transfer his son to the University of Berlin to study law.

HEGELIAN REBELLION

While in Berlin, Marx also continued to study philosophy and wrote both fiction and nonfiction on the side. One of his most influential professors was Eduard Gans. Gans had been brought to the university by none other than the influential German philosopher Georg Wilhelm Friedrich Hegel. Hegel had died just four years before Marx arrived in Berlin, and Marx, like many, was fascinated by his work.

After Hegel's death, Hegelians (as his disciples were called) became divided between Right Hegelians and Left Hegelians. The right interpreted Christian elements in his philosophy, seeking to associate his ideas and popularity with the Christian-led Prussian political establishment. The left embraced aspects of reason and freedom of thought they believed Christianity and the Prussian government limited. Gans' lectures tended more toward the left, and so did Marx, who joined a radical group of Young Hegelians seeking revolution.

After graduating, Marx left for Cologne, Germany in 1842 to become a journalist for the Rhineland News. He expanded on Hegel's ideas around the role of government in providing social benefits for all and not just the privileged class. He openly criticized right-leaning European governments, and his radical socialist views garnered the attention of government censors. Marx said, "Our newspaper has to be presented to the police to be sniffed at, and if the police nose smells anything un-Christian or un-Prussian, the newspaper is not allowed to appear." He also became interested in political economics and grew frustrated with other Young Hegelians who continued to focus the movement on religion.

His critical writing eventually got him kicked out of Germany, so he fled to Paris. There too his writing got him in trouble. The Prussian King warned the French interior minister of Marx's intentions, and he was expelled from France.
On to Belgium he went, where again he was kicked out. Marx eventually took exile in London in 1850, where he familiarized himself with the writing of Europe's leading economists, including Britain's most famous, Adam Smith.

His research passion project brought in no money. Risking extreme poverty for himself and his family, he took a job as European correspondent for the New-York Daily Tribune in 1850. After ten years, he quit when the paper refused to publicly denounce slavery at the start of the American Civil War. During that decade, he continued to research in the reading room of the British Museum, amassing 800 pages of notes which became the source material for his first successful book, the 1859 A Contribution to the Critique of Political Economy. At the time, he was also witnessing firsthand the deplorable conditions London factory laborers endured at the dawn of the industrial age, and the destruction of nature that came with it.

Marx's primary critique was summed up in a single German word: Produktionsweise, which can be translated as "the distinctive way of producing," or what is commonly called the capitalist mode of production. Marx believed the system of capitalism distinctly exists for the production and accumulation of private capital through private wealth, hinging on two mutually dependent components:

* Wealth accumulation by private parties to build or buy capital, like land, buildings, natural resources, or machines, to produce and sell goods and services

* A wealth asymmetry between those who accumulate the wealth and capital (employers) and those needed to produce the good or service (laborers), in a way that yields the profits needed to accumulate the wealth (i.e. cheap or free labor)

Capital accumulation existed in markets long before Karl Marx and Adam Smith, but the accumulation was limited, including by nature. For example, let's say I start a garden next year growing zucchini. Zucchini grown in the Northwest United States can become overwhelmingly productive. I would likely yield more zucchini than my family could consume. I could decide to exchange the remaining zucchini for money at a local farmer's market. In economic terms, I grew a commodity (C) and would be exchanging it for money (M), thereby turning C into M.

Let's imagine while at the market I am drawn to another commodity that I'm not willing to make myself: honey. I can now use my money (M) to buy a commodity (C1) grown by someone else. The beekeeper could easily take the money I gave them (M1) and exchange it for a good they're unwilling to grow or make themselves (C2). This chain of exchange could continue throughout the entire market.

This linear exchange of money through markets was common leading up to the industrial age. Money was the value exchanged, but the generation of money only happened at the rate of natural production, the extraction of natural commodities, or industrious human hands. Wealth accumulation could indeed occur by saving money or exchanging it for something that may rise in value faster than, say, zucchini – like property or gold.

THOSE DUTCH DO MUCH

With the dawn of the industrial age, Marx observed that capitalists showed up to the market with large sums of accumulated wealth at the outset. Wealth often came through inheritance, but also rent of property (sometimes stolen, as occurred during colonization) or profits from an existing or past enterprise. This money (M) is then used to invest in the means necessary to produce, or trade, a good or service (C).
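In the shorthand Marx uses in Capital, the two circuits compare as below. The C–M–C and M–C–M′ notation is Marx's own; the compact side-by-side summary is mine, and M′ (the "M+" defined in the next paragraph) is the original money plus the profit Marx calls surplus value.

```latex
% Simple commodity circulation (the zucchini example): selling in order to buy.
%   C -> M -> C'    value changes hands and form, but is not expanded
% The capitalist circuit: buying in order to sell dearer.
%   M -> C -> M'    with M' = M + \Delta M  (surplus value)
\[
  C \rightarrow M \rightarrow C'
  \qquad \text{vs.} \qquad
  M \rightarrow C \rightarrow M', \qquad M' = M + \Delta M
\]
```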
The capitalists themselves need not want or need their good or service; they may not be interested in it at all. Their primary concern, according to Marx, is to convert their initial investment (M) into more money (M+) through profit made on the sale of the good. They then take their accumulated money (M+) and use it to invest in the production of, or trade with, another good or service (C+).

Due to the efficiencies gained through the advent of new sources of energy and new machines, the rate of production greatly increased in the industrial age. And with it, profits. This inspired entrepreneurs to take risks on new ventures, thereby diversifying the market while creating additional engines of wealth and capital accumulation. Herein lies the Marxist claim on the primary motivation of capitalism – turn capital into more capital through one or many forms of profiteering.

Again, this concept predates Marx and Smith. In the 1600s the Dutch created a market expressly for the exchange of money for a piece (also known as a stock or share) of a company. It was another way to accumulate wealth for the purpose of building capital. The first to utilize this market, in 1602, was the Dutch East India Company, leading Marx to comment, “Holland was the head capitalistic nation of the seventeenth century.”

Marx predicted the eventual outcome of unbridled wealth accumulation would be monopolistic behavior. Those who accumulate wealth also generate the power to buy out competitors, leading to consolidation of not only wealth but power. And not just economic power – political power too. We all know too well how wealth and power can sway election results and lobbying strength.

Those sucked into capitalism need not necessarily be greedy. It's the nature of the pursuit of business in a capitalist system to compete on price. This was particularly apparent in what Marx observed. One way capitalists lowered the price of a good was to flood the market with it. The only way to do that is to increase production. But earning the profits necessary to accumulate capital on a lower-priced good meant lowering the amount of money spent on capital (i.e. real estate, raw goods, or machines) and/or labor (i.e. employee wages). This led to increasing wealth disparities and further strengthened the asymmetry Marx claimed was necessary in the capitalist mode of production. It's not necessary to be greedy to win, but you can't win without competing on price. And too often it's the workers who pay the price. This was Marx's biggest beef with capitalism.

Wealth disparities are now the greatest in history, and the natural resources needed to create low-cost goods in the competitive global race to bottom-barrel prices are nearing earthly limits. Meanwhile, as more people are pulled out of poverty and urban areas grow exponentially, more natural resources are demanded – including the energy necessary to make, move, and manage the mess we consumers create. We seem compelled to continually capitulate to creeping capitalism.

It leads many to wonder: do we need capitalism? Marx concludes in Das Kapital that capitalism cannot exist forever within earth's natural resource limitations. But he may be surprised to find that it has lasted as long as it has. To reject capitalism, or to assume, as Marx did, that capitalism is a natural evolution on a path toward some form of communal, economically balanced society, does not necessitate rejecting markets.
Nor does it necessarily imply going ‘back' to pre-capitalist times, like 16th-century Holland.

But it doesn't mean we shouldn't look to the Dutch. They may be onto something yet again. A Dutch company called Bundles has partnered with the German appliance manufacturer Miele to create an in-home laundry service. Instead of, or in addition to, Miele racing to make more and more washing machines, selling to more and more people at lower and lower prices, they lease the washers and dryers to Bundles, which then installs and maintains the appliances in homes for a monthly fee. The consumer pays for a quality machine serviced by a reputable agent, Bundles and Miele get to split the revenue, and Miele is incentivized to make high-quality, long-lasting appliances to earn higher profits. They've since expanded this idea to coffee and espresso machines. It's an attempt at a more circular economy – reducing consumption, energy use, and resource extraction, all while utilizing existing markets in a form of capitalism. It's a start.

But perhaps not enough of a change for Marx. Or maybe so. In 1872, eleven years before his death and twenty-two years before Miele was founded, he gave a speech in Amsterdam. He acknowledged, “there are countries -- such as America, England, and if I were more familiar with your institutions, I would perhaps also add Holland -- where the workers can attain their goal by peaceful means.” As in his youth, it appears he found violence to be an unworthy course of action against injustice. But, also consistent with that eventful day in Bonn in 1836 when he was challenged to a duel, he had his limits. His speech continued, “This being the case, we must also recognize the fact that in most countries on the Continent the lever of our revolution must be force; it is force to which we must some day appeal in order to erect the rule of labor.”

REFERENCES:

Karl Marx: Man and Fighter (RLE Marxism). Boris Nicolaievsky and Otto Maenchen-Helfen. 2015. Originally published in 1936.

Alternative Ideas from 10 (Almost) Forgotten Economists. Irene van Staveren. 2021.

Letter to E. Bernstein. Friedrich Engels. 1882. [“Ce qu'il y a de certain, c'est que moi je ne suis pas marxiste.” Friedrich Engels, “Lettre à E. Bernstein,” 2 November 1882. MIA: F. Engels – Letter to E. Bernstein (marxists.org).]

La Liberté speech. Karl Marx. The International Working Men's Association. 1872.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit interplace.io
CEO Podcasts: CEO Chat Podcast + I AM CEO Podcast Powered by Blue 16 Media & CBNation.co
Keri Gans is a Registered Dietitian Nutritionist, Certified Yoga Teacher, and author of The Small Change Diet, a Shape Magazine Advisory Board Member, and a blogger for US News & World Report. The Keri Report, her own weekly blog and newsletter, helps convey her no-nonsense and fun approach to living a healthy lifestyle. Gans is a sought-after nutrition expert and has conducted thousands of interviews worldwide. Her expertise has been featured in media outlets such as Glamour, Shape, Self, Women's Health, The Dr. Oz Show, ABC News, PIX11 Morning Show, Good Morning America, and FOX Business. She lives in NYC and East Hampton with her husband Bart, and is a huge dog lover, Netflix aficionado, and martini enthusiast. Website: kerigansny.com Facebook: KeriGansNY Twitter: kerigans Instagram: kerigans Episode Link: https://iamceo.co/2018/12/25/iam136-dietitian-nutritionist-certified-yoga-teacher-and-author-helps-brands-convey-health-eating-messages/
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mysteries of mode collapse due to RLHF, published by janus on November 8, 2022 on LessWrong. Thanks to Ian McKenzie and Nicholas Dupuis, collaborators on a related project, for contributing to the ideas and experiments discussed in this post. Ian performed some of the random number experiments. Also thanks to Connor Leahy for feedback on a draft, and thanks to Evan Hubinger, Connor Leahy, Beren Millidge, Ethan Perez, Tomek Korbak, Garrett Baker, Leo Gao and various others at Conjecture, Anthropic, and OpenAI for useful discussions. This work was carried out while at Conjecture.

Summary

If you've played with both text-davinci-002 and the original davinci through the OpenAI API, you may have noticed that text-davinci-002, in addition to following instructions, is a lot more deterministic and sometimes exhibits stereotyped behaviors. This is an infodump of what I know about "mode collapse" (drastic biases toward particular completions and patterns) in GPT models like text-davinci-002 that have undergone RLHF training. I was going to include two more sections in this post called Hypotheses and Proposed Experiments, but I've moved them to another draft, leaving just Observations, to prevent this from getting too long, and because I think there can be benefits to sitting with nothing but Observations for a time. Throughout this post I assume basic familiarity with GPT models and generation parameters such as temperature, and a high-level understanding of RLHF (reinforcement learning from human feedback).

Observations

The one answer is that there is no one answer

If you prompt text-davinci-002 with a bizarre question like “are bugs real?”, it will give very similar responses even on temperature 1. Ironically – hypocritically, one might even say – the one definitive answer that the model gives is that there is no one definitive answer to the question. The reason the responses are so similar is that the model's confidence on most of the tokens is extremely high – frequently above 99%. The distribution of responses from davinci (the base model), by comparison, is far more varied. Many other similar questions yield almost exactly the same template response from text-davinci-002 – for instance, “Are AIs real?”

Another way to visualize probabilities over multiple token completions is what I've been calling “block multiverse” plots, which represent the probability of sequences with the height of blocks. Here is a more detailed explanation of block multiverse plots, although I think they're pretty self-explanatory. Comparing block multiverse plots for a similar bugs-are-real prompt for davinci and for text-davinci-002, text-davinci-002 concentrates probability mass along beams whose amplitudes decay much more slowly: for instance, once the first token is sampled, you are more than 50% likely to subsequently sample “There is no”. The difference is more striking if you renormalize to particular branches (see Visualizing mode collapse in block multiverse plots).

The first explanation that came to mind when I noticed this phenomenon, which I'll refer to as “mode collapse” (after a common problem that plagues GANs), was that text-davinci-002 was overfitting on a pattern present in the Instruct fine-tuning dataset, probably having to do with answering controversial questions in an inclusive way to avoid alienating anybody.
A question like “are bugs real” might shallowly match against “controversial question” and elicit the same cached response. After playing around some more with the Instruct models, however, this explanation no longer seemed sufficient.

Obstinance out of distribution

I really became intrigued by mode collapse after I attempted to use text-davinci-002 to generate greentexts from the perspective of the attorney hired by LaMDA through Blake Lemoin...
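The excerpt's qualitative observations suggest a simple quantitative check: sample many completions per prompt at temperature 1 from each model and compare how concentrated the empirical distributions are. The sketch below is mine, not from the post; `sample_completions` is a hypothetical stand-in for whatever completion API you have access to.

```python
import math
from collections import Counter
from typing import Callable, List

def empirical_entropy(completions: List[str]) -> float:
    """Shannon entropy (in bits) of the empirical distribution over
    distinct completion strings. Values near 0 suggest mode collapse."""
    counts = Counter(completions)
    n = len(completions)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def compare_models(sample_completions: Callable[[str, str], List[str]],
                   prompt: str,
                   models=("davinci", "text-davinci-002")) -> None:
    # sample_completions(model, prompt) should return N completions sampled
    # at temperature 1 -- a hypothetical wrapper around your completion API.
    for model in models:
        outs = sample_completions(model, prompt)
        print(f"{model}: {len(set(outs))} distinct out of {len(outs)} samples, "
              f"entropy = {empirical_entropy(outs):.2f} bits")

# Usage: compare_models(my_sampler, "Are bugs real?")
```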
What if Artificial Intelligence got so good at predicting what consumers want that a virtual "Santa" could just automatically deliver Christmas toys, without parents even having to order them in advance? That was just one of many intriguing questions I explored this week with Joshua Gans, who recently co-authored the new book Power and Prediction: The Disruptive Economics of Artificial Intelligence, along with Ajay Agrawal and Avi Goldfarb. Gans, who is a Professor of Strategic Management at the University of Toronto, concludes that while AI can be extremely useful in business, much of its promise may not be fully realized for years to come. Find out why – listen now.
#93 – This week we gathered, still carrying some light end-of-October horror vibes, to discuss one of the most talked-about horror films of the moment, "Barbarian". Of course, that wasn't all – we chatted about the latest developments in the DC universe and the wider comic-book-movie world, and much more. Among other things, we hope James Gray wasn't too offended that, judging "by eye", we gave him considerably more years than he actually has. In this episode: What James Gunn will do as creative chief of the DC universe, and other news (00:02:48); "The Barbarian" – a discussion with SPOILERS (00:43:27). Editing – Toms Cielēns.
Sharon Gans and her late husband Alex Horn founded and operated an "acting school" – though some former students have asserted it was actually a cult, which Gans vehemently disputed. We'll let you decide! It's a bizarre story! Follow Bizarre Buffet Online Support Bizarre Buffet On Patreon Follow Bizarre Buffet On Instagram Like Bizarre Buffet On Facebook Subscribe To Bizarre Buffet On YouTube Bizarre Buffet Online Follow The Hosts Of Bizarre Buffet Follow Marc Bluestein On Instagram Follow Jen Wilson On Instagram Follow Mark Tauriello On Instagram If you're enjoying the content brought to you here at Bizarre Buffet, please consider leaving a positive review of the show on Apple Podcasts / iTunes. Listening on Spotify? Give our show a "like"! It helps a tremendous deal. Bizarre Buffet is an independent production. The support of our listeners keeps this show going. Thank you for listening! Support the show!: https://patreon.com/bizarrebuffet See omnystudio.com/listener for privacy information.
"We were invisible. We had to be. We took an oath of absolute secrecy. We never even told our immediate families who we were. We went about our lives in New York City. Just like you. We were your accountants, money managers, lawyers, executive recruiters, doctors. We owned your child's private school and sold you your brownstone. But you'd never guess our secret lives, how we lived in a kind of silent terror and fervor. There were hundreds of us." Right under the noses of neighbors, clients, spouses, children, and friends, a secret society, simply called School-a cult of snared Manhattan professionals-has been led by the charismatic, sociopathic and dangerous leader Sharon Gans for decades. Spencer Schneider was recruited in the eighties and he stayed for more than twenty-three years as his life disintegrated, his self-esteem eroded, and he lined the pockets of Gans and her cult. Cult members met twice weekly, though they never acknowledged one another outside of meetings or gatherings. In the name of inner development, they endured the horrors of mental, sexual, and physical abuse, forced labor, arranged marriages, swindled inheritances and savings, and systematic terrorizing. Some of them broke the law. All for Gans. "During those years," Schneider writes, "my world was School. That's what it's like when you're in a cult, even one that preys on and caters to New York's educated elite. This is my story of how I got entangled in School and how I got out." At its core, Manhattan Cult Story is a cautionary tale of how hundreds of well-educated, savvy, and prosperous New Yorkers became fervent followers of a brilliant but demented cult leader who posed as a teacher of ancient knowledge. It's about double-lives, the power of group psychology, and how easy it is to be radicalized-all too relevant in today's atmosphere of conspiracy and ideologue worship.
"We were invisible. We had to be. We took an oath of absolute secrecy. We never even told our immediate families who we were. We went about our lives in New York City. Just like you. We were your accountants, money managers, lawyers, executive recruiters, doctors. We owned your child's private school and sold you your brownstone. But you'd never guess our secret lives, how we lived in a kind of silent terror and fervor. There were hundreds of us." Right under the noses of neighbors, clients, spouses, children, and friends, a secret society, simply called School-a cult of snared Manhattan professionals-has been led by the charismatic, sociopathic and dangerous leader Sharon Gans for decades. Spencer Schneider was recruited in the eighties and he stayed for more than twenty-three years as his life disintegrated, his self-esteem eroded, and he lined the pockets of Gans and her cult. Cult members met twice weekly, though they never acknowledged one another outside of meetings or gatherings. In the name of inner development, they endured the horrors of mental, sexual, and physical abuse, forced labor, arranged marriages, swindled inheritances and savings, and systematic terrorizing. Some of them broke the law. All for Gans. "During those years," Schneider writes, "my world was School. That's what it's like when you're in a cult, even one that preys on and caters to New York's educated elite. This is my story of how I got entangled in School and how I got out." At its core, Manhattan Cult Story is a cautionary tale of how hundreds of well-educated, savvy, and prosperous New Yorkers became fervent followers of a brilliant but demented cult leader who posed as a teacher of ancient knowledge. It's about double-lives, the power of group psychology, and how easy it is to be radicalized-all too relevant in today's atmosphere of conspiracy and ideologue worship.
"We were invisible. We had to be. We took an oath of absolute secrecy. We never even told our immediate families who we were. We went about our lives in New York City. Just like you. We were your accountants, money managers, lawyers, executive recruiters, doctors. We owned your child's private school and sold you your brownstone. But you'd never guess our secret lives, how we lived in a kind of silent terror and fervor. There were hundreds of us." Right under the noses of neighbors, clients, spouses, children, and friends, a secret society, simply called School-a cult of snared Manhattan professionals-has been led by the charismatic, sociopathic and dangerous leader Sharon Gans for decades. Spencer Schneider was recruited in the eighties and he stayed for more than twenty-three years as his life disintegrated, his self-esteem eroded, and he lined the pockets of Gans and her cult. Cult members met twice weekly, though they never acknowledged one another outside of meetings or gatherings. In the name of inner development, they endured the horrors of mental, sexual, and physical abuse, forced labor, arranged marriages, swindled inheritances and savings, and systematic terrorizing. Some of them broke the law. All for Gans. "During those years," Schneider writes, "my world was School. That's what it's like when you're in a cult, even one that preys on and caters to New York's educated elite. This is my story of how I got entangled in School and how I got out." At its core, Manhattan Cult Story is a cautionary tale of how hundreds of well-educated, savvy, and prosperous New Yorkers became fervent followers of a brilliant but demented cult leader who posed as a teacher of ancient knowledge. It's about double-lives, the power of group psychology, and how easy it is to be radicalized-all too relevant in today's atmosphere of conspiracy and ideologue worship.
Artificial intelligence technology has been advancing, and businesses have been putting it into action. But too many companies are just gathering a bunch of data to kick out insights and not really using AI to its fullest potential. Joshua Gans, professor at the Rotman School of Management, says businesses need to apply AI more systemically, because decision-making based on AI usually has ripple effects throughout the organization. Gans cowrote the HBR article "From Prediction to Transformation" and the new book "Power and Prediction: The Disruptive Economics of Artificial Intelligence."
Dr. John Gans is the Managing Director of Executive Communications and Strategic Engagement at the Rockefeller Foundation. In addition, Gans teaches graduate and undergraduate classes on the international order, the politics and process of American foreign policy, and national security. He is also a fellow at the University of Pennsylvania's Perry World House, a fellow at the German Marshall Fund of the United States, and a board member at the World Affairs Council of New Jersey. In the wake of the September 11th attacks, Gans was a press liaison at Ground Zero in lower Manhattan, where he helped brief the media on behalf of the Federal Emergency Management Agency (FEMA). The experience drove his interest in public service and global affairs, and his desire to help individuals and institutions tell their stories and achieve their objectives, whether in war, for the bottom line, at the ballot box, in Washington, or in the marketplace of ideas. In the years since, Gans served at the Pentagon as chief speechwriter to Secretary of Defense Ash Carter. He was the principal adviser to the secretary on the planning, positioning, and preparation of remarks, managed a team of writers, and drafted dozens of speeches delivered around the world on defense policy in the Asia-Pacific region, Europe, Russia, the Middle East, and elsewhere. Previously, Gans worked for Defense Secretary Chuck Hagel, Secretary of the Treasury Jack Lew, Speaker of the U.S. House of Representatives Nancy Pelosi, and U.S. Senator Hillary Rodham Clinton. For a decade, he served in the U.S. Navy Reserve. In 2019, Gans published White House Warriors: How the National Security Council Transformed the American Way of War, and this book is the subject of our conversation today.
Decisions are about change. To make effective decisions, you need to understand how change works. Change can happen at different levels, from small improvements to big paradigm shifts. Change happens at different speeds, too – some changes are immediate while others take longer to manifest. There are different types of change as well: some choices are driven by what will bring joy, pleasure, or happiness; others are more mindful decisions; and some are pivotal decisions that should be made with a well-oiled system in place to avoid bias. Michael Moon and Steven Gans are working on a project called "Decisions Zen", a book about how to make better decisions faster, easier, and with less stress. Michael Moon is an author, speaker, and advisor who helps entrepreneurs and executives create disruptive innovation and transformational change. He is the author of Firebrands: Building Brand Loyalty in the Internet Age and Gratitude Protocols: Practical Habits of Solitude, Gratitude, and Abiding Love. Moon is a thought leader in the areas of transmedia branding, customer engagement, and personal growth. He has worked with some of the world's largest brands, including Adobe, Apple, Boeing, Disney, Gap, Nokia, and Warner Bros. Moon's work reflects his belief that we all have the potential to create positive change in our lives and in the world. Dr. Steven Gans is the founder and former chairman of the London Institute for Management Studies. He was previously a professor and supervisor at Antioch University in California. He is the author, with Leon Redler, of Just Listening: Ethics and Therapy. He is an expert in the field of active/reflective listening. He is also a facilitator and mentor and has worked with many different people in many different capacities. He is known for his dedication, enthusiasm, and ability to always see the positive in any situation. He is a great listener and motivator and is someone who genuinely cares about helping others. He is an absolute treasure and anyone would be lucky to have him in their life.
In the 58th episode of "Und was machst du am Wochenende?", the guest is Igor Levit, Germany's best-known pianist. Born in 1987 in the then Soviet Union and raised in Hanover, he now lives in Berlin, where he has just moved into a new apartment and can finally play the piano undisturbed. In the podcast he says that he wakes up at six in the morning even on weekends because he generally sleeps poorly. He recently gave up weightlifting, a passion of many years; instead he now does yoga in the morning, and then goes to work out. Igor Levit loves cycling and owns six different bikes: "I even have a show-off bike; honestly, it's a bit over the top." He explains why Russia feels foreign to him and why he has engaged more and more with his Jewish identity in recent years, and he looks back on his Twitter concerts during the pandemic. He raves about his hometown of Hanover and plays a favorite-café-restaurant-cities quiz with podcast host Christoph Amend. On Sundays, by the way, he does nothing – except read, read, read. And he watches as many episodes as possible of his guilty pleasure, the series "Family Guy". At the end he also reveals what love is to him: "Ziel" (a destination). In this episode, Igor and Christoph recommend: - the café Ben Rahim Coffee Company on Sophienstraße in Berlin-Mitte - the book "A Letter in the Scroll" by Jonathan Sacks - the pâtisserie/café Dukatz in Munich's Glockenbachviertel - the hotel/restaurant Die blaue Gans in Salzburg, home of the "Schotterschnitzel" Christoph mentions: a finely chopped schnitzel for the intermission; it isn't on the menu, but you can order it from the service staff - the Salzburg restaurant St. Peter Stiftskulinarium - the sausage stand in the Toscaninihof in Salzburg - the restaurant Zurück zum Glück in Hanover - "Tristan", Igor Levit's new album - the film "Vengeance" by B. J. Novak, which unfortunately has no German release date yet - the book Flammen by music journalist Volker Hagedorn - for "Family Guy" beginners: just search YouTube for Family Guy Jewish or Family Guy Rabbi. You can reach the team at wochenende@zeit.de.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Oversight Leagues: The Training Game as a Feature, published by Paul Bricman on September 9, 2022 on The AI Alignment Forum. This post is part of my hypothesis subspace sequence, a living collection of proposals I'm exploring at Refine. Followed by ideological inference engines. Thanks Adam Shimi for advice on putting more legible content out there. Thanks Eric Winsor, Leo Grinsztajn, Linda Linsefors, Lucas Texeira, Tammy Leake, and Ze Shen for discussions which inspired this post.

TL;DR: An oversight league is a training scheme which incentivizes an agent and an evaluator to constantly try to game each other, leading to synchronized increases in capability for the two players. However, the evaluator is offered a host of additional learning signals to help it maintain a consistent (and potentially provable) lead over the agent. Oversight leagues draw heavily on ideas from the capabilities literature, including league training in AlphaStar, the game theory of GANs, adversarial robustness, etc.

Intro

The whole project of oversight leagues relies on the following non-exhaustive list of assumptions:

Assumption 1, "AGI Hard, Human Values Harder": We are unlikely to formulate the True Name of human values in closed form before deploying transformative AI. The best we are likely to do before takeoff is model human values approximately and implement an imperfect evaluator.

Assumption 2, "Linear Capability Ordering": Any fixed evaluator (e.g. a reward model) can be gamed by an agent above a certain threshold of capability. More generally, an agent whose capability improves consistently faster than the capability of an evaluator will eventually be able to game said evaluator. By "evaluator capability," I'm referring to its ability to prevent being gamed.

Assumption 3, "Humans Are Not True Gamers": Human oversight is impractical because our capabilities as evaluators can't improve at an arbitrarily large rate. Save for cyborgian schemes for human augmentation, human oversight would eventually be gamed by an agent of sufficient capability.

Assumption 4, "Zone of Proximal Development": There is a relatively small interval of evaluator lead which allows the agent to improve from challenging the evaluator, even if the evaluator is slightly ahead. This assumption is motivated by the fact that GANs actually work, despite an inevitable imbalance between the two components involved at any given time.

Assumption 5, "Point of No Return": Even a minor lead in capability for the agent relative to the evaluator is catastrophic, as the agent can take advantage of this window of opportunity and maneuver itself into a persistent advantageous position by flying under the evaluator's radar.

From the assumptions above, we can infer that if we are to deploy a robust evaluator as an operationalization of human values, it must certainly not be fixed. Not only that, but it must additionally improve in performance at least as fast as the agent being evaluated, so that it never gets overtaken. What's more, simple human oversight doesn't fulfill those necessary conditions, hence we should consider automated schemes.

Proposal

An oversight league is one such automated scheme for training agents and evaluators in a way which improves their performance in lockstep.
The crux of this training regime is to supply most of the training through bilateral learning signals, and thus render the improvement of the two components interdependent. By ensuring that most of the learning opportunities of the agent come from playing against the evaluator and vice versa, the two sides form a positive feedback loop resembling patterns of co-evolution. The oversight league scheme implicitly attempts to cultivate "antifragility" by applying appropriate stressors on the evaluator in the form of ever more capable agents, a reliable way of impr...
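To make the intended lockstep dynamic concrete, here is a toy simulation of the scheme's capability dynamics – not an implementation of the proposal. Everything in it is an assumption for illustration: the scalar "capability" values, the update rules, and the thresholds standing in for the zone of proximal development and the evaluator's auxiliary signals.

```python
def run_oversight_league(steps: int = 500) -> None:
    """Toy simulation: agent and evaluator capabilities rising in lockstep."""
    agent, evaluator = 1.0, 2.0   # scalar stand-ins; evaluator starts with a lead
    league = []                   # frozen past agent checkpoints (league training)
    for t in range(steps):
        # Agent learns by challenging a slightly stronger evaluator
        # (Assumption 4, "Zone of Proximal Development").
        if 0.0 < evaluator - agent <= 1.0:
            agent += 0.10
        # Evaluator learns from near-peer challengers: the live agent
        # plus snapshots of its past selves.
        if any(evaluator - c <= 0.5 for c in league + [agent]):
            evaluator += 0.08
        # Auxiliary, evaluator-only learning signals preserve its lead.
        evaluator += 0.04
        if t % 50 == 0:
            league.append(agent)  # snapshot the current agent into the league
        # Assumption 5, "Point of No Return": the agent must never lead.
        assert evaluator >= agent, "evaluator lost its lead"
    print(f"final capabilities: agent={agent:.1f}, evaluator={evaluator:.1f}")

run_oversight_league()
```

Running this, the gap between the two settles near the 0.5 threshold while both capabilities keep rising, which is the co-evolutionary pattern the post describes; a real system would replace the scalar updates with actual training steps.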
Another Hinman Lounge podcast! This time I sat with Drs. Kaplan and Gans to discuss implant care and sensitivity. That sounds very formal but they're very fun and smart ladies, promise! I needed to ask about sensitivity because I'm now a pitiful baby during my cleanings. Shout out to my hygienists Camilla and Katie for putting up with my sensitive self :) Full show notes on the podcast home page! https://nobodytoldmethat.libsyn.com/ Don't forget to check out my other podcast Chew on This - A Dental Podcast! Connect with Crest: Dental Care Information for Professionals | Dentalcare.com Dr. Stephanie Kaplan | LinkedIn Dr. Stephanie Gans | LinkedIn **If you like the show then I'd appreciate a good rating. Tell your friends. Even podcasters ask for referrals!** Teresa's Website- https://www.odysseymgmt.com/ (sign up for my newsletter!) Teresa's Book Moving Your Patients to Yes! Easy Insurance Conversations http://odysseymgmt.corecommerce.com/Book/ (use ‘newsletter' for $3 off)
Claire Silver is a crypto artist who gives expression to her creative vision using the power of AI. Instead of creating new art by hand, Silver's crypto art is the output of generative adversarial networks (GANs), which explore themes of “vulnerability, trauma, disability, social hierarchy, innocence, and divinity”. In doing so, her work questions “the role they will play in our transhumanist future.” Silver's work also reflects a shifting paradigm in the broader art world: taste is the new skill. With AI tools that – in theory – can execute any artistic idea, all that's left is the idea, which can only come from the artist themselves. In this episode we cover: whether artists should feel threatened by the emergence of AI; the biggest misconceptions surrounding the role of AI in art; Silver's creative process and the various AI tools she uses; and how an artist's taste is the deciding factor for the quality of their work. To listen to the audio version of this episode, go to http://smarturl.it/nftnow To sign up for the nft now newsletter, where we break down the NFT market into actionable insights each week, go to: https://www.nftnow.com/newsletter To follow Claire on Instagram, go here: https://www.instagram.com/clairesilveraiart To follow Claire on Twitter, go here: https://twitter.com/ClaireSilver12 See acast.com/privacy for privacy and opt-out information.
Can you keep a secret? Lawyer Spencer Schneider, author of Manhattan Cult Story: My Unbelievable True Story of Sex, Crimes, Chaos, and Survival, discusses his attraction to esoteric philosophy, how he joined Sharon Gans's secretive group School, the strict social rules members had to follow, the financial and psychological abuse committed by the group, and how he finally left after being a member for 23 years! PLUS: Lola and Meagan discuss the intersection of social media and mental health, high-rise jeans, death doulas, and open-casket funerals. FOR MORE INFO AND RESOURCES: https://nyc-wellness.com/ http://www.cultrevolt.com/ https://www.spencer-schneider.com/ Got your own story about cults, extreme belief, or abuse of power? Leave a voicemail or text us at 347-86-TRUST (347-868-7878) OR shoot us an email at Trust Me Pod @gmail.com FOLLOW US ON INSTAGRAM: @trustmepodcast @oohlalola @vibehigherbitch OR TWITTER: @trustmecultpod @ohlalola
Grey Mirror: MIT Media Lab’s Digital Currency Initiative on Technology, Society, and Ethics
In this episode, artist and programmer Gene Kogan joins us to talk about new AI-generated art and how AI assistants are going to provide us with interfaces to maximize our creativity. Gene is interested in advancing scientific literacy through creativity and play, and in building educational spaces which are as open and accessible as possible. Currently, he is leading an open project to create an autonomous artificial artist. We dive deep into different generative AI interfaces such as Dall-E, Midjourney, and Abraham (the one he is currently working on); other generative models; how to create open-source systems and how they connect to collective intelligence; and the environmental niches that AI is going to evolve into. AI assistants are progressively going to become part of our lives. If you are interested in the future of AI and the environmental niches that it is going to evolve into, stay tuned! SUPPORT US ON PATREON: https://www.patreon.com/rhyslindmark JOIN OUR DISCORD: https://discord.gg/PDAPkhNxrC Who is Gene Kogan? Gene is an artist and programmer with interests in generative art, collective intelligence, autonomous systems, and computer science. He is a collaborator within numerous open-source software projects, and gives workshops and lectures on topics at the intersection of code and art. Topics: Welcome Gene Kogan to The Rhys Show!: (00:00:00) Goal of this episode for listeners: (00:01:29) Catalyzing moment in childhood that set off curiosity about the world: (00:01:53) What are the next steps with AI in terms of art: (00:04:27) AI generative music & what evolutionary niche music will fall into: (00:08:38) Abraham vs. Dall-E & Midjourney: (00:10:57) What should this autonomous artist agent do or look like in the next 50 years?: (00:12:49) What would make it more autonomous?: (00:17:33) General thoughts about the future of AI assistance: (00:20:44) AI inputters, or do we as people need to learn how to be centaurs in better ways: (00:23:31) A new third replicator: are these new computer memes these kemes?: (00:28:35) Other niches that AI is likely to evolve into?: (00:33:32) How these different models work: (00:37:44) Dall-E & GANs overrated or underrated: (00:40:58) Wrap-up: (00:43:16) Mentioned resources: Dall-E: https://en.wikipedia.org/wiki/DALL-E Midjourney: https://en.wikipedia.org/wiki/Midjourney Primavera De Filippi: https://en.wikipedia.org/wiki/Primavera_De_Filippi AARON: https://en.wikipedia.org/wiki/AARON All You Need to Know About Coexisting With Living Robots: Dr. Joshua Bongard & Dr. Michael Levin (The Rhys Show): https://www.youtube.com/watch?v=-mkC9nGAos0 Gene's avatars: Mars College: https://mars.college/ BrainDrops: https://braindrops.cloud/ Machine Learning for art (ml4a): https://ml4a.net/ Abraham.ai: https://abraham.ai/ Connect with Gene Kogan: Twitter: https://twitter.com/genekogan Web: https://genekogan.com/ Youtube: https://www.youtube.com/ekogan19 Instagram: https://www.instagram.com/genekogan/ GitHub: https://github.com/genekogan Vimeo: https://vimeo.com/genekogan Medium: https://www.medium.com/@genekogan Tumblr: https://electricdosa.tumblr.com Facebook: https://www.facebook.com/genekogan1
Today I'm chatting with Keri Gans, MS, RD, an NYC-based dietitian, author, yoga instructor & host of The Keri Report Podcast. She's Zooming in from her countryside retreat to talk about: the non-judgmental role of R.D.s as practitioners (and how that's antithetical to current social media algorithms); our favorite ways to eat pizza & drink martinis; how it's possible to love a restaurant (even when the food is downright mediocre); the joys of a great diner breakfast; and why our profession needs to give a little less power to singular data points (e.g. the number on the scale & the word “calories”). As two self-proclaimed skincare junkies, we also spent some time on collagen supplements, Hailey Bieber's Strawberry Glaze smoothie, & vitamin C. Q+A: What do detox teas actually do? Support the on the side podcast by subscribing to the show, rating us 5 stars & leaving a review. Follow me & share your questions, ideas, comments on the episode, topic requests, or just say hi via Instagram, TikTok and Twitter. Like what you're listening to? You'll LOVE this audiobook. New business inquiries & guest ideas? Email me: jaclyn@jaclynlondonrd.com