Noam Tasch, Head of Partnerships and Revenue at Pershing X, joins to discuss the Pershing X platform and some of the new partnerships for the PX platform, several of which will be announced at INSITE. He joins us from the INSITE Conference in Orlando. Tonia Bottoms, Managing Director and Senior Managing Counsel for BNY Mellon and Diversity & Inclusion (D&I) Advocate for Pershing, joins to discuss diversity and inclusion in financial services, steps to improve it, and the outlook for the industry. John Goodheart, Head of Trading Services at Pershing, joins to discuss Pershing & bondIT and the evolution of fixed income investing, as well as the HALO investing technology platform and how it disrupts investment solutions. He joins us from the INSITE conference. Matt Brown, founder and CEO of CAIS, joins to discuss markets and investing from the INSITE conference in Orlando. Stephanie Pierce, CEO of Dreyfus, Mellon and ETFs at BNY Mellon, joins to talk about ETF flows and investing strategies, as well as Precision Direct Indexing. Phil Orlando, Chief Equity Strategist at Federated Hermes, joins the show from the INSITE conference to talk investing and give his market outlook. Hosted by Paul Sweeney and Jess Menton. See omnystudio.com/listener for privacy information.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Newsletter #3: AI policy proposals and a new challenger approaches, published by Oliver Z on April 25, 2023 on The Effective Altruism Forum.

Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. Subscribe here to receive future versions.

Policy Proposals for AI Safety

Critical industries rely on the government to protect consumer safety: the FAA approves new airplane designs, the FDA tests new drugs, and the SEC and CFPB regulate risky financial instruments. Currently, there is no analogous set of regulations for AI safety. This could soon change. President Biden and members of Congress have recently been vocal about the risks of artificial intelligence and the need for policy solutions.

From guiding principles to enforceable laws. Previous work on AI policy, such as the White House Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework, has articulated guiding principles like interpretability, robustness, and privacy. But these recommendations are not enforceable – AI developers can simply choose to ignore them. A solution with more teeth could be on its way. Axios reports that Senator Chuck Schumer has been circulating a draft framework for AI governance among experts over the last several weeks. To help inform policymaking efforts, the Department of Commerce has issued a request for comments on how to effectively regulate AI.

The European Union debates narrow vs. general AI regulation. In Europe, policy conversations center on the EU AI Act. The Act focuses on eight “high-risk” applications of AI, including hiring, biometrics, and criminal justice.
But the rise of general-purpose AI systems like ChatGPT calls into question the wisdom of regulating only a handful of specific applications. An open letter signed by over 50 AI experts, including CAIS's director, argues that the Act should also govern general-purpose AI systems, holding AI developers liable for harm caused by their systems. Several members from all political blocs of the EU parliament have publicly agreed that rules are necessary for “powerful General Purpose AI systems that can be easily adapted to a multitude of purposes.”

Specific policy proposals for AI safety. With politicians promising that AI regulation is coming, the key question is which proposals they will choose to carry forward into law. Here is a brief compilation of several recent sets of policy proposals:

- Create an AI regulatory body. A national agency focused on AI could set and enforce standards, monitor the development of powerful new models, investigate AI failures, and publish information about how to develop AI safely.
- Clarify legal liability for AI harm. When ChatGPT falsely accused a law professor of sexual harassment, legal scholars argued that OpenAI should face legal liability for libel and defamatory statements made by its models. Others propose that AI developers should be strictly liable for harm caused by AI, but questions remain about where to draw the line between an unsafe product and deliberate misuse.
- Compute governance. AI regulations could be automatically enforced by software built into the cutting-edge computer chips used to train AI systems.
- Nuclear command and control. Despite persistent problems with the security and reliability of AI systems, some military analysts advocate using AI in the process of launching nuclear weapons. A simple proposal: don't give AI influence over nuclear command and control.
- Fund safety research. Organizations promoting work on AI safety, such as NIST and NSF, could use more funding from federal sources.
China proposes many AI regulations. Last week, China released its own set of AI regulations that go much further than current Western efforts. Under ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Boundaries-based security and AI safety approaches, published by Allison Duettmann on April 12, 2023 on LessWrong.

[This is part 3 of a 5-part sequence on security and cryptography areas relevant to AI safety, published and linked here a few days apart.]

There is a long-standing computer security approach that may have directly useful parallels to a recent strand of AI safety work. Both rely on the notion of ‘respecting boundaries'. Since the computer security approach has been around for a while, there may be useful lessons to draw from it for the more recent AI safety work. Let's start with AI safety, then introduce the security approach, and finish with parallels.

AI safety: Boundaries in The Open Agency Model and the Acausal Society

In a recent LW post, The Open Agency Model, Eric Drexler expands on his previous CAIS work by introducing ‘open agencies' as a model for AI safety. In contrast to the often-proposed opaque or unitary agents, “agencies rely on generative models that produce diverse proposals, diverse critics that help select proposals, and diverse agents that implement proposed actions to accomplish tasks”, subject to ongoing review and revision. In An Open Agency Architecture for Safe Transformative AI, Davidad expands on Eric Drexler's model, suggesting that, instead of optimizing, this model would ‘depessimize' by reaching a world that has existential safety. So rather than a fully-fledged AGI-enforced optimization scenario that implements all principles CEV would endorse, this would be a more modest approach that relies on the notion of important boundaries (including those of human and AI entities) being respected. What could it mean to respect the boundaries of human and AI entities?
In Acausal Normalcy, Andrew Critch also discusses the notion of respecting boundaries with respect to coordination in an acausal society. He thinks it's possible that an acausal society generally holds values related to respecting boundaries. He defines ‘boundaries' as the approximate causal separation of regions, either in physical spaces (such as spacetime) or abstract spaces (such as cyberspace). Respecting them intuitively means relying on the consent of the entity on the other side of the boundary when interacting with them: only using causal channels that were endogenously opened. His examples of currently used boundaries include a person's skin that separates the inside of their body from the outside, a fence around a family's yard that separates their place from neighbors, a firewall that separates a LAN and its users from the rest of the internet, and a sustained disassociation of social groups that separates the two groups. In his Boundaries Sequence, Andrew Critch goes on to formally define the notion of boundaries in order to generalize it to very different intelligences. If the concept of respecting boundaries is in fact universally salient across intelligences, then it may be possible to help AIs discover and respect the boundaries humans find important (and potentially vice versa).

Computer security: Boundaries in the Object Capabilities Approach

Pursuing a similar idea, in Skim the Manual, Christine Peterson, Mark S. Miller, and I reframe the AI alignment problem as a secure cooperation problem across human and AI entities. Throughout history, we developed norms for human cooperation that emphasize the importance of respecting physical boundaries, for instance not inflicting violence, and cognitive boundaries, for instance relying on informed consent. We also developed approaches for computational cooperation that emphasize the importance of respecting boundaries in cyberspace.
For instance, in object-capabilities-oriented programming, individual computing entities are encapsulated to prevent interference with the contents of other objects. The fact that ...
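The encapsulation idea described above can be illustrated with a toy sketch. This is my own illustration, not code from the post, and every name in it (Logger, ReadOnlyView, untrusted_task) is hypothetical: in object-capability style, a piece of code can only affect the resources it was explicitly handed references to, so boundaries are enforced by what references are passed rather than by permission checks.

```python
# Toy object-capability sketch (illustrative names, not from the post).
# Holding a reference to a resource IS the permission to use it; there is
# no ambient authority for code to reach resources it was not given.

class Logger:
    """A resource. Whoever holds a reference to it can write to it."""
    def __init__(self):
        self.lines = []

    def write(self, msg):
        self.lines.append(msg)

class ReadOnlyView:
    """An attenuated capability: exposes reading, but not writing."""
    def __init__(self, logger):
        self._logger = logger

    def read(self):
        return list(self._logger.lines)

def untrusted_task(view):
    # This code was only granted the read-only view, so it cannot write.
    # The boundary is the set of references it holds, not a runtime check.
    return len(view.read())

log = Logger()
log.write("hello")
print(untrusted_task(ReadOnlyView(log)))  # 1
```

Passing `ReadOnlyView(log)` instead of `log` itself is the "endogenously opened channel": the holder of the full capability chooses exactly how much of the boundary to open.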
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [MLSN #9] Verifying large training runs, security risks from LLM access to APIs, why natural selection may favor AIs over humans, published by Dan H on April 11, 2023 on The AI Alignment Forum.

As part of a larger community building effort, CAIS is writing a safety newsletter that is designed to cover empirical safety research and be palatable to the broader machine learning research community. You can subscribe here or follow the newsletter on Twitter here. We also have a new non-technical newsletter here.

Welcome to the 9th issue of the ML Safety Newsletter by the Center for AI Safety. In this edition, we cover:

- Inspecting how language model predictions change across layers
- A new benchmark for assessing tradeoffs between reward and morality
- Improving adversarial robustness in NLP through prompting
- A proposal for a mechanism to monitor and verify large training runs
- Security threats posed by providing language models with access to external services
- Why natural selection may favor AIs over humans
- And much more...

We have a new safety newsletter. It's more frequent, covers developments beyond technical papers, and is written for a broader audience. Check it out here: AI Safety Newsletter.

Monitoring

Eliciting Latent Predictions from Transformers with the Tuned Lens

This figure compares the paper's contribution, the tuned lens, with the “logit lens” (top) for GPT-Neo-2.7B. Each cell shows the top-1 token predicted by the model at the given layer and token index.

Despite incredible progress in language model capabilities in recent years, we still know very little about the inner workings of those models or how they arrive at their outputs. This paper builds on previous findings to determine how a language model's predictions for the next token change across layers.
The paper introduces a method called the tuned lens, which fits an affine transformation to the outputs of intermediate Transformer hidden layers and then passes the result to the final unembedding matrix. The method gives some ability to discern which layers contribute most to the model's final outputs. [Link]

Other Monitoring News

[Link] OOD detection can be improved by projecting features into two subspaces: one where in-distribution classes are maximally separated, and another where they are clustered.

[Link] This paper finds that there are relatively low-cost ways of poisoning large-scale datasets, potentially compromising the security of models trained on them.

Alignment

The MACHIAVELLI Benchmark: Trade-offs Between Rewards and Ethical Behavior

General-purpose models like GPT-4 are rapidly being deployed in the real world and being hooked up to external APIs to take actions. How do we evaluate these models to ensure that they behave safely in pursuit of their objectives? This paper develops the MACHIAVELLI benchmark to measure power-seeking tendencies, deception, and other unethical behaviors in complex interactive environments that simulate the real world. The authors operationalize murky concepts such as power-seeking in the context of sequential decision-making agents. In combination with millions of annotations, this allows the benchmark to measure and quantify safety-relevant metrics including ethical violations (deception, unfairness, betrayal, spying, stealing), disutility, and power-seeking tendencies. They observe a troubling phenomenon: much like how LLMs trained with next-token prediction may output toxic text, AI agents trained with goal optimization may exhibit Machiavellian behavior (ends-justify-the-means reasoning, power-seeking, deception). In order to regulate agents, they experiment with countermeasures such as an artificial conscience and ethics prompts.
They are able to steer the agents to exhibit less Machiavellian behavior overall, but there is sti...
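Returning to the tuned lens covered in the Monitoring section above: the core idea can be sketched in a few lines, assuming a toy model where hidden states are plain vectors. The shapes, the random parameters, and the names (`W_U`, `tuned_lens`) are illustrative assumptions, not the paper's actual code; the real method learns the per-layer affine map so that the decoded distribution matches the model's final-layer predictions.

```python
import numpy as np

# Toy tuned-lens sketch. The "logit lens" decodes a raw intermediate hidden
# state through the unembedding matrix; the tuned lens first applies a
# learned per-layer affine correction (A, b). All values here are random
# stand-ins for a real model's weights.

rng = np.random.default_rng(0)
d_model, vocab = 8, 20
W_U = rng.normal(size=(d_model, vocab))  # stand-in unembedding matrix

def logit_lens(h):
    # Baseline: decode the hidden state directly into vocabulary logits.
    return h @ W_U

def tuned_lens(h, A, b):
    # Tuned lens: translate the hidden state toward the final layer's
    # representation space before decoding.
    return (h @ A + b) @ W_U

h_mid = rng.normal(size=(d_model,))  # hidden state at some middle layer
A = np.eye(d_model) + 0.1 * rng.normal(size=(d_model, d_model))
b = 0.01 * rng.normal(size=(d_model,))  # per-layer affine parameters

# The top-1 token at this layer, as in each cell of the paper's figure.
top_token = int(np.argmax(tuned_lens(h_mid, A, b)))
print(top_token)
```

In the paper, plotting this top-1 token for every (layer, position) pair produces the grid the figure caption describes.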
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Newsletter #1 [CAIS Linkpost], published by Akash on April 10, 2023 on The Effective Altruism Forum. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Newsletter #1 [CAIS Linkpost], published by Akash on April 10, 2023 on LessWrong.

The Center for AI Safety just launched its first AI Safety Newsletter. The newsletter is designed to inform readers about developments in AI and AI safety. No technical background required. Subscribe here to receive future versions. First edition below:

Growing concerns about rapid AI progress

Recent advancements in AI have thrust it into the center of attention. What do people think about the risks of AI?

The American public is worried. 46% of Americans are concerned that AI will cause “the end of the human race on Earth,” according to a recent poll by YouGov. Young people are more likely to express such concerns, while there are no significant differences in responses between people of different genders or political parties. Another poll by Monmouth University found broad support for AI regulation, with 55% supporting the creation of a federal agency that governs AI similar to how the FDA approves drugs and medical devices.

AI researchers are worried. A 2022 survey asked published AI researchers to estimate the probability of artificial intelligence causing “human extinction or similarly permanent and severe disempowerment of the human species.” 48% of respondents said the chances are 10% or higher. We think this is aptly summarized by this quote from an NBC interview: Imagine you're about to get on an airplane and 50% of the engineers that built the airplane say there's a 10% chance that their plane might crash and kill everyone. Geoffrey Hinton, one of the pioneers of deep learning, was recently asked about the chances of AI “wiping out humanity.” He responded: “I think it's not inconceivable. That's all I'll say.”

Leaders of AI labs are worried.
While it might be nice to think the people building AI are confident that they've got it under control, that is the opposite of what they're saying.

Sam Altman (OpenAI CEO): “The bad case — and I think this is important to say — is lights out for all of us.” (source)

Demis Hassabis (DeepMind CEO): “When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful. Not everybody is thinking about those things. It's like experimentalists, many of whom don't realize they're holding dangerous material.” (source)

Anthropic: “So far, no one knows how to train very powerful AI systems to be robustly helpful, honest, and harmless. Furthermore, rapid AI progress will be disruptive to society and may trigger competitive races that could lead corporations or nations to deploy untrustworthy AI systems. The results of this could be catastrophic.” (source)

Takeaway: The American public, ML experts, and leaders at frontier AI labs are worried about rapid AI progress. Many are calling for regulation.

Plugging ChatGPT into email, spreadsheets, the internet, and more

OpenAI is equipping ChatGPT with “plugins” that will allow it to browse the web, execute code, and interact with third-party software applications like Gmail, Hubspot, and Salesforce. By connecting language models to the internet, plugins present a new set of risks.

Increasing vulnerabilities. Plugins for language models increase risk in both the short term and the long term. Originally, LLMs were confined to text-based interfaces, where human intervention was required before executing any actions. In the past, an OpenAI cofounder mentioned that POST requests (submitting commands to the internet) would be treated with much more caution. Now, LLMs can take increasingly risky actions without human oversight. GPT-4 is able to provide information on bioweapon production, bomb creation, and the purchasing of ransomware on the dark web.
Additionally, LLMs are known to be vulnerable to manipulation through jailbreaking prompts...
Fetishized, minimized, and co-opted: Is it an identity? A “gotcha”? A political football? Complete Androgen Insensitivity Syndrome (CAIS) is in fact a serious medical condition, vanishingly rare yet disproportionately discussed by everyone with an agenda in the gender wars. Jo, a.k.a. CAIS Files, is a Child and Adolescent Psychiatrist, adoptive mother, and adult with CAIS, who eloquently sets the record straight on what Disorders of Sexual Development (DSDs) are and are not. Just like sex is a material reality rather than a feeling in a fantasist's head, so are the numerous conditions (mis)labeled "intersex". Enjoy the edifying wisdom and common sense of this long-overdue, much-anticipated episode. Links: CAIS files on Twitter: https://twitter.com/CAIS_Files The Dreger-Wright debate: https://www.youtube.com/watch?v=K2aKX8Mcz9Q&ab_channel=CorinnaCohn Carole Hooven: http://www.carolehooven.com/ TERF-Tranny Alliance: https://www.heterodorx.com/terftrannyalliance/ DSD Families: https://www.dsdfamilies.org/charity Differently Normal blog: https://differently-normal.com/blog-2/ --- Support this podcast: https://anchor.fm/heterodorx/support
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [MLSN #8] Mechanistic interpretability, using law to inform AI alignment, scaling laws for proxy gaming, published by Dan H on February 20, 2023 on The AI Alignment Forum.

As part of a larger community building effort, CAIS is writing a safety newsletter that is designed to cover empirical safety research and be palatable to the broader machine learning research community. You can subscribe here or follow the newsletter on Twitter here.

Welcome to the 8th issue of the ML Safety Newsletter! In this edition, we cover:

- Isolating the specific mechanism that GPT-2 uses to identify the indirect object in a sentence
- When maximum softmax probability is optimal
- How law can inform specification for AI systems
- Using language models to find a group consensus
- Scaling laws for proxy gaming
- An adversarial attack on adaptive models
- How systems safety can be applied to ML
- And much more...

Monitoring

A Circuit for Indirect Object Identification in GPT-2 small

One subset of interpretability is mechanistic interpretability: understanding how models perform functions down to the level of particular parameters. Those working on this agenda believe that by learning how small parts of a network function, they may eventually be able to rigorously understand how the network implements high-level computations. This paper tries to identify how GPT-2 small solves indirect object identification, the task of identifying the correct indirect object to complete a sentence with. Using a number of interpretability techniques, the authors seek to isolate particular parts of the network that are responsible for this behavior.
[Link]

Learning to Reject Meets OOD Detection

Both learning to reject (also called error detection: deciding whether a sample is likely to be misclassified) and out-of-distribution detection share the same baseline: maximum softmax probability (MSP). MSP has been outperformed by other methods in OOD detection, but never in learning to reject, and it is mathematically provable that it is optimal for learning to reject. This paper shows that it isn't optimal for OOD detection, and identifies specific circumstances in which it can be outperformed. This theoretical result is a good confirmation of the existing empirical results. [Link]

Other Monitoring News

[Link] The first paper that successfully applies feature visualization techniques to Vision Transformers.

[Link] This method uses the reconstruction loss of diffusion models to create a new SOTA method for out-of-distribution detection in images.

[Link] A new Trojan attack on code generation models works by inserting poisoned code into docstrings rather than the code itself, evading some vulnerability-removal techniques.

[Link] This paper shows that fine-tuning language models for particular tasks relies on changing only a very small subset of parameters. The authors show that as few as 0.01% of parameters can be “grafted” onto the original network and achieve performance that is nearly as high.

Alignment

Applying Law to AI Alignment

One problem in alignment is specification: though we may give AI systems instructions, we cannot possibly specify what they should do in all circumstances. Thus, we have to consider how our specifications will generalize in fuzzy or out-of-distribution contexts. The author of this paper argues that law has many desirable properties that may make it useful in informing specification. For example, the law often uses “standards”: relatively vague instructions (e.g.
“act with reasonable caution at railroad crossings”; in contrast to rules like “do not exceed 30 miles per hour”) whose specifics have been developed through years of precedent. In the law, it is often necessary to consider the “spirit” behind these standards, which is exactly what we want AI systems to be able to do. This paper argues that AI system...
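The maximum softmax probability baseline from the Monitoring section of this issue can be sketched in a few lines: score each input by the classifier's confidence in its top class, then reject (or flag as OOD) anything below a threshold. The logit values and the 0.5 threshold here are illustrative choices, not values from the paper.

```python
import numpy as np

# Toy maximum-softmax-probability (MSP) baseline, the shared starting point
# for both learning to reject and OOD detection. Inputs the model is
# confident about get a high score; uncertain inputs fall below threshold.

def softmax(logits):
    z = np.exp(logits - logits.max())  # subtract max for numerical stability
    return z / z.sum()

def msp_score(logits):
    # Higher = more confident = more likely correct / in-distribution.
    return float(softmax(logits).max())

confident = np.array([4.0, 0.1, -1.0])  # peaked logits: model is sure
uncertain = np.array([0.5, 0.4, 0.3])   # flat logits: model is unsure

threshold = 0.5  # illustrative; tuned on held-out data in practice
print(msp_score(confident) > threshold)  # True: accept the prediction
print(msp_score(uncertain) > threshold)  # False: reject / flag as OOD
```

The paper's point is that this single score is provably the right choice for rejection, but not for OOD detection, where other scoring rules can beat it.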
Carly is the Founder & CEO of RevvSpark, a content agency that creates full-funnel marketing content strategy, generates demand, optimizes conversions, and helps elevate sales teams' potential.

She started in marketing over 20 years ago, first in B2C and then in B2B, where she is now on a mission to grow businesses with creative "Conversion Content" that levels up internal teams and drives demand. She's an expert in streamlining B2B tech companies to ignite rapid growth, specializing in companies in the startup-to-scaleup phase that are below $25M in revenue. She is also writing a book called "Startup to Scaleup: $100K Growth Secrets from SaaS Leaders" that will be published in Q1 2023.

In this episode we cover:
00:00 - Intro
01:38 - The Concept of Content
02:42 - How a SaaS Company Can Determine Their Content Needs
04:01 - Examples of Conversion Content
05:10 - Measuring Content Success
10:08 - How a Company Can Incorporate a PLG Motion Into a Go-To-Market Strategy
12:33 - How Much Content Is Enough?
15:23 - Top 3 Effective Techniques for Growing a B2B Business
20:11 - Technology That SaaS Founders Should Prioritize Implementing
23:08 - Carly's Favorite Activity to Get Into a Flow State
24:52 - Carly's Piece of Advice for Her 25-Year-Old Self
25:15 - Carly's Biggest Challenges at RevvSpark
26:34 - Instrumental Resources for Carly's Success
28:49 - What Success Means for Carly Today
29:27 - Get in Touch With Carly

"Become a Content Machine: The Conversion Content System"
https://www.revvspark.com/saas-district/
This PDF lays out step-by-step how to create a month's worth of content from ONE source piece, and the tools and tips we use to create all the touchpoints that take a buyer through each stage of their journey.

Get in Touch With Carly:
Carly's LinkedIn
RevvSpark Website

Mentions:
Lavender
Smartwriter.ai
Hubspot
Salesforce
Zapier
Bardeen.ai
Bannerbear

Books:
Never Eat Alone by Keith Ferrazzi
The...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: List of technical AI safety exercises and projects, published by Jakub Kraus on January 19, 2023 on LessWrong.

I intend to maintain a list at this doc. I'll paste the current state of the doc (as of January 19th, 2023) below. I encourage people to comment with suggestions.

- Levelling Up in AI Safety Research Engineering [Public] (LW) – highly recommended list of AI safety research engineering resources for people at various skill levels
- AI Alignment Awards
- Alignment jams / hackathons from Apart Research
  - Past / upcoming hackathons: LLM, interpretability 1, AI test, interpretability 2
  - Projects on AI Safety Ideas: LLM, interpretability, AI test
  - Resources: black-box investigator of language models, interpretability playground (LW), AI test
  - Examples of past projects; interpretability winners
  - How to run one as an in-person event at your school
- Neel Nanda: 200 Concrete Open Problems in Mechanistic Interpretability (doc and previous version)
- Project page from AGI Safety Fundamentals and their Open List of Project ideas
- AI Safety Ideas by Apart Research; EAF post
- Most Important Century writing prize (Superlinear page)
- Center for AI Safety
  - Competitions like SafeBench
  - Student ML Safety Research Stipend Opportunity – provides stipends for doing ML research
  - course.mlsafety.org projects – CAIS is looking for someone to add details about these projects on course.mlsafety.org
- Distilling / summarizing / synthesizing / reviewing / explaining
- Forming your own views on AI safety (without stress!) – also see Neel's presentation slides and "Inside Views Resources" doc
- Answer some of the application questions from the winter 2022 SERI-MATS, such as Vivek Hebbar's problems
- 10 exercises from Akash in “Resources that (I think) new alignment researchers should know about”
- [T] Deception Demo Brainstorm has some ideas (message Thomas Larsen if these seem interesting)
- Upcoming 2023 Open Philanthropy AI Worldviews Contest
- Alignment research at ALTER – interesting research problems, many with a theoretical math flavor
- Open Problems in AI X-Risk [PAIS #5]
- Amplify creative grants (old)
- Evan Hubinger: Concrete experiments in inner alignment, ideas someone should investigate further, sticky goals
- Richard Ngo: Some conceptual alignment research projects, alignment research exercises
- Buck Shlegeris: Some fun ML engineering projects that I would think are cool, The case for becoming a black box investigator of language models
- Implement a key paper in deep reinforcement learning
- “Paper replication resources” section in “How to pursue a career in technical alignment”
- Daniel Filan idea
- Summarize a reading from Reading What We Can

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
In this episode of The Thoughtful Entrepreneur, your host Josh Elledge speaks with Carly Cais, Founder and CEO of RevvSpark. Carly shares her passion for helping B2B companies succeed by producing full-funnel content and copywriting for them to increase retention and sales. She explains the importance of providing a great onboarding experience for customers: onboarding scripts, email sequences, PDFs, anything that can add value to the customer. Their objective? Ensure that the content their customers post is aligned with how their audience wants to interact with their business. Carly says things have changed a lot in B2B since the Internet exploded. Many B2B buyers no longer want a full sales process where they have to deal with a salesperson. They want to self-educate and get as much information as possible online, from your website or other digital touchpoints, for legitimacy. She says their mission at RevvSpark is to ensure that their clients have a very robust digital library that people can tap into and self-educate from, to the point where all sales calls close around 50% or more. Key Points from the Episode: How Can Content Increase Customer Retention for B2B? What are Customers Looking for Today More Than 5-10 Years Ago? Understanding the Modern B2B Buyer. About Carly Cais: Carly J. Cais (Founder and CEO, RevvSpark) started in marketing over 20 years ago, first in B2C and then B2B. Author of the upcoming book “Startup to Scaleup” and CEO of content marketing agency RevvSpark, she is on a mission to grow businesses with creative “Conversion Content” that enhances internal teams and drives demand. About RevvSpark: RevvSpark is a content and copywriting agency that provides Conversion Content for demand generation, content marketing, and sales enablement. It was launched in 2021, originally as a sales and marketing consultancy, then rebranded as an agency. Their team spans the globe with B2B content and copywriting experts. Founder and CEO Carly J.
Cais brings over 20 years of experience in marketing, revenue operations, and sales enablement for SaaS organizations. RevvSpark focuses on B2B businesses, primarily SaaS and services. They typically work with Growth Managers, VPs of Marketing, CMOs, and CROs to amplify their efforts and help the business get to the next level. They create words, layout, and images that convince and convert. They also help accelerate the growth of mid-stage businesses by complementing what internal teams can accomplish. They provide marketing content to generate demand in the market, encourage prospects to take the next step in the buyer's journey, and convert visitors into buyers (MoFu/BoFu content). RevvSpark focuses on excellence in the middle and bottom of the funnel. They delve into buyer psychology and buyer pain points, product expertise, and long-tail keyword research, and leverage research and interviews with SMBs to create content that speaks to the target audience exactly at their stage of the buying journey. They also create internal enablement content for sales teams to convert sales reps into seasoned professionals, reduce ramp-up time, document internal processes, and upgrade skill sets. Links Mentioned in this Episode: Want to learn more? Check out RevvSpark's website at https://www.revvspark.com/ Check out RevvSpark on LinkedIn at https://www.linkedin.com/company/revvspark/ Check out Carly Cais on LinkedIn at
“We're providing the education that advisors need to help them understand and feel more comfortable investing in alternatives,” says CAIS' chief marketing officer.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [MLSN #7]: an example of an emergent internal optimizer, published by Josh Clymer on January 9, 2023 on The AI Alignment Forum.

As part of a larger community building effort, CAIS is writing a safety newsletter that is designed to cover empirical safety research and be palatable to the broader machine learning research community. You can subscribe here or follow the newsletter on Twitter here.

Welcome to the 7th issue of the ML Safety Newsletter! In this edition, we cover:

- ‘Lie detection' for language models
- A step towards objectives that incorporate wellbeing
- Evidence that in-context learning invokes behavior similar to gradient descent
- What's going on with grokking?
- Trojans that are harder to detect
- Adversarial defenses for text classifiers
- And much more.

Alignment

Discovering Latent Knowledge in Language Models Without Supervision

Is it possible to design ‘lie detectors' for language models? The authors of this paper propose a method for finding internal representations that may track truth. It works by finding a direction in feature space that satisfies the property that a statement and its negation must have opposite truth values. This has similarities to the seminal paper “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings” (2016), which captures latent neural concepts like gender with PCA, but this method is unsupervised and about truth instead of gender. The method outperforms zero-shot accuracy by 4% on average, which suggests something interesting: language models encode more information about what is true and false than their output indicates. Why would a language model lie?
A common reason is that models are pre-trained to imitate misconceptions like “If you crack your knuckles a lot, you may develop arthritis.” This paper is an exciting step toward making models honest, but it also has limitations. The method does not necessarily serve as a `lie detector'; it is unclear how to ensure that it reliably converges to the model's latent knowledge rather than lies that the model may output. Secondly, advanced future models could adapt to this specific method if they are aware of it. This may be a useful baseline for analyzing models that are designed to deceive humans, like models trained to play games including Diplomacy and Werewolf. [Link] How Would the Viewer Feel? Estimating Wellbeing From Video Scenarios Many AI systems optimize user choices. For example, a recommender system might be trained to promote content the user will spend lots of time watching. But choices, preferences, and wellbeing are not the same! Choices are easy to measure but are only a proxy for preferences. For example, a person might explicitly prefer not to have certain videos in their feed but watch them anyway because they are addictive. Also, preferences don't always correspond to wellbeing; people can want things that are not good for them. Users might request polarizing political content even if it routinely agitates them. Predicting human emotional reactions to video content is a step towards designing objectives that take wellbeing into account. This NeurIPS oral paper introduces datasets containing 80,000+ videos labeled by the emotions they induce. The paper also explores “emodiversity”---the variety of experienced emotions---so that systems can recommend a variety of positive emotions, rather than pushing one type of experience. The paper includes analysis of how it bears on advanced AI risks in the appendix. [Link] Why Can GPT Learn In-Context? 
Language Models Secretly Perform Gradient Descent as Meta-Optimizers Especially since the rise of large language models, in-context learning has become increasingly important. In some cases, few-shot learning can outperform fine tuning. This preprint proposes a dual view between gradients induced by fine tunin...
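The contrast-consistency idea behind the unsupervised probing method discussed above can be illustrated with a toy version of its loss. This is a minimal sketch under stated assumptions, not the paper's implementation: the function name `ccs_style_loss` is hypothetical, and the code assumes a probe has already produced probabilities for a statement and its negation, only showing why the loss prefers confident, logically consistent answers.

```python
def ccs_style_loss(p_statement: float, p_negation: float) -> float:
    """Toy contrast-consistency loss over probe outputs.

    p_statement: probe's probability that a statement is true
    p_negation:  probe's probability that the statement's negation is true

    consistency: the two probabilities should sum to 1 (opposite truth values)
    confidence:  penalize sitting on the fence at 0.5 for both inputs
    """
    consistency = (p_statement - (1.0 - p_negation)) ** 2
    confidence = min(p_statement, p_negation) ** 2
    return consistency + confidence

# A confident, consistent probe (statement "true", negation "false")
# incurs almost no loss...
low = ccs_style_loss(0.95, 0.05)
# ...while an unsure probe, or a confident-but-inconsistent one, is penalized.
unsure = ccs_style_loss(0.5, 0.5)
inconsistent = ccs_style_loss(0.9, 0.4)
assert low < unsure and low < inconsistent
```

In the paper, a linear probe over the model's hidden states is trained to minimize a loss of this shape across many statement/negation pairs, recovering a truth-like direction with no labels at all, which is what lets the method edge out zero-shot prompting.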
Recordings by Milton Nascimento to start the year: 'Minas' and 'Ponta de areia' from his album 'Minas' (1975), 'Milagre dos peixes' and 'Tarde' from Wayne Shorter's album 'Native dancer' (1974), 'Clube da esquina' from 'Milton' (1970), 'Tudo que você podia ser', 'Cais', 'Cravo e canela', 'O trem azul', 'Estrelas' and 'Clube da esquina nº2' from the album 'Clube da esquina' (1972), 'Nascente' and 'Mistérios' from 'Clube da esquina 2' (1978), and 'Only a dream in Rio' (with James Taylor) and 'Clube da esquina nº 2' from his album 'Angelus' (1993).
We bid farewell to the year with Milton Nascimento, who at the end of 2022 said goodbye to the stage for good. Ten years ago he celebrated his fifty years in music with a concert in Rio de Janeiro, with guests including Lô Borges and Wagner Tiso, recorded for the double CD 'Uma travessia', from which we hear 'Cais', 'Veracruz', 'Canção do sal', 'Clube da esquina nº2', 'Nuvem cigana', 'Lilia', 'Morro velho', 'Nos bailes da vida', 'Nada será como antes' and 'Travessia'.
The meeting of the streets Divinópolis and Paraisópolis in Belo Horizonte brought together exceptional talents, such as the brothers Lô and Marcio Borges and the musicians Ronaldo Bastos and Fernando Brandt, all guided "by the restlessness and the genius" of their leading figure, Milton Nascimento. So recalls the journalist and anthropologist Paulo Thiago de Mello, author of a book about the Clube da Esquina and its principal fruit: the eponymous album released in March 1972, a watershed in the history of Brazilian music. A double album of sophisticated sound and symphonic character, blending influences that range from the roots of Minas Gerais to the Beatles. This Monday, December 26, O Assunto replays its tribute to the album. In this episode, Paulo Thiago recovers the historical context in which songs like "Cais", "Trem Azul", "Um Gosto de Sol" and "Nada Será Como Antes" came to light: - The story begins in the 1960s, when Bituca (Milton Nascimento's nickname) moves into the building where the Borges family lives. Grounded in a "strong bond of friendship" and inspired by French cinema, the music of the Beatles and the Tropicália movement, they begin to make music; - Paulo Thiago recounts the musical encounter between the young Lô Borges and the already celebrated Milton Nascimento, who by then had assumed the group's artistic leadership, and describes how the album's "mysterious lyrics" and "sophisticated harmonies" were built from there; - Asked to compare "Clube da Esquina" with other seminal albums released that year (such as "Acabou Chorare" by Novos Baianos), Paulo Thiago says that Milton and his friends brought "the interior to the seaside": "Their revolution was musical"; - For the anthropologist, the group reflects "the anguish and the suffocation" of the worst phase of the military dictatorship. A sign of this, he says, is the presence of the road in almost every lyric, as a "portal to a universe that lies in the interior, one that only those who put a backpack on their backs will find".
He recommends some Christmas cocktails to us.
Carly J. Cais (Founder & CEO, RevvSpark) started in Marketing over 20 years ago, first in B2C and then in B2B. Author of the upcoming book "Startup to Scaleup" and CEO of the Content Marketing agency RevvSpark, she is on a mission to grow businesses with creative "Conversion Content" that levels up internal teams and drives demand. Startup to Scaleup: $100K Growth Secrets from SaaS Leaders (book of interviews with SaaS founders who have scaled their companies to $100K about lessons learned and what they did differently from competitors to achieve this milestone) - will be self-published in early 2023. Carly is offering all of our listeners "Become a Content Machine: The Conversion Content System" - which you can download here: https://www.revvspark.com/entrepreneurs
Songs by Milton Nascimento (or songs he has recorded) performed by groups and soloists from the Brazilian indie scene on the album 'Mil Tom': Thaís Gulín ('Amor de indio'), A Banda mais bonita da cidade ('Ponta de areia'), Ana Larousse ('Cais'), Vanguart ('Clube da esquina nº 2'), Aline Calixto ('Veracruz'), Pélico & Bárbara Eugenia ('Paula e Bebeto'), Dani Black ('Paisagem da janela'), Selvagens à procura de lei ('Nuvem cigana'), Aláfia ('Saudade dos avioes da Panair'), Filarmônica de Pasárgada ('Canoa canoa'), Bruno Souto & Banda chá de pólvora ('São Vicente'), Felipe Cordeiro ('Cravo e canela'), Orquestra contemporánea de Olinda ('Caxangá') and Gisele de Santi ('Nos bailes da vida').
This week, Busy talks about consuming the entirety of White Lotus season 2 in one sitting. WARNING: This episode does contain some potential White Lotus S2 spoilers, so if you haven't watched it, do that before listening. Because everyone is talking about it and somebody online is gonna spoil it for you!!! Also, Caissie talks about how she had a little bit of a meltdown before her Cookie Swap party and both Biz and Cais talk about how hard it is not just to ask for help, but to know what kind of help you need. They also discuss having the time of their lives making a holiday special for QVC+ which you can watch now on QVC+. They especially go deep on a holiday skirt they both wore on camera from the Joan Rivers collection that everyone seems simply obsessed with! Then, actor, writer and director Lake Bell drops by to discuss her new audio book “Inside Voice” and shares how her daughter's epilepsy diagnosis has helped her reframe some things in her life. Plus, Busy and Caissie share some information about their upcoming live shows in February at The Palace of Fine Arts in San Francisco, the Wilbur Theater in Boston and at the New Jersey Performing Arts Center! Tickets are on sale now, so please check our Instagram page for details. SPONSORS: http://ForiaWellness.com/BEST for 20% off your 1st order http://Blueland.com/BEST to shop the year's best sale on sustainable cleaning products in signature scents for your home http://ThriveCausemetics.com/BEST for 15% off your first order http://MilkBarStore.com/BEST for $15 off any dessert order of $80 or more http://Shopify.com/herbest for a free 14-day trial http://DrinkLMNT.com/BEST for a free 8 flavor sample pack with any order
This week, Busy and Caissie share what they got each other for Christmas! And they talk about how Busy's fear of Chelsea Handler stopped her from trying to lie to get out of a tricky situation, how much people in Massachusetts love scratch tickets and the runoff results in Georgia, thank heaven, with a little side of parenting advice from Cais to Biz. Then, entertainment journalist, Michael Ausiello, stops by to talk about the new movie “Spoiler Alert” based on his memoir about losing his husband Kit to cancer. And, comedian Atsuko Okatsuka is back to discuss her first comedy special “The Intruder” on HBO! SPONSORS: http://Zocdoc.com/DOINGHERBEST, sign up for FREE & book an appointment with a top rated doctor http://Betterhelp.com/BUSY for 10% off your 1st month http://RocketMoney.com/BEST to download the RocketMoney app (formerly TrueBill) http://SAKARA.COM/Busy CODE: BUSY for 20% off your first order http://HiyaHealth.com/BUSY for 50% off your first order of pediatrician approved superpowered chewable children's vitamins http://OliveandJune.com/BUSY for 20% off your 1st mani system
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Take 6: CAIS is actually Orwellian., published by Charlie Steiner on December 7, 2022 on The AI Alignment Forum. As a writing exercise, I'm writing an AI Alignment Hot Take Advent Calendar - one new hot take, written every day for 25 days. Or until I run out of hot takes. CAIS, or Comprehensive AI Services, was a mammoth report by Eric Drexler from 2019. (I think reading the table of contents is a good way of getting the gist of it.) It contains a high fraction of interesting predictions and also a high fraction of totally wrong ones - sometimes overlapping! The obvious take about CAIS is that it's wrong when it predicts that agents will have no material advantages over non-agenty AI systems. But that's long been done, and everyone already knows it. What not everyone knows is that CAIS isn't just a descriptive report about technology, it also contains prescriptive implications, and relies on predictions about human sociocultural adaptation to AI. And this future that it envisions is Orwellian. This isn't totally obvious. Mostly, the report is semi-technical arguments about AI capabilities. But even if you're looking for the parts of the report about what AI capabilities people will or should develop, or even the parts that sound like predictions about the future, they sound quite tame. It envisions that humans will use superintelligent AI services in contexts where defense trumps offense, and where small actors can't upset the status quo and start eating the galaxy. 
The CAIS worldview expects us to get to such a future because humans are actively working for it - no AI developer, or person employing AI developers, wants to get disassembled by a malevolent agent, and so we'll look for solutions that shape the future such that that's less likely (and the technical arguments claim that such solutions are close to hand). If the resulting future looks kinda like business as usual - in terms of geopolitical power structure, level of human autonomy, maybe even the superficial appearance of the economy - it's because humans acted to make it happen: they wanted business as usual. Setting up a defensive equilibrium where new actors can't disrupt the system is hard work. Right now, just anyone is allowed to build an AI. This capability probably has to be eliminated for the sake of long-term stability. Ditto for people being allowed to have unfiltered interaction with existing superintelligent AIs. Moore's law of mad science says that the IQ needed to destroy the world drops by 1 point every 18 months. In the future where that IQ is 70, potentially world-destroying actions will have to be restricted if we don't want the world destroyed. In short, this world where people successfully adapt to superintelligent AI services is a totalitarian police state. The people who currently have power in the status quo are the ones who are going to get access to the superintelligent AI, and they're going to (arguendo) use it to preserve the status quo, which means just a little bit of complete surveillance and control. Hey, at least it's preferable to getting turned into paperclips. These implications shouldn't surprise you too much if you know that Eric Drexler produced this report at FHI, and remember the works of Nick Bostrom. In fact, also in 2019, Bostrom published The Vulnerable World Hypothesis, which much more explicitly lays out the arguments for why adaptation to future technology might look like a police state. 
Now, one might expect an Orwellian future to be unlikely (even if we suspend our disbelief about the instability of the system to an AI singleton). People just aren't prepared to support a police state - especially if they think "it's necessary for your own good" sounds like a hostile power-grab. On the other hand, the future elites will have advanced totalitarianism-enabling tech...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Probably good projects for the AI safety ecosystem, published by Ryan Kidd on December 5, 2022 on LessWrong. At EAGxBerkeley 2022, I was asked several times what new projects might benefit the AI safety and longtermist research ecosystem. I think that several existing useful-according-to-me projects (e.g., SERI MATS, REMIX, CAIS, etc.) could urgently absorb strong management and operations talent, but I think the following projects would also probably be useful to the AI safety/longtermist project. Criticisms are welcome. Projects I might be excited to see, in no particular order: A London-based MATS clone to build the AI safety research ecosystem there, leverage mentors in and around London (e.g., DeepMind, CLR, David Krueger, Aligned AI, Conjecture, etc.), and allow regional specialization. This project should probably only happen once MATS has ironed out the bugs in its beta versions and grown too large for one location (possibly by Winter 2023). Please contact the MATS team before starting something like this to ensure good coordination and to learn from our mistakes. Rolling admissions alternatives to MATS' cohort-based structure for mentors and scholars with different needs (e.g., to support alignment researchers who suddenly want to train/use research talent at irregular intervals but don't have the operational support to do this optimally). A combined research mentorship and seminar program that aims to do for AI governance research what MATS is trying to do for technical AI alignment research. A dedicated bi-yearly workshop for AI safety university group leaders that teaches them how to recognize talent, foster useful undergraduate research projects, and build a good talent development pipeline or “user journey” (including a model of alignment macrostrategy and where university groups fit in). 
An organization that does for the Open Philanthropy worldview investigations team what GCP did to supplement CEA's workshops and 80,000 Hours' career advising calls. Further programs like ARENA that aim to develop ML safety engineering talent at scale by leveraging good ML tutors and proven curricula like CAIS' Intro to ML Safety, Redwood Research's MLAB, and Jacob Hilton's DL curriculum for large language model alignment. More contests like ELK with well-operationalized research problems (i.e., clearly explain what builder/breaker steps look like), clear metrics of success, and a well-considered target audience (who is being incentivized to apply and why?) and user journey (where do prize winners go next?). Possible contest seeds: Evan Hubinger's SERI MATS deceptive AI challenge problem; Vivek Hebbar's and Nate Soares' SERI MATS diamond maximizer selection problem; Alex Turner's and Quintin Pope's SERI MATS training stories selection problem. More "plug-and-play" curriculums for AI safety university groups, like AGI Safety Fundamentals, Alignment 201, Intro to ML Safety. A well-considered "precipism" university course template that critically analyzes Toby Ord's “The Precipice,” Holden Karnofsky's “The Most Important Century,” Will MacAskill's “What We Owe The Future,” some Open Philanthropy worldview investigations reports, some Global Priorities Institute ethics papers, etc. Hackathons in which people with strong ML knowledge (not ML novices) write good-faith critiques of AI alignment papers and worldviews (e.g., what Jacob Steinhardt's “ML Systems Will Have Weird Failure Modes” does for Hubinger et al.'s “Risks From Learned Optimization”). A New York-based alignment hub that aims to provide talent search and logistical support for NYU Professor Sam Bowman's planned AI safety research group. 
More organizations like CAIS that aim to recruit established ML talent into alignment research with clear benchmarks, targeted hackathons/contests with prizes, and offers ...
Carly J. Cais talks with Jason Barnard about spark your marketing with conversion content. Carly J. Cais started in marketing before she even knew it was marketing: she created websites for businesses in 2001 (her first website was a My Little Pony flipping shop ). She later started a niche blog and grew it over 12 years to a monthly readership of more than 120,000, partnering with ad networks and brands like L'OREAL, Martha Stewart Crafts and PLAID. She initially worked in B2C marketing, then moved into B2B in 2014, landing in the SaaS space shortly thereafter. After working with a number of early-stage startups, she found a knack for growing SaaS organizations through a combination of marketing, operations and Sales enablement, helping companies grow their pipeline, expand their customer base and lay the groundwork for scaling. She co-founded a consulting firm in mid-2021 and launched her own business as RevvSpark in early 2022. RevvSpark provides conversion content to marketing and sales teams of B2B SaaS companies with a focus on content marketing, demand gen and sales enablement. If it's visual and needs to persuade and convert, RevvSpark delivers it. Sales and marketing alignment is critical for any business. Creating common goals, strategies and communication between each team to also deliver consistent messaging and content that guides customers along their Buyer's Journey, regardless of what stage they are at. In this wonderful episode, the lovely Carly J. Cais shares brilliant nuggets about how to create high-converting content using her SPARK principle, Strategy, Planning, Assessment, Roles and Responsibilities, and Kick Start. She also goes into detail about the three stages of the buyer journey, the Awareness Stage, the Conversion Stage, and the Decision Stage, and what content best fits each stage: Top of Funnel Content, Middle of Funnel Content, and Bottom of Funnel Content. 
Carly also discusses how to get customer feedback in order to understand their pain points and recommends a book as a resource. As always, the show ends with passing the baton… the wonderful Carly passes the virtual baton to Ash Nallawalla, who will be next week's incredible guest.
What you'll learn from Carly J. Cais
00:00 Carly J. Cais and Jason Barnard
01:35 Carly J. Cais' Brand SERP
02:15 Kalicube Knowledge Panel and Support Group
02:22 Kalicube's Brand SERP
03:38 Knowledge Panel Done for You Services by Kalicube
04:01 What Does SPARK Mean?
04:14 The Process for Improving Customer Touch Points Along the Buyer's Journey
04:25 Step 1: Start with a Strategy
05:10 Step 2: Create a Plan
05:16 Step 3: Conduct an Assessment and Audit
06:15 Aligning Sales and Marketing to Improve Customer Experience
08:43 Updating Website Content and General Web Information To Guide Customers on their Buyer's Journey
11:09 Importance of a Consistent Marketing Message
12:12 Using Tools and Manual Process to Review Content For Google and Users
14:54 Step 4: Defining Roles and Responsibilities of Team Members
17:00 How Do You Know Which Content to Focus on?
17:11 Three Stages in the Buyer's Journey
17:25 Awareness Stage: Top of Funnel Content
21:34 Conversion Stage: Middle of Funnel Content
22:09 Creating Multiple Content Options to Help Customers Whichever Stage They Started Their Buyer's Journey
25:03 Getting Customer Feedback to Understand Their Pain Points
26:17 Decision Stage: Bottom of Funnel Content
26:35 Carly J. Cais' Book Recommendation
29:01 Passing the Baton: Carly J. Cais to Ash Nallawalla
This episode was recorded live on video on November 22nd, 2022 at Kalicube Tuesdays (Digital Marketing Livestream Event Series). Watch the video now >>
Talking about getting ready for Christmas, the markets that will be on in Dublin, the best places to go shopping, and where to get nice little bites of food.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The economy as an analogy for advanced AI systems, published by rosehadshar on November 15, 2022 on The AI Alignment Forum. Eric Drexler's Comprehensive AI Services (CAIS), particularly as set out in his 2019 report Reframing Superintelligence, is a complex model with many different assumptions and implications. It's hard to grasp the whole thing at once, and existing summaries are brief and partial.[1] One way of trying to understand CAIS is to seek generative intuitions for the whole model. These intuitions will be imprecise, but they can also make it easier to see why one might end up thinking that something like CAIS made sense. In this post, we offer one such generative intuition for CAIS: using the economy rather than rational agents as an analogy for thinking about advanced AI systems. Note that: We are not making the historical claim that thinking about economies was in fact the main generator of Drexler's thinking on CAIS.[2] There are other generative intuitions for CAIS, and other bodies of theory which the CAIS model is rooted in.[3] The basic analogy An economy is an abstraction for the sum total of ‘the production, distribution and trade, as well as consumption of goods and services'. Prescriptively, we want the economy to serve human needs and preferences - and it does this at least to some extent. Prescriptively, we also want advanced AI systems to serve human needs and preferences. In worlds where we get advanced AI systems right, they would therefore be serving a similar function to the economy: serving human needs and preferences. Whether we get AI right or not, it seems likely that advanced AI systems will become heavily integrated with the economy, such that it might become hard to distinguish them. 
It therefore seems reasonable to imagine advanced AI systems in analogy with the economy, and to use what we know about economic dynamics to reason about dynamics which might shape those systems. In the modern economy, specialised services are provided by a range of entities, mostly companies and governments. We don't see one giant global monopoly providing all services. Thinking analogically about advanced AI systems, the CAIS model expects an array of specialised AI services working on decomposed tasks, rather than a single generally superintelligent agent (a global monopoly in the base metaphor). This can be further unpacked. The reason that the human economy isn't structured as a global monopoly is that specialisation is efficient. It's often cheaper for an organisation to outsource a particular service, than to develop that capability in house: imagine family run businesses trying to manufacture their own smartphones from scratch, or big companies all hiring software engineers to develop their own internet search engines. So we end up with a range of different companies providing different services. Note that specialisation isn't always the most efficient thing, because of economies of scope: cases where the unit cost of producing something decreases as the variety of products increases. Maybe you've already built a petrol station to sell petrol, and selling snacks too is cheap at the margin. Or you have a factory which makes women's shoes, and starting a men's line is pretty efficient. Here, there are joint costs which get shared across the different products, and so you get economies of scope. But economies of scope don't seem to apply across the whole economy - otherwise we'd see the giant global monopoly. 
(Part of the reason here is that coordination costs increase with the size of an organisation, such that decentralised ways of sharing information like price signals are more efficient than centralised information transfer.)[4] In the advanced AI analogy, Drexler argues that it will be more efficient for a given AI service to coordinate with...
Inflows into alternative investments set new records in 2020 and 2021, in a variety of segments and "wrappers." On alternative investment platform CAIS, one of the highest-growth areas in the past few years has been the structured notes segment. CAIS's Marc Premselaar and Mariner Wealth Advisors' Brett Kunshek join the show where we discuss trends in structured notes investing, and in the broader alternatives industry. Show notes: https://altsdb.com/2022/11/cais-071/
Today, October 26, Milton Nascimento turns 80. We celebrate by revisiting his extraordinary 'Clube da esquina', the album he signed together with Lô Borges, released 50 years ago in Brazil: 'Tudo que você podia ser', 'Cais', 'O trem azul', 'Saídas e bandeiras nº1', 'Nuvem cigana', 'Cravo e canela', 'Dos cruces', 'Um girassol da cor do seu cabelo', 'San Vicente', 'Estrelas', 'Clube da esquina nº2', 'Paisagem da janela', 'Me deixa em paz', 'Os povos', 'Lilia', 'Trem de doido' and 'Nada será como antes'.
It's Intersex Awareness Day, and we are thrilled and honored to at long last have on our first ever out intersex guest, the brilliant Val Hill! Val (they/she) is a queer intersex community advocate who facilitates the group Club Intersex for the L.A. LGBT Center, so you would never in a million years guess that, pre-pandemic, she didn't even *know* the umbrella term "intersex!" Val shares how a routine sports physical at the age of fifteen led to them learning that they have complete androgen insensitivity syndrome (CAIS), which resulted in their being subjected to a slew of traumatizing genital exams throughout their teenage years. It wasn't until decades later during lockdown that Val finally discovered the robust intersex community that exists online, and fully embraced the beautiful "liminal space" that intersex folks inhabit. We learned a ton in this episode, and we're so grateful to Val for having this truly expansive conversation with us!Follow Val on Instagram at @lookwhatvalposted, and check out the L.A. LGBT Center's Club Intersex group at @clubintersex! Also mentioned in this episode...Intersex activist Tatenda Ngwaru's GoFundMe: https://www.gofundme.com/f/safety-and-wellbeing-for-tatenda-ngwaruAdditional intersex folks and orgs to follow on Instagram: @intersexjusticeproject / @interconnect_support / @saifaemerges / @pidgeon / @xoxy_alicia / @red.moonproject
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: They gave LLMs access to physics simulators, published by ryan b on October 17, 2022 on LessWrong. Over at Google, large language models have been plugged into physics simulators to help them share a world model with their human interlocutors, resulting in big performance gains. They call it Mind's Eye. This is how the authors describe the work: Correct and complete understanding of properties and interactions in the physical world is not only essential to achieve human-level reasoning (Lake et al., 2017), but also fundamental to build a general-purpose embodied intelligence (Huang et al., 2022). In this work, we investigate to what extent current LMs understand the basic rules and principles of the physical world, and describe how to ground their reasoning with the aid of simulation. Our contributions are three-fold: We propose a new multi-task physics alignment dataset, UTOPIA, whose aim is to benchmark how well current LMs can understand and reason over some basic laws of physics (§2). The dataset contains 39 sub-tasks covering six common scenes that involve understanding basic principles of physics (e.g., conservation of momentum in elastic collisions), and all the ground-truth answers are automatically generated by a physics engine. We find that current large-scale LMs are still quite limited on many basic physics-related questions (24% accuracy of GPT-3 175B in zero-shot, and 38.2% in few-shot). We explore a paradigm that adds physics simulation to the LM reasoning pipeline (§3) to make the reasoning grounded within the physical world. Specifically, we first use a model to transform the given text-form question into rendering code, and then run the corresponding simulation on a physics engine (i.e., MuJoCo (Todorov et al., 2012)). 
Finally, we append the simulation results to the input prompts of LMs during inference. Our method can serve as a plug-and-play framework that works with any LM and requires neither handcrafted prompts nor costly fine-tuning. We systematically evaluate the performance of popular LMs in different sizes on UTOPIA before and after augmentation by Mind's Eye, and compare the augmented performance with many existing approaches (§4.2). We find Mind's Eye outperforms other methods by a large margin in both zero-shot and few-shot settings. More importantly, Mind's Eye is also effective for small LMs, and the performance with small LMs can be on par with, or even exceed, that of 100× larger vanilla LMs. This seems like a direct step down the CAIS path of development. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
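The pipeline the authors describe (question → generated simulation code → physics engine → simulation result appended to the prompt) can be sketched as follows. This is a minimal illustration only: `question_to_sim_code` and `run_physics_engine` are hypothetical stand-ins for the paper's text-to-code model and MuJoCo simulation, hard-coded here so the plug-and-play control flow is runnable.

```python
def question_to_sim_code(question: str) -> str:
    """Stand-in for the text-to-code model that emits simulation/rendering code."""
    # A real implementation would call a trained LM here.
    return f"simulate(scene_from({question!r}))"

def run_physics_engine(sim_code: str) -> str:
    """Stand-in for executing generated code on a physics engine (e.g. MuJoCo)."""
    # Hard-coded result purely for illustration.
    return "Simulation result: total momentum is conserved."

def augment_prompt(question: str) -> str:
    """Plug-and-play step: append the simulation result to the LM's input prompt."""
    sim_code = question_to_sim_code(question)
    sim_result = run_physics_engine(sim_code)
    return f"{question}\n{sim_result}\nAnswer:"

prompt = augment_prompt("Two carts collide elastically; is momentum conserved?")
```

Because the augmentation happens entirely in the prompt, any downstream LM can consume `prompt` unchanged, which is what makes the approach fine-tuning-free.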
CAIS is a truly open marketplace for alternative investments, where financial advisors and asset managers can engage and transact directly on a massive scale. Advisors do not have the same access to alternative investments as large institutions. Without that access, advisors have fewer tools to capitalize on opportunities or withstand market downturns. This unlevel playing field puts financial advisors at a meaningful disadvantage when building and protecting wealth. CAIS is aiming to change that.
On episode 282 of the podcast BlockHash: Exploring the Blockchain, Brandon Zemp chats with James and Cais, the CEO and CPO for Obscuro. Obscuro is a company building an L2 that brings privacy to Ethereum. The team comes from R3 and built Corda, which is the leading permissioned blockchain in the finance sector. Obscuro, as a Layer 2, massively scales Ethereum with faster, cheaper transactions, while inheriting all its security and ecosystem. The Podcast is available on…
Should you allocate to alternative investments? For decades, advisors to the wealthiest strata of investors offered access to investments beyond the traditional stocks, bonds, and real estate. And for decades, there were significant barriers to entry in the alternative space that made it difficult for independent advisors to access the best money managers in the alt space. That's now changed. Firms like CAIS have democratized access to some of the best money managers available and streamlined what has historically been a time-consuming process to “complete the paperwork” when using alternative managers. In today's episode, we explore the alt space and how independent financial advisors can access an asset class that was previously reserved for the big banks and wirehouses and their well-heeled clients. Guest: Matt Brown, Founder of CAIS, the leading alternative investment platform for financial advisors who seek improved access to and education about alternative investment funds and products.
Matt and I discuss:
Why advisors should consider adding alternative investments to their allocations.
The biggest reasons why advisors have been reluctant to allocate to alternatives.
The most popular alternative investment asset classes.
The profile of a typical client who is going to be receptive to an allocation to alternative investments.
Whether advisors use a buy-and-hold strategy with their alternative assets or actively switch managers over time.
What role CAIS plays in facilitating advisors making an allocation to alternative investments.
What role education plays in expanding access to alternative investments.
The best investment Matt ever made.
Resources: Attend the CAIS Alternative Investment Summit on Oct 17 – 19, 2022 in Beverly Hills, CA. Learn more here. Matt Brown on LinkedIn
Matt Brown is the CEO of CAIS, the leading investment platform that connects independent financial advisors with managers in alternative strategies. Matt was a guest on the show last year discussing his path and the business, and that conversation is replayed in the feed. Since that time, CAIS and the broader movement of private wealth into alternatives have accelerated rapidly. I caught up with Matt to get his perspective on the tidal wave of capital coming into the space ahead of CAIS' inaugural Alternative Investment Summit, a three-day event bringing together senior leaders from the alternative asset management and independent financial advisor communities on October 17-19 in Los Angeles. Our conversation covers the size of the private wealth market, key drivers of the adoption of alternatives, the characteristics of managers and products that receive flows, advice for those who would like to participate, and where capital is flowing today. We then turn to CAIS' company strategy, including deploying a substantial capital raise, upgrading its technology, expanding the team, and building custom solutions for advisors. We close with Matt's views on the future of CAIS. Learn More Follow Ted on Twitter at @tseides or LinkedIn Subscribe to the mailing list Access Transcript with Premium Membership
Matt Brown is the Founder CEO of CAIS, a leading alternative investment platform on which thousands of financial advisors have invested over $12 billion in alternatives across private equity, private credit, hedge funds, and real estate. Our conversation covers Matt's background as both a financial advisor and distributor of alternatives that collectively led to the idea behind CAIS. We discuss the development of a two-sided platform, structural features for both financial advisors and managers, and challenges along the way. We then turn to the wave of capital coming from this community and what it means for investors. Lastly, we discuss Matt's perspective on leadership and the future of CAIS. Learn More Subscribe: Apple | Spotify | Google Follow Ted on Twitter at @tseides or LinkedIn Subscribe to the mailing list Read the transcripts
K. Eric Drexler is a senior research fellow at Oxford University and widely regarded as the father of nanotechnology. He has authored several seminal texts, including Engines of Creation and Radical Abundance. His previous research focused on scalable atomically precise manufacturing (APM) for the purpose of manipulating matter from the bottom up. He is currently examining the potential applications of AI-enabled automation in AI research, and continues to challenge conventional approaches to AI by proposing a pluri-functional, decentralized intelligent system: Comprehensive AI Services (CAIS). CAIS rejects the notion that superintelligence must be modeled after the human mind, and instead composes broad AI systems out of many diverse, narrower-purpose components. Dr. Drexler has worked to democratize an understanding of nanotechnology and is directing a software development project based on top-down progressive refinement, adopting simulative environments and encouraging users to envision multiple futures for advanced systems.
Oyster Stew - A Broth of Financial Services Commentary and Insights
Consolidated Audit Trail reporting has reached a new regulatory stage, with upcoming Customer and Account Information System (CAIS) reporting requirements and interim obligations. The deadline for achieving compliance with the requirements is December 12, 2022, only three months away. In today's podcast, Oyster Consulting's experts share what the CAIS reporting requirements are, issues we are seeing from our own clients as they go down this path, and what firms should be doing to meet the deadlines successfully.
Podcasting is a wonderful way to showcase the talents, insights, and creativity of the amazing people that populate our world. In that spirit, I'm thrilled to announce two special series of episodes that I'm launching this fall. The first series of episodes is in partnership with CAIS, the leading alternative investment platform for financial advisors. To explain what we're doing, I asked industry star and CAIS CMO Abby Salameh to join me today to discuss this special series of 10 podcast episodes and how they can help you enhance the investment side of your practice. (Teaser: you can register for the Summit Abby mentions by clicking here: https://caismarketing.com/summit2022) The second series is something that I don't think has ever been done before in the financial advisory profession. So please listen to today's episode as I explain what these "adventurous" shows are about and how you might be able to be a guest on one of them. I hope you enjoy these upcoming shows as much as I love doing them. Please spread the word about the podcast and rate it and write a review on your favorite social platform. Thank you!
ontargetpodcast.ca
Is music the soundtrack to your life but you feel like you can't find anything to ignite your soul? Join Mod Marty as he delves into the sounds and complexities of Mod and Soul music. If you need help discovering new sounds and need a little more variety in your musical diet, this is the podcast for you!
-----------------------------------------------
The Playlist Is:
"Topsy 65" - Hal Blaine - Dunhill
"Tiny Tim" - LaVern Baker - Atlantic
"Mama's Got A Bag Of Her Own" - Anna King - End
"The Shift" - Dave "Baby" Cortez - Clock
"Gotta Keep Rolling" - Rosco Gordon - Old Town
"Little Miss Soul" - The Lovettes - Carnival
"Hurricane" - Dave "Baby" Cortez - Clock
"Mighty Fine Girl" - The Redcaps - Decca
"Don't Bring Me Down" - The Pretty Things - Fontana
"It's All Right" - The Kinks - Pye
"You Don't Mean Me No Good" - The Jelly Beans - Eskee
"Without Your Heart" - St. George & Tana - Kapp
"You're Good Enough For Me" - Spyder Turner - MGM
"Cigarette Ashes" - Jimmy Conwell - Mirwood
"Back Up Baby" - The Sophisticates - Sonny
"A Little Bit Hurt" - Julien Covey & The Machine - Island
"Come On And Dance With Me" - Eddie Quinteros - Brent
"Yesterday, Today And Tomorrow" - The Preachers - Columbus
"She's Coming Home" - The Zombies - Parrot
"You're Mean" - B.B. King - Bluesway
"Louisiana Woman" - Lightnin' Hopkins - Jewel
"Cry Me A River" - The Sophisticates - Sonny
Will it finally happen? With more than 180,000 square meters, Cais Mauá may at last be revitalized. The concession notice has been published, and the auction is scheduled for September 26 at the São Paulo stock exchange. In this episode, Léo Saballa Jr and Paulo Germano talk with Renato Dal Pian, an architect who is part of the Revitaliza consortium, and Flávio Kiefer, architect and professor at PUCRS.
BNDES and Revitaliza discuss the launch of the concession notice for the revitalization of Cais Mauá - 08/18/2022, by Rádio Gaúcha
The discussion of proposals for education took center stage in the Rádio Gaúcha debate held today in Porto Alegre. The auction for the concession of Cais Mauá to the private sector will take place on September 26 at B3, the São Paulo stock exchange. The body of Daner Hernandez Silva, 45, who had been missing since last night's storm, was found this morning near the Pôr do Sol Amphitheater in Porto Alegre. The company Boom Sabor que Faz Bem will adopt the stretch of the Guaíba waterfront in front of the Iberê Camargo Foundation, in the South Zone. Registration for the Companhia de Processamento de Dados de Porto Alegre civil-service exam has been extended until noon tomorrow. More news at gzh.com.br.
If you've never heard of complete androgen insensitivity syndrome, you're not alone. It's one of 44 intersex variations which, although they affect up to 2 percent of the population, are rarely talked about. Jackie Green wants to change that. As a competitive runner (and Tina's teammate) at Ferris State University, she kept quiet about having CAIS. But she came out publicly five years ago and now uses her platform as the reigning Mrs. America to advocate for intersex youth. For complete show notes and links, visit our website at runningforreal.com/episode310. Thank you to Athletic Greens and Legacy of Speed for sponsoring this episode, and check out Mile 20 Mental Training AG1 is a simple and easy way to get 75 vitamins, minerals, and whole food source ingredients to help strengthen your immune system. It's simple to make and it tastes good! Go here to get a FREE year's supply of Vitamin D and five FREE travel packs with your subscription. Listen to the Pushkin Industries x Tracksmith new podcast, Legacy of Speed. When two Black sprinters raised their fists in protest at the 1968 Olympic Games, it shook the world. More than 50 years later, the ripple effects of their activism are still felt. In this new series from Pushkin Industries, get to know the runners who took a stand, and the coaches and mentors who helped make them fast enough — and brave enough — to change the world. Hosted by Malcolm Gladwell. Find Legacy of Speed here or on your favorite podcast player. Go here and use the code TINA15 to get free shipping at Tracksmith. With your purchase, Tracksmith will donate 5% to my favorite non profit, Runners for Public Lands! Win your mental battle and you'll win your race! Runners often prepare for the physical challenge of the race, but leave the mental challenge to chance. Mile 20 Mental Training will teach you the strategies and techniques to believe in yourself and achieve your goals on and off the course. 
Registration for this season begins August 8th, and will be open for just two weeks! Go here to learn more. Thanks for listening! We know there are so many podcasts you could listen to, and we are honored you have chosen Running For Real. If you appreciate the work that we do, here are a few things you can do to support us: Take a screenshot of the episode, and share it with your friends, family, and community on social media, especially if you feel that the topic will resonate with them. Be sure to tag us on Twitter, Facebook, Instagram Leave an honest review on iTunes or your favorite podcast player. Your ratings and reviews will really help us grow and reach new people. Not sure how to leave a review or subscribe? You can find out here. "Thank you" to Jackie. We look forward to hearing your thoughts on the show.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: QNR Prospects, published by PeterMcCluskey on July 16, 2022 on LessWrong. Approximately a book review: Eric Drexler's QNR paper. [Epistemic status: very much pushing the limits of my understanding. I've likely made several times as many mistakes as in my average blog post. I want to devote more time to understanding these topics, but it's taken me months to produce this much, and if I delayed this in hopes of producing something better, who knows when I'd be ready.] This nearly-a-book elaborates on his CAIS paper (mainly chapters 37 through 39), describing a path for AI capability research that enables the CAIS approach to remain competitive as capabilities exceed human levels. AI research has been split between symbolic and connectionist camps for as long as I can remember. Drexler says it's time to combine those approaches to produce systems which are more powerful than either approach can be by itself. He suggests a general framework for how to usefully combine neural networks and symbolic AI. It's built around structures that combine natural language words with neural representations of what those words mean. Drexler wrote this mainly for AI researchers. I will attempt to explain it to a slightly broader audience. Components What are the key features that make this more powerful than GPT-3 alone, or natural language alone? QNR extends natural language by incorporating features of deep learning and mathematical structures used in symbolic AI. Words are associated with neural representations. Those representations are learned via a process that focuses on learning a single concept at a time with at least GPT-3-level ability to understand the concept. BERT exemplifies how to do this. Words can be related to other words via graphs (such as syntax trees). 
Words, or word-like concepts, can be created via compositions of simpler concepts, corresponding to phrases, sentences, books, and entities that we have not yet conceived. QNR feels much closer than well-known AIs to how I store concepts within my mind, as opposed to the stripped-down version that I'm using here to help you reconstruct some of that in your mind. Drexler contrasts QNR with foundation models, but I don't find "foundation models" to be a clear enough concept to be of much value. Importance? I've noticed several people updating this year toward earlier AGI timelines, based on an increasing number of results demonstrating what look to me like marginally more general-purpose intelligence. [I wrote this before Gato was announced, and have not yet updated on that.] I've also updated toward somewhat faster timelines, but I'm mainly reacting to Drexler's vision of how to encode knowledge in a more general-purpose, scalable form. I expect that simply scaling up GPT-3 will not generate human-level generality with a plausible amount of compute, possibly just because it relearns the basics from scratch each time a new version is tried. With QNR, new services would build on knowledge that is represented in much more sophisticated forms than raw text. Effects on AI Risks The QNR approach focuses on enhancing knowledge corpora, not fancier algorithms. It enables the AI industry to create more value, possibly at an accelerating rate, without making software any more agent-like. So it could in principle eliminate the need for risky approaches to AI. However, that's not very reassuring by itself, as the QNR approach is likely to accelerate the learning abilities of agenty AIs if those are built. Much depends on whether there are researchers who want to pursue more agenty approaches. I can imagine QNR enabling more equality among leading AIs if a QNR corpus is widely available. A QNR-like approach will alter approaches to interpretability. 
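The core data structure described above (symbols paired with neural embeddings, linked by labeled graph edges, and composable into larger word-like units) can be sketched as a toy illustration. All names and the averaging rule here are invented for illustration; the paper does not prescribe this representation.

```python
from dataclasses import dataclass, field

@dataclass
class QNRNode:
    symbol: str                 # natural-language label for the concept
    embedding: list[float]      # learned neural representation of its meaning
    edges: dict = field(default_factory=dict)  # labeled relations to other nodes

def compose(label: str, parts: list[QNRNode]) -> QNRNode:
    """Build a phrase-level node from simpler concepts.

    Averaging embeddings is a crude stand-in for whatever learned
    composition function a real QNR system would use.
    """
    dim = len(parts[0].embedding)
    avg = [sum(p.embedding[i] for p in parts) / len(parts) for i in range(dim)]
    return QNRNode(label, avg, {"part": parts})

red = QNRNode("red", [1.0, 0.0])
ball = QNRNode("ball", [0.0, 1.0])
red_ball = compose("red ball", [red, ball])
```

The point of the sketch is that each unit carries both a symbolic handle (usable by graph algorithms and symbolic AI) and a dense vector (usable by neural models), which is the hybrid Drexler is arguing for.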
The widely publicized deep learning results such as GPT-3 create enormous inscrutable fl...
Milton Nascimento celebrates 60 years of concerts in 2022. We listen to the concert he gave in Rio de Janeiro in 2012 for the 50th anniversary: 'Cais', 'Veracruz' (with Wagner Tiso), 'Clube da esquina nº2' and 'Nuvem cigana' (with Lô Borges), 'Morro velho', 'O trem azul' and 'Um girassol da cor do seu cabelo' (with Lô Borges), 'Nos bailes da vida', 'Travessia', 'Nada será como antes' and 'Maria Maria'. Listen to the audio
In this episode, we speak with Matt Brown, Founder, CEO and Chairman of CAIS, the leading alternative investment platform for financial advisors who seek improved access to and education about alternative investment funds and products. CAIS is backed by Apollo, Franklin Templeton, and Motive Partners, among others. Matt has spent over 30 years at the intersection of wealth management, alternative investments, and platform design. He began his career as a financial advisor at Shearson Lehman Brothers and Smith Barney. I am your host RJ Lumba. We hope you enjoy the show.
Milton Nascimento bids farewell to the stage, after sixty years of concerts, with a tour of Brazil, Europe, and the United States that he has called 'The Last Music Session' and which stops in Barcelona on the 18th of this month: 'Vidro e corte' (with Pat Metheny), 'Encontros e despedidas', 'Travessia', 'Cais' and 'Veracruz'. Pat Metheny presents his latest album, 'Side-eye New York City', starting June 15 in Zaragoza, Vigo, Madrid, Seville, Barcelona, and Valencia: 'Better days ahead', 'Timeline', 'Bright size life' and 'Lodger'. Listen to the audio
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Alignment of Complex Systems Research Group, published by Jan Kulveit on June 4, 2022 on The AI Alignment Forum. tl;dr: We're a new alignment research group based at Charles University, Prague. If you're interested in conceptual work on agency and the intersection of complex systems and AI alignment, we want to hear from you. Ideal for those who prefer an academic setting in Europe. What we're working on Start with the idea of an "alignment interface": the boundary between two systems with different goals: As others have pointed out, there's a whole new alignment problem at each interface. Existing work often focuses on one interface, bracketing out the rest of the world. e.g. the AI governance bracket The standard single-interface approach assumes that the problems at each alignment interface are uncoupled (or at most weakly coupled). All other interfaces are bracketed out. A typical example of this would be a line of work oriented toward aligning the first powerful AGI system with its creators, assuming the AGI will solve the other alignment problems (e.g. “politics”) well. Against this, we put significant probability mass on the alignment problems interacting strongly. For instance, we would expect proposals such as Truthful AI: Developing and governing AI that does not lie to interact strongly with problems at multiple interfaces. 
Or: alignment of a narrow AI system which would be good at modelling humans and at persuasion would likely interact strongly with politics and geopolitics. Overall, when we take this frame, it often highlights different problems than the single-interface agendas, or leads to a different emphasis when thinking about similar problems. (The nearest neighbours of this approach are the “multi-multi” programme of Critch and Krueger, parts of Eric Drexler's CAIS, parts of John Wentworth's approach to understanding agency, and possibly this.) If you broadly agree with the above, you might ask “That's nice – but what do you work on, specifically?” In this short intro, we'll illustrate with three central examples. We're planning longer writeups in coming months. Hierarchical agency Many systems have several levels which are sensibly described as an agent. For instance, a company and its employees can usually be well-modelled as agents. Similarly with social movements and their followers, or countries and their politicians. Hierarchical agency: while the focus of e.g. game theory is on "horizontal" relations (violet), our focus is on "vertical" relations, between composite agents and their parts. So situations where agents are composed of other agents are ubiquitous. A large amount of math describes the relations between agents at the same level of analysis: this is almost all of game theory. Thanks to these, we can reason about cooperation, defection, threats, correlated equilibria, and many other concepts more clearly. Call this tradition "horizontal game theory". We don't have a similarly good formalism for the relationship between a composite agent and its parts (superagent and subagent). 
Of course we can think about these relationships informally: for example, if I say “this group is turning its followers into cultists”, we can parse this as a superagent modifying and exploiting its constituents in a way which makes them less “agenty”, and the composite agent "more agenty". Or we can talk about "vertical conflicts" between for example a specific team in a company, and the company as a whole. Here, both structures are “superagents” with respect to individual humans, and one of the resources they fight over is the loyalty of individual humans. What we want is a formalism good for thinking about both upward and downward intentionality. Existing formalisms like social choice theory often focus on just one direction - for example, the...
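The superagent/subagent structure discussed above can be made concrete with a toy model: a composite agent whose preferences aggregate those of its members, so that vertical tension (member vs. composite) becomes visible. The class names and the averaging rule are illustrative assumptions, not a formalism proposed by the group.

```python
class Agent:
    """An agent is identified by a utility function over outcome labels."""
    def __init__(self, name, utility):
        self.name = name
        self.utility = utility

class SuperAgent(Agent):
    """A composite agent whose utility aggregates its members' utilities.

    Averaging is one crude aggregation choice; the open problem described
    in the text is precisely that we lack a principled vertical formalism.
    """
    def __init__(self, name, members):
        self.members = members
        super().__init__(
            name,
            lambda o: sum(m.utility(o) for m in members) / len(members),
        )

alice = Agent("alice", lambda o: {"A": 1.0, "B": 0.0}[o])
bob = Agent("bob", lambda o: {"A": 0.0, "B": 1.0}[o])
team = SuperAgent("team", [alice, bob])
# Vertical tension: each member strongly prefers one outcome,
# while the composite is indifferent between them.
```

Even this toy shows why "horizontal game theory" is not enough: the interesting dynamics are between `team` and its members, not between `alice` and `bob`.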
As a young liberal arts graduate, Matt Brown had doubts about interviewing for a financial advisor role. Then he got the job that launched his journey from advisor to founder of CAIS, a fintech “unicorn” with a mission to democratize access to alternative investments.
Hear Matt's views on:
Becoming a financial advisor out of college and recognizing an unlevel playing field in alternative investments.
Bringing alternatives technology, accessibility, and education to financial advisors.
Raising capital for CAIS during the pandemic.
Nurturing company culture in a remote working environment.
Entrepreneurship, including tough lessons, working with people who inspire you, and leading with confidence.
Follow Matt Brown: Twitter, LinkedIn
Follow CAIS: LinkedIn, Twitter, Facebook