In this week's episode Greg and Patrick explore alternative parameterizations of the SEM-based latent curve model to capture various forms of nonlinearity, some that are approximations and others that are exact. Along the way they also discuss Swifties, remastering your life, bull testicles, the world's worst RA job, Yerkes-Dodson law, show a little ankle, the St. Louis Arch, bachelorette parties, deck screws, DIY-ing a model, being a little too quiet, complete nonsense, blasting your pecs, haters gonna hate, the worst day ever, Frankenspline's monster, being left off at the third floor, and looking for a new cohost. Stay in contact with Quantitude! Twitter: @quantitudepod Web page: quantitudepod.org Merch: redbubble.com
In this episode of Data Driven, hosts Andy Leonard and Frank La Vigne are joined by Chris McDermott, VP of Engineering at Wallaroo.AI. Together, they explore the challenges and advancements in the ever-evolving world of machine learning and artificial intelligence. From the importance of ongoing care for machine learning models to the rise of edge computing and decentralized networks, they touch on the critical need for flexibility and data privacy. Chris shares his insights on the technical challenges of AI and ML adoption, as well as his unique career journey. They also discuss the evolution of technology and the potential future impact of these innovations. Join us for a deep dive into the world of AI, technology, and the future of machine learning with Chris McDermott on this episode of Data Driven. Show Notes: 00:00 Exploring AI, data science, and data engineering. 06:20 Training and inferring are different stages. 08:12 Legacy AI doesn't require neural networks or GPUs. 12:09 Machine learning models require consistent care and monitoring. 15:10 MLOps merges skills, breaks down silos, collaborates. 16:47 Prefer MLOps to avoid namespace collision. DevOps parallels original Star Wars plot. 20:27 Internet-scale operations require automation and resilience. 24:13 Challenges of integrating AI into business processes. 28:03 New push for edge computing in technology industry. 32:05 Edge technology critical, discussed in government tech symposium. 34:50 Navigating from SendGrid to Twilio simplified processes. 36:15 First foray into data, growing knowledge. 39:33 Technology evolves, builds complexity over time. 44:41 Book recommendation: "Seeing Like a State" by James C. Scott discusses legibility and centralization of power in society. 46:28 Predictable tree farming fails due to ecosystem complexity. Speaker Bio: Chris McDermott is a software engineer and entrepreneur who is passionate about creating products that make machine learning more accessible and manageable for users. 
His focus is on developing a platform that allows for easy deployment and management of machine learning models using any framework and on any architecture or hardware. He believes that current solutions in the market force users into a specific platform, and he aims to provide a more flexible and efficient alternative. With a strong belief in the potential of his product, Chris is dedicated to making machine learning more accessible and user-friendly for people across various industries.
Guest Percy Liang is an authority on AI who says that we are undergoing a paradigm shift in AI powered by foundation models, which are general-purpose models trained at immense scale, such as ChatGPT. In this episode of Stanford Engineering's The Future of Everything podcast, Liang tells host Russ Altman about how foundation models are built, how to evaluate them, and the growing concerns with lack of openness and transparency. Episode Transcripts >>> The Future of Everything Website. Connect with Russ >>> Threads or Twitter/X. Connect with School of Engineering >>> Twitter/X. Chapters: (00:00:00) Introduction: Host Russ Altman introduces Percy Liang, who runs the Stanford Center on Foundation Models. (00:02:26) Defining Foundation Models: Percy Liang explains the concept of foundation models and the paradigm shift they represent. (00:04:22) How are Foundation Models Built & Trained?: Explanation of the training data sources and the scale of training data: training on trillions of words. Details on the network architecture, parameters, and the objective function. (00:10:36) Context Length & Predictive Capabilities: Discussion on context length and its role in predictions. Examples illustrating the influence of context length on predictive accuracy. (00:12:28) Understanding Hallucination: Percy Liang explains how foundation models "hallucinate", and the need for both truth and creative tasks, which requires "lying". (00:15:19) Alignment and Reinforcement in Training: The role of alignment and reinforcement learning from human feedback in controlling model outputs. (00:18:14) Evaluating Foundation Models: The shift from task-specific evaluations to comprehensive model evaluations, the introduction of HELM, and the challenges in evaluating these models. (00:25:09) Foundation Models Transparency Index: Percy Liang details the Foundation Models Transparency Index, the initial results, and reactions by the companies evaluated by it. (00:29:42) Open vs. Closed AI Models: Benefits & Risks: The spectrum between open and closed AI models, benefits and security impacts.
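The chapter on how foundation models are built mentions an objective function without spelling it out; in most foundation models this is next-token prediction, trained with cross-entropy against the true next token. A minimal sketch with a toy four-token vocabulary and made-up logits (an illustration of the standard objective, not Liang's specific setup):

```python
import math

def next_token_loss(logits, target_id):
    """Cross-entropy loss for a single next-token prediction step."""
    # Softmax over the vocabulary (subtract the max for numerical stability).
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    prob_target = exps[target_id] / sum(exps)
    # Training minimizes the negative log-likelihood of the true next token.
    return -math.log(prob_target)

# Toy 4-token vocabulary; this model's logits strongly favor token 2.
logits = [0.1, 0.2, 3.0, -1.0]
low_loss = next_token_loss(logits, 2)   # model was right: small loss
high_loss = next_token_loss(logits, 3)  # model was wrong: large loss
```

Summed over the trillions of words mentioned in the chapter list, gradients of exactly this kind of quantity are what shape the model's parameters.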
#TheFreightCoach Morning Show, The TOP Transportation Morning Show, is LIVE every weekday at 10:30 AM CST to break down THREE transportation industry headlines! Mark your calendars! Check out my YouTube Channel for further industry insights! https://www.youtube.com/channel/UCjrL70IEnCfDkNaiYMar3jw Make sure to subscribe and share! They are the new wave for freight brokers and freight brokerages to separate themselves from the competition! Thank you to my sponsor: https://www.greenscreens.ai/thefreightcoach Ditch your carrier packet, drive more carrier sales, and get better load coverage with seamless digital onboarding, TMS integration, and smart load coverage. Visit: https://brokercarrier.com/
Life insurance agents have two main tracks to run on: face-to-face or virtual. Which one is right? Well, like most things in life...it depends. This podcast outlines the pros and cons of each so you can determine which model best fits your circumstances. The bottom line: they both work. What matters most is your activity toward whichever model you choose. Here's the video version of this podcast: https://youtu.be/diCSo05uuWI If you like this, you may enjoy my book. To protect your family, contact us. To change your life, join our team.
OpenAI's leadership has taken us all on a rollercoaster so it's great timing for another host-only episode. This week Sarah and Elad get into what has been going on at OpenAI and what the turbulent leadership changes tell us about the importance of good intent and good incentives when building these influential companies. They also talk about innovative products coming out of Pika Labs, why people are moving away from diffusion models to MLMs, and how, in AI investing, the ASP is the opportunity. Sign up for new podcasts every week. Email feedback to firstname.lastname@example.org Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @clarashih Show Notes: (0:00) Recapping the OpenAI saga (9:56) AI video products (16:14) Moving from Diffusion Models to MLMs (19:47) The beneficial margins of AI investing
Each local church has a unique personality that reflects its leadership. In context, the word church refers to those who have been attending for a while and are actively participating in the environments and equipping ministries that the leadership provides. Nominal Christians or inconsistent attendees are not part of the demographic in view here. Because of the leaders' influence, assessing the church's model for ministry is vital because no two leaders are the same, making no two churches the same. Let's examine six standard church models by looking at their upsides and downsides. Read Here: https://lifeovercoffee.com/what-kind-of-church-do-you-attend-here-are-six-models/ Will you help us to continue providing free content for everyone? You can become a supporting member here https://lifeovercoffee.com/join/, or you can make a one-time or recurring donation here https://lifeovercoffee.com/donate/.
How do we put large language models to work? Carl and Richard talk to Vishwas Lele about his work using LLMs with his customers. Vishwas talks about focusing on specific data sets for building LLMs and how size matters - things are simple when the source data is small, but as it grows, you need more complex tools to be able to allow the LLM to perform. Lots of cautionary tales and ideas on how to get great results from these new automation tools!
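Vishwas's point that things are simple when the source data is small but need more complex tools as it grows is usually addressed with retrieval: index the documents, then pull only the most relevant chunks into the LLM's prompt. A toy sketch of that pattern, using bag-of-words cosine similarity as a stand-in for the embedding search a production system would use (the chunks and query are invented examples, not from the episode):

```python
import math
from collections import Counter

def vectorize(text):
    # Toy stand-in for an embedding model: word-count vectors.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(chunks, query, k=1):
    """Return the k chunks most similar to the query."""
    qv = vectorize(query)
    ranked = sorted(chunks, key=lambda c: cosine(vectorize(c), qv), reverse=True)
    return ranked[:k]

chunks = [
    "invoices are processed by the billing service each night",
    "the auth service issues tokens valid for one hour",
    "deployment uses blue-green rollouts behind the load balancer",
]
context = retrieve(chunks, "how long are auth tokens valid?")
# Only `context`, not the whole corpus, gets placed into the LLM prompt.
```

Real systems swap the word-count vectors for learned embeddings and a vector store, but the shape of the pipeline is the same.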
This is a recording of a Niche webinar performed on the 2023 Slate Stage in which Sara Jane Musk from Bellarmine University joins Niche's Brooke Urban and Will Patch to talk about using Slate's Ping service for scoring, providing more relevant comm flows, and how Bellarmine is using the service along with Niche Direct Admissions. In the Enrollment Insights Podcast, you'll hear about novel solutions to problems, ways to make processes better for students, and the questions that spark internal reflection and end up changing entire processes.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is scheming more likely in models trained to have long-term goals? (Sections 126.96.36.199-188.8.131.52 of "Scheming AIs"), published by Joe Carlsmith on November 30, 2023 on The AI Alignment Forum. This is Sections 184.108.40.206-220.127.116.11 of my report "Scheming AIs: Will AIs fake alignment during training in order to get power?". There's also a summary of the full report here (audio here). The summary covers most of the main points and technical terms, and I'm hoping that it will provide much of the context necessary to understand individual sections of the report on their own. Audio version of this section here, or search "Joe Carlsmith Audio" on your podcast app. What if you intentionally train models to have long-term goals? In my discussion of beyond-episode goals thus far, I haven't been attending very directly to the length of the episode, or to whether the humans are setting up training specifically in order to incentivize the AI to learn to accomplish long-horizon tasks. Do those factors make a difference to the probability that the AI ends up with the sort of beyond-episode goals necessary for scheming? Yes, I think they do. But let's distinguish between two cases, namely: Training the model on long (but not: indefinitely long) episodes, and Trying to use short episodes to create a model that optimizes over long (perhaps: indefinitely long) time horizons. I'll look at each in turn. Training the model on long episodes: In the first case, we are specifically training our AI using fairly long episodes - say, for example, a full calendar month. 
That is: in training, in response to an action at t1, the AI receives gradients that causally depend on the consequences of its action a full month after t1, in a manner that directly punishes the model for ignoring those consequences in choosing actions at t1. Now, importantly, as I discussed in the section on "non-schemers with schemer-like traits," misaligned non-schemers with longer episodes will generally start to look more and more like schemers. Thus, for example, a reward-on-the-episode seeker, here, would have an incentive to support/participate in efforts to seize control of the reward process that will pay off within a month. But also, importantly: a month is still different from, for example, a trillion years. That is, training a model on longer episodes doesn't mean you are directly pressuring it to care, for example, about the state of distant galaxies in the year five trillion. Indeed, on my definition of the "incentivized episode," no earthly training process can directly punish a model for failing to care on such a temporal scope, because no gradients the model receives can depend (causally) on what happens over such timescales. And of course, absent training-gaming, models that sacrifice reward-within-the-month for more-optimal-galaxies-in-year-five-trillion will get penalized by training. In this sense, the most basic argument against expecting beyond-episode goals (namely, that training provides no direct pressure to have them, and actively punishes them, absent training-gaming, if they ever lead to sacrificing within-episode reward for something longer-term) applies to both "short" (e.g., five minutes) and "long" (e.g., a month, a year, etc.) episodes with equal force. However, I do still have some intuition that once you're training a model on fairly long episodes, the probability that it learns a beyond-episode goal goes up at least somewhat. 
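Carlsmith's claim that no gradients can depend causally on post-episode consequences can be made concrete with a toy comparison (the summed-reward "return" and the numbers below are illustrative assumptions, not from the report):

```python
def episode_return(rewards, horizon):
    """Training signal: only rewards that land inside the episode
    window (the 'incentivized episode') can reach the gradient."""
    return sum(rewards[:horizon])

# Two toy policies, rewards per timestep; the episode is 3 steps long.
myopic  = [1.0, 1.0, 1.0, 0.0]    # optimizes within-episode reward
schemer = [0.5, 0.5, 0.5, 100.0]  # sacrifices reward now for a later payoff

horizon = 3
# The post-episode payoff (100.0 at t=3) never enters the training signal,
# so absent training-gaming the long-horizon policy is directly penalized.
```

This is exactly the "most basic argument" in the text: training provides no direct pressure toward the beyond-episode payoff and actively punishes sacrificing within-episode reward for it.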
The most concrete reason I can give for this is that, to the extent we're imagining a form of "messy goal-directedness" in which, in order to build a schemer, SGD needs to build not just a beyond-episode goal to which a generic "goal-achieving engine" can then be immediately directed, but rather a larger set of future-oriented heuristics, patterns of attention, beliefs, and so o...
Hello, Climate Champions! In today's episode of the Climate Confident Podcast I had the pleasure of hosting Matt Gray, the co-founder and CEO of Transition Zero, a trailblazer in energy systems modelling. In our conversation, Matt delved into the intricate world of energy systems modelling, a crucial tool for stakeholders and decision-makers in shaping our energy future. He emphasized Transition Zero's mission to democratise this complex tool, making it accessible, auditable, and reproducible. This, Matt believes, is vital for accelerating the transition to a sustainable energy future. We explored the significant challenges in moving towards net zero, particularly the technical and political barriers. Matt highlighted the crucial role of transmission investments in the energy grid and how these investments, or the lack thereof, influence our ability to harness low-cost renewable energies like wind and solar. Another key takeaway from our chat was the importance of data transparency in fostering global collaboration. Matt underlined how Transition Zero's commitment to open data and models aims to bridge the gap between pledges and actions in climate commitments, thereby enhancing global climate action. Matt's insights on the role of transmission in achieving net zero were particularly thought-provoking, revealing how strategic investments can save trillions while facilitating a faster shift to renewable energy sources. We wrapped up with Matt's thoughts on COP28 and his future plans for Transition Zero. For those keen to learn more about their groundbreaking work or get involved, check out the TransitionZero website. Check out the video version of this episode on YouTube. Tune in, get inspired, and let's continue to make strides towards a sustainable future together! Remember, every step counts in our journey to net zero. 
Let's keep the conversation going – and remember to stay climate confident! Support the show. Podcast supporters: I'd like to sincerely thank this podcast's amazing supporters: Lorcan Sheehan, Hal Good, Jerry Sweeney, Christophe Kottelat, Andreas Werner, Richard Delevan, Anton Chupilko, Devaang Bhatt, Stephen Carroll, William Brent, and Marcel Roquette. And remember you too can Support the Podcast - it is really easy and hugely important as it will enable me to continue to create more excellent Climate Confident episodes like this one. Contact: If you have any comments/suggestions or questions for the podcast - get in touch via direct message on Twitter/LinkedIn. If you liked this show, please don't forget to rate and/or review it. It makes a big difference to help new people discover the show. Credits: Music credits - Intro by Joseph McDade, and Outro music for this podcast was composed, played, and produced by my daughter Luna Juniper. Thanks for listening, and remember, stay healthy, stay safe, stay sane!
Why do AI chats lie? It probably starts with understanding the model's knowledge cutoff. Why does an AI's knowledge have an expiration date, and how does this impact our interaction with technology? We're cutting through the tech jargon to give you a clear view of how AI thinks and learns. Newsletter: Sign up for our free daily newsletter. More on this Episode: Episode Page. Join the discussion: Ask Jordan questions about AI and LLMs. Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup. Website: YourEverydayAI.com. Email The Show: email@example.com. Connect with Jordan on LinkedIn. Timestamps: [00:01:50] Daily AI news [00:05:50] Importance of knowledge cutoff in LLMs [00:07:55] How LLMs are trained [00:10:00] Knowledge cutoff is like a textbook [00:14:30] ChatGPT modes and knowledge cutoff dates [00:21:50] Anthropic Claude knowledge cutoff date [00:27:35] Microsoft Bing Chat modes and knowledge cutoff dates [00:31:30] Google Bard knowledge cutoff date [00:33:40] Recap of LLM knowledge cutoff dates [00:35:30] Final thoughts. Topics Covered in This Episode: 1. Understanding the Knowledge Cutoff in Large Language Models 2. Understanding Learning Models and Knowledge Cutoffs 3. Knowledge Cutoff Dates in Different Generative AI Models. Keywords: AI, generative AI, Sports Illustrated, investigation, fake author names, AI-generated profile images, Symphony, Google, voice analytics, financial firms, natural language processing, Amazon, reInvent conference, Bedrock service, knowledge cutoff, large language models, web scraping, training, transparency, Anthropic Claude, Microsoft Bing Chat, human confirmation, GPT 4, Bing Chat modes, Google Bard, Palm 2, learning models, textbook, GPT 3.5, prompting, ChatGPT. Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/
Welcome to another enlightening episode of the School Administrator Mastermind Recap! Join hosts Chad Ostrowski and Joshua Stamper as they dive into the dynamic world of "Unlocking Effective Co-Teaching Models in Education." In this thought-provoking recap, Chad and Joshua explore various co-teaching models that have proven successful in educational settings. From collaborative team-teaching to station teaching and parallel teaching, this discussion provides a comprehensive overview of strategies that empower educators to enhance the learning experience for all students. Key highlights include:
In this ep Brent, Chandler, and Derek are joined by Bloodbath from WNRP to discuss the Cicada. [0:00] - Introduction [0:00] - Models [00:00] - Pilot Profiles [00:00] - Dada Dive: House Marik Part [_] [00:00] - Sources and Recommendations Edited by: Brent Patreon Twitter: @OriginofMechs Discord Email: firstname.lastname@example.org Student Patrons: Harris Hoffman, Boy_inna_mech, Squared, Austin B, Mario, and Jesty Research Assistant: Torchfire Katayama, and P.C. "Mothman" McKenna Ace Pilots: John Keith III Field Marshal: Stahlkater
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch! My Intuitive Bayes Online Courses. 1:1 Mentorship with me. Getting Daniel Lee on the show is a real treat — with 20 years of experience in numeric computation; 10 years creating and working with Stan; 5 years working on pharma-related models, you can ask him virtually anything. And that I did… From joint models for estimating oncology treatment efficacy to PK/PD models; from data fusion for U.S. Navy applications to baseball and football analytics, as well as common misconceptions or challenges in the Bayesian world — our conversation spans a wide range of topics that I'm sure you'll appreciate! Daniel studied Mathematics at MIT and Statistics at Cambridge University, and, when he's not in front of his computer, is a savvy basketball player and… a hip hop DJ — you actually have his SoundCloud profile in the show notes if you're curious! Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ ! Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas and Luke Gorrie. Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;) Links from the show: Daniel on LinkedIn: https://www.linkedin.com/in/syclik/ Daniel on Twitter: https://twitter.com/djsyclik Daniel on GitHub: https://github.com/syclik Daniel's DJ profile:
Episode Resources:Click here to view the abstract “Beautiful Tiny Stomas: Neonatal Ostomy Models”Click here to view the interactive ePoster for “Beautiful Tiny Stomas: Neonatal Ostomy Models” About the Speaker:Nikki Bruster has been a nurse for 13 years, and a wound/ostomy nurse for more than 7 years. Nikki is currently pursuing her Doctor of Nursing Practice (DNP). Nikki's nursing background includes pediatric and neonatal bedside and flight, Post Anesthesia Care Unit (PACU), and now wound, ostomy, and continence! Nikki has presented posters at multiple WOCNext® conferences. Her most recent poster “Beautiful Tiny Stomas: Neonatal Ostomy Models”, presented at WOCNext 2023, showed how to make quick, affordable ostomy models for educational purposes. Nikki has also published articles with the Oklahoma Nurses Association (ONA).
This week, returning from Thanksgiving, we are grateful for both Mike Kaput and Paul Roetzer joining us to share the latest AI news. Together, they tackle the latest OpenAI developments, analyze Andrej Karpathy's insightful video on LLMs, explore Ethan Mollick's new article on AI's business impact, and cover the latest AI advancements from various companies. 00:02:41 — AGI, Q* and the latest drama at OpenAI 00:21:59 — Andrej Karpathy's “The Busy Person's Intro to LLMs,” is now on YouTube 00:41:14 — Ethan Mollick's article on how AI should cause companies to reinvent themselves 00:49:54 — Anthropic releases a new version of Claude, Claude 2.1 00:52:42 — Inflection unveils Inflection-2, an AI model that may outperform Google and Meta 00:55:10 — Google's Bard Chatbot can now answer questions about YouTube videos 00:56:37 — ElevenLabs Speech to Speech tool 00:58:06 — StabilityAI releases Stable Video Diffusion 00:59:39 — Cohere launches a suite of fine-tuning tools to customize AI models. Meet Akkio, the generative business intelligence platform that lets agencies add AI-powered analytics and predictive modeling to their service offering. Akkio lets your customers chat with their data, create real-time visualizations, and make predictions. Just connect your data, add your logo, and embed an AI analytics service to your site or Slack. Get your free trial at akkio.com/aipod. Listen to the full episode of the podcast: https://www.marketingaiinstitute.com/podcast-showcase Want to receive our videos faster? SUBSCRIBE to our channel! Visit our website: https://www.marketingaiinstitute.com Receive our weekly newsletter: https://www.marketingaiinstitute.com/newsletter-subscription Looking for content and resources? 
Register for a free webinar: https://www.marketingaiinstitute.com/resources#filter=.webinar Come to our next Marketing AI Conference: www.MAICON.ai Enroll in AI Academy for Marketers: https://www.marketingaiinstitute.com/academy/home Join our community: Slack: https://www.marketingaiinstitute.com/slack-group-form LinkedIn: https://www.linkedin.com/company/mktgai Twitter: https://twitter.com/MktgAi Instagram: https://www.instagram.com/marketing.ai/ Facebook: https://www.facebook.com/marketingAIinstitute
Christine Foster, CID, NCIDQ, EDAC is a Licensed New York State Interior Designer and 25-year veteran of the design industry. Learn about her innovative housing model that aligns affordable housing for caregivers with a multi-generational residential home for aging in community. About Christine: Christine practices as a New York State Licensed Interior Design Professional and operates eight two three Interior Planning / Design LLC, located in Horseheads, New York. As a graduate of Rochester Institute of Technology, she received her BFA in Interior Design with a concentration in Environmental Studies in 1995. While New York State Licensure and NCIDQ Certification attest to her understanding of public health, safety, and welfare requirements, her commitment to design extends well beyond these critical fundamentals. Christine's desire to align evidence in the healthcare industry with the built environment led her to obtain the EDAC (Evidence-Based Design Accreditation and Certification) in 2020. She continues her commitment to the utilization of the built environment as a tool for preventative medicine in the spaces she creates. Understanding how spaces impact our psychological and physiological well-being remains her true passion. Christine encourages the utilization of Biophilic and Trauma Informed Design approaches in her work. She maintains that these interventions are imperative to include in the planning of successful collaborative living environments. Most recently, Christine has pioneered a grassroots community initiative in support of a model that aligns affordable housing for caregivers alongside a collaborative, multi-generational residential home for aging in community. Through this initiative, Christine has created an opportunity to elevate her knowledge of evidence-based design practice by connecting both established and evolving programs within the long-term care industry with the built environment. 
Her affiliations with SAGE - Society for Advancement of Gerontological Environments - and The Center for Health Design since 2018 have afforded the alignment of relevant research with design of the built environment. Key Takeaways: New models for senior living are essential. Retirement communities, nursing homes, or aging in place can no longer be the only options. It takes innovation and creativity to find new solutions. Connecting the built environment with long-term care could have profound impacts on aging. From the trajectory of many diseases, such as Alzheimer's, one's space can have a positive influence on varying aspects of aging – and it's where a lot of other solutions lie. We need options for mid-market seniors. Currently, they are rare, and we have a "tsunami" of older adults needing solutions for long-term care. A collaborative housing model looks at separating the cost of housing from the cost of care. This housing model would house five to seven residents with an adjacent dwelling for a caregiver, rotating family member, student, or other community member. Among other things, it would help curb the loneliness factor of aging. Trauma-informed and biophilic design research is key to the success of collaborative housing models.
This episode looks at Lloyd's Register Foundation's new project Maritime Innovation in Miniature, which is one of the most exciting maritime heritage projects of recent years and a leader in terms of innovation in the maritime heritage field. The aim of the project is to film the world's best ship models. They are removed from their protective glass cases and filmed in studio conditions with the very latest camera equipment. In particular, the ships are filmed using a macro probe lens, which offers a unique perspective and extreme close-up shots. It allows the viewer to get up close and personal with the subject, whilst maintaining a bug-eyed wide angle image. This makes the models appear enormous - simply put, it's a way of bringing the ships themselves back to life. Ship models are a hugely under-appreciated, under-valued and under-exploited resource for engaging large numbers of people with maritime history. The majority of museum-quality ship models exist in storage; those that are on display have little interpretation; few have any significant online presence at all; none have been preserved on film using modern techniques. These are exquisitely made 3D recreations of the world's most technologically significant vessels, each with significant messages about changing maritime technology and the safety of seafarers. The ships may no longer survive…but models of them do. This project acknowledges and celebrates that fact by bringing them to life with modern technology, in a way that respects and honours the art of the original model makers and the millions of hours of labour expended to create this unparalleled historical resource. This episode looks in particular at the extraordinary models that were filmed in 2022 at the Swedish National Maritime Museum in Stockholm. Hosted on Acast. See acast.com/privacy for more information.
In this episode, Phil walks Allen and Joel through new research from IntelStor on optimal wind turbine models for maximizing asset owner profits. Turns out that bigger isn't always better--stick with a 1.5-2.5 MW machine from a good manufacturer with a good PPA. Asset owners and investors, visit IntelStor.com for more actionable intelligence on optimizing your renewable energy projects. Sign up now for Uptime Tech News, our weekly email update on all things wind technology. This episode is sponsored by Weather Guard Lightning Tech. Learn more about Weather Guard's StrikeTape Wind Turbine LPS retrofit. Follow the show on Facebook, YouTube, Twitter, Linkedin and visit Weather Guard on the web. And subscribe to Rosemary Barnes' YouTube channel here. Have a question we can answer on the show? Email us! Pardalote Consulting - https://www.pardaloteconsulting.com Weather Guard Lightning Tech - www.weatherguardwind.com Intelstor - https://www.intelstor.com Allen Hall: I'm Allen Hall, president of Weather Guard Lightning Tech, and I'm here with the founder and CEO of IntelStor, Phil Totaro, and the chief commercial officer of Weather Guard, Joel Saxum, and this is your News Flash. News Flash is brought to you by our friends at IntelStor. If you need actionable information about renewable projects or technologies, check out IntelStor at intelstor.com. IntelStor released information on onshore wind turbine profitability on LinkedIn. It provided a unique insight into specific turbine profitability and garnered several hundred thousand views. Okay, Phil, explain what that chart means and why we should care. Philip Totaro: This chart shows asset owner net profit after they've achieved a net positive return on capital. What that means is, if you've spent, let's say, 200 million dollars on building a project, once your project has paid back that 200 million plus, how much is really left over? 
And which makes and models of turbine are actually giving you the best possible financial return? The conclusion we came to with this was that the turbines towards the top of this list are the ones that have a fairly reasonable net capacity factor, but they also have a pretty high legacy PPA. That's usually on the order of 65, 70, 75 dollars plus. Those are the turbines that are going to end up producing the best financial performance for you. Not necessarily just the ones with the best technical performance. Joel Saxum: What if we want to talk apples to apples? So if I have the same piece of ground and the same wind resource in one spot, what is the best technically performing turbine? What make and model should we actually be installing, and why? Philip Totaro: In that ranking, what we've got are GE and Vestas at the top. And then you've got Siemens, Siemens Gamesa, and Goldwind at three and four. And GE and Vestas do get a bit of preferential treatment in that they've got the better performing project sites, but at the end of the day, the turbine availability is going to be one of the largest determining factors of that profitability, because whatever your net capacity factor is, it's just what the site does. So a project site that's got, let's say, a 45 to 50 percent net capacity factor, but only a $12-a-megawatt-hour PPA is, financially, going to underperform a project site that's got a 25 percent capacity factor, but an $80-a-megawatt-hour PPA. Allen Hall: But when it gets down to same site, which turbines are better? And what I think I'm seeing in this data, Phil, is the smaller turbines outperform the bigger turbines, and it's not even really close. Philip Totaro: They are, and the reason for that is they benefit a lot from the legacy power purchase contracts. 
Basically, it makes up for the fact that their net capacity factor is a lot lower than a brand new, shiny turbine that's got like a 50 percent net capacity factor.
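The capacity-factor-versus-PPA trade-off Phil describes is simple arithmetic; here is a back-of-the-envelope sketch using the illustrative numbers from the conversation (revenue only, ignoring costs, curtailment, and availability):

```python
HOURS_PER_YEAR = 8760

def annual_revenue_per_mw(capacity_factor, ppa_usd_per_mwh):
    """Annual energy revenue for 1 MW of installed capacity."""
    return capacity_factor * HOURS_PER_YEAR * ppa_usd_per_mwh

# Modern site: ~47.5% net capacity factor, but only a $12/MWh PPA.
modern_site = annual_revenue_per_mw(0.475, 12)
# Legacy site: 25% net capacity factor, but an $80/MWh legacy PPA.
legacy_site = annual_revenue_per_mw(0.25, 80)

# The legacy-PPA site earns roughly 3.5x more per MW despite producing
# barely half the energy - the price term dominates.
```

This is why the smaller legacy turbines top the profitability ranking even when their technical performance is worse.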
Episode Topic: In this episode of PayPod, we delve into the world of financial modeling, shedding light on the intricacies of accurately reflecting and projecting the movement of money. We engage with Chris Reilly, the founder of Mission Capital Consulting and Financial Modeling Education, to explore the crucial role financial modeling plays in decision-making within the financial industry. Lessons You'll Learn: In this episode, listeners will gain insights into the challenges and nuances of financial modeling, with a particular emphasis on the importance of solid design and avoiding unnecessary complexity. Chris Reilly shares valuable lessons drawn from his experiences, highlighting the significance of building models as effective decision-making tools rather than getting lost in perfectionism. The conversation unfolds to reveal the balance between technical skills and soft skills, emphasizing the role of empathy and emotional intelligence in sustaining long-term success in the field. About Our Guest: Chris Reilly, a seasoned professional in Corporate Finance, Consulting, and Private Equity, embarked on an entrepreneurial journey in 2020. He started Financial Modeling Education to teach advanced financial models for FP&A and Private Equity. He also founded Mission Capital Consulting in Littleton, Colorado, providing precision M&A and FP&A models across the U.S. Chris empowers professionals and businesses, making his ventures symbols of innovation in the financial world. Topics Covered: The episode covers a range of topics, from Chris Reilly's early experiences working on the Lehman Brothers bankruptcy to his transition into private equity. The conversation explores the challenges faced in financial modeling, with a focus on poor design and overcomplication. Chris shares his perspective on the evolving landscape of financial modeling, touching on the impact of new technologies like AI. 
The discussion also extends to Chris's active presence on LinkedIn, where he leverages content creation as a top-of-funnel lead generation strategy for his businesses. Check our website: https://www.soarpay.com
IN THIS EPISODE, YOU'LL LEARN: Why is Jeff training an AI agent to personify himself? How could such an agent potentially be used in the future? What does this mean for overall productivity for people that have the means to train such an agent? What other things might happen as a result of this AI growth? What is the difference between the way Jeff is training his AI versus the way Preston is training his? What method will win in the long haul for training AI agents? Is there concern people should have with providing the data for these agents? How does Bitcoin enter into this equation? Why are Jeff and Preston so interested in the FinCEN proposal that was recently released for comment? Disclaimer: Slight discrepancies in the timestamps may occur due to podcast platform differences. BOOKS AND RESOURCES Jeff's VC Firm Ego Death Capital. Jeff Booth's Twitter. Jeff's book, The Price of Tomorrow. NEW TO THE SHOW? Check out our We Study Billionaires Starter Packs. Browse through all our episodes (complete with transcripts) here. Try our tool for picking stock winners and managing our portfolios: TIP Finance Tool. Enjoy exclusive perks from our favorite Apps and Services. Stay up-to-date on financial markets and investing strategies through our daily newsletter, We Study Markets. Learn how to better start, manage, and grow your business with the best business podcasts. SPONSORS Support our free podcast by supporting our sponsors: River Efani Babbel AlphaSense Vanta American Express Business Gold Card Alto Salesforce NetSuite Ka'Chava Wise Toyota Shopify
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A taxonomy of non-schemer models (Section 1.2 of "Scheming AIs"), published by Joe Carlsmith on November 22, 2023 on The AI Alignment Forum. This is Section 1.2 of my report "Scheming AIs: Will AIs fake alignment during training in order to get power?". There's also a summary of the full report here (audio here). The summary covers most of the main points and technical terms, and I'm hoping that it will provide much of the context necessary to understand individual sections of the report on their own. Audio version of this section here. Other models training might produce I'm interested, in this report, in the likelihood that training advanced AIs using fairly baseline ML methods (for example, of the type described in Cotra (2022)) will give rise, by default, to schemers - that is, to agents who are trying to get high reward on the episode specifically in order to get power for themselves (or for other AIs) later. In order to assess this possibility, though, we need to have a clear sense of the other types of models this sort of training could in principle produce. In particular: terminal training-gamers, and agents that aren't playing the training-game at all. Let's look at each in turn. Terminal training-gamers (or, "reward-on-the-episode seekers") As I said above, terminal training-gamers aim their optimization at the reward process for the episode because they intrinsically value performing well according to some part of that process, rather than because doing so serves some other goal. I'll also call these "reward-on-the-episode seekers." We discussed these models above, but I'll add a few more quick clarifications. First, as many have noted (e.g. Turner (2022) and Ringer (2022)), goal-directed models trained using RL do not necessarily have reward as their goal. 
That is, RL updates a model's weights to make actions that lead to higher reward more likely, but that leaves open the question of what internal objectives (if any) this creates in the model itself (and the same holds for other sorts of feedback signals). So the hypothesis that a given sort of training will produce a reward-on-the-episode seeker is a substantive one (see e.g. here for some debate), not settled by the structure of the training process itself. That said, I think it's natural to privilege the hypothesis that models trained to produce highly-rewarded actions on the episode will learn goals focused on something in the vicinity of reward-on-the-episode. In particular: these sorts of goals will in fact lead to highly-rewarded behavior, especially in the context of situational awareness. And absent training-gaming, goals aimed at targets that can be easily separated from reward-on-the-episode (for example: "curiosity") can be detected and penalized via what I call "mundane adversarial training" below (for example, by putting the model in a situation where following its curiosity doesn't lead to highly rewarded behavior). Second: the limitation of the reward-seeking to the episode is important. Models that care intrinsically about getting reward in a manner that extends beyond the episode (for example, "maximize my reward over all time") would not count as terminal training-gamers in my sense (and if, as a result of this goal, they start training-gaming in order to get power later, they will count as schemers on my definition). Indeed, I think people sometimes move too quickly from "the model wants to maximize the sort of reward that the training process directly pressures it to maximize" to "the model wants to maximize reward over all time." The point of my concept of the "episode" - i.e., the temporal unit that the training process directly pressures the model to optimize - is that these aren't the same. More on this in section 2.2.1 below. 
Finally: while I'll speak of "reward-on-the-epi...
Welcome back to the podcast, beautiful! On today's podcast episode, I am dispelling the intimidation and pressure that you may be feeling around your manifestations or when things aren't going your way. The Bridge of Incidents is a term created by Neville Goddard, which describes the natural unfolding of events that occurs so that your manifestations can come true. Think about those difficult moments in life where things just seemed to NOT be going your way. Well, what the bridge of incidents implies is that those inconveniences, L's, failures, and setbacks are all creating a bridge that takes you to your desired reality. Everything is happening FOR you. All Universal Laws manifest through you, and when you live in the state of knowing that everything is happening for you, you allow the laws to work magic in your life. This is the proper labeling I want you to assign to your 3D reality... REGARDLESS of what you see. Listen in to hear all about it! . . Enrollment for REHEARSAL: Self-Concept x Manifestation Program is officially OPEN!
We're discussing the recent turmoil in the AI market today. OpenAI has kept the journalistic corps busy, but we also need to consider what the latest twists and turns in l'affaire Altman may bring for startup founders. So, I rallied TechCrunch's own Kyle Wiggers, and Supervised founder and former Equity host Matthew Lynley to help me dig into the latest. Here's the show rundown: What has happened to OpenAI since Monday morning when we last recorded the podcast? What do the two experts think will happen to OpenAI's staff in the coming weeks? What should startups that use OpenAI technology do to lower their platform risk? And, does the OpenAI mess provide a boost to open-source AI models? We had a really lovely time. A big thank you to our ever-busy producer Theresa Loconsolo for getting an extra episode out on a holiday week! For episode transcripts and more, head to Equity's Simplecast website. Equity drops at 7 a.m. PT every Monday, Wednesday and Friday, so subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts. TechCrunch also has a great show on crypto, a show that interviews founders and more!
In this episode of The Metaverse Podcast, Daniel Svonava, CEO and founder of Superlinked, joins our host Jamie Burke. The conversation covers Daniel's journey and the evolution of Superlinked, a startup at the forefront of turning data into vectors. Listen to a discussion that unlocks the power of data and unveils the potential of vectorization in shaping the future of machine learning and data-driven solutions. Tune in to: Understand Daniel's background, including his tech leadership at YouTube, and explore how Superlinked evolved, after its time in our accelerator program, into a Web3-adjacent startup. Uncover the bottleneck in unlocking the true value of data due to the scarcity of skilled machine learning engineers and data scientists. Understand the concept of turning data into vectors, akin to points in multi-dimensional space, and how this process becomes the native language of machine learning models. Explore the application of vectorization in addressing challenges with open-ended user interactions, such as chatbots, and balancing control and relevance. Gain insights into broader industry trends, including the rise of language models like GPT, and the role of vectorization in enhancing machine learning workloads. Learn about Superlinked's position as a computational framework bridging the gap between raw data and vectorized data, simplifying the complex process of creating and managing vectors. VectorHub is a free and open-sourced learning hub for people interested in adding vector retrieval to their ML stack. #search #vectorops #machinelearning #personalization #data ------- Whether you're a founder, investor, developer, or just have an interest in the future of the Open Metaverse, we invite you to hear from the people supporting its growth. Outlier Ventures is the Open Metaverse accelerator, helping over 100 Web3 startups a year. You can apply for startup funding here - https://ov.click/pddsbcq122 Questions? 
Join our community: Twitter - https://ov.click/pddssotwq122 LinkedIn - https://ov.click/pddssoliq122 Discord - https://ov.click/pddssodcq122 Telegram - https://ov.click/pddssotgq122 More - https://ov.click/pddslkq122 For further Open Metaverse content: Listen to The Metaverse Podcast - https://ov.click/pddsmcq122 Sign up for our quarterly live events at - https://ov.click/pddsdfq122 Check out our portfolio - https://ov.click/pddspfq122 Thanks for listening!
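The "points in multi-dimensional space" idea from the episode can be sketched in a few lines. This toy example uses made-up 3-D vectors (real embedding systems use learned vectors with hundreds or thousands of dimensions); similar items sit close together, and cosine similarity measures that closeness:

```python
import numpy as np

def cosine_similarity(a, b):
    """Angle-based closeness of two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings: nearby points represent semantically similar items.
laptop  = np.array([0.90, 0.80, 0.10])
tablet  = np.array([0.85, 0.70, 0.20])
bicycle = np.array([0.10, 0.20, 0.90])

print(cosine_similarity(laptop, tablet) > cosine_similarity(laptop, bicycle))  # True
```

Once data lives in this vector form, retrieval, recommendation, and chatbot grounding all reduce to nearest-neighbor search in the same space.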
How do you make SKAN work for retail? Games? IAPs? Finance? Subscriptions? Ad monetization? In this Growth Masterminds we chat with 2 Singular experts who work with hundreds of clients, helping them get their SKAdNetwork models right. We go through SKAN for: Games & apps - admon - IAP - subscription - hypercasual vs mid-core Verticals for apps - retail - fintech - fast food - on-demand - travel - social ... and more!
In this paper read, we discuss “Towards Monosemanticity: Decomposing Language Models Into Understandable Components,” a paper from Anthropic that addresses the challenge of understanding the inner workings of neural networks, drawing parallels with the complexity of human brain function. It explores the concept of “features” (patterns of neuron activations), providing a more interpretable way to dissect neural networks. By decomposing a layer of neurons into thousands of features, this approach uncovers hidden model properties that are not evident when examining individual neurons. These features are demonstrated to be more interpretable and consistent, offering the potential to steer model behavior and improve AI safety. Find the transcript and more here: https://arize.com/blog/decomposing-language-models-with-dictionary-learning-paper-reading/ To learn more about ML observability, join the Arize AI Slack community or get the latest on our LinkedIn and Twitter.
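The decomposition the paper describes can be caricatured with classical dictionary learning. This is a minimal sketch, not the paper's method (Anthropic uses sparse autoencoders on real transformer activations, while this uses scikit-learn's `DictionaryLearning` on synthetic data): a layer of 8 "neurons" whose activations secretly mix 12 sparse underlying features is decomposed into more components than neurons.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)

# Synthetic "neuron activations": 200 samples from a layer of 8 neurons.
# Each sample secretly mixes a few of 12 underlying sparse "features".
true_features = rng.normal(size=(12, 8))
true_codes = rng.random((200, 12)) * (rng.random((200, 12)) < 0.2)
activations = true_codes @ true_features

# Decompose the 8 neurons into 12 sparser, overcomplete directions.
dl = DictionaryLearning(n_components=12, transform_algorithm="lasso_lars",
                        random_state=0, max_iter=50)
feature_codes = dl.fit_transform(activations)  # (200, 12) sparse coefficients
dictionary = dl.components_                    # (12, 8) feature directions
```

Each row of `dictionary` is a direction in neuron space; because the codes are sparse, each sample is explained by only a few features, which is what makes the decomposition more interpretable than the raw neurons.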
Only four Irish figures have appeared on the cover of TIME magazine; one was Stevie, a robot invented by Akara Robotics, co-founded by Niamh Donnelly from Galway. And from robots to a real man: Roy Keane cut a dashing figure in a new ad for Adidas. It seems that clothing brands are embracing older people. Stylist Annemarie Gannon joins us.
Here's a list of helpful resources for today's episode: https://hostagencyreviews.com/hostweek (Register for HAR's Host Week event!) https://hostagencyreviews.com/friday15 (Submit questions, sign up for reminders for the F15, along with that week's questions we'll be covering!) https://skillbridge.osd.mil/industry-employers.htm?section=2 (More info on DOD's Skill Bridge program) https://hostagencyreviews.com/travel-jobs (HAR's travel jobs board) http://rb.gy/641qgj (Sign up for weekly job alert emails from HAR's travel board) https://www.altour.com/altour-university (ALTOUR University internship) Https://Asta.org https://hostagencyreviews.com/blog/the-employee-travel-advisor-research-report-summary-2023 (HAR's report on travel advisor employees. Lots of great data and stats for agency owners and employees!) https://hostagencyreviews.com/travelagentchatter/karen-devine-3d-cruise-partners (Podcast with advisor who does large incentive cruise groups) https://giftatrip.com/ (Gift a trip to employees) https://hostagencyreviews.com/hosts/travel-planners-international (TPI's profile on HAR) https://www.facebook.com/groups/fiercelyforwardtraveladvisors/ (Fiercely Forward FB group for all advisors. Doesn't need to be a part of TPI) email@example.com (Jenn's email) https://hostagencyreviews.com/hostweek (Register for Host Week! It's free. It's fun. It's full of info to grow your agency!) We do this every week! If you have travel industry questions, HAR likely has an answer :) Submit your burning question here Har.News/Friday15 and join us this Friday (and every Friday!) at 12CT for travel agent tips!
In this captivating episode, Sharon Hagle, the CEO and Founder of SpaceKids Global, takes us on a journey beyond our atmosphere. We delve into Sharon's recent spaceflight experience on Blue Origin's NS-20, where she and her husband, Marc Hagle, made history as the first married couple to venture into space together on a commercial vehicle. The conversation explores the profound impact of the overview effect and the visceral sense of change Sharon felt upon returning to Earth. Sharon shares the unique perspective and inspiration gained from witnessing our planet from space, emphasising the transformative power of the experience. Highlighting their addiction to spaceflight, Sharon and Marc express their enthusiasm for returning to space with Blue Origin on a future flight. What sets this upcoming journey apart is their commitment to bringing the magic of space to the next generation. The couple plans to take a group of eight kids along for the launch, embodying the mantra "if you can see it, you can be it." This mission aligns with SpaceKids Global's core mission of inspiring students in STEAM+ Environment education and ensuring equal representation for girls in the space industry. Towards the end of the episode, Marc makes a special appearance, adding his perspective and enthusiasm for the upcoming space adventure. The episode paints a vivid picture of Sharon and Marc's dedication to space exploration, education, and the belief that exposing children to such experiences can ignite a passion for STEM fields and the boundless possibilities of the universe. OUTLINE: Here are approximate timestamps for the episode. 00:08 Intro to Episode 00:25 Sharon Hagle 01:16 Models and love of space 02:05 Blue Origin - when it actually happened! 03:22 Unexpected (and comfy) delays 05:21 First married couple on commercial vehicle! 
06:33 The ascent 08:11 The overview effect 10:32 Space Kids Global 13:34 Keeping Kids interested in STEM 16:22 Girl Scouts USA 17:30 New initiatives 19:10 Artemis Generation 20:18 Origin Story 23:04 Virgin Galactic 24:15 Capturing Imaginations 25:58 Advice to young people 27:00 Highlight of Space Kids Global 28:26 Welcoming Marc! 29:20 Marc's Experience 30:55 Space Travel 31:50 SpaceX and the impact of private space companies 36:10 How has spaceflight affected their marriage? 38:20 8 Kids plus parents can see a Blue launch! 39:41 Wrap Ups and Socials Follow Space Kids Global X: https://twitter.com/spacekidsglobal Facebook: https://www.facebook.com/thespacekidsglobal Instagram: https://www.instagram.com/spacekidsglobal/ Youtube: https://www.youtube.com/channel/UCBbnk-mxgfULHMYkuVhCmKQ Website: https://www.spacekids.global/ Stay connected with us! Use #Astroben across various social media platforms to engage with us! (NEW - YOUTUBE): www.youtube.com/@astrobenpodcast Website: www.astroben.com Instagram: https://www.instagram.com/astrobenpodcast/ Twitter: https://twitter.com/Gambleonit LinkedIn: https://www.linkedin.com/company/astrobenpodcast/
Join Sasha & Stella for their interview with Dr. Jillian Spencer, a child and adolescent psychiatrist from Queensland, Australia. This episode delves into Dr. Spencer's experiences with the gender clinic in her hospital, highlighting the challenges and concerns she faced in encountering adolescents identifying as transgender. The episode explores the complexities of assessing and treating gender dysphoria in young individuals, shedding light on the impact of fast-tracking into gender clinics and the potential psychological consequences. Dr. Spencer shares her journey of trying to raise awareness about the concerns surrounding gender interventions for children and the need for a more comprehensive approach to mental health. Jillian Spencer is a child and adolescent psychiatrist who lives in Queensland, Australia. She studied Medicine at Monash University in Melbourne and then subsequently trained in psychiatry. She completed sub-speciality certificates in Child and Adolescent Psychiatry and Forensic Psychiatry. She qualified as a psychiatrist in 2009. She has worked for Queensland Health for 21 years. In mid-April 2023, she was removed from clinical duties due to being considered a danger to trans and gender diverse children. Since her story came out in the media in June, she has sought to raise awareness of the concerns around gender interventions for children.Sasha & Stella's conversation with Dr. Spencer touches on the influence of clinicians, the surge in youth attending gender clinics, encounters with dismissive responses from superiors, and the attempt to raise critical questions about safeguarding and ethical considerations in the treatment of gender dysphoria.This is the emotionally compelling account of the circumstances surrounding Dr. 
Spencer's suspension from clinical duties, her experiences raising awareness through presentations, distributing books, and participating in "Let Women Speak" rallies, and the broader implications for healthcare professionals challenging the prevailing narrative on gender dysphoria in children. Dr. Jillian Spencer speaks at a Let Women Speak rally in 2023 with Kellie-Jay Keen: https://youtu.be/7OcIuLlbIPc?t=1204 Dr. Spencer's reference to former GWL guest Ellie, from her 2023 presentation, Models of Care for Children with Gender Dysphoria: https://www.youtube.com/watch?v=zBZ-QkBGWlA&t=169s Introduction to the Gender Framework: https://genspect.org/introduction-to-the-gender-framework/ Genspect's The Gender Framework: https://genspect.org/resources/sample-policies/genspect-presents-the-gender-framework/ Bigger Picture Conference - Denver: Talks from the #genspectbiggerpicture conference in Denver, CO, 2023. More videos will be added regularly; revisit the link again for more content. https://youtube.com/playlist?list=PLmsMIEB9bK5xwwzCuF5AsjjviLfOxLzBA&si=DX_jQTb4JU_6pDmG Order Our Book – When Kids Say They're Trans: A Guide for Thoughtful Parents
Exegetes have long relied on the framework of the Acts of the Apostles to understand the behavior and organization of Paul's various ekklēsiai (assemblies), or church communities, from which Christ-groups have often been conceptualized as extensions from practices of diasporic Jewish synagogues. However, Richard S. Ascough's work has been at the forefront of a scholarly movement emphasizing the relevance of data from Greco-Roman associations—occupational, cultic, ethnic, and otherwise—not only as a preferable model for understanding the constitution of early Christ-following communities, but also as fruitful comparanda for interpreting Paul's letters, such as 1 Thessalonians and Philippians. On this episode, Dr. Ascough joined the New Books Network to discuss Early Christ Groups and Greco-Roman Associations: Organizational Models and Social Practices (Cascade Books, 2022), a collection of his articles and essays on associations from the last 25 years detailing the road to the acceptance of association data within scholarship as well as the recruitment, self-promotion, socializing, and memorializing practices that these recoveries from antiquity reveal. Ascough discusses how he carved his own niche within biblical studies, from starting as a master's student with a small group to translate previously unpublished inscriptions and papyri to ultimately showcasing the applicability of association behavior to early Christ-groups, Pauline and otherwise. Richard S. Ascough (Ph.D., Toronto School of Theology, 1997) is a Professor at the School of Religion at Queen's University in Kingston, Ontario, Canada. He has written extensively on the formation of early Christ groups and Greco-Roman religious culture, with particular attention to various types of associations. He has published widely in the field with more than fifty articles or essays and thirteen books, including Christ Groups & Associations: Foundational Essays (Baylor U. 
Press, 2022), Associations in the Greco-Roman World: A Sourcebook (Baylor U. Press, 2012), and Paul's Macedonian Associations (Mohr Siebeck, 2003). He has been recognized for his innovative and effective teaching in many ways, including the two top teaching awards at Queen's University and a 3M National Teaching Fellowship (2018). Rob Heaton (Ph.D., University of Denver, 2019) hosts Biblical Studies conversations for New Books in Religion and teaches New Testament, Christian origins, and early Christianity at Anderson University in Indiana. He recently authored The Shepherd of Hermas as Scriptura Non Grata: From Popularity in Early Christianity to Exclusion from the New Testament Canon (Lexington Books, 2023). For more about Rob and his work, or to offer feedback related to this episode, please visit his website at https://www.robheaton.com. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/history
In the premiere episode of Gradient Dissent Business, we're joined by Weights & Biases co-founder Chris Van Pelt for a deep dive into the world of large language models like GPT-3.5 and GPT-4. Chris bridges his expertise as both a tech founder and AI expert, offering key strategies for startups seeking to connect with early users, and for enterprises experimenting with AI. He highlights the melding of AI and traditional web development, sharing his insights on product evolution, leadership, and the power of customer conversations—even for the most introverted founders. He shares how personal development and authentic co-founder relationships enrich business dynamics. Join us for a compelling episode brimming with actionable advice for those looking to innovate with language models, all while managing the inherent complexities. Don't miss Chris Van Pelt's invaluable take on the future of AI in this thought-provoking installment of Gradient Dissent Business. We discuss: 0:00 - Intro 5:59 - Impactful relationships in Chris's life 13:15 - Advice for finding co-founders 16:25 - Chris's fascination with challenging problems 22:30 - Tech stack for AI labs 30:50 - Impactful capabilities of AI models 36:24 - How this AI era is different 47:36 - Advising large enterprises on language model integration 51:18 - Using language models for business intelligence and automation 52:13 - Closing thoughts and appreciation Thanks for listening to the Gradient Dissent Business podcast, with hosts Lavanya Shukla and Caryn Marooney, brought to you by Weights & Biases. Be sure to click the subscribe button below, to keep your finger on the pulse of this fast-moving space and hear from other amazing guests #OCR #DeepLearning #AI #Modeling #ML
Ah, sweet Charlene. In this week's “Designing Women” episode, she was once again sucked into the mystic allure of fame. But, this wasn't like all those times before - from the designs on being a country and western singer, or when she thought about being a lady preacher, OR when she wanted to be a jury foreperson. No, this week's dream of fame is a bit more removed. A bit more vicarious. A bit more…stage mom. That's right: this one is all about Olivia. Petite. Charming. Her little bald head gleaming in the limelight. A Star is Born indeed. Which got us (or at least, Nikki) in the mindset of baby and child models. So, let's fall down the rabbit hole on child models a little bit and why not play a Grits Blitz along the way!
Welcome back to the podcast, beautiful! Get better at shifting into the in-state of your manifestations by realizing that who you are (who you TRULY are) at your core, is pure consciousness. You are not your body and you are not your mind. You are consciousness. You are awareness. By knowing this, you get to choose which reality and which self-concept you wish to live in by simply shifting your consciousness to the desired state. If you desire to change your self-concept, then simply shift your consciousness from this version of you--to that version of you. Listen in to hear all about it! . . BBU is now open for enrollment all year round!
Sg2 continues to hear from members about capacity issues impacting their ability to capture growth, and while there are many different approaches to addressing the capacity crunch, the care at home model is one we believe can have long-term impact. So on this week's Sg2 Perspectives, host Jayme Zage, PhD, is joined by Sg2 Senior Consultant Nikita Arora and Principal Eric Lam to talk about insights gained as they've connected with members on care at home questions and initiatives. Nikita and Eric discuss reimbursement, innovative approaches to cost coverage, and where organizations are focusing their early efforts. We are always excited to get ideas and feedback from our listeners. You can reach us at firstname.lastname@example.org, find us on Twitter as @Sg2HealthCare, or visit the Sg2 company page on LinkedIn.
This is a super special release podcast with the fabulous Leonie Dawson, because Leonie's Academy is DOUBLING in price on 1 December 2023 - and I wanted to make sure that you don't miss out on joining Leonie's Academy for just US$99 per year! I haven't gone off my podcast episode numbering for YEARS! But I just had to do this special episode to share the big news about Leonie's Academy price increase, AND so that I could pick Leonie's brain about some juicy business strategies. In this episode, Leonie and I are talking all about making sales, and we'll be diving deep into Leonie's business model, because it's a very different business model to my own, yet there are so many things we absolutely agree on when it comes to business and marketing. Make sure to check out Leonie's Academy before the price increases. My affiliate link can be found in the show notes of today's episode! Show notes in full are at: tashcorbin.com/leonie Follow Tash on Facebook: facebook.com/tashcorbincoaching Join the Heart-Centred Soul-Driven Entrepreneurs: facebook.com/groups/hcsde
In this episode, Jack talks with Amanda Robinson, VP of Portfolio Solutions at Fidelity Investments. Amanda leads a team responsible for delivering support for Fidelity's portfolio solution capabilities. Amanda discusses hyper-personalization and the need for customizable solutions with user-friendly interfaces. She highlights the role of technology in streamlining operations to free up time for client relationships. Key Takeaways [01:33] - Amanda's journey. [07:39] - Importance of hyper-personalization. [09:34] - Fidelity's user interface. [10:47] - Future of customizable solutions and managed account platforms. [12:23] - Importance of advisor and teammate relationship. [17:24] - What advisors say about model portfolios. [20:20] - Amanda's takeaways and interests. Quotes [21:02] - "Investors want their advisors to meet them where they want to do business. And emerging generations want scale, personalizable investments, which they want in a really easy-to-digest way." ~ Amanda Robinson Disclosure Information provided in, and presentation of, this document are for informational and educational purposes only and are not a recommendation to take any particular action, or any action at all, nor an offer or solicitation to buy or sell any securities or services presented. It is not investment advice. Fidelity does not provide legal or tax advice. Before making any investment decisions, you should consult with your own professional advisers and take into account all of the particular facts and circumstances of your individual situation. Fidelity and its representatives may have a conflict of interest in the products or services mentioned in these materials because they have a financial interest in them, and receive compensation, directly or indirectly, in connection with the management, distribution, and/or servicing of these products or services, including Fidelity funds, certain third-party funds and products, and certain investment services. 
Views expressed are those of the speaker through October 10, 2023 and do not necessarily represent the views of Fidelity Investments. Views are subject to change at any time based upon market or other conditions and Fidelity disclaims any responsibility to update such views. LifeYield is an independent entity and is not affiliated with Fidelity Investments. Listing them does not suggest a recommendation or endorsement by Fidelity Investments. Fidelity Model Portfolios are provided to financial intermediaries (“Intermediary”) on a non-discretionary basis by Fidelity Institutional Wealth Adviser LLC (“FIWA”), a registered investment adviser. Information about the Models may be provided by representatives of FIWA, or its broker-dealer affiliates Fidelity Distributors Company LLC (“FDC”), Fidelity Brokerage Services LLC (“FBS”), and/or National Financial Services LLC. Fidelity Institutional Wealth Adviser LLC (“FIWA”) is a registered investment adviser and an indirect, wholly owned subsidiary of FMR LLC. FIWA provides customized separately managed account portfolios that consider tax effects for taxable clients. FIWA has retained the services of its affiliate, Fidelity Management & Research Company LLC (“FMR”), to manage these accounts subject to FIWA's supervision and oversight. Fidelity Investments® provides investment products through Fidelity Distributors Company LLC; clearing, custody, or other brokerage services through National Financial Services LLC or Fidelity Brokerage Services LLC, Members NYSE, SIPC; and institutional advisory services through Fidelity Institutional Wealth Adviser LLC. Links Amanda Robinson on LinkedIn Fidelity Institutional Connect with our hosts LifeYield Jack Sharry on LinkedIn Jack Sharry on Twitter Subscribe and stay in touch Apple Podcasts Spotify LinkedIn Twitter Facebook