POPULARITY
Joe Campolo LIVE on LI in the AM w/ Jay Oliver by JVC Broadcasting
Cliff talks about the legacy of author, speaker, and influential Christian leader Dr. Tony Campolo, who recently passed away at the age of 89.
Watch this clip from Dr. Campolo's message at Urbana '87.
Watch on YouTube: youtube.com/sunrisecommunitychurch
Watch live on Mondays at 10am: www.facebook.com/sunrisecommunityonline/live
Song: Fredji - Happy Life (Vlog No Copyright Music)
Music provided by Vlog No Copyright Music.
Video Link: https://youtu.be/KzQiRABVARk
Mike Erre joins Phil and Skye to discuss how to survive the holidays. What's the best way to engage (or disengage) a contentious relative who wants to argue about politics, and how do we remain united as churches and families in these divided times? Professor, author, and preacher Tony Campolo died last week. Shane Claiborne returns to discuss his friendship with Campolo and his legacy of challenging the American church with the “red letters” of the Bible. Also this week: what's the real goal behind protecting public nativity displays, and disappointed political witches.
0:00 - Intro
1:33 - Show Starts
2:50 - Theme Song
3:12 - Sponsor - Wheaton Graduate School - Learn in a rich, rigorous Christian environment - https://www.wheaton.edu/holypost
4:20 - Sponsor - BioLogos - Go to https://biologos.org/podcast/language-of-god/ and check out the Language of God podcast!
5:24 - Interview
6:42 - Political Witches
12:54 - Liberty Counsel's Annual Report
17:19 - Tony Campolo's Passing
26:35 - Being Grateful
46:03 - Favorite Conflict-Deflection Topic
54:05 - Sponsor - Aura Frames - Exclusive $45-off Carver Mat at https://www.AuraFrames.com. Use code HOLYPOST at checkout to save!
55:43 - Sponsor - Glorify - Sign up for the #1 Christian Daily Devotional App to help you stay focused on God. Go to https://glorify-app.com/en/HOLYPOST to download the app today!
56:48 - Interview
1:06:34 - Dealing with Pushback
1:12:00 - Where “Red Letter Christians” Comes From
1:20:46 - What Tony Was Like to Be Around
1:29:15 - End Credits
Links Mentioned in the News Segment:
Witches Report Their Spells Against Trump Aren't Working: “He Has a Shield” https://cbn.com/news/us/witches-report-their-spells-against-trump-arent-working-he-has-shield
Liberty Counsel's Friend or Foe Campaign: https://lc.org/newsroom/details/111124-friend-or-foe-christmas-campaign-2025
Tony Campolo's Story: https://www.youtube.com/watch?v=DRBM_YY_YX0
Other resources:
With God Daily with Skye Jethani: https://www.withgoddaily.com/
Voxology Podcast with Mike Erre: https://pod.link/1049250910
Holy Post website: https://www.holypost.com/
Holy Post Plus: www.holypost.com/plus
Holy Post Patreon: https://www.patreon.com/holypost
Holy Post Merch Store: https://www.holypost.com/shop
The Holy Post is supported by our listeners. We may earn affiliate commissions through links listed here. As an Amazon Associate, we earn from qualifying purchases.
This episode features a conversation originally recorded in May 2020 for the podcast Baptist Without An Adjective. In it, Word&Way President Brian Kaylor interviewed author and sociologist Tony Campolo. The author of 35 books and a longtime professor at Eastern University, Campolo died on Nov. 19 at the age of 89. This conversation is being rebroadcast to honor this influential and important Christian thinker. Note: Don't forget to subscribe to our award-winning e-newsletter A Public Witness that helps you make sense of faith, culture, and politics. And order a copy of Baptizing America: How Mainline Protestants Helped Build Christian Nationalism by Brian Kaylor and Beau Underwood. If you buy it directly from Chalice Press, they are offering 33% off the cover price when you use the promo code "BApodcast."
This is a rebroadcast of my 2012 interview with Tony Campolo. Dr. Campolo died this week at the age of 89.
Tony Campolo, the Baptist preacher, sociologist, Red Letter Christian and ceaseless campaigner for a Christian vision of social justice, has died. In this episode, we pay tribute to Tony in the best way we know how: by letting him speak for himself. Eventually. First, we talk about our own memories of him, and his influence on our theology. But once we're done making it about us, we share an interview with Tony from the vaults. In 2012, Tony Campolo was concerned about many of the things that are now even more pressing and horrifying. The interview feels prophetic in multiple understandings of the word, and Campolo is solid on everything from American militarism and imperialism to the incompatibility of Christian spirituality with right-wing ideological selfishness. His identification of the central problem with evangelical engagement with culture -- the idea that everything gets better if people become Christians -- is expressed particularly poignantly, considering where we are with the Church in the West. We hope you enjoy hearing from Tony again, and that you remember his family and friends at this time. Full disclosure: we also talk a lot about why we're excited about our new magazine, S(h)ibboleth. We hope it comes across less as a sales pitch and more as the genuine excitement we're feeling about connecting more Christians with similar beliefs and making the world better as a result. Naive? Maybe. Worth a shot? Sure. Fun? Hell yeah. Find out deets at shibbolethmag.com and beerchristianity.co.uk
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Agents that act for reasons: a thought experiment, published by Michele Campolo on January 24, 2024 on The AI Alignment Forum. Posted also on the EA Forum.
In Free agents I've given various ideas about how to design an AI that reasons like an independent thinker and reaches moral conclusions by doing so. Here I'd like to add another related idea, in the form of a short story / thought experiment.
Cursed
Somehow, you have been cursed. As a result of this unknown curse that is on you now, you are unable to have any positive or negative feeling. For example, you don't feel pain from injuries, nothing makes you anxious or excited or sad, and you can't have fun anymore. If it helps you, imagine your visual field without colours, only with dull shades of black and white that never feel disgusting or beautiful.
Before we get too depressed, let's add another detail: this curse also makes you immune to death (and other states similar to permanent sleep or unconsciousness). If you get stabbed, your body magically recovers as if nothing happened. Although this element might add a bit of fun to the story from our external perspective, keep in mind that the cursed version of you in the story doesn't feel curious about anything, nor does it have fun thinking about the various things you could do as an immortal being.
No one else is subject to the same curse. If you see someone having fun and laughing, the sentence "This person is feeling good right now" makes sense to you: although you can't imagine nor recall what feeling good feels like, your understanding of the world around you has somehow remained intact. (Note: I am not saying that this is what would actually happen in a human being who actually lost the capacity for perceiving valence. It's a thought experiment!)
Finally, let's also say that going back to your previous life is not an option. In this story, you can't learn anything about the cause of the curse or how to reverse it. To recap:
- You can't feel anything
- You can't die
- You can't go back to your previous state
- The curse only affects you. Others' experiences are normal.
In this situation, what do you do?
In philosophy, there is some discourse around reasons for actions, normative reasons, motivating reasons, blah blah blah. Every philosopher has their own theory and uses words differently, so instead of citing centuries of philosophical debates, I'll be maximally biased and use one framework that seems sensible to me. In Ethical Intuitionism, contemporary philosopher Michael Huemer distinguishes "four kinds of motivations we are subject to":
- Appetites: examples are hunger, thirst, lust (simple, instinctive desires)
- Emotions: anger, fear, love (emotional desires; they seem to involve a more sophisticated kind of cognition than appetites)
- Prudence: motivation to pursue or avoid something because it furthers or sets back one's own interests, like maintaining good health
- Impartial reasons: motivation to act due to what one recognises as good, fair, honest, et cetera
You can find more details in section 7.3 of the book. We can interpret the above thought experiment as asking: in the absence of appetites and emotions - call these two "desires", if you wish - what would you do? Without desires and without any kind of worry about your own death, does it still make sense to talk about self-interest? What would you do without desires and without self-interest?
My guess is that, in the situation described in Cursed, at least some, if not many, would decide to do things for others. The underlying intuition seems to be that, without one's own emotional states and interests, one would prioritise others' emotional states and interests, simply because nothing else seems worth doing in that situation.
In other words, although one might need emotional states to first develop an accu...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Free agents, published by Michele Campolo on December 27, 2023 on The AI Alignment Forum. Posted also on the EA Forum.
Shameless attempt at getting your attention: If you've heard of AI alignment before, this might change your perspective on it. If you come from the field of machine ethics or philosophy, this is about how to create an independent moral agent.
Introduction
The problem of creating an AI that understands human values is often split into two parts: first, expressing human values in a machine-digestible format, or making the AI infer them from human data and behaviour; and second, ensuring the AI correctly interprets and follows these values. In this post I propose a different approach, closer to how human beings form their moral beliefs. I present a design of an agent that resembles an independent thinker instead of an obedient servant, and argue that this approach is a viable, possibly better, alternative to the aforementioned split. I've structured the post in a main body, asserting the key points while trying to remain concise, and an appendix, which first expands sections of the main body and then discusses some related work. Although it ended up in the appendix, I think the extended Motivation section is well worth reading if you find the main body interesting. Without further ado, some more ado first.
A brief note on style and target audience
This post contains a tiny amount of mathematical formalism, which should improve readability for maths-oriented people. Here, the purpose of the formalism is to reduce some of the ambiguities that normally arise with the use of natural language, not to prove fancy theorems.
As a result, the post should be readable by pretty much anyone who has some background knowledge in AI, machine ethics, or AI alignment - from software engineers to philosophers and AI enthusiasts (or doomers). If you are not a maths person, you won't lose much by skipping the maths here and there: I tried to write sentences in such a way that they keep their structure and remain sensible even if all the mathematical symbols are removed from the document. However, this doesn't mean that the content is easy to digest; at some points you might have to stay focused and keep various things in mind at the same time in order to follow.
Motivation
The main purpose of this research is to enable the engineering of an agent which understands good and bad and whose actions are guided by its understanding of good and bad. I've already given some reasons elsewhere why I think this research goal is worth pursuing. The appendix, under Motivation, contains more information on this topic and on moral agents. Here I point out that agents which just optimise a metric given by the designer (be it reward, loss, or a utility function) are not fit for the research goal.
First, any agent that limits itself to executing instructions given by someone else can hardly be said to have an understanding of good and bad. Second, even if the given instructions were in the form of rules that the designer recognised as moral - such as "Do not harm any human" - and the agent was able to follow them perfectly, then the agent's behaviour would still be grounded in the designer's understanding of good and bad, rather than in the agent's own understanding. This observation leads to an agent design different from the usual fixed-metric optimisation found in the AI literature (loss minimisation in neural networks is a typical example). I present the design in the next section.
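As an aside, the "fixed-metric optimisation" pattern contrasted above can be sketched in a few lines of Python. This is an illustrative sketch only, not code from the post: the function name and the toy loss are hypothetical, and plain gradient descent stands in for whatever optimiser a real system would use. The point is just that the agent's "values" are entirely the designer-supplied metric.

```python
# Minimal sketch of a fixed-metric optimiser (hypothetical names): the agent
# follows the gradient of a loss chosen by the designer and never questions it.
def fixed_metric_agent(grad, theta, lr=0.1, steps=100):
    """Repeatedly step against the gradient of a designer-given loss."""
    for _ in range(steps):
        theta = theta - lr * grad(theta)  # update parameters to reduce the loss
    return theta

# The designer picks the metric: minimise (theta - 3)^2.
grad = lambda t: 2.0 * (t - 3.0)  # gradient of the chosen loss

theta_final = fixed_metric_agent(grad, theta=0.0)
```

However sophisticated the optimiser, the "understanding of good and bad" in this pattern lives entirely in whoever wrote `grad`, which is the observation the post uses to motivate a different design.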
Note that I give neither executable code nor a fully specified blueprint; instead, I just describe the key properties of a possibly broad class of agents. Nonetheless, this post should contain enough information that AI engineers and research scientists reading it could gather at least some ideas on how to cre...
Terri Alessi-Miceli and Joe Campolo with HIA-LI LIVE on LI in the AM w/ Jay Oliver by JVC Broadcasting
Have you ever said or done something that you regret? Maybe in the heat of the moment, you've lashed out at a co-worker because of something relatively insignificant. The feelings of guilt that are generated by regret don't feel good. We long for ways to take back what was said or done. Author Tony Campolo once said that “the past has a way of draining energy from the present.” What's been done in the past, while regrettable, needs to be forgiven and forgotten. Ask for forgiveness, then move ahead in life. Put the past behind you. Campolo went on to say that “the One who is the ground of all being forgives and forgets.” Let your regrets go and find freedom in God.
In episode 122 of Jamstack Radio, Brian catches up with Anthony Campolo of Edgio. In this conversation they recap Anthony's career journey and explore many technical topics including edge computing, the current state of the JavaScript ecosystem, insights on leveraging Jamstack for infrastructure ambitions, and combatting imposter syndrome in DevRel.
In this episode, Anthony Campolo returns to PodRocket to talk about the Edge, what it is, and why you are hearing about it so much. Links https://twitter.com/ajcwebdev https://dev.to/ajcwebdev https://www.linkedin.com/in/ajcwebdev https://dev.to/opensauced https://docs.edg.io Tell us what you think of PodRocket We want to hear from you! We want to know what you love and hate about the podcast. What do you want to hear more about? Who do you want to see on the show? Our producers want to know, and if you talk with us, we'll send you a $25 gift card! If you're interested, schedule a call with us (https://podrocket.logrocket.com/contact-us) or you can email producer Kate Trahan at kate@logrocket.com (mailto:kate@logrocket.com) Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket combines frontend monitoring, product analytics, and session replay to help software teams deliver the ideal product experience. Try LogRocket for free today. (https://logrocket.com/signup/?pdr) Special Guest: Anthony Campolo.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On value in humans, other animals, and AI, published by Michele Campolo on January 31, 2023 on The AI Alignment Forum. This will be posted also on the EA Forum, and included in a sequence containing some previous posts and other posts I'll publish this year.
Introduction
Humans think critically about values and, to a certain extent, they also act according to their values. To the average human, the difference between increasing world happiness and increasing world suffering is huge and evident, while goals such as collecting coins and collecting stamps are roughly on the same level. It would be nice to make these differences as obvious to AI as they are to us. Even though exactly copying what happens in the human mind is probably not the best strategy to design an AI that understands ethics, having an idea of how value works in humans is a good starting point. So, how do humans reason about values and act accordingly?
Key points
Let's take a step back and start from sensation. Through the senses, information goes from the body and the external environment to our mind. After some brain processing - assuming we've had enough experiences of the appropriate kind - we perceive the world as made of objects. A rock is perceived as distinct from its surrounding environment because of its edges, its colour, its weight, the fact that my body can move through air but not through rocks, and so on. Objects in our mind can be combined with each other to form new objects. After seeing various rocks in different contexts, I can imagine a scene in which all these rocks are in front of me, even though I haven't actually seen that scene before. We are also able to apply our general intelligence - think of skills such as categorisation, abstraction, induction - to our mental content. Other intelligent animals do something similar.
They probably understand that, to satisfy thirst, water in a small pond is not that different from water flowing in a river. However, an important difference is that animals' mental content is more constrained than ours: we are less limited by what we perceive in the present moment, and we are also better at combining mental objects with each other. For example, to a dog, its owner works as an object in the dog's mind, while many of its owner's beliefs do not. Some animals can attribute simple intentions and perception, e.g. they understand what a similar animal can and cannot see, but it seems they have trouble attributing more complex beliefs. The ability to compose mental content in many different ways is what allows us to form abstract ideas such as mathematics, religion, and ethics, just to name a few.
Key point 1: In humans, mental content can be abstract.
Now notice that some mental content drives immediate action and planning. If I feel very hungry, I will do something about it, in most cases. This process from mental content to action doesn't have to be entirely conscious. I can instinctively reach for the glass of water in front of me as a response to an internal sensation, even without moving my attention to the sensation nor realising it is thirst.
Key point 2: Some mental content drives behaviour.
Not all mental content drives action and planning. The perception of an obstacle in front of me might change how I carry out my plans and actions, but it is unlikely to change what I plan and act for. Conversely, being very hungry directly influences what I'm going to do - not just how I do it - and can temporarily override other drives. It is in this latter sense that some mental content drives behaviour. In humans, the mental content that does drive behaviour can be roughly split in two categories. The first one groups what we often call evolutionary or innate drives, like hunger and thirst in the examples above, and works similarly i...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Criticism of the main framework in AI alignment, published by Michele Campolo on January 31, 2023 on The AI Alignment Forum. Originally posted on the EA Forum for the Criticism and Red Teaming Contest. Will be included in a sequence containing some previous posts and other posts I'll publish this year.
0. Summary
AI alignment research centred around the control problem works well for futures shaped by out-of-control misaligned AI, but not that well for futures shaped by bad actors using AI. Section 1 contains a step-by-step argument for that claim. In section 2 I propose an alternative which aims at moral progress instead of direct risk reduction, and I reply to some objections. I will give technical details about the alternative at some point in the future, in section 3. The appendix clarifies some minor ambiguities with terminology and links to other stuff.
1. Criticism of the main framework in AI alignment
1.1 What I mean by main framework
In short, it's the rationale behind most work in AI alignment: solving the control problem to reduce existential risk. I am not talking about AI governance, nor about AI safety that has nothing to do with existential risk (e.g. safety of self-driving cars). Here are the details, presented as a step-by-step argument.
1. At some point in the future, we'll be able to design AIs that are very good at achieving their goals. (Capabilities premise)
2. These AIs might have goals that are different from their designers' goals. (Misalignment premise)
3. Therefore, very bad futures caused by out-of-control misaligned AI are possible. (From previous two premises)
4. AI alignment research that is motivated by the previous argument often aims at making misalignment between AI and designer, or loss of control, less likely to happen or less severe. (Alignment research premise)
Common approaches are ensuring that the goals of the AI are well specified and aligned with what the designer originally wanted, or making the AI learn our values by observing our behaviour. In case you are new to these ideas, two accessible books on the subject are [1,2].
5. Therefore, AI alignment research improves the expected value of bad futures caused by out-of-control misaligned AI. (From 3 and 4)
By expected value I mean a measure of value that takes the likelihood of events into account, and follows some intuitive rules such as "5% chance of extinction is worse than 1% chance of extinction". It need not be an explicit calculation, especially because it might be difficult to compare possible futures quantitatively, e.g. extinction vs dystopia. I don't claim that all AI alignment research follows this framework; just that this is what motivates a decent amount (I would guess more than half) of work in AI alignment.
1.2 Response
I call this a response, and not a strict objection, because none of the points or inferences in the previous argument is rejected. Rather, some extra information is taken into account.
6. Bad actors can use powerful controllable AI to bring about very bad futures and/or lock in their values. (Bad actors premise)
For more information about value lock-in, see chapter 4 of What We Owe The Future [3].
7. Recall that alignment research motivated by the above points makes it easier to design AI that is controllable and whose goals are aligned with its designers' goals. As a consequence, bad actors might have an easier time using powerful controllable AI to achieve their goals. (From 4 and 6)
8. Thus, even though AI alignment research improves the expected value of futures caused by uncontrolled AI, it reduces the expected value of futures caused by bad human actors using controlled AI to achieve their ends. (From 5 and 7)
This conclusion will seem more, or less, relevant depending on the beliefs you have about its different components.
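The informal notion of expected value used above can be made concrete with a toy calculation. This is an illustrative sketch with made-up numbers, not anything from the original post: each scenario is a list of (probability, value) pairs, and scenarios are ranked by their probability-weighted sums, matching the intuitive rule that a 5% chance of extinction is worse than a 1% chance.

```python
# Toy expected-value comparison over possible futures (hypothetical numbers).
def expected_value(outcomes):
    """Probability-weighted sum of outcome values."""
    return sum(p * v for p, v in outcomes)

# Illustrative outcome values only: flourishing = 1.0, extinction = -1.0.
scenario_a = [(0.95, 1.0), (0.05, -1.0)]  # 5% chance of extinction
scenario_b = [(0.99, 1.0), (0.01, -1.0)]  # 1% chance of extinction

# The lower-risk scenario has the higher expected value.
assert expected_value(scenario_b) > expected_value(scenario_a)
```

As the post notes, no one actually runs this calculation over real futures; the sketch only pins down the ordering intuition behind "improves the expected value".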
An example: if you think t...
“Off The Record, On The Record" With Joe Campolo LIVE on LI in the AM w/ Jay Oliver! by JVC Broadcasting
Joe Campolo, “Off The Record, On The Record" LIVE on LI in the AM w/ Jay Oliver! by JVC Broadcasting
In episode 106 of JAMstack Radio, Brian speaks with Anthony Campolo, a Developer Advocate at QuickNode. This conversation explores blockchain infrastructure and tooling, including built-in governance mechanisms, NFTs, dApps, and cryptography.
Joe Campolo “Off The Record, On The Record" LIVE on LI in the AM w/ Jay Oliver! by JVC Broadcasting
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some alternative AI safety research projects, published by Michele Campolo on June 28, 2022 on The AI Alignment Forum. These are some "alternative" (in the sense of non-mainstream) research projects or questions related to AI safety that seem both relevant and underexplored. If instead you think they aren't, let me know in the comments, and feel free to use the ideas as you want if you find them interesting.
Access-to-the-internet scenario and related topics
A potentially catastrophic scenario that appears somewhat frequently in AI safety discourse involves a smarter-than-human AI which gets unrestricted access to the internet, and then bad things happen. For example, the AI manages to persuade or bribe one or more humans so that they perform actions which have a high impact on the world.
What are the worst (i.e. with the worst consequences) examples of similar scenarios that have already happened in the past? Can we learn anything useful from them?
Considering these scenarios, why is it the case that nothing worse has happened yet? Is it simply because human programmers with bad intentions are not smart enough? Or because the programs/AIs themselves are not agentic enough? I would like to read well-thought-out arguments on the topic.
Can we learn something from the history of digital viruses? What's the role played by cybersecurity? If we assume that slowing down progress in AI capabilities is not a viable option, can we make the above scenario less likely to happen by changing or improving cybersecurity?
Intuitively, it seems to me that the relation of AI safety with cybersecurity is similar to its relation with interpretability: even though the main objective of those other fields is not the reduction of global catastrophic risk, some of the ideas in those fields are likely to be relevant for AI safety as well.
Cognitive and moral enhancement in bioethics
A few days ago I came across a bioethics paper that immediately made me think of the relation between AI safety and AI capabilities. From the abstract: "Cognitive enhancement [...] could thus accelerate the advance of science, or its application, and so increase the risk of the development or misuse of weapons of mass destruction. We argue that this is a reason which speaks against the desirability of cognitive enhancement, and the consequent speedier growth of knowledge, if it is not accompanied by an extensive moral enhancement of humankind."
As far as I understand, some researchers in the field are pro cognitive enhancement, sometimes even instrumentally as a way to achieve moral enhancement itself. Others, like the authors above, are much more conservative: they see research into cognitive enhancement as potentially very dangerous, unless accompanied by research into moral enhancement.
Are we going to solve all our alignment problems by reading the literature on cognitive and moral enhancement in bioethics? Probably not. Would it be useful if at least some individuals in AI safety knew more than the surface-level info given here? Personally, I would like that.
Aiming at "acceptably safe" rather than "never catastrophic"
Let's say you own a self-driving car and you are deciding whether to drive or give control to the car. If all you care about is safety of you and others, what matters for your decision is the expected damage of you driving the car versus the expected damage of self-driving. This is also what we care about on a societal level. It would be great if self-driving cars were perfectly safe, but what is most important is that they are acceptably safe, in the sense that they are safer than the human counterpart they are supposed to replace.
Now, the analogy with AI safety is not straightforward because we don't know to what extent future AIs will replace humans, and also because it will be a matter of “coexistence" (...
“Off The Record, On The Record" w/ Joe Campolo LIVE on LI in the AM w/ Jay Oliver by JVC Broadcasting
Joe Campolo, CMM, “Off The Record, On The Record" LIVE On LI In The AM W/Jay Oliver! 5-20-22 by JVC Broadcasting
Joe Campolo, CMM, “Off The Record, On The Record" LIVE On LI In The AM W/Jay Oliver! 5-6-22 by JVC Broadcasting
Anthony Campolo and Noah Hein come on to talk about QuickNode, what it's like to be on the DevRel team, how QuickNode provides an infrastructure for blockchains, and more. Links https://www.quicknode.com/ https://twitter.com/ajcwebdev https://twitter.com/NHeinDev https://lu.ma/QuickNode https://www.quicknode.com/guides https://www.quicknode.com/docs https://twitter.com/dabit3 https://www.youtube.com/naderdabit https://podrocket.logrocket.com/nader-dabit https://www.youtube.com/watch?v=M576WGiDBdQ https://podrocket.logrocket.com/graphql https://podrocket.logrocket.com/web3-101 Review us Reviews are what help us grow and tailor our content to what you want to hear. Give us a review here (https://ratethispodcast.com/podrocket). Contact us https://podrocket.logrocket.com/contact-us @PodRocketpod (https://twitter.com/PodRocketpod) What does LogRocket do? LogRocket combines frontend monitoring, product analytics, and session replay to help software teams deliver the ideal product experience. Try LogRocket for free today. (https://logrocket.com/signup/?pdr) Special Guests: Anthony Campolo and Noah Hein.
"Off The Record, On The Record" With Joe Campolo LIVE On LI In The AM W/Jay Oliver! 4-20-22 by JVC Broadcasting
Part Two of this week's amazing road trip to D/FW found us in Cowtown at the historic T&P Tavern, situated in the 1930s-era Texas and Pacific Railway building (which is now condos, but those are real trains going by!). We had the pleasure of talking with Dr. Allison Campolo, chair of the Tarrant County Democratic Party and founder of Tarrant Together (not to mention an infectious disease doctor and heavyweight Kung Fu fighter!), and Congressman Marc Veasey, currently representing Texas House District 33 - two people who know the ins and outs of the Democratic fight in Fort Worth and Tarrant County better than anyone. Join us for a deep dive into a part of Texas regarded by many as a bellwether for the political future of the Lone Star State.
Managing Partner of CMM Joe Campolo Live On LI In The AM W/Jay Oliver! 4 - 7-22 by JVC Broadcasting
Joe Campolo from "Off the Record, On the Record" LIVE on LI in the AM w/Jay Oliver! 3-23-22 by JVC Broadcasting
On the show today, Alex and I talked about how, as a 22-year-old with a hectic work life of nursing school and night shifts, she was able to teach herself and build up to massive DIY projects in her home. Alex touches on her first-ever DIY project, where she had zero tools at the time and completely failed, but she turned her mistakes into lessons learned and has come out the other side stronger and better. Any excuse NOT to start, Alex has it. As a young homeowner with zero experience and a crazy work life, she refused to quit and is a true inspiration to any DIYer that's looking to start or take on bigger projects. Give Alex a follow @diybydna!
Support the show (https://www.buymeacoffee.com/realifereno)