Join hosts Sara Sloman and Sam Clarke on this week's episode of The EV Café Takeaway as they welcome Amy Carter, EV Operations Manager at DAF Trucks. Amy shares her inspiring journey from mechanic to leading EV initiatives, while also opening up about her personal experience with transition and the resilience it required. Dive into discussions on key projects like the Z HIT program, advancements in electric heavy-duty vehicles, and overcoming challenges in a male-dominated industry. Amy highlights the importance of support networks and diversity, offering valuable insights into building a sustainable, zero-emission future with authenticity and strength. Amy Carter https://www.linkedin.com/in/amy-carter-bb0215143/ DAF Trucks https://www.daf.com/en/
About the episode: EV industry pioneer Sam Clarke, a record-holding entrepreneur with over two decades in electric mobility, shares his journey from importing electric motorbikes in the early 2000s to founding a zero-emissions logistics company and advising on national EV infrastructure. We discuss debunking myths around electric vehicles and explore how the concept of 'grazing' can reset our expectations of electric vehicle charging.

Guest Name: Sam Clarke. Sam Clarke is a life-long entrepreneur, industry advisor, EV owner/driver for over 20 years and a multiple Guinness World Record holder for EV driving. His EV journey started back in 2002 with electric motorbikes before founding a zero-emission logistics firm (Gnewt), which he sold to John Menzies Plc in 2017. He now works on national charging infrastructure needs for GRIDSERVE, in particular leading on the £100M+ Government-funded (ZEHID) eHGV Electric Freightway project. He is also a founding member of The EV Café webinar and news channel. In 2015, Sam was a Great British Entrepreneur's Award winner. By 2022 he was made a GreenFleet EV Champion for services to the industry, and in 2024 he was voted #1 in the Motor Transport Power Players list and #14 in the greenfleet.net top 100 most influential list. He also holds two Guinness World Records for the longest distances ever driven in an electric car and van on a single charge (569 and 311 miles respectively). In 2009 he founded the all-electric last-mile logistics business Gnewt, which subsequently won multiple awards nationally and globally and was acquired by John Menzies Plc in 2017. During his tenure, he created the UK's largest fully electric commercial fleet, the largest privately-owned smart charging infrastructure and the UK's most advanced private V2G (Vehicle to Grid) network. With Sam at the helm, the business delivered over 10 million parcels in its first 10 years of trading, mainly in Central London, courtesy of a trail-blazing fully zero-emission fleet, the first of its kind anywhere in the UK. Most recently, Sam has extended his role from Board advisor at GRIDSERVE to become their Chief Vehicle Officer. His role is to drive forward mass uptake of electric vehicles through the creation of a net-zero EV leasing division and commercial charging infrastructure build-outs, and to support the nationwide roll-out of high-powered Electric Forecourts and the upgrade of the GRIDSERVE Electric Highway network that resides on 85% of the service stations nationally. In 2023 he led the successful bid for a £100M+ Innovate UK Government fund (ZEHID) whose purpose is to roll out eHGV charging infrastructure nationwide, the second largest award ever issued by Innovate UK to private enterprise. Sam is a high-profile member of the EV community, a regular public speaker for the industry and has previously been invited to speak at the Transport Select Committee in Westminster and at the European Commission in Brussels. He is also a founding member and Company Director/Shareholder of The EV Café Ltd, a popular webinar series discussing all aspects of EV adoption in regular weekly and monthly sessions. The EV Café has a number of high-profile sectoral sponsors including companies such as, but not limited to, The AA, Europcar, Bridgestone and Webfleet.

Links: LinkedIn: Sam Clarke www.gridserve.com

I hope you enjoy the show and if you have any comments or suggestions, please write to me at: toby@wickedproblems.fm.
About Adaptavis Adaptavis is a Business Performance Management and Transformation consultancy aimed at forward-thinking leaders, based in London UK. The company specialises in helping organisations to enhance operational efficiency, drive business growth, and navigate complex transformations. From strategy to execution, they focus on providing insights and practical solutions to improve the overall performance of businesses, ensuring they can adapt to changing market conditions and achieve sustainable success. Toby Corballis is a Partner at Adaptavis. You can find out more about their work by visiting: www.adaptavis.com Enjoy, Toby Corballis
Let's Talk Gardening 2 November 2024 with Andrea Whitely, Jude Scott and Sam Clarke
Join Sara Sloman as she sits down with Guinness World Record holders Sam Clarke, Kevin Booker, and Richard Parker. Discover how Sam and Kevin achieved not one but two EV world records: the longest journey by an electric vehicle on a single charge and the greatest distance traveled by an electric van. Learn about the meticulous planning, efficient driving techniques, and cutting-edge telematics from Webfleet that made this possible. The team shares their insights on maximising EV range, overcoming challenges, and what these achievements mean for the future of electric mobility. Whether you're an EV enthusiast or just curious, this episode showcases the art of the possible in electric transportation. Webfleet https://www.webfleet.com/ Kevin Booker https://www.linkedin.com/in/kevin-booker-442b53b1/ Sam Clarke https://www.linkedin.com/in/samclarke946/ Richard Parker https://www.linkedin.com/in/richardjparker1/
We were lucky enough to be joined by Mike Cutts of Iveco, Stewart Murphy of Royal Mail and Paul McCormack of Decarbon Logistics to discuss the growing area of zero emission deliveries and last mile deliveries in particular. From Postman Pat to the Iveco eDaily, there is something here for everyone! The chat was steered by Jonny Berry ably supported by Paul Kirby, Sam Clarke, and Sara Sloman. Watch the video here: https://www.evcafe.org/videos/last-mile-ebikes-cargo-bikes-and-small-evs Mike Cutts https://www.linkedin.com/in/mike-cutts/ Stewart Murphy https://www.linkedin.com/in/stuart-murphy-4400a547/ Paul McCormack https://www.linkedin.com/in/paul-mccormack-a3818918/
Today on the podcast the lads chat to Sam Clarke. He is an experienced trainer in active armed offender response and risk management training, offering tailored programs designed for high-traffic areas, and the owner of ResponseRise. Getting into the Armed Offenders Squad. 8.40 - First callout. 15.50 - Auckland city shotgun incident. 29.40 - Re-integration. 41.30 - New business: ResponseRise. 46.40 - How to respond to an armed offender. Support Sam's business here: https://www.responserise.com/ Give us a follow if you haven't already ~ Jay and Dunc. Want to get in touch? Hit us up here: https://linktr.ee/notforradio
Sam Clarke with the visuals. Join the Patreon: https://www.patreon.com/jdfmccann Buy the books: https://www.jdfmccann.com/books
SAM CLARKE. On the morning of July 20th 2023, New Zealand was shocked by a shooting at Commercial Bay in Auckland's CBD. A man on electronically monitored bail took a gun to the construction site where he was working. He killed two colleagues with a shotgun, wounded seven other people, including a police officer, and then killed himself. Sam Clarke, a husband and father, was shot that day. But he is not included in that list of victims. Sam was working his dream job as a member of the NZ Police Armed Offenders Squad when he was hit in the helmet by a bullet from the gunman. It was a job he loved, but a job he had to turn his back on because it nearly cost him his life. This is part of his story. In vivid detail he talks about: the morning of July 20 and how it played out from the perspective of an on-duty cop; dealing with a concussion and struggling to reassure his loved ones that he was ok in the immediate aftermath; the comprehensive support provided by the police force, including psychological and medical help; the emotional and physical toll the incident took on him; and the transformative power of therapy and vulnerability. He also shares some of his personal experiences of overcoming anxiety, dealing with flashbacks and much more. It's the first time Sam has done anything like this, and I can't thank him enough for his courage in being so open and vulnerable in sharing his experience. Follow Sam's new business, Response Rise: he is using the skills he learned in the Police to teach others what to do in armed offender incidents, confrontational situations, retail crime and more. https://www.responserise.com/ IG: response_rise
Out now for Patreon members: https://www.patreon.com/jdfmccann Watch the director's cut of GOD SAVE THE KING by Sam Clarke, the new James Donald Forbes McCann Comedy Special/Tour Documentary/Art-house Film. Sam Clarke's website: https://samclarkestudio.com/
Sam Clarke is a Brooklyn-based artist, distinguished by his uncompromising style which he translates across various disciplines. Known for his visual work, Clarke has also developed a voice of his own as an organiser and DJ, fusing his affection for noise, industrial, and dark ambient, with rhythmic, tribal, unrelenting techno. In 2022 he co-founded the ephemeral, yet influential Dust Radio, which revived a sorely missed artist-run edge to New York's music scene. Coinciding with Dust was his beloved monthly rave series, Nausea, which he co-curated alongside DJ Valentimes. Linktree | Soundcloud | Instagram. We also requested Sam to share with us some of his favorite things. Catch them all in our newsletter: https://putf.substack.com/ The PUTF show is an interview series, dedicated to showcasing inspiring creatives from the PUTF community and beyond. Guests are invited to share their unique career journeys, stories, and visions. The PUTF show is produced by WAVDWGS, a video production company based in NYC. https://wavdwgs.com/ Pick Up The Flow is an online resource based in NYC striving to democratize access to opportunities. Opportunities are shared daily on this page and website, and weekly via our newsletter. More on https://putf.substack.com/ Listen to this episode on audio platforms: Spotify: https://tinyurl.com/spotify-putf Apple: https://tinyurl.com/putf-applepodcast
Study Guide Bava Metzia 15 Today's daf is sponsored by Barbara Goldschlag in honor of the engagement of Aliza Goldschlag and Sam Clarke. If one sold a field that he/she stole, when the owner takes back the land and the buyer returns to the seller to retrieve the money from the sale, Shmuel holds that the seller does not need to reimburse the buyer for improvements to the field. The second difficulty raised against Shmuel is resolved in three possible ways. A third difficulty is raised as Shmuel himself said that the buyer receives a guarantee of the enhancements. To resolve this, Rav Yosef suggests a possible way that the buyer of stolen property can demand the value of the enhancements from the seller after the property is taken away. There are two different versions of Rav Yosef's answer. In the context of this discussion, they mentioned a different opinion of Shmuel that a creditor who seizes liened property for a loan can take the enhancements as well. Rava proves this from the language of a sale document which includes a guarantee for the enhancements. Why would there be a guarantee for enhancements for a sale and not for a gift? If one buys property knowing it is stolen and the owner takes back the land, Rav and Shmuel debate whether or not the buyer can get his/her money back from the seller. The basis of their argument is discussed and compared to another case where they also debate the same issue. Why is there a need to show they disagreed in both cases?
Nothing in the world can take the place of persistence. Talent will not: Nothing is more common than unsuccessful people with talent. Genius will not: Unrewarded genius is almost a proverb. Education alone will not: The world is full of educated derelicts. PERSISTENCE AND DETERMINATION ALONE ARE OMNIPOTENT. In this episode we dive deep into the power of persistence and determination and finding opportunity in adversity. Sam Clarke shares his journey to IBJJF Worlds. Sam sustained an injury prior to competing in Worlds and we also dive into the power of process in self-care to support healing and growth. There are a ton of gems in this episode on mindset and motivation, check it out!!!
Sam Clarke, MD, MAS, and Jon Ilgen, MD, PhD, join host Toni Gallo to discuss the importance of teaching adaptive expertise to prepare learners for the types of complex cases they will encounter in clinical practice. This conversation also covers what adaptive expertise is, how simulation can be used to foster this skill in learners, and the complementary relationship between performance-oriented cases and adaptive cases in health professions education. A transcript of this episode is available at academicmedicineblog.org.
The EV Café's very own Sam Clarke (and of course, Chief Vehicle Officer at GRIDSERVE) shares how his competitive edge helped propel him into business at a young age. The super successful entrepreneur reflects on his achievements, the cost paid to get there, as well as his personal journey to prioritise the sustainability agenda and the legacy he hopes to leave. https://www.linkedin.com/in/samclarke946/ https://www.gridserve.com/
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A summary of current work in AI governance, published by constructive on June 17, 2023 on LessWrong. A summary of current work in AI governance. If you'd like to learn more about AI governance, apply to the AI Safety Fundamentals: Governance Track, a 12-week, part-time fellowship, before June 25. Context: For the past nine months, I spent ~50% of my time upskilling in AI alignment and governance alongside my role as a research assistant in compute governance. While I discovered great writing characterizing AI governance on a high level, few texts covered which work is currently ongoing. To improve my understanding of the current landscape, I began compiling different lines of work and made a presentation. People liked my presentation and suggested I could publish this as a blog post. Disclaimers: I only started working in the field ~9 months ago. I haven't run this by any of the organizations I am mentioning; my impression of their work is likely different from their intent behind it. I'm biased toward the work by GovAI as I engage with that most. My list is far from comprehensive. What is AI governance? Note that I am primarily discussing AI governance in the context of preventing existential risks. Matthijs Maas defines AI long-term governance as “The study and shaping of local and global governance systems—including norms, policies, laws, processes, politics, and institutions—that affect the research, development, deployment, and use of existing and future AI systems in ways that positively shape societal outcomes into the long-term future.” Considering this, I want to point out: AI governance is not just government policy, but involves a large range of actors. (In fact, the most important decisions in AI governance are currently being made at major AI labs rather than at governments.) The field is broad. Rather than only preventing misalignment, AI governance is concerned with a variety of ways in which future AI systems could impact the long-term prospects of humanity. Since "long-term" somewhat implies that those decisions are far away, another term used to describe the field is “governance of advanced AI systems.” Threat Models: Researchers and policymakers in AI governance are concerned with a range of threat models from the development of advanced AI systems. For an overview, I highly recommend Allan Dafoe's research agenda and Sam Clarke's "Classifying sources of AI x-risk". To illustrate this point, I will briefly describe some of the main threat models discussed in AI governance. Feel free to skip right to the main part. Takeover by an uncontrollable, agentic AI system: This is the most prominent threat model and the focus of most AI safety research. It focuses on the possibility that future AI systems may exceed humans in critical capabilities such as deception and strategic planning. If such models develop adversarial goals, they could attempt and succeed at permanently disempowering humanity. 
Prominent examples of where this threat model has been articulated: Is power-seeking AI an existential risk? (Joe Carlsmith, 2022); AGI Ruin: A list of lethalities (Eliezer Yudkowsky, 2022) (in a very strong form, see also this in-depth response from Paul Christiano); The alignment problem from a deep learning perspective (Ngo et al., 2022). Loss of control through automation: Even if AI systems remain predominantly non-agentic, the increasing automation of societal and economic decision-making, driven by market incentives and corporate control, could pose the risk of humanity gradually losing control - e.g., if the optimized measures are only coarse proxies of what humans value and the complexity of emerging systems is incomprehensible to human decision-makers. This threat model is somewhat harder to convey but has been articulated well in the following texts: Will Humanit...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Aptitudes for AI governance work, published by Sam Clarke on June 14, 2023 on The Effective Altruism Forum. I outline 8 “aptitudes” for AI governance work. For each, I give examples of existing work that draws on the aptitude, and a more detailed breakdown of the skills I think are useful for excelling at the aptitude. How this might be helpful: for orienting to the kinds of work you might be best suited to; for thinking through your skill gaps for those kinds of work; and for offering an abstraction which might help those thinking about field-building/talent pipeline strategy. Epistemic status: I've spent ~3 years doing full-time AI governance work. Of that, I spent ~6 months FTE working on questions related to the AI governance talent pipeline, with GovAI. My work has mostly been fairly foundational research—so my views about aptitudes for research-y work (i.e. the first four aptitudes in this post) are more confident than for more applied or practical work (i.e. the latter three aptitudes in this post). I've spent ~5 hours talking with people hiring in AI governance about the talent needs they have. See this post for a write-up of that work. I've spent many more hours talking with AI governance researchers about their work (not focused specifically on talent needs). This post should be read as just one framework that might help you orient to AI governance work, rather than as making strong claims about which skills are most useful. Some AI governance-relevant aptitudes. Macrostrategy. What this is: investigating foundational topics that bear on more applied or concrete AI governance questions. Some key characteristics of this kind of work include: the questions are often not neatly scoped, such that generating or clarifying questions is part of the work; it involves balancing an unusually wide or open-ended range of considerations; a high level of abstraction is involved in reasoning; and the methodology is often not very clear, such that you can't just plug-and-play with some standard methodology from a particular field. Examples: descriptive work on estimating certain ‘key variables', e.g. reports on AI timelines and takeoff speeds; prescriptive work on what ‘intermediate goals' to aim for, e.g. analysis of the impact of US govt 2022 export controls; and conceptual work on developing frameworks, taxonomies, models, etc. that could be useful for structuring future analysis, e.g. The Vulnerable World Hypothesis. Useful skills: Generating, structuring, and weighing considerations. Being able to generate lots of different considerations for a given question and weigh up these considerations appropriately. For example, there are a lot of considerations that bear on the question “Would it reduce AI risk if the US government enacted antitrust regulation that prevents big tech companies from buying AI startups?” Some examples of considerations are: “How much could this accelerate or slow down AI progress?”, “How much could this increase or decrease Western AI leadership relative to China?”, “How much harder or easier would this make it for the US government to enact safety-focused regulations?”, “How would this affect the likelihood that a given company (e.g., Alphabet) plays a leading role in transformative AI development?”, etc. Each of these considerations is also linked to various other considerations. 
For instance, the consideration about the pace of AI progress links to the higher-level consideration “How does the pace of AI progress affect the level of AI risk?” and the lower-level consideration “How does market structure affect the pace of AI progress?” That lower-level consideration can then be linked to even lower levels, like “What are the respective roles of compute-scaling and new ideas in driving AI progress?” and “Would spreading researchers out across a larger number of startups ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some talent needs in AI governance, published by Sam Clarke on June 14, 2023 on The Effective Altruism Forum. I carried out a short project to better understand talent needs in AI governance. This post reports on my findings. How this post could be helpful: If you're trying to upskill in AI governance, this post could help you to understand the kinds of work and skills that are in demand. If you're a field-builder trying to find or upskill people to work in AI governance, this post could help you to understand what talent search/development efforts are especially valuable. Key takeaways I talked with a small number of people hiring in AI governance—in research organisations, policy think tanks and AI labs—about the kinds of people they're looking for. Those hiring needs can be summarised as follows: All the organisations/teams I talked to are interested in hiring people to do policy development work—i.e. developing concrete proposals about what key actors (e.g. governments, AI labs) should do to make AI go well. There's currently high demand for this kind of work, because windows of opportunity to implement useful policies have begun arising more frequently. There's also a limited supply of people who can do it, partly because it requires the ability to do both (a) high-level strategising about the net value of different policies and (b) tactical implementation analysis about what, concretely should be done by people at the government/AI lab/etc. to implement the policy. This is an unusual combination of skills, but one which is highly valuable to develop. AI governance research organisations (specifically, GovAI and Rethink Priorities) are also interested in hiring people to do other kinds of AI governance research—e.g. carrying out research projects in compute governance or corporate governance, or writing touchstone pieces explaining important ideas. AI governance teams at policy think tanks and AI labs are interested in hiring people whose work would substantially involve engaging with people to do stakeholder management, consensus building and other activities to help with the implementation of policy actions. Also, there is a lot of work requiring technical expertise (e.g. hardware engineering, information security, machine learning) that would be valuable for AI governance. Especially undersupplied are technical researchers who can answer questions that are not yet well-scoped (i.e. where the questions require additional clarifying before they are crisp and well-specified). Doing this well requires an aptitude for high-level strategic thinking, along with technical expertise. Method I conducted semi-structured interviews with a small number of people hiring in AI governance—in research organisations, policy think tanks and AI labs—about the kinds of people they're looking for. I also talked with two people about talent needs in technical work for AI governance. Findings Talent needs I report on the kinds of work that people I interviewed are looking to hire for, and outline some useful skills for doing this work. Note: when I say things like “organisation X is interested in hiring people to do such-and-such,” this doesn't imply that they are definitely soon going to be hiring for exactly these roles. 
It should instead be read as a claim about the broad kind of talent they are likely to be looking for when they next open a hiring round. AI governance research organisations Currently, GovAI is especially interested in research agendas that contribute to policy development work—i.e. developing concrete proposals about what key actors (e.g. governments, AI labs) should do to make AI go well. There's high demand for this kind of work and very few people who can do it. Researchers who can write touchstone pieces explaining, clarifying, and justifying important ideas...
I carried out a short project to better understand talent needs in AI governance. This post reports on my findings. How this post could be helpful: If you're trying to upskill in AI governance, this post could help you to understand the kinds of work and skills that are in demand. If you're a field-builder trying to find or upskill people to work in AI governance, this post could help you to understand what talent search/development efforts are especially valuable. Key takeaways: I talked with a small number of people hiring in AI governance—in research organisations, policy think tanks and AI labs—about the kinds of people they're looking for. Those hiring needs can be summarised as follows: All the organisations/teams I talked to are interested in hiring people to do policy development work—i.e. developing concrete proposals about what key actors (e.g. governments, AI labs) should do to make AI go well. There's currently high demand for this kind of work, because windows of opportunity to implement useful policies have begun arising more frequently. There's also a limited supply of people who can do it, partly because it requires the ability to do both (a) high-level strategising about the net value of different policies and (b) tactical implementation analysis about what, concretely should be done by people at the government/AI lab/etc. to implement the policy.[1] This is an unusual combination of skills, but one which is highly valuable to develop. AI governance research organisations (specifically, GovAI and Rethink Priorities) are also interested in hiring people to do other kinds of AI governance research—e.g. carrying out research projects in compute governance or corporate governance, or writing touchstone pieces explaining important ideas. AI governance teams at policy think tanks and AI labs are interested in hiring people whose work would substantially involve engaging with people to do stakeholder management, consensus building and other activities to help with the implementation of policy actions. Also, there is a lot of work requiring technical expertise (e.g. hardware engineering, information security, machine learning) that would be valuable for AI governance. Especially undersupplied are technical researchers who can answer questions that are not yet well-scoped (i.e. where the questions require additional clarifying before they are crisp and well-specified). Doing this well requires an aptitude for high-level strategic thinking, along with technical expertise. Method: I conducted semi-structured interviews with a small number of people hiring in AI governance—in research organisations, policy think tanks and AI labs—about the kinds of people they're looking for. I also talked with two people about talent needs in technical work for AI governance. Findings: Talent needs. I report on the kinds of work that people I interviewed are looking to hire for, and outline some useful skills for doing this work. Note: when I say things like “organisation X is interested in hiring people to do such-and-such,” this doesn't imply that they are definitely soon going to be hiring for exactly these roles. It should instead be read as a claim about the broad kind of talent they are likely to be looking for when they next open a hiring round. AI governance research organisations: GovAI is particularly [...] --- First published: June 13th, 2023. Source: https://forum.effectivealtruism.org/posts/gsPmsdXWFmkwezc5L/some-talent-needs-in-ai-governance --- Narrated by TYPE III AUDIO. Share feedback on this narration.
In today's episode I chat with recent Accelerator program graduate, Sam Clarke. Sam Clarke stepped into a role with his wife in a church where they'd have to tackle some technical issues, grow the team significantly, and change the direction of the worship culture entirely, but I'll let him tell you more about that in the episode, so stay tuned. Apply to Join Worship Ministry School: https://churchfront.me/apply Free Worship and Production Toolkit: https://churchfront.me/toolkit Shop Our Online Courses: https://churchfront.me/courses Join us at the Churchfront Live Conference: https://churchfront.me/conference Beginner Church Sound Course: https://churchfront.me/church-sound Follow Churchfront on Instagram or TikTok: @churchfront Follow on Twitter: @realchurchfront Gear we use to make videos at Churchfront: https://kit.co/churchfront/youtube-setup • • • • •
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Deference on AI timelines: survey results, published by Sam Clarke on March 30, 2023 on The Effective Altruism Forum. Crossposted to LessWrong. In October 2022, 91 EA Forum/LessWrong users answered the AI timelines deference survey. This post summarises the results. Context: The survey was advertised in this forum post, and anyone could respond. Respondents were asked to whom they defer most, second-most and third-most, on AI timelines. You can see the survey here. Results: This spreadsheet has the raw anonymised survey results. Here are some plots which try to summarise them. Simply tallying up the number of times that each person is deferred to: The plot only features people who were deferred to by at least two respondents. Some basic observations: Overall, respondents defer most frequently to themselves—i.e. their “inside view” or independent impression—and Ajeya Cotra. These two responses were each at least twice as frequent as any other response. Then there's a kind of “middle cluster”—featuring Daniel Kokotajlo, Paul Christiano, Eliezer Yudkowsky and Holden Karnofsky—where, again, each of these responses were ~at least twice as frequent as any other response. Then comes everyone else. There's probably something more fine-grained to be said here, but it doesn't seem crucial to understanding the overall picture. What happens if you redo the plot with a different metric? How sensitive are the results to that? One thing we tried was computing a “weighted” score for each person, by giving them: 3 points for each respondent who defers to them the most; 2 points for each respondent who defers to them second-most; and 1 point for each respondent who defers to them third-most (a short illustrative sketch of this tally appears after this entry). If you redo the plot with that score, you get this plot. The ordering changes a bit, but I don't think it really changes the high-level picture. In particular, the basic observations in the previous section still hold. We think the weighted score (described in this section) and unweighted score (described in the previous section) are the two most natural metrics, so we didn't try out any others. Don't some people have highly correlated views? What happens if you cluster those together? Yeah, we do think some people have highly correlated views, in the sense that their views depend on similar assumptions or arguments. We tried plotting the results using the following basic clusters: Open Philanthropy cluster = {Ajeya Cotra, Holden Karnofsky, Paul Christiano, Bioanchors}; MIRI cluster = {MIRI, Eliezer Yudkowsky}; Daniel Kokotajlo gets his own cluster; Inside view = deferring to yourself, i.e. your independent impression; Everyone else = all responses not in one of the above categories. Here's what you get if you simply tally up the number of times each cluster is deferred to: This plot gives a breakdown of two of the clusters (there's no additional information that isn't contained in the above two plots, it just gives a different view). This is just one way of clustering the responses, which seemed reasonable to us. There are other clusters you could make. Limitations of the survey: Selection effects. This probably isn't a representative sample of forum users, let alone of people who engage in discourse about AI timelines, or make decisions influenced by AI timelines. The survey didn't elicit much detail about the weight that respondents gave to different views. 
We simply asked who respondents deferred most, second-most and third-most to. This misses a lot of information. The boundary between [deferring] and [having an independent impression] is vague. Consider: how much effort do you need to spend examining some assumption/argument for yourself, before considering it an independent impression, rather than deference? This is a limitation of the survey, because different respondents may have been using different b...
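A minimal sketch (not from the original post) of the weighted deference tally described in the entry above, assuming responses are recorded as first/second/third-choice names; the function name and data layout are hypothetical:

```python
from collections import Counter

# Weights from the post: 3 points for a first-place mention, 2 for second, 1 for third.
WEIGHTS = {"first": 3, "second": 2, "third": 1}

def weighted_tally(responses):
    """responses: iterable of dicts like {"first": "Ajeya Cotra", "second": ..., "third": ...}.
    Returns a Counter mapping each name to its weighted score."""
    scores = Counter()
    for response in responses:
        for rank, name in response.items():
            if name:  # skip blank answers
                scores[name] += WEIGHTS.get(rank, 0)
    return scores

# Example with made-up data; prints names sorted by weighted score.
example = [
    {"first": "Inside view", "second": "Ajeya Cotra", "third": "Paul Christiano"},
    {"first": "Ajeya Cotra", "second": "Daniel Kokotajlo", "third": None},
]
print(weighted_tally(example).most_common())
```

The unweighted tally described earlier in the entry would be the same loop with every weight set to 1.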
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The ‘Old AI': Lessons for AI governance from early electricity regulation, published by Sam Clarke on December 19, 2022 on The Effective Altruism Forum. Note: neither author has a background in history, so please take this with a lot of salt. Sam thinks this is more likely than not to contain an important error. This was written in April 2022 and we're posting now as a draft, because the alternative is to never post. Like electricity, AI is argued to be a general purpose technology, which will significantly shape the global economic, military and political landscapes, attracting considerable media attention and public concern. Also like electricity, AI technology has the property that whilst some use cases are innocuous, others pose varying risks of harm. Due to these similarities, one might wonder if there are any lessons for AI governance today to be learned from the development of early electricity regulation and standards. We looked into this question for about two weeks, focusing on early electrification in the US from the late 1800s to the early 1900s, and on the UK's nationalisation of the electricity sector during the 20th century. This post identifies and examines lessons we found particularly interesting and relevant to AI governance. We imagine many of them will be fairly obvious to many readers, but we found that having concrete historical examples was helpful for understanding the lessons in more depth and grounding them in some empirical evidence. In brief, the lessons we found interesting and relevant are: accidents can galvanise regulation; people co-opt accidents for their own (policy) agendas (to various degrees of success); technology experts can have significant influence in dictating the direction of early standards and regulation; technology regulation is not inherently anti-innovation; the optimal amount and shape of regulation can change as a technology matures; the need for interoperability of electrical devices presented a window of opportunity for setting global standards; the development of safety regulation can be driven by unexpected stakeholders; and pervasive monitoring and hard constraints on individual consumption of technology is an existing and already used governance tool. There's a lot more that could be investigated here—if you're interested in this topic, and especially if you're a historian interested in electricity or the early development of technology standards and regulations, we think there are a number of threads of inquiry that could be worth picking up. Accidents can galvanise regulation: In the early days of electrification, there were several high-profile accidents resulting in deaths and economic damage: a lineman being electrocuted in a tangle of overhead electrical wires, above a busy lunchtime crowd in Manhattan, which included many influential New York aldermen (there were a number of other deaths for similar reasons, which occurred somewhat less publicly and so were less influential but still important); Pearl Street Station—the first commercial central power plant in the United States—burned down in 1890; and the 1888 blizzard in New York City tore down many power lines and led to a power blackout. 
Despite electric companies like Western Union and US Illuminating Company protesting regulation with court injunctions [Hargadon & Douglas 2021], these accidents spurred government and corporate regulation around electrical safety, including: various governments began to require high voltage electrical lines to be buried underground, one of the first (if not the first) governmental regulations on electricity to be introduced [Stross 2007]; Thomson-Houston Electric Company developed lightning arrestors for power lines and blowout switches to shut down systems in case of a power surge [Davis 2012]; Concerned about risks of installing AC e...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Discussing how to align Transformative AI if it's developed very soon, published by elifland on November 28, 2022 on LessWrong. Coauthored with equal contribution from Eli Lifland and Charlotte Siegmann. Thanks to Holden Karnofsky, Misha Yagudin, Adam Bales, Michael Aird, and Sam Clarke for feedback. All views expressed are our own. Introduction Background Holden Karnofsky recently published a series on AI strategy "nearcasting”: What would it look like if we developed Transformative AI (TAI) soon? How should Magma, the hypothetical company that develops TAI, align and deploy it? In this post we focus on How might we align transformative AI if it's developed very soon. The nearcast setup is: A major AI company (“Magma,” [following the setup and description of this post from Ajeya Cotra]) has good reason to think that it can develop transformative AI very soon (within a year), using what Ajeya calls “human feedback on diverse tasks” (HFDT) - and has some time (more than 6 months, but less than 2 years) to set up special measures to reduce the risks of misaligned AI before there's much chance of someone else deploying transformative AI. Summary For time-constrained readers, we think the most important sections are: Categorizing ways of limiting AIs Clarifying advanced collusion Disincentivizing deceptive behavior likely requires more than a small chance of catching it We discuss Magma's goals and strategy. The discussion should be useful for people unfamiliar or familiar with Karnofsky's post and can be read as a summary, clarification and expansion of Karnofsky's post. We describe a potential plan for Magma involving coordinating with other AI labs to deploy the most aligned AI in addition to stopping other misaligned AIs. more We define the desirability of Magma's AIs in terms of their ability to help Magma achieve its goals while avoiding negative outcomes. We discuss desirable properties such as differential capability and value-alignment, and describe initial hypotheses regarding how Magma should think about prioritizing between desirable properties. more We discuss Magma's strategies to increase desirability: how Magma can make AIs more desirable by changing properties of the AIs and the context in which they're applied, and how Magma should apply (often limited) AIs to make other AIs more desirable. more We clarify that the chance of collusion depends on whether AIs operate on a smaller scale, have very different architectures and orthogonal goals. We outline strategies to reduce collusion conditional on whether the AIs have indexical goals and follow causal decision theory or not. more We discuss how Magma can test the desirability of AIs via audits and threat assessments. Testing can provide evidence regarding the effectiveness of various alignment strategies and the overall level of misalignment. more We highlight potential disagreements with Karnofsky, including: We aren't convinced that a small chance of catching deceptive behavior by itself might make deception much less likely. We argue that in addition to having a small chance of catching deceptive behavior, the AI's supervisor needs to be capable enough to (a) distinguish between easy-to-catch and hard-to-catch deceptive behaviors and (b) attain a very low “false positive rate” of harshly penalizing non-deceptive behaviors. 
The AI may also need to be inner-aligned, i.e. intrinsically motivated by the reward. more We are more pessimistic than Karnofsky about the promise of adjudicating AI debates. We aren't convinced there's much theoretical reason to believe that AI debates robustly tend toward truth, and haven't been encouraged by empirical results. more We discuss the chance that Magma would succeed: We discuss the promise of "hacky" solutions to alignment. If applied alignment techniques that feel br...
If you require video work, I recommend Sam. He is my first mate. Go to his website here: https://www.samclarkestudio.com Partake of my #1 bestselling book of poems, Marlon Brando 9/11: https://www.amazon.com.au/dp/B0B92NWWDC Get the audiobook and join the Patreon, and more: https://www.patreon.com/jdfmccann
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: When reporting AI timelines, be clear who you're (not) deferring to, published by Sam Clarke on October 10, 2022 on The Effective Altruism Forum. It's fashionable these days to ask people about their AI timelines. And it's fashionable to have things to say in response. But relative to the number of people who report their timelines, I suspect that only a small fraction have put in the effort to form independent impressions about them. And, when asked about their timelines, I don't often hear people also reporting how they arrived at their views. If this is true, then I suspect everyone is updating on everyone else's views as if they were independent impressions, when in fact all our knowledge about timelines stems from the same (e.g.) ten people. This could have several worrying effects: People's timelines being overconfident (i.e. too resilient), because they think they have more evidence than they actually do. In particular, people in this community could come to believe that we have the timelines question pretty worked out (when we don't), because they keep hearing the same views being reported. Weird subgroups forming where people who talk to each other most converge to similar timelines, without good reason. People using faulty deference processes. Deference is hard and confusing, and if you don't discuss how you're deferring then you're not forced to check if your process makes sense. So: if (like most people) you don't have time to form your own views about AI timelines, then I suggest being clear who you're deferring to (and how), rather than just saying "median 2040" or something. And: if you're asking someone about their timelines, also ask how they arrived at their views. (Of course, the arguments here apply more widely too. Whilst I think AI timelines is a particularly worrying case, being unclear if/how you're deferring is a generally poor way of communicating. Discussions about p(doom) are another case where I suspect we could benefit from being clearer about deference.) Finally: if you have 30 seconds and want to help work out who people do in fact defer to, take the timelines deference survey! Thanks to Daniel Kokotajlo and Rose Hadshar for conversation/feedback, and to Daniel for suggesting the survey. This sort of thing may not always be bad. There should be people doing serious work based on various different assumptions about timelines. And in practice, since people tend to work in groups, this will often mean groups doing serious work based on various different assumptions about timelines. Here are some things you might say, which exemplify clear communication about deference: "I plugged my own numbers into the bio anchors framework (after 30 minutes of reflection) and my median is 2030. I haven't engaged with the report enough to know if I buy all of its assumptions, though" - "I just defer to Ajeya's timelines because she seems to have thought the most about it" - "I don't have independent views and I honestly don't know who to defer to" Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
The Lasses have the weekend to themselves as Sunderland take on Charlton Athletic at Eppleton on Sunday morning (11.30am kick off). We did a preview twitter space on Tuesday but didn't hit "record", so here is our chat with Charlton fan Sam Clarke as well as Ant & Fieldo's takes on the changes they want to see in Sunderland's starting line-up. Vote for Roker Report in the Football Content Awards before 9 October. You can listen for FREE on Acast, iTunes, YouTube and across your favourite podcast platforms - the podcast is brought to you in association with Sunderland Community Soup Kitchen and HerGameToo - get stuck in! Our new regular pod theme music is "Science" by bigfatbig - stream their music across all platforms - https://linktr.ee/bigfatbig Discover our articles on rokerreport.com, social media and network with other Lasses fans - go to linktr.ee/rrlasses #SAFC #HerGameToo
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Survey of the Potential Long-term Impacts of AI, published by Sam Clarke on July 18, 2022 on The Effective Altruism Forum. Aim: survey the potential long-term impacts of AI, striking a balance between comprehensiveness and parsimony. Where this fits with similar work: as far as we know, the best existing materials on this topic are this slide deck and sections 2 and 3 of this forum post. In some sense, this paper is aiming to be a more comprehensive version of these pieces. It also includes discussion of the long-term opportunities from AI, as well as the risks. Audience: people who want an overview of the various ways that AI could have long-term impacts. This was written for an audience that isn't necessarily familiar with longtermism, so some framing points will be obvious to certain readers. Work done collaboratively with Jess Whittlestone. Also available here as a PDF. Summary Based on surveying literature on the societal impacts of AI, we identify and discuss five areas in which AI could have long-term impacts: in science, cooperation, power, epistemics, and values. Considering both possible benefits and harms, we review the state of existing research in these areas and highlight priority questions for future research. Some takeaways: Advanced AI could be very good or very bad for humanity, and it is not yet determined how things will go. AGI is not necessary for AI to have long-term impacts. Many long-term impacts we consider could happen with "merely" comprehensive AI services, or plausibly also with non-comprehensive AI services (e.g. Sections 3.2 and 5.2). There are several different pathways through which AI could have long-term impacts, each of which could be sufficient by itself. These takeaways are not original, but we hope we have added some depth to the arguments and to this community's understanding of the long-term impacts of AI more broadly. 1 Introduction Artificial intelligence (AI) is already being applied in and impacting many important sectors in society, including healthcare [Jiang et al. 2017], finance [Daníelsson et al. 2021], and law enforcement [Richardson et al. 2019]. Some of these impacts have been positive—such as the ability to predict the risk of breast cancer from mammograms more accurately than human radiologists [McKinney et al. 2020]—whilst others have been extremely harmful—such as the use of facial recognition technology to surveil Uighur and other minority populations in China [Hogarth & Benaich 2019]. As investment into AI research continues, we are likely to see substantial progress in AI capabilities and their potential applications, precipitating even greater societal impacts. What is unclear is just how large and long-lasting the impacts of AI will be, and whether they will ultimately be positive or negative for humanity. In this paper we are particularly concerned with impacts of AI on the far future:[1] impacts that would be felt not only by our generation or the next, but by many future generations who could come after us. We will refer to such impacts as long-term impacts. Broadly speaking, we might expect AI to have long-term impacts because of its potential as a general purpose technology: one which will probably see unusually widespread use, tend to spawn complementary innovations, and have a large inherent potential for technical improvement. 
Historically, general purpose technologies—such as the steam engine and electricity—have tended to precipitate outsized societal impacts [Garfinkel 2022]. In this paper, we consider potential long-term impacts of AI which could: Make a global catastrophe more or less likely (i.e. a catastrophe that poses serious damage to human well-being on a global scale, for example by enabling the discovery of a pathogen that kills hundreds of millions of people).[2] Make premature human e...
Thorn-Clarke represents the union of two families and six generations of grape growing in Australia's Barossa Valley. The Thorn family were some of the earliest settlers in the region and have been growing grapes since the 1870s. Their first bottled vintage was a 1998 Shiraz. Today, the family has 600 acres of vineyards in the Barossa Valley. Sam Clarke, co-proprietor and son of co-founders Cheryl Thorn and David Clarke, discusses the estate's history and wines. US importer: Kysela Pere & Fils. The Connected Table Live is broadcast live Wednesdays at 2PM ET. The Connected Table Live Radio Show is broadcast on W4CY Radio (www.w4cy.com), part of Talk 4 Radio (www.talk4radio.com) on the Talk 4 Media Network (www.talk4media.com). The Connected Table Live Podcast is also available on Talk 4 Podcasting (www.talk4podcasting.com), iHeartRadio, Amazon Music, Pandora, Spotify, Audible, and over 100 other podcast outlets.
Kathmandu Coast to Coast: WRAP UP brought to you by CP MEDIA Bobby Dean – We didn't see this year's 5th seed on the start line in 2021, as he sat out the year due to injury. However, we saw Bobby Dean back on the start line for 2022 and his 6th KMDC2C. Bobby was one of the youngest within the Top 10 field and made his presence known, coming in 3rd behind Dougal Allan and race winner Braden Currie. Vicky Jones – Vicky Jones was the final person to get to the finish line in New Brighton on Saturday, beating all of the cut-offs and in doing so completing an epic Longest Day Kathmandu Coast to Coast. She had a point to prove after breaking her boat in the paddle a couple of years ago, which made getting to the finish line even sweeter this time. Ethan Halliwell – Completed this year's Mountain Run in 3:34:42, just under 20 mins ahead of Ben O'Carroll. Ethan breaks down his race for us. Elina Usher – Lining up for her 17th Coast to Coast, Elina is no stranger to the Coast to Coast experience - with 4 Women's Longest Day titles and now a 6th time placing 2nd. It was a tight race out front among the Elite women, with only minutes separating race winner Simone Maier, Elina and 3rd-placed Fiona Dowling. Braden Currie – Braden is this year's Coast to Coast Longest Day champion, adding a 4th C2C title to his name. After COVID spoiled the plans for the now cancelled IRONMAN NZ, Braden was spurred on by Coast to Coast owner Mike Davies to hit the Kathmandu Coast to Coast start line. So with a mere 36hrs of prep time, Braden dusted off his kayak and his multisport roots and headed for Greymouth! Braden began his professional racing career within C2C, racing 5 consecutive C2Cs, with his last being in 2016, when he placed second to Sam Clarke, hanging up his kayak to enter the world of Ironman. Hitting the finishing chute with a 20-minute lead on defending champion Dougal Allan on Saturday, it was an epic race to watch. CP MEDIA HOSTS Matt Sherwood Richard Greer – @ric.greer www.coasttocoast.co.nz www.kathmandu.co.nz www.teamcp.co.nz @teamcpnz https://www.facebook.com/teamcpnz richard@teamcp.co.nz
Sam Clarke is the son of David Clarke and Cheryl Thorn, whose families came together to create the well-known Thorn-Clarke wines in Barossa & Eden Valley. Jill talks to Sam about his experiences overseas (especially Spain!), the influence of those experiences on his wine-making, the differences between South Australia and other wine regions around the world, and his focus on sustainability. #thorn-clarke
G'day legends, the lads have well and truly hit the ground running for the new year. With more talent lined up than you could poke a stick at, tonight's episode is definitely no exception. We are joined by one of the biggest names in WA's road racing scene and local prodigy Sam Clarke. From beginning his life adventure on 2 wheels back in the day racing BMX locally, nationally and internationally, getting a taste of motocross, to then travelling the world racing road bikes, landing his dream job with Three Chillies and playing a big part in shaping MTB & Moto's future in WA, as well as sharing some bloody hilarious stories at the end. So it's time to crack a froth and enjoy this one fam, we already know you guys are gonna love this one as much as we did! Thanks so much to all of our sponsors: Empire Cycles - https://empirecycles.com.au Westeffex - https://westeffex.com The Underclass - https://www.shoptheunderclass.com Karradale Meats - https://m.facebook.com/KarradaleMeats Perth Husqvarna - GasGas - https://www.perthhusqvarna.com.au Concept Coatings Design Co. - https://www.conceptcoatingsdesignco.com Elite Automotive Care - https://www.facebook.com/eliteautomotivecare Pro Pleat - https://pro-pleat.com Roleystone Brewing Co. - https://www.roleystonebrewingco.com Smarter Outdoors - https://www.facebook.com/smarteroutdoors 50/Fifty Productions - https://www.facebook.com/50Fiftyproductions Mundijong Mechanical & Mobile - https://www.facebook.com/mundijongmechanical TD Modula Granny Flats - https://tdgrannyflats.com.au Maxxis - https://www.maxxismoto.com.au Motorex - https://www.motorexoil.com.au Make sure to check us out on our socials as well: Instagram - https://www.instagram.com/thebeersandbikesshow Facebook - https://www.facebook.com/thebeersandbikesshow YouTube - https://www.youtube.com/channel/UCqLrBN20co96VNFXBfR9Dow Cheers Legends!
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You are probably underestimating how good self-love can be, published by CharlieRS on the AI Alignment Forum. I am very grateful to the following people, in general, and for their helpful feedback on this post: Nick Cammarata, Kaj Sotala, Miranda Dixon-Luinenburg, Sam Clarke, Mrinank Sharma, Matej Vrzala, Vlad Firoiu, Ollie Bray, Alan Taylor, Max Heitmann, Rose Hadshar, and Michelle Hutchinson. This is a cross-post from LessWrong. I almost didn't post here, since this type of content is a little unfamiliar to the forum. But it saddens me to see my friends pour their hearts into the flourishing of humanity, and yet hurt so badly. I write later in the post: A lot of people go their whole lives making their self-worth conditional in order to act better: they take damage--dislike or judge themselves--whenever they act imperfectly or realise they are imperfect or don't achieve the things they want to. In a world as unfair and uncontrollable as this one, I think taking so much damage is often not that functional. Moreover, I claim that you can care deeply while feeling worthwhile and suffused with compassion and affection and joy. It is hard to do the most good when depressed, burned out, or feeling worthless. Even if this is not you, I think self-love might be worth aiming for--especially if you want to do something as difficult as saving the world. I was on a plane to Malta when I realised I had lost something precious. I was struggling to meditate. I knew there was some disposition that made meditation easier for me in the past, something to do with internal harmony and compassion and affection. Alas, these handles failed to impact me. On a whim, I decided to read and meditate on some of my notes. 3h later, I had recovered the precious thing. It was one of the most special experiences of my life. I felt massive relief, but I was also a little scared--I knew that this state would likely pass. I made a promise to myself to not forget what I felt like, then, and to live from that place more. This post is, in part, an attempt to honour that promise. I spent most of my holiday in Malta reading about and meditating on the precious thing, and I now feel like I'm in a place where I can share something useful. This post is about self-love. Until recently, I didn't know that self-love was something I could aim for; that it was something worth aiming for. My guess is that I thought of self-love as something vaguely Good, a bit boring, a bit of a chore, a bit projection-loaded (I'm lovable; I love me so you can love me too), and lumped together with self-care (e.g. taking a bath). Then I found Nick Cammarata on Twitter and was blown away by the experiences he was describing. Nick tweeted about self-love from Sep 2020 to May 2021, and then moved on to other things. His is the main body of work related to self-love that I'm aware of, and I don't want it to be lost to time. My main intention with this post is to summarise Nick's work and build on it with my experiences; I want to get the word out on self-love, so that you can figure out whether it's something you want to aim for. But I'm also going to talk a little about how to cultivate it and the potential risks to doing that. One caveat to get out of the way is that I'm a beginner--I've been doing this stuff for under a year, for way less than 1h/day. 
Another is that I expect that my positive experiences with self-love are strongly linked to me being moderately depressed before I started. What is self-love? Self-love is related to a lot of things and I'm not sure which are central. But I can point to some experiences that I have when I'm in high self-love states. While my baseline for well-being and self-love is significantly higher than it used to be, and I can mostly access self-love states when I want to, most of the time I am n...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Clarifying “What failure looks like”, published by Sam Clarke on the AI Alignment Forum. Thanks to Jess Whittlestone, Daniel Eth, Shahar Avin, Rose Hadshar, Eliana Lorch, Alexis Carlier, Flo Dorner, Kwan Yee Ng, Lewis Hammond, Phil Trammell and Jenny Xiao for valuable conversations, feedback and other support. I am especially grateful to Jess Whittlestone for long conversations and detailed feedback on drafts, and her guidance on which threads to pursue and how to frame this post. All errors are my own. Epistemic status: My Best Guess. Epistemic effort: ~70 hours of focused work (mostly during FHI's summer research fellowship), talked to ~10 people. Introduction “What failure looks like” is one of the most comprehensive pictures of what failure to solve the AI alignment problem looks like, in worlds without discontinuous progress in AI. I think it was an excellent and much-needed addition to our understanding of AI risk. Still, if many believe that this is a main source of AI risk, I think it should be fleshed out in more than just one blog post. The original story has two parts; I'm focusing on part 1 because I found it more confusing and nebulous than part 2. Firstly, I'll summarise part 1 (hereafter “WFLL1”) as I understand it: In the world today, it's easier to pursue easy-to-measure goals than hard-to-measure goals. Machine learning is differentially good at pursuing easy-to-measure goals (assuming that we don't have a satisfactory technical solution to the intent alignment problem[1]). We'll try to harness this by designing easy-to-measure proxies for what we care about, and deploy AI systems across society which optimize for these proxies (e.g. in law enforcement, legislation and the market). We'll give these AI systems more and more influence (e.g. eventually, the systems running law enforcement may actually be making all the decisions for us). Eventually, the proxies for which the AI systems are optimizing will come apart from the goals we truly care about, but by then humanity won't be able to take back influence, and we'll have permanently lost some of our ability to steer our trajectory. WFLL1 is quite thin on some important details: WFLL1 does not envisage AI systems directly causing human extinction. So, to constitute an existential risk in itself, the story must involve the lock-in of some suboptimal world.[2] However, the likelihood that the scenario described in part 1 gets locked-in (especially over very long time horizons) is not entirely clear in the original post. It's also not clear how bad this locked-in world would actually be. I'll focus on the first point: how likely is it that the scenario described in WFLL1 leads to the lock-in of some suboptimal world. I'll finish with some rough thoughts on the second point - how bad/severe that locked-in world might be - and by highlighting some remaining open questions. Likelihood of lock-in The scenario described in WFLL1 seems very concerning from a longtermist perspective if it leads to humanity getting stuck on some suboptimal path (I'll refer to this as “lock-in”). But the blog post itself isn't all that clear about why we should expect such lock-in -- i.e. why we won't be able to stop the trend of AI systems optimising for easy-to-measure things before it's too late -- a confusion which has been pointed out before.
In this section, I'll talk through some different mechanisms by which this lock-in can occur, discuss some historical precedents for these mechanisms occurring, and then discuss why we might expect the scenario described in WFLL1 to be more likely to lead to lock-in than for the precedents. The mechanisms for lock-in Summary: I describe five complementary mechanisms by which the scenario described in WFLL1 (i.e. AI systems across society optimizing for simple proxies at the expense...
Recorded by Robert Miles: http://robertskmiles.com More information about the newsletter here: https://rohinshah.com/alignment-newsletter/ YouTube Channel: https://www.youtube.com/channel/UCfGGFXwKpr-TJ5HfxEFaFCg HIGHLIGHTS The "most important century" series (Holden Karnofsky) (summarized by Rohin): In some sense, it is really weird for us to claim that there is a non-trivial chance that in the near future, we might build transformative AI and either (1) go extinct or (2) exceed a growth rate of (say) 100% per year. It feels like an extraordinary claim, and thus should require extraordinary evidence. One way of cashing this out: if the claim were true, this century would be the most important century, with the most opportunity for individuals to have an impact. Given the sheer number of centuries there are, this is an extraordinary claim; it should really have extraordinary evidence. This series argues that while the claim does seem extraordinary, all views seem extraordinary -- there isn't some default baseline view that is “ordinary” to which we should be assigning most of our probability. Specifically, consider three possibilities for the long-run future: 1. Radical: We will have a productivity explosion by 2100, which will enable us to become technologically mature. Think of a civilization that sends spacecraft throughout the galaxy, builds permanent settlements on other planets, harvests large fractions of the energy output from stars, etc. 2. Conservative: We get to a technologically mature civilization, but it takes hundreds or thousands of years. Let's say even 100,000 years to be ultra conservative. 3. Skeptical: We never become technologically mature, for some reason. Perhaps we run into fundamental technological limits, or we choose not to expand into the galaxy, or we're in a simulation, etc. It's pretty clear why the radical view is extraordinary. What about the other two? The conservative view implies that we are currently in the most important 100,000-year period. Given that life is billions of years old, and would presumably continue for billions of years to come once we reach a stable galaxy-wide civilization, that would make this the most important 100,000 year period out of tens of thousands of such periods. Thus the conservative view is also extraordinary, for the same reason that the radical view is extraordinary (albeit it is perhaps only half as extraordinary as the radical view). The skeptical view by itself does not seem obviously extraordinary. However, while you could assign 70% probability to the skeptical view, it seems unreasonable to assign 99% probability to such a view -- that suggests some very strong or confident claims about what prevents us from colonizing the galaxy, that we probably shouldn't have given our current knowledge. So, we need to have a non-trivial chunk of probability on the other views, which still opens us up to critique of having extraordinary claims. Okay, so we've established that we should at least be willing to say something as extreme as “there's a non-trivial chance we're in the most important 100,000-year period”. Can we tighten the argument, to talk about the most important century? In fact, we can, by looking at the economic growth rate. You are probably aware that the US economy grows around 2-3% per year (after adjusting for inflation), so a business-as-usual, non-crazy, default view might be to expect this to continue. You are probably also aware that exponential growth can grow very quickly. 
At the lower end of 2% per year, the economy would double every ~35 years. If this continued for 8200 years, we'd need to be sustaining multiple economies as big as today's entire world economy per atom in the galaxy (a quick numerical check of this compounding argument is included as a sketch after this newsletter entry). While this is not a priori impossible, it seems quite unlikely to happen. This suggests that we're in one of fewer than 82 centuries that will have growth rates at 2% or larger, making it far less “extraordinary” to claim that we're in the most important one, especially if you believe that growth rates are well correlated with change and ability to have impact. The actual radical view that the author places non-trivial probability on is one we've seen before in this newsletter: it is one in which there is automation of science and technology through advanced AI or whole brain emulations or other possibilities. This allows technology to substitute for human labor in the economy, which produces a positive feedback loop as the output of the economy is ploughed back into the economy, creating superexponential growth and a “productivity explosion”, where the growth rate increases far beyond 2%. The series summarizes and connects together many past analyses (AN #105, AN #154, AN #121, AN #118, AN #145), which I won't be summarizing here (since we've summarized these analyses previously). While this is a more specific and “extraordinary” claim than even the claim that we live in the most important century, it seems like it should not be seen as so extraordinary given the arguments above. This series also argues for a few other points important to longtermism, which I'll copy here: 1. The long-run future is radically unfamiliar. Enough advances in technology could lead to a long-lasting, galaxy-wide civilization that could be a radical utopia, dystopia, or anything in between. 2. The long-run future could come much faster than we think, due to a possible AI-driven productivity explosion. (I briefly mentioned this above, but the full series devotes much more space and many more arguments to this point.) 3. We, the people living in this century, have the chance to have a huge impact on huge numbers of people to come - if we can make sense of the situation enough to find helpful actions. But right now, we aren't ready for this. Read more: 80,000 Hours podcast on the topic Rohin's opinion: I especially liked this series for the argument that 2% economic growth very likely cannot last much longer, providing quite a strong argument for the importance of this century, without relying at all on controversial facts about AI. At least personally I was previously uneasy about how “grand” or “extraordinary” AGI claims tend to be, and whether I should be far more skeptical of them as a result. I feel significantly more comfortable with these claims after seeing this argument. Note though that it does not defuse all such uneasiness -- you can still look at how early we appear to be (given the billions of years of civilization that could remain in the future), and conclude that the simulation hypothesis is true, or that there is a Great Filter in our future that will drive us extinct with near-certainty. In such situations there would be no extraordinary impact to be had today by working on AI risk. TECHNICAL AI ALIGNMENT PROBLEMS Why AI alignment could be hard with modern deep learning (Ajeya Cotra) (summarized by Rohin): This post provides an ELI5-style introduction to AI alignment as a major challenge for deep learning.
It primarily frames alignment as a challenge in creating Saints (aligned AI systems), without getting Schemers (AI systems that are deceptively aligned (AN #58)) or Sycophants (AI systems that satisfy only the letter of the request, rather than its spirit, as in Another (outer) alignment failure story (AN #146)). Any short summary I write would ruin the ELI5 style, so I won't attempt it; I do recommend it strongly if you want an introduction to AI alignment. LEARNING HUMAN INTENT B-Pref: Benchmarking Preference-Based Reinforcement Learning (Kimin Lee et al) (summarized by Zach): Deep RL has become a powerful method to solve a variety of sequential decision tasks using a known reward function for training. However, in practice, rewards are hard to specify, making it hard to scale Deep RL for many applications. Preference-based RL provides an alternative by allowing a teacher to indicate preferences between a pair of behaviors. Because the teacher can interactively give feedback to an agent, preference-based RL has the potential to help address this limitation of Deep RL. Despite the advantages of preference-based RL, it has proven difficult to design useful benchmarks for the problem. This paper introduces a benchmark (B-Pref) that is useful for preference-based RL in various locomotion and robotic manipulation tasks. One difficulty with designing a useful benchmark is that teachers may have a variety of irrationalities. For example, teachers might be myopic or make mistakes. The B-Pref benchmark addresses this by emphasizing measuring performance under a variety of teacher irrationalities. They do this by providing various performance metrics to introduce irrationality into otherwise deterministic reward criteria. While previous approaches to preference-based RL work well when the teacher responses are consistent, experiments show they are not robust to feedback noise or teacher mistakes. Experiments also show that how queries are selected has a major impact on performance. With these results, the authors identify these two problems as areas for future work. Zach's opinion: While the authors do a good job advocating for the problem of preference-based RL, I'm less convinced their particular benchmark is a large step forward. In particular, it seems the main contribution is not a suite of tasks, but rather a collection of different ways to add irrationality to the teacher oracle. The main takeaway of this paper is that current algorithms don't seem to perform well when the teacher can make mistakes, but this is quite similar to having a misspecified reward function. Beyond that criticism, the experiments support the areas suggested for future work. ROBUSTNESS Redwood Research's current project (Buck Shlegeris) (summarized by Rohin): This post introduces Redwood Research's current alignment project: to ensure that a language model finetuned on fanfiction never describes someone getting injured, while maintaining the quality of the generations of that model. Their approach is to train a classifier that determines whether a given generation has a description of someone getting injured, and then to use that classifier as a reward function to train the policy to generate non-injurious completions. Their hope is to learn a general method for enforcing such constraints on models, such that they could then quickly train the model to, say, never mention anything about food.
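The preference-based RL setup described in the B-Pref summary above - a teacher who compares pairs of behaviours and sometimes answers incorrectly, with a reward model fitted to those comparisons - can be made concrete with a small toy example. The Python sketch below is illustrative only: it fits a Bradley-Terry-style reward model to synthetic, noisy pairwise preferences. It is not the B-Pref benchmark or Redwood Research's code, and all of the names and numbers in it are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy preference-based reward learning (Bradley-Terry model).
# Everything here is synthetic and purely illustrative.
N_FEATURES = 5
N_PAIRS = 2000
TEACHER_ERROR_RATE = 0.1        # the "irrational" teacher flips ~10% of answers

true_w = rng.normal(size=N_FEATURES)   # hidden "true" reward weights

# Pairs of behaviours, summarised as feature vectors, plus noisy teacher labels.
xa = rng.normal(size=(N_PAIRS, N_FEATURES))
xb = rng.normal(size=(N_PAIRS, N_FEATURES))
prefers_a = (xa @ true_w > xb @ true_w).astype(float)
flip = rng.random(N_PAIRS) < TEACHER_ERROR_RATE
prefers_a = np.where(flip, 1.0 - prefers_a, prefers_a)

# Fit reward weights by gradient ascent on the Bradley-Terry log-likelihood:
# P(a preferred over b) = sigmoid(w . (xa - xb))
w = np.zeros(N_FEATURES)
diff = xa - xb
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(diff @ w)))
    w += 0.5 * diff.T @ (prefers_a - p) / N_PAIRS

# Despite the noisy teacher, the learned weights should roughly line up with
# the true reward direction.
corr = np.corrcoef(w, true_w)[0, 1]
print(f"Correlation between learned and true reward weights: {corr:.2f}")
```

Even with the 10% label noise standing in for an "irrational" teacher, the learned weights typically correlate strongly with the true reward, which is the basic property that benchmarks like B-Pref stress-test under harder forms of teacher error.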
FORECASTING Distinguishing AI takeover scenarios (Sam Clarke et al) (summarized by Rohin): This post summarizes several AI takeover scenarios that have been proposed, and categorizes them according to three main variables. Speed refers to the question of whether there is a sudden jump in AI capabilities. Uni/multipolarity asks whether a single AI system takes over, or many. Alignment asks what goals the AI systems pursue, and if they are misaligned, further asks whether they are outer or inner misaligned. They also analyze other properties of the scenarios, such as how agentic, general and/or homogenous the AI systems are, and whether AI systems coordinate with each other or not. A followup post investigates social, economic, and technological characteristics of these scenarios. It also generates new scenarios by varying some of these factors. Since these posts are themselves summaries and comparisons of previously proposed scenarios that we've covered in this newsletter, I won't summarize them here, but I do recommend them for an overview of AI takeover scenarios. MISCELLANEOUS (ALIGNMENT) Beyond fire alarms: freeing the groupstruck (Katja Grace) (summarized by Rohin): It has been claimed that there's no fire alarm for AGI, that is, there will be no specific moment or event at which AGI risk becomes sufficiently obvious and agreed upon, so that freaking out about AGI becomes socially acceptable rather than embarrassing. People often implicitly argue for waiting for an (unspecified) future event that tells us AGI is near, after which everyone will know that it's okay to work on AGI alignment. This seems particularly bad if no such future event (i.e. fire alarm) exists. This post argues that this is not in fact the implicit strategy that people typically use to evaluate and respond to risks. In particular, it is too discrete. Instead, people perform “the normal dance of accumulating evidence and escalating discussion and brave people calling the problem early and eating the potential embarrassment”. As a result, the existence of a “fire alarm” is not particularly important. Note that the author does agree that there is some important bias at play here. The original fire alarm post is implicitly considering a fear shame hypothesis: people tend to be less cautious in public, because they expect to be negatively judged for looking scared. The author ends up concluding that there is something broader going on and proposes a few possibilities, many of which still suggest that people will tend to be less cautious around risks when they are observed. Some points made in the very detailed, 15,000-word article: 1. Literal fire alarms don't work by creating common knowledge, or by providing evidence of a fire. People frequently ignore fire alarms. In one experiment, participants continued to fill out questionnaires while a fire alarm rang, often assuming that someone will lead them outside if it is important. 2. They probably instead work by a variety of mechanisms, some of which are related to the fear shame hypothesis. Sometimes they provide objective evidence that is easier to use as a justification for caution than a personal guess. Sometimes they act as an excuse for cautious or fearful people to leave, without the implication that those people are afraid. Sometimes they act as a source of authority for a course of action (leaving the building). 3. Most of these mechanisms are amenable to partial or incremental effects, and in particular can happen with AGI risk. 
There are many people who have already boldly claimed that AGI risk is a problem. There exists person-independent evidence; for example, surveys of AI researchers suggest a 5% chance of extinction. 4. For other risks, there does not seem to have been a single discrete moment at which it became acceptable to worry about them (i.e. no “fire alarm”). This includes risks where there has been a lot of caution, such as climate change, the ozone hole, recombinant DNA, COVID, and nuclear weapons. 5. We could think about building fire alarms; many of the mechanisms above are social ones rather than empirical facts about the world. This could be one out of many strategies that we employ against the general bias towards incaution (the post suggests 16). Rohin's opinion: I enjoyed this article quite a lot; it is really thorough. I do see a lot of my own work as pushing on some of these more incremental methods for increasing caution, though I think of it more as a combination of generating more or better evidence, and communicating arguments in a manner more suited to a particular audience. Perhaps I will think of new strategies that aim to reduce fear shame instead. NEWS Seeking social science students / collaborators interested in AI existential risks (Vael Gates) (summarized by Rohin): This post presents a list of research questions around existential risk from AI that can be tackled by social scientists. The author is looking for collaborators to expand the list and tackle some of the questions on it, and is aiming to provide some mentorship for people getting involved.
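As referenced in the "most important century" summary above, here is a minimal arithmetic check of the growth-rate argument. The 2% rate and the 8,200-year horizon come from the summary itself; the ~10^70 atoms figure is a rough order-of-magnitude assumption for the galaxy, included only to put the growth factor in perspective.

```python
import math

GROWTH_RATE = 0.02        # 2% annual growth, the lower end quoted in the summary
YEARS = 8200
ATOMS_IN_GALAXY = 1e70    # rough order-of-magnitude assumption, for scale only

# Doubling time at 2% growth: log(2) / log(1.02), roughly 35 years.
doubling_time = math.log(2) / math.log(1 + GROWTH_RATE)

# Growth factor after 8,200 years, i.e. how many copies of today's entire
# world economy we would need to be sustaining.
growth_factor = (1 + GROWTH_RATE) ** YEARS

print(f"Doubling time: {doubling_time:.1f} years")                      # ~35 years
print(f"World economies needed: ~10^{math.log10(growth_factor):.0f}")   # ~10^71
print(f"Economies per atom (assumed 1e70 atoms): ~{growth_factor / ATOMS_IN_GALAXY:.0f}")
```

The point of the exercise is only that steady 2% growth compounds to absurd totals over historically modest timescales, which is why the summary treats "business as usual for thousands of years" as its own extraordinary claim.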
This week, Gilly is with surely everyone's food hero, Claudia Roden, who brought Jewish and Middle Eastern food to the world. Yotam Ottolenghi and Sam and Sam Clarke at Moro are among the many chefs who say that it all started with Claudia. Now, at 85, she's got a new book out, Med, which brings her personal stories and inventive flourish to the flavours of the Mediterranean. And she's still FULL of stories. See acast.com/privacy for privacy and opt-out information.
On this month's edition of the FvH Podcast, Sam is joined by Tracy from Chelsea Pride & Paul from Kop Outs to discuss homophobic chanting at football matches. They go into specific detail on the recent incident at Carrow Road & the 'rent boys' chant, the origins of the chant, debunking the excuses and the effects homophobic chanting has on the community. Edited by Sam Clarke.
The Youth Panel Roundtable is back (kinda)! FvH Youth Panel Communications Officer Sam is joined by Events Officer Danyal Khan in a South Asian Heritage Month special. Danyal talks about growing up in football, coming out to his Dad and his career as a journalist. Edited by Sam Clarke
Join Amy Allard-Dunbar, Sharifa James and Annette Nelson for a round table discussion, where they share their experiences of football as young people, finding safe places and talking to people in authority about issues people of colour face. This engaging chat also offers their perspectives around queer and black intersecting identities in football. Music from filmmusic.io "Life of Riley" by Kevin MacLeod (incompetech.com) License: CC BY (creativecommons.org/licenses/by/4.0/) Edited by Sam Clarke
Want to support the podcast so we can make better episodes more often? Become a Patron: https://www.patreon.com/behindtheglass
This week Tony and I are joined by Sam Clarke from GridServe to do a deep dive on the world of electric vehicles and, more importantly, the infrastructure around them. Here's what we discussed:
0:00 - Intro
0:45 - Introducing Our Guest
04:00 - Why Are We Doing An EV Special?
05:00 - Sam 'Ruins' Tony
06:00 - Is The National Grid Going To Fry?
10:30 - The "Problem" With Electric Cars
15:00 - The EV Barrier To Entry
18:00 - Is Charging Getting More Expensive?
23:30 - Can The Infrastructure Keep Up With Demand?
27:00 - Why Is Charging So Complicated?
28:15 - https://www.coincorner.com/stg
35:00 - Are Teslas Still The Best?
38:30 - Battery Life
41:30 - Used or Second Hand EVs
45:30 - The Cost Of Owning An EV
52:30 - Are EVs Really Environmentally Friendly?
56:00 - Summary & Patreon Questions
If you want to follow Sam: https://uk.linkedin.com/in/samclarke946 See acast.com/privacy for privacy and opt-out information.
Sam Clarke gets on the line with Justin and Jon for the hundred-and-sixty-fifth episode of The Hold Up to talk about some of his childhood favorites, most importantly 1997's Hercules from Disney and Pop Rocks. Sam also talks about being involved in organized sports while growing up, discusses trading his allowance for TV time, shares...
This evening on The Money Show, Bruce focusses on the effects of the #Covid-19 lockdown on the economy in the past year. In this hour he'll speak to Martin Kingston, Leader of the Economic Intervention Work Group at Business for South Africa (B4SA), Isaah Mhlanga, Chief Economist at Alexander Forbes, Tashmia Ismail-Saville, CEO of YES initiative, Sam Clarke, Founder and CEO at Skynamo, Prof Nick Binedell, Professor at Gordon Institute of Business Science (Gibs) and Dr Adrian Enthoven, Deputy Chairperson at Solidarity Fund. See omnystudio.com/listener for privacy information.
Sam Clarke, CEO of Skynamo
This week on the Auto Futures Podcast, Alex Kreetzer and Chris Kirby are joined by 'EV evangelist' Sam Clarke, who has recently joined the team at Gridserve, which is building the UK's first electric forecourts. With a depth of experience in electrification, from building a successful EV-related business from the ground up to ties with the public sector, Sam discusses his journey from China to the UK, whilst analysing the current state of the industry.
This week I caught up with Sam Clarke on the Gold Coast; he has a 1951 Chevy pickup that has had extensive chassis and suspension work, engine, paint, the works. You can check out Sam’s truck via his Instagram, @samgrenade The podcast is sponsored in part by Classic Pickup Supplies. Please support them if you need parts for US-built Ford and Chevy vehicles. www.classicpickupsupplies.com.au To get in contact with me please email me at classicpickuppodcast@gmail.com Thank you for listening. Whipps.
Mike chats to Sam Clarke, Youth Minister at St Michael's Church, Stoke Gifford, about lockdown weddings, enjoying mountaintop moments, the pain of forgetting names and ministering through Minecraft. Notes: https://www.kintsugihope.com/youth http://www.themixbristol.co.uk Music by Dan Waine and Mixkit.co
One of the focuses of the Chamber has been providing our local business owners with valuable resources to pivot during the pandemic, and providing our members with a platform to get the word out to other business owners about tools and strategies that can help inform critical decisions. Business Innovation remains a relevant topic because the framework and mindset are applicable at every stage of a business's lifecycle. For this reason, this episode is longer than our usual 15-20 minute episodes. In this episode:
We'll be learning from Sam about how to use the business model canvas to evaluate how we create value for our customers and sell value to our customers.
He'll be going over the value of identifying the process we go through as a business to create value propositions and deliver them to our customers.
You'll get clear on the differences between cost and price, and between customers and users, as well as hear examples of how real businesses are using different models and frameworks to attract customers, cut costs and disrupt their industries.
As always, you can find all of our episode links in the show notes - located in the episode description as well as on our website at sanmarcoschamber.com/business-model.
Room 501 returns and Rhys is joined by his real-life grapple buddy, Sam Clarke. Can Sam get all of his items in the pit of destiny? Only one way to find out. Follow us @Rogue_Opinion
MultimediaLIVE — In this edition of the Business Day Spotlight, we’re examining the journey of technology startup Skynamo as the company expands its local operations and moves beyond the borders of SA. Our host Mudiwa Gavaza is joined by Sam Clarke, the man behind Skynamo, a business-to-business retail sales rep platform which recently raised $30m in funding from US group Five Elms. The discussion explores how the business began, the business model, Clarke’s professional journey, the rationale for growing overseas, the impact of Covid-19 on business and an outlook for the economies in the countries that Skynamo operates in.
In this crossover episode, Robert has a chat with Sam Clarke, the founder of Gnewt Cargo and soon-to-be Chief Vehicle Electrification Officer at Gridserve. They talk about how to juggle charging for over 100 EVs. Then they go on to his plans at Gridserve.
Sam Clarke founded the firm Gnewt in 2009, and it has won multiple awards nationally and globally from the electric vehicle, logistics, environmental and wider business sectors during its decade-long journey of last-mile logistics using only electric vehicles. It boasts the country’s largest fully electric commercial fleet and the second largest privately-owned smart charging infrastructure network. The business was subsequently acquired by Menzies Distribution Ltd in 2017. Sam is a life-long entrepreneur, EV evangelist, industry advisor and EV commuter for over 15 years. He was a 2015 winner at the Great British Entrepreneurs Awards and is a Fellow and sector chair for the National Institute of Couriers. He has built quality relationships with key clients, the wider industry and Governmental bodies, including the London Mayor’s Office at City Hall. In 2020, he was voted #36 in the greenfleet.net top 100 most influential people in Low Carbon Fleets. Founders365 is hosted by Steven Haggerty and shares 365 insights from 365 founders during 2020.
In this episode of Business over Beers, William and Sam are joined once again by one of the most insightful marketers in the space, Sam Clarke. This time Sam covers off some of the latest trends and changes within the sector, as well as ways to create marketing that matters rather than just noise. They also cover off some of their individual losses and how they took that knowledge forward.
Sam gives a great background on the early days of electric vehicle delivery in London and learning from China. We then discuss the challenges and opportunities as we have to scale massively to meet the climate challenge in the coming few years. Aimava have been working with global corporates and start-ups on what we term the Innovative New Value Chain, which is connecting new technologies, ventures and corporates with new business models in the Transport and Energy sectors, as Electric Vehicles will revolutionise many industries. This podcast is one of a series where I speak to leading executives who are making the important transition to a sustainable future. We have been working in the UK and Europe, with programs in leading locations such as Oslo and China, where there is a high penetration of EVs in massive numbers. Do get in touch if you would like to find out more, and don't forget to subscribe on your platform to hear future interviews. Transcripts of interviews are also available. info@aimava.com
Merry.....Gifting? Our holiday-themed adventure DM'd by Sam Clarke continues! For Crits and Giggles is a 5E Dungeons and Dragons podcast starring Po (Sam Clarke), Mordai (Aghilan Newman) and Mithras (Nic Chong). Join us week to week as we spin colorful tales, roam the land and (try) to save the day! Visit us at www.forcritsandgiggles.com Follow us on Twitter: @forcngpodcast Like us on Facebook: For Crits and Giggles Don't forget to subscribe on iTunes!
In this episode of Business over Beers, William and Sam sit down with one of William’s good friends, Sam Clarke. Sam is probably one of the most insightful marketers we’ve met and shares his ideas on how to build a long-term brand. Sam Clarke Twitter: www.twitter.com/samcstar
Merry.....Gifting? We take a slip in time and see what some other heroes get up to during the most wonderful time of the Ianis calendar. Our own wonderful Sam Clarke takes the reins this time - what a good boi he is. For Crits and Giggles is a 5E Dungeons and Dragons podcast starring Po (Sam Clarke), Mordai (Aghilan Newman) and Mithras (Nic Chong). Join us week to week as we spin colorful tales, roam the land and (try) to save the day! Visit us at www.forcritsandgiggles.com Follow us on Twitter: @forcngpodcast Like us on Facebook: For Crits and Giggles Don't forget to subscribe on iTunes!
What mysteries does the town of Tuwhiri hold? For Crits and Cthulhus is a side adventure to the main For Crits and Giggles feed! Not canon, but still fun! This episode stars Hannah Calvert, Aghilan Newman and Sam Clarke and is run by Kieran Bennett (@mrk_bennett). Visit us at www.forcritsandgiggles.com Follow us on Twitter: @forcngpodcast Like us on Facebook: For Crits and Giggles Don’t forget to subscribe on iTunes! And hey! Follow our super cool editors Joshua and David on twitter: @joshuaneverjosh and @spudcam
Sam Clarke, founder of all-electric delivery firm Gnewt Cargo, on using electric vehicles in the last mile:
Why Gnewt Cargo was founded and how it expanded
Electric vehicles and congestion charges
Legislative restrictions affecting diesel vehicles
Making electric vehicle delivery work in the real world
What the acquisition by John Menzies PLC means for Gnewt's future, including expansion
City logistics - consolidation of last-mile delivery in city centres
The vehicle types in the Gnewt fleet
Are electric vehicles a viable alternative to conventionally-fuelled vehicles?
Sam Clarke is a producer, DJ and member of Next Level Vibrations, a creative team about to perform their live monthly comedy gig at Ancient World (Oct 8), preparing for a 2016 Fringe Show and the release of their debut podcast at nlvcool.com. And they have an online store opening up. And they're funny. We chat about Next Level Vibrations, and touch on the recent closure of Supermild ahead of its move to new premises.
AFTERBUZZ TV – Archer edition is a weekly “after show” for fans of FX’s Archer. In this show, host John Barrett breaks down Archer’s babysitting methods. There to help John are co-hosts Phil Svitek, Sam Clarke, and Greg Goodness. It’s Archer’s “Double Deuce” episode!