Podcasts about EAs

  • 334 podcasts
  • 969 episodes
  • 31m average duration
  • 1 new episode daily
  • Latest episode: Sep 28, 2022

POPULARITY

[Chart: popularity by year, 2015–2022]

Best podcasts about EAs

Latest podcast episodes about EAs

AVID Learning: EV Technology
#81 | Interview, Isobel Sheldon, Chief Strategy Officer, Britishvolt

Sep 28, 2022 · 53:16


For this episode Ryan Maughan speaks to Isobel Sheldon, Chief Strategy Officer at the UK battery manufacturing company Britishvolt. Isobel is a pioneer of the battery industry, having been involved in the development of battery systems and technology since 2003. She has held roles at Ricardo, Johnson Matthey and Cummins, and then joined the team at the UK Battery Innovation Centre (UKBIC). See podcast #44 for an interview with Isobel when she was Director of Business Development at UKBIC, here: https://etech49.com/ep44-interview-with-isobel-sheldon-ukbic/ Ryan and Isobel discuss the battery industry and her experiences, and then talk about the business and strategy at Britishvolt, including their acquisition of EAS in Germany, their cell format roadmap and much more!

Isobel Sheldon LinkedIn profile: https://www.linkedin.com/in/isobel-sheldon-obe-62a0b987/
Britishvolt website: https://www.britishvolt.com/
Ryan Maughan LinkedIn profile: https://www.linkedin.com/in/ryan-maughan-a2893610/
Ryan Maughan on Twitter: https://twitter.com/acexryan
eTech website: https://www.etech49.com
Follow eTech49 on social media: https://www.linkedin.com/company/etech49-limited

The Nonlinear Library
EA - Acceptance and Commitment Therapy (ACT) 101 by Evie Cottrell

Sep 25, 2022 · 12:38


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Acceptance and Commitment Therapy (ACT) 101, published by Evie Cottrell on September 24, 2022 on The Effective Altruism Forum.

ACT = Acceptance and Commitment Therapy (pronounced as the word 'act' rather than the letters)
CBT = Cognitive Behavioural Therapy

Summary

I have found Acceptance and Commitment Therapy (ACT) much more useful as a framework than other types of CBT. Traditional CBT focuses on helping you identify and change negative thoughts, whereas ACT focuses on accepting them. It holds that fighting against your emotions usually just leads to being stuck in them. Other types of CBT often ask "is this thought true?" to encourage behaviour change, whereas ACT uses values to encourage behaviour change (explained better in the full text). ACT could be a good fit for people who are especially values-driven. ACT has a strong focus on values and mindfulness.

Other key ideas:
• Psychological flexibility: being in contact with the present moment, fully aware of emotions and thoughts, welcoming them, including the undesired ones, and moving in a pattern of behaviour in the service of chosen values
• Encouraging gentle curiosity towards thoughts, feelings, and sensations, instead of trying to change them
• The normal thinking process of a healthy human mind will naturally lead to psychological suffering. We have not evolved to be happy, and we are not deficient for experiencing difficult thoughts and feelings
• The goal isn't to chase happiness or otherwise try to control our internal state.
The six principles of ACT:
• Learning the skill of defusing your thoughts and feelings, so they have much less influence over you
• Making room for unpleasant feelings instead of trying to push them away (expansion)
• Connecting with whatever you're doing or experiencing in the present moment
• Becoming familiar with the "observing self," the part of you that experiences, sees, touches, and doesn't judge or take responsibility
• Clarifying and connecting with your values
• Taking action motivated by your values

This leads to a formula of: Mindfulness + Values + Committed Action = Psychological Flexibility

If you'd like to learn more, I recommend reading the book The Happiness Trap, or this book summary of it, or watching this TEDx talk. If you want to engage more seriously, I'd really recommend seeking a therapist who practises ACT.

Introduction

I started exploring ACT about a month ago, and have found it has significantly increased my mental wellbeing. ACT has been much more powerful and effective for me than other types of CBT I have tried in the past (however, I have engaged with ACT at a deeper level - but this is mostly because it felt exciting and helpful from the beginning). I haven't heard EAs talk about it publicly before, and wanted to encourage others to explore it as a framework, if any of it resonates. I have been enthusing to friends about ACT, and I'm also writing this to be able to refer people to it in future. My goal here isn't to provide a guide for How To Live According To ACT. It's to offer a brief taster, so that if the ACT framework seems like it could be helpful, you can learn more elsewhere.
Some benefits I've experienced:
• Struggling against my internal experience less - and instead approaching it with compassion and acceptance
• More ability to be present with my experience in the moment, by developing the ability to notice when I'm "hooked" on a string of thoughts, and skills to reconnect with the present moment
• Less self-criticism and more self-compassion (in a way that feels non-coercive and authentic)
• More accepting of the fact that it's normal to have difficult thoughts and feelings - I'm not bad or deficient for experiencing them
• More clarity on my values and how I want to treat myself
• Feeling more empowered and less hopeless around mental health

How is ACT...

The Nonlinear Library
EA - The $100,000 Truman Prize: Rewarding Anonymous EA Work by Drew Spartz

Sep 23, 2022 · 6:56


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The $100,000 Truman Prize: Rewarding Anonymous EA Work, published by Drew Spartz on September 22, 2022 on The Effective Altruism Forum. Harry Truman once said: "It's amazing what you can accomplish if you don't care who gets the credit." The Truman Prize, now live on the EA prize platform Superlinear, recognizes Effective Altruists with $5,000-$10,000 prizes for declining credit in order to increase their impact, in ways that can't be publicized directly.

Theory of change: EA promotes caring about effectiveness over other goals like getting credit, but wanting credit or recognition for your work is natural. Rewarding people for maximizing impact over credit increases the health and future effectiveness of the community.

Example #1: Sam toils behind the scenes and makes a breakthrough on an important problem. Sam suggests the idea to, say, a political figure or other organization who can then take credit, because that leads to the breakthrough being more widely accepted. Anyone who knows what happened, including the person/org that gets credit, can nominate Sam for The Truman Prize on Superlinear. Superlinear passes on the nomination to a committee of well-respected EAs from diverse backgrounds. If one of them verifies that Sam actually did make a breakthrough and allowed someone better placed to take credit to increase impact, Superlinear awards Sam $10,000.

The Truman Prize is the brainchild of David Manheim, and the judges are: Eliezer Yudkowsky, Peter Wildeford, Spencer Greenberg, Cate Hall, Julia Wise, Gregory Lewis, Luke Freeman, and Ozzie Gooen. The criteria, generally speaking, are that if someone has done something noteworthy but you can't make an EA Forum post publicizing what they have done, they could be eligible for the prize.

Example #2: Greg works for the government.
There are political or career consequences if it is publicly acknowledged that he's working on something potentially controversial. Greg contributes an important idea to a research field and helps make it happen behind the scenes. Someone nominates him for The Truman Prize, and the committee asks someone in a position to know about what occurred, and confirms Greg's contribution. Superlinear awards Greg $5,000, and announces that a prize was awarded to the originator of the research idea to a recipient to be named in 5 years. Example #3: Claire, a biosecurity researcher, blows the whistle internally on a potentially dangerous research direction which likely violates the Biological Weapons Convention. The organization doesn't want this to be public, but an individual inside the organization could still confidentially nominate them for the Truman Prize, and depending on details and the potential infohazards, the prize might be awarded to Claire, without specifying what was done. Example #4: Max has a criminal record and troubled past. He's reformed now, but his background makes him a liability for any person or org to publicly associate with him. He silently does good work behind the scenes, so someone that knows him nominates him for The Truman Prize on the basis of a specific critical contribution which was made to a now successful larger project. The committee awards the prize, and names Max, likely without naming the specific work done. Example #5: Steve has extreme political beliefs. It is risky for any person or org to work with him due to reputation risks. Steve knows this, but does apolitical high impact work behind the scenes anyways. Someone that knows Steve nominates him for The Truman Prize on the basis of a specific project which was not previously disclosed. The committee awards the prize and discloses the project, but not the individual, or vice-versa, to avoid undermining the project. Example #6: Morgan has recurring depression. 
Therefore, she does not want to work or associate wi...

The Nonlinear Library
EA - (EA needs) advertising for infrastructure projects by Arepo

Sep 23, 2022 · 3:56


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: (EA needs) advertising for infrastructure projects, published by Arepo on September 23, 2022 on The Effective Altruism Forum. There are a number of EA infrastructure projects that provide free or cheap support to other effective altruists, two of which I'm involved with. Something they all seem to have in common is that a major limiting factor is spreading the word about the availability of the service or product and keeping the word spread. One can justify a handful of posts on here and in Facebook groups about any given project, but after that, unless it has changed in some way, further posts seem increasingly like spam - even when the project is still useful and still limited by EAs' remembering or ever knowing of it. So this is a quick post with a few related objectives:

1. To call for better standardisation of norms for promoting such projects.
2. To open discussion of some kind of such advertising on this forum, since it's probably the single place where it would be most visible.
3. To discuss the possibility of advertising in other venues - e.g. such projects could promote each other, although again a limiter is that not all such projects know about all the others.
4. Partly to help with 3, and partly for its own sake, to provide a list of such projects. I'll mention all the ones that occur to me below with the best description I can easily find.
Please let me know any I've missed in the comments, and I'll edit them into this post. The last could be quite subjective since anything could qualify as a resource and I don't want the list to become overwhelming, so I'll give some initial guidelines - feel free to persuade me they should be changed:
• The resource should be free to use, or available at a substantial discount to relatively poor EAs
• It should be aimed specifically at EAs
• It should make the lives of the people using it better, not just 'enable them to do more good'
• It should be available to people across the world (i.e. not just a local EA group)
• It should be a service or product that someone is putting ongoing work into (i.e. not just a list of tips, or Facebook/Discord/Slack groups with no purpose other than discussion of some EA subtopic)

At the very least, hopefully this post can then become a useful reference for people looking to see what benefits the community can provide them.

Coworking/socialising:
• EA Gather Town - An always-on virtual meeting place for coworking, connecting, and having both casual and impactful conversations
• EAVR - A community for people interested in Effective Altruism who use VR to connect and collaborate
• EA Anywhere - An online EA community for everyone
• EA coworking Discord - A Discord server dedicated to online coworking

Professional services:
• Altruistic Agency - provides free tech support and development to organisations
• Legal advice - a practice under consideration with the primary aim of providing legal support to EA orgs and individual EAs, with that practice probably being based in the UK
• 80,000 Hours career coaching - Speak with us for free about using your career to help solve one of the world's most pressing problems
• EA mental health navigator - aims to boost people's well-being by connecting them with mental health resources considered to be effective and to have the greatest likelihood of being helpful

Financial and other material support:
• CEEALAR (formerly the EA Hotel) - Provides free or subsidised serviced accommodation and board, and a moderate stipend for other living expenses
• Effective Altruism Funds - Whether an individual, organisation or other entity, we're eager to fund great ideas and great people
• Nonlinear fund - We incubate longtermist nonprofits by connecting founders with ideas, funding, and mentorship
• FTX Future Fund - Supports ambitious projects to improve humanity's long-term prospects
• Survival and Flourishing Fund - A "v...

The Nonlinear Library
EA - The Hundred Billion Dollar Opportunity that EAs mostly ignore by JeremiahJohnson

Sep 22, 2022 · 11:09


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Hundred Billion Dollar Opportunity that EAs mostly ignore, published by JeremiahJohnson on September 22, 2022 on The Effective Altruism Forum. Epistemic Status: Relatively confident but also willing to take advantage of Cunningham's Law if I'm wrong. May lack deep knowledge of EA efforts in this area.

About me: I'm a sometime EA looking to dive into the EA world more fully, including potentially shifting my career to work in EA. I currently work in US politics and specialize in online movement building and communications. I trend towards near-termist and global health EA causes, although I think the argument below also has long-termist implications.

The Central Premise

There is a potentially massive method of doing good out there that's mostly ignored. This method is at the absolute heart of the very concept of Effective Altruism, and yet is rarely discussed in EA communities or spaces. We should try harder to influence the average non-EA person's donation.

The Current Charitable Landscape

A few quick facts: The United States donated almost $500 billion in 2021 alone. Without listing every individual country, European charitable donations are on the scale of hundreds of billions as well. Overall, yearly charitable donations in rich countries worldwide are in the high hundreds of billions of USD. Most of this money, from an EA perspective, is wildly inefficiently spent. While it's impossible to break down exactly where each of these dollars goes, a little bit of common sense and some basic statistics paint a discouraging picture. Of this giant pile of money, only 6% is donated internationally, despite donations to poor countries usually having a better per-dollar impact than donations inside a rich country. The largest category for donations is religious organizations.
The second largest category is educational donations. Three quarters of that educational money is given to existing 4-year colleges and universities. Much of that is the stereotypical worst kind of donation, a huge donation to an elite school that already has billions in endowment. Beyond the statistics, any casual glance at how normal people donate their money can confirm this. People give to local schools, their friend's charity, or generally whatever they feel a connection to. My parents, who are highly charitable people who gave >10% of their income long before it was an EA idea, have made significant charitable donations to a children's puppetry program. This is the landscape in which the better part of a trillion dollars is being spent. None of this should be surprising to EAs. The core of Effective Altruism is the argument that when you attempt to do good, you should try to be effective in doing so. The equally core fact about the world that EAs recognize is that historically, most people have not been very good at maximizing how much good they do. For the vast majority of charitable dollars, that's still true. The Argument for Impact I believe Effective Altruism should spend more time trying to shift the behavior of the general public. I believe this area has the potential for large impact, and that it's currently neglected as a way to do good. Scale - Enormous. Not going to spend much time on this point, but obviously changes to how hundreds of billions of charitable dollars are given would be huge in scale. Tractability - This problem is likely more difficult and less tractable than many other cause areas. It's very difficult to simply spin wide-ranging cultural changes into existence. But it's not impossible, and the enormous size of the pile of money mitigates the low tractability. 
Using some relatively low numbers - If you had even a 1% chance of success, and success meant only shifting 5% of US charitable dollars, that's still 250 million dollars of donations going to more effect...
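The tractability estimate above is a simple expected-value calculation. A minimal sketch, using the post's own illustrative numbers (the 1% success probability and 5% share are the author's assumptions, not established figures):

```python
# Back-of-the-envelope from the post: a 1% chance of succeeding at
# shifting 5% of the ~$500 billion in annual US charitable giving.
us_annual_giving = 500e9  # ~$500B donated in the US in 2021 (per the post)
p_success = 0.01          # assumed 1% chance the effort works
share_shifted = 0.05      # assumed 5% of dollars redirected on success

expected_shift = us_annual_giving * p_success * share_shifted
print(f"${expected_shift:,.0f}")  # ~$250 million per year in expectation
```

Even under deliberately pessimistic inputs, the expected redirected amount is large, which is the post's core point about scale offsetting low tractability.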

The Nonlinear Library
EA - Why Wasting EA Money is Bad by Jordan Arel

Sep 22, 2022 · 7:59


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why Wasting EA Money is Bad, published by Jordan Arel on September 22, 2022 on The Effective Altruism Forum. The thought crossed my mind today, "should I take the BART or Uber to the airport on the way to EAG DC?" Among other considerations, I thought "well, the BART would be much cheaper, but EA will compensate me for the Uber, so maybe cost shouldn't be much of a consideration." After thinking this, I thought "wow, what a sketchy line of logic." Yet I don't think this way of thinking is entirely uncommon among EAs. Shortly after this I came across, in the EA UC Berkeley Slack channel, an article about how EA Berkeley is wasting money. While I found the article a little confused (it seems to have some factual errors, and some of its claims were made less credible by the fact that the author then proceeded to post somewhat aggressive comments toward people in the Slack), I nonetheless find the criticism that EAs waste money to be alarming and valid, and think it is important to address before the issue balloons out of hand.

Basically, I think this argument has a few levels. On the first level, you could say that money is really valuable: since something like $200 (please correct me if this number is inaccurate) could save a year of someone's life via GiveWell top charities, we should take this as a real consideration and have a very high bar for wasting money. Against that you could argue that we have an insane amount of money for the size of the movement: if we very roughly have something like $50 billion and 2000 highly engaged EAs, which have both been relatively stable over the past few years, and if all of that money was spent by current EAs in our lifetime of ~50 years, that's about $500,000 per person, PER YEAR. That's a lot.
So even if it makes me only a minuscule amount more efficient, if the work I'm doing is high value enough in contributing to the community, then maybe it's worth it. But then that only makes sense if the work I'm doing is extremely, extremely valuable, because I still have to compare it against the bar of $200 equals ~1 year of life saved. So if a $50 Uber ride saves me half an hour, my half an hour must be more valuable than three months of someone else's life. That's a pretty big claim. But, then, the claims of longtermism are quite big indeed. Bostrom calculates that a one-second delay in colonizing space may be equivalent to something like the loss of 100 trillion human lives, due to galaxies we could potentially colonize moving away from us in every direction at great speeds. Working on existential risk reduction, rather than speeding up technological progress and space colonization, likely increases this expected value by several orders of magnitude. So if I am one of the very small number of people who is most obsessed with these ideas and competent/privileged enough to make a difference, and in expectation it seems that people explicitly working to reduce existential risk are most likely to succeed at doing so, then yes, maybe saving half an hour of my time may actually have, in expectation, an unintuitively massive positive impact. But then what about the article above and other criticisms? Couldn't the reputation risk to EA from this way of thinking be very dangerous, both because it attracts people who want to mooch money off of the community, and repels potential collaborators who don't want to be seen as wasteful?
Yes, maybe it does repel certain people, but then again, perhaps it attracts the type of people who understand and agree with our logic, and if our logic is in fact correct and good, then perhaps the type of people who really look at our ideas and actions and evaluate them carefully, and then decide they agree, are exactly the type of people we are trying to attract. Perhaps w...
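The budget arithmetic in this post can be checked in a few lines. A minimal sketch using the rough figures quoted above (all of them are the author's approximations, explicitly open to correction):

```python
# Rough figures quoted in the post (approximate and contested):
total_funds = 50e9   # ~$50 billion in committed EA-aligned funding
engaged_eas = 2000   # highly engaged EAs
years = 50           # remaining working lifetime

per_person_per_year = total_funds / engaged_eas / years
print(per_person_per_year)  # 500000.0 -- i.e. ~$500,000 per person, per year

# The Uber trade-off: the post's bar of $200 ~ one year of life saved
cost_per_life_year = 200
uber_cost = 50
months_of_life = uber_cost / cost_per_life_year * 12
print(months_of_life)  # 3.0 -- a $50 ride "costs" ~3 months of someone's life
```

This makes the post's tension concrete: the per-person budget is enormous, yet every $50 of convenience still has to beat a three-month-of-life counterfactual.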

The Nonlinear Library
EA - Quantified Intuitions: An epistemics training website including a new EA-themed calibration app by Sage

Sep 21, 2022 · 3:43


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Quantified Intuitions: An epistemics training website including a new EA-themed calibration app, published by Sage on September 20, 2022 on The Effective Altruism Forum. Crossposted to LessWrong.

TL;DR: Quantified Intuitions helps users practice assigning credences to outcomes with a quick feedback loop. Please leave feedback in the comments, join our Discord, or send thoughts to aaron@sage-future.org.

Quantified Intuitions currently consists of two apps:
• Calibration game: Assigning confidence intervals to EA-related trivia questions. Question sources vary, but many are from the Anki deck "Some key numbers that (almost) every EA should know". Compared to Open Philanthropy's calibration app, it currently contains less diversity of questions (hopefully more interesting to EAF/LW readers), but the app is more modern and nicer to use in some ways.
• Pastcasting: Forecasting on already-resolved questions that you don't have prior knowledge about. Questions are pulled from Metaculus and Good Judgment Open. More info on motivation and how it works is in the LessWrong announcement post.

Please leave feedback in the comments, join our Discord, or send it to aaron@sage-future.org.

Motivation

There are huge benefits to using numbers when discussing disagreements: see "3.3.1 Expressing degrees of confidence" in Reasoning Transparency by OpenPhil. But anecdotally, many EAs still feel uncomfortable quantifying their intuitions and continue to prefer using words like "likely" and "plausible", which could be interpreted in many ways. This issue is likely to get worse as the EA movement attempts to grow quickly, with many new members joining who come in with various backgrounds and perspectives on the value of subjective credences.
We hope that Quantified Intuitions can help both new and longtime EAs become more comfortable turning their intuitions into numbers. More background on motivation can be found in Eli's forum comments here and here.

Who built this?

Sage is an organization founded earlier this year by Eli Lifland, Aaron Ho and Misha Yagudin (in a part-time advising capacity). We're funded by the FTX Future Fund. As stated in the grant summary, our initial plan was to "create a pilot version of a forecasting platform, and a paid forecasting team, to make predictions about questions relevant to high-impact research". While we built a decent beta forecasting platform (that we plan to open source at some point), the pilot for forecasting on questions relevant to high-impact research didn't go that well due to (a) difficulties in creating resolvable questions relevant to cruxes in AI governance and (b) time constraints of talented forecasters. Nonetheless, we are still growing Samotsvety's capacity and taking occasional high-impact forecasting gigs. Eli was also struggling somewhat personally around this time, and was updating toward AI alignment being super important but crowd forecasting not being that promising for attacking it. He stepped down and is now advising Sage part-time. Meanwhile, we pivoted to building the apps contained in Quantified Intuitions to improve and maintain epistemics in EA. Aaron wrote most of the software for both apps within the past few months; Alejandro Ortega helped with the calibration game questions and Alina Timoshkina helped with a wide variety of tasks. If you'd like to contact Sage you can message us on EAF/LW or email aaron@sage-future.org. If you're interested in helping build apps similar to the ones on Quantified Intuitions, or improving the current apps, fill out this expression of interest. It's possible that we'll hire a software engineer, product manager, and/or generalist, but we don't have concrete plans. Thanks for listening.
To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
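The calibration game described above scores how often a player's stated confidence intervals contain the true answer. A minimal sketch of that idea (this is not Sage's actual code; the function name and sample data are hypothetical):

```python
def interval_hit_rate(responses):
    """Fraction of (low, high, truth) confidence intervals containing the truth.

    A well-calibrated player's 90% intervals should contain the true
    answer roughly 90% of the time; much lower means overconfidence.
    """
    hits = sum(1 for low, high, truth in responses if low <= truth <= high)
    return hits / len(responses)

# Hypothetical 90% intervals for three numeric trivia questions:
responses = [
    (6e9, 9e9, 7.9e9),   # hit: truth falls inside the interval
    (100, 300, 250),     # hit
    (10, 20, 35),        # miss: interval was too narrow
]
print(interval_hit_rate(responses))  # 2 of 3 intervals contain the truth
```

The quick feedback loop the post mentions is exactly this: answer, see the truth, and watch your hit rate converge toward (or fall short of) your stated confidence level.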

The Nonlinear Library
EA - Announcing EA Pulse, large monthly US surveys on EA by David Moss

Sep 20, 2022 · 2:10


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing EA Pulse, large monthly US surveys on EA, published by David Moss on September 20, 2022 on The Effective Altruism Forum. Rethink Priorities is excited to announce EA Pulse - a large, monthly survey of the US population aimed at measuring and understanding public perceptions of Effective Altruism and EA-aligned cause areas! This project has been made possible by a grant from the FTX Future Fund. What is EA Pulse? EA Pulse aims to serve two primary purposes: Tracking changes in responses to key questions relevant to EA and longtermism over time (e.g. awareness of and attitudes towards EA and longtermism, and support for different cause areas). Running ad hoc questions requested by EA orgs (e.g. support for particular policies, responses to different messages EAs are considering). We welcome requests for questions to include in the survey of either of these types. Please comment below or e-mail david@rethinkpriorities.org, ideally by October 20th. By tracking beliefs and attitudes towards issues related to effective altruism and longtermism, we can better get our finger on the pulse of movement building efforts over time, and potentially identify unforeseen risks to the movement. We will also be able to determine whether particular subgroups of the population appear to be missed or turned off by our outreach efforts. We also believe that surveying the broader public can provide a new window for looking at how the ideas generated by the EA community are being taken up by the wider population. In turn, it can help us communicate more effectively and efficiently about what matters most. Due to space constraints this survey is best suited to asking about relatively short, straightforward questions. 
If you are interested in surveys with more complex designs, a larger number of questions or experimental manipulations, complex instructions, or which involve asking respondents to read lengthy text or view videos, we are potentially able to accommodate these in separate surveys (funding permitting). Please feel free to reach out to discuss possibilities. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - Optimizing seed:pollen ratio to spread ideas by Holly Elmore

Sep 20, 2022 · 10:22


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Optimizing seed:pollen ratio to spread ideas, published by Holly Elmore on September 20, 2022 on The Effective Altruism Forum. Cross-posted from my blog. As an EA organizer when I was a grad student at Harvard, I developed an implicit model of community organizing at a university that is complementary to the funnel model of the Center for Effective Altruism (CEA) that was super popular a few years ago. I call mine "optimizing the seed:pollen ratio". I'm not going to justify outreach as a strategy or advocate for any specific means of outreach - the point is to share my model of outreach efficacy. It's a fun bonus that my model is well-explained by analogy to a biology concept, which I hope to explain well as a secondary objective.

The funnel model

CEA says, "We are trying to build a community, and one aspect of this project is encouraging people to become more deeply engaged with the community. The funnel metaphor helps us to think about the appropriate goals and audiences for our different projects." The funnel model is focused on the deliberate composition of the community - how many people are entering the funnel, and what share of the community is at what part of the funnel at any given time. It also suggests that movement through the funnel happens in stages, from outer to middle to core, and there's both investment from the community at every stage and friction to progressing from one stage to another. When this model was getting a lot of buzz, there were many discussions about what part of the funnel to focus on. This was happening during CEA's big pivot toward endorsing longtermism as a fundamental tenet of EA.
With that, many were arguing that there should be much more focus on core EAs and the core-EA-development pipeline, and much less emphasis on the mid- to casual level of community involvement, because most of the value of EA in the long run will be "in the tails" of the distribution, from original work, not from safer but lower-value bets like getting average individuals to donate money. This view basically won and is dominant in EA today.

Seed:pollen ratio

Now for an interlude on plant reproductive strategies. A "perfect flower" is one that has both male and female reproductive organs. The female reproductive organ is the carpel, and it makes seeds. The male reproductive organs are the anthers, and they contain pollen. Seeds are larger and contain cytoplasm, the organelles mitochondria and chloroplasts, and a reserve of nutrients. Pollen is much smaller per grain and, like sperm, basically only contains chromosomes. Whereas seeds can be as big as a coconut (which is a seed!), the biggest pollen grains are only 2.5mm long. In terms of energy and resources, seeds are much more costly and pollen is much cheaper. You may be familiar with human evolutionary psychology ideas about male vs. female reproductive strategies. What's interesting about hermaphroditic plant species is that individuals are not committed to one strategy or the other, but can vary the amount of investment they put into seeds vs. pollen, either over evolutionary time or as a plastic response to environmental conditions. Per expected offspring, seeds are a safer bet. The average seed is FAR more likely to grow into a plant than the average pollen grain, which makes sense because pollen outnumbers seeds by a factor of 100-1000x (depending on the species - this number is for Cannabis sativa, because for some reason it's very easy to get estimates of these quantities online for Cannabis :P). But you can make a LOT of pollen for the cost of one seed.
Pollen can be carried by pollinators or the wind (or a number of other clever strategies) to sometimes vast distances (a genetically modified pollen grain fertilized a grass seed 21 kilometers away, using nothing but the wind ...

The Nonlinear Library
EA - The Next EA Global Should Have Safe Air by joshcmorrison

The Nonlinear Library

Play Episode Listen Later Sep 19, 2022 3:00


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Next EA Global Should Have Safe Air, published by joshcmorrison on September 19, 2022 on The Effective Altruism Forum. One of the coolest EA things I saw during the pandemic was the creation of the microCOVID risk tracker by an EA group house in San Francisco. To me, it was a really inspiring example of the principles of effective altruism in action: using rationality and curiosity to solve a concrete problem to make people's lives better. I was having a dinner party with some friends last night with a theme of how we could improve indoor air safety, starting with our local community in New York. (Some background here on how my colleagues at 1Day Sooner and I think about the air safety problem.) How can we get buildings to clean the air (by filtering it, mixing it with outdoor air, and sterilizing it with ultraviolet light) so that people don't suffer from pollution and pathogens? We were discussing what was feasible to accomplish politically and were struggling, because a standard answer to "what air safety interventions are optimal for a space to adopt?" doesn't yet exist. We agreed that it would be uniquely valuable to recruit early adopters (e.g. tech companies, private schools, universities) to try out solutions and test them for effectiveness in reducing disease. If well designed, this could generate experimental evidence on effectiveness and create a template for later adopters and governments to implement. An obvious place to start would be the EA community: trying to get EA spaces to implement air safety measures (like installing filters and upper-room UV light). There are a number of organizations that could fit the bill, and I'm aware of at least one that is exploring doing this in their own office.
One suggestion that uniquely resonated with me was the idea that the next EA Global (after EAG DC) should make its air safe. (That is, it should have a respiratory infection risk level it tries to achieve, some surrogate targets it aims to measure, and a set of indoor air interventions that are reasonably likely to achieve the intended risk level.) I don't think this will be easy, and in fact I think it might be more likely than not that we fail. But part of what is valuable about EA is our commitment to learning from failure and improving over time. Trying to implement air safety interventions will teach us about the existing gaps that need to be filled, which will get us closer for the next EAG (and EAGx), until we get to a point where we're proud of our community for becoming safer and a better model for achieving good outcomes elsewhere. I recognize it already takes a tremendous amount of effort to run EA Global, and I appreciate the work CEA does putting these events on, so my intention is not to create an additional burden. But biosecurity is a cause many EAs are passionate about, and air safety is one of the most promising interventions to achieve deterrence-by-denial of engineered respiratory biothreats. I feel like making our own spaces safe from pathogens is a challenge that our community can and should rise to, and doing so will have outsized benefits on our ability to accomplish future policy. If you're interested in helping with this, let me know. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Being Indispensable
Revolutionise Your Inbox: My Interview with Steuart Snooks

Being Indispensable

Play Episode Listen Later Sep 19, 2022 42:42


Episode 160 is my interview with Steuart Snooks. Based in Melbourne, Australia, he is an Email and Workplace Productivity Expert. Steuart works with business owners, senior executives, their EAs, and support teams who are overloaded with email and crying out for practical, affordable solutions to the relentless demands of email and the workload it drives. Steuart has over 25 years' experience in researching and developing best practices for managing incoming email and workload, and restoring email to its rightful place as a powerful tool to facilitate improved workplace and personal productivity. Steuart has developed a breakthrough method that revolutionises the way Executives and their EAs/PAs manage their inboxes. It aims to elevate the skills and knowledge of the EA/PA to free up their time, energy, and headspace so they can offer greater support to their Executive, who can then focus on the higher-order thinking, strategic focus, and leadership aspects of their role. Much of this work is done with the EA/PA, minimising the time commitment needed by the Executive. In the episode Steuart shares: - the three pillars of his framework and why they are key to your effectiveness - simple tips to revolutionise your inbox (and your executive's) NOW - how you can find out more about the tactics that will make managing your executive's inbox and your own email less stressful and time consuming. Connect with Steuart on LinkedIn: https://www.linkedin.com/in/steuart-g-snooks-04b39a3/ Find out more about Steuart's online courses: https://www.emailproductivity.com.au/

ConvoCourses
Convocourses Podcast: Cybersecurity Consultant versus ISSO

ConvoCourses

Play Episode Listen Later Sep 18, 2022


http://convocourses.com   All right. I'm testing a new platform called StreamYard, and this is the Convocourses podcast. I'm going to do about twenty or thirty minutes to test this out and also to inform you of a career move I recently made. I haven't really talked about this. About three months ago I was working as a cybersecurity consultant, and that's much different from an information system security officer. So in the past three or four months I made a big move. Well, it's not really a big move for me; I've done both jobs before. All I want to do is compare the two and give you an idea of what the differences are between cybersecurity consulting and the information system security officer work I'm going to be doing: what the daily life of both of those roles looks like, how they compare, and which one you should choose. Before I start, you should know that I own a site called Convocourses, where I teach cybersecurity compliance and how to get into this field. I've been doing this for 20 years, doing all forms of security, as well as some IT (information technology) work like being a system admin or network administrator, stuff like that. My specialty, though, is really in security compliance, and that's what I teach people to do. People ask me questions on YouTube and TikTok, and I'll just go ahead and answer them. By the way, if you have any questions during this, feel free to ask and I'll do my best to answer. Sometimes we have such a great community that they'll actually answer the questions on my behalf. There are things I don't know, so some other subject matter expert will jump in and answer those questions.
Those are my favorite times on Convocourses, because that's what Convocourses, in my mind, is all about: the community coming together and figuring things out. Okay. So, I wanted to tell you that recently I made a huge move. I was working at a major telecommunications company that does cybersecurity on the side; they have a branch that does cybersecurity. I took the job because it was a great opportunity: one of my former coworkers referred me and brought me into the company. It was a great company with great benefits, some of the best benefits I've had outside the military, and decent pay. The only downside was that there was a lot of travel, and that eventually was the thing that got me out of there. It was stressful too, and I was having too many personal issues at the time. I worked there for about two and a half to three years, doing cybersecurity consulting for them. What we would do is bring our expertise to smaller companies, and to a lot of companies, banks, hospitals, and healthcare organizations that, to be honest with you, you probably use. Some of them I was surprised by; I was like, dang, I use this. We were doing security compliance for them, but it wasn't just security compliance. Basically, we would do a bunch of risk assessments, 12 to 15 different risk assessments depending on what they chose. We would do things like physical security assessments and, of course, network security assessments; there were about three of those. We did cloud-based security assessments and wireless security assessments. We would take all of those and give them an overall view of what their security looks like.
And then we would prioritize where their major risks were, and we would talk to the C-level, director, or upper-level management and say, hey, this is where you should focus your energy, because this is where we see the most risk. The purpose of that was to reduce any vulnerabilities they have, so they could focus their time, money, energy, and resources on the highest level of risk in their organization. That's what I was doing, and it wasn't too bad. I actually liked it; I fit right in over there. We would do these reports, which were really easy for me. The challenging thing I found was that sometimes the clients were a bit difficult to work with. It wasn't that they didn't know what they were doing or anything like that; it was just very high-strung, because cybersecurity can be very stressful. If you have a major vulnerability, you have to take that to the CEO and say, hey, we have a bunch of legacy systems in this area here. There's a lot of stress because you don't want to be the bearer of bad news, and we'd find those things and say, hey, you have this stuff going on. That's probably the hardest part of the whole thing. The travel wouldn't have been a big deal if I hadn't had so many personal issues happening with my family and kids that all happened at once. So, unfortunately, I had to leave, even though I actually really loved the people. What did my daily life look like? We were mostly going off East Coast time, because that's where most of my clients were. They'd give us two or three clients, and then you would work directly with them. Most of your day was coordinating the scans and the assessments you'd have to do; if you had to go to their site, you'd have to coordinate that.
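The risk prioritization described above is often modeled as a simple likelihood-times-impact score. The episode doesn't specify the scoring model this team used, so the sketch below, with invented findings and 1-5 ratings, is just one common way to do it:

```python
# Hypothetical risk register; finding names and scores are invented for illustration.
findings = [
    {"finding": "Legacy servers unpatched", "likelihood": 4, "impact": 5},
    {"finding": "Guest Wi-Fi not segmented", "likelihood": 3, "impact": 4},
    {"finding": "No badge audit at data center", "likelihood": 2, "impact": 3},
]

def risk_score(item: dict) -> int:
    # Classic qualitative model: risk = likelihood x impact, each rated 1-5.
    return item["likelihood"] * item["impact"]

# Sort so the management briefing leads with the highest-risk item.
for item in sorted(findings, key=risk_score, reverse=True):
    print(f'{risk_score(item):>2}  {item["finding"]}')
```

The sorted output is essentially the "focus your energy here" list that gets walked through with upper-level management.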
And they expect you to do that on your own. It was very self-directed: you have the client, you run the meetings with them, you coordinate when you're going to go there, how much time it would take to get there, and who you're going to meet; all of that you'd have to do. Then for the scans, we had a separate scan team. We'd work with the scan team and the program managers, and we'd put together this report to deliver on a quarterly basis, and sometimes annually, depending on what kind of assessment it was. Obviously you wouldn't do a physical assessment every quarter, because that wouldn't really make any sense; that stuff doesn't change. But anyway, that's what we would do. It's mostly meetings and coordination, doing scans, reviewing the scans, and then writing reports. That was your whole day as a cybersecurity consultant at the organization I was with, where the main thing we did was deliver these reports, and most of it was risk assessment type work. I was very familiar with that, because in the Department of Defense we do a lot of security assessments. So that's very different from my main core specialty, which is security compliance. We would dabble a little bit in security compliance every now and then: I would help them do a PCI audit, or we'd say, okay, here's how your system would fit into NIST 800, or here's how your system would fit into CIS controls. You'd do a little bit of that, but it was separate from what we were mostly doing, which was risk assessment type work: seeing where their risks are and determining that. Now that brings us to the next thing, which is information system security officer.
So the information system security officer role is more in the compliance space: security compliance. Security compliance is making sure an organization is lined up with regulations, laws, and industry standards. That doesn't have to be the federal government, which is mostly what I work with. For example, hospitals have certain standards they're supposed to meet, one of which is called HIPAA, where they have to make sure they're protecting their patients' healthcare information and digital health records. Another example of an industry standard would be PCI compliance: that's the protection of credit cards. So whenever you're at a store using your credit card, they're supposed to have a separate network for those point-of-sale devices, so that it doesn't touch, say, the Wi-Fi that's for the staff or for guests to log in to. It has to be a separate, protected network so that the credit card data is protected, separate from your other networks. That's just one of the things you have to do. Other things you have to do for PCI compliance include having adequate documentation for the security of the system, like making sure you have network diagrams and an inventory of all the assets, things like that. Those are the types of things you would have to do for PCI, and those are just two examples. You've also got CIS compliance, ISO 27001 compliance, many different countries with their own security compliance, and different industries with their own compliance. My specialty is in NIST 800 security compliance. NIST 800 is what the federal government has created and adopted as the main source of security controls. Security controls are a set of security features that protect the organization's primary assets.
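As a toy illustration of the point-of-sale segmentation idea above, Python's standard `ipaddress` module can check that two subnets don't overlap. The subnets below are invented, and a real PCI assessment examines firewall rules and network diagrams, not just address ranges, so treat this as a sketch:

```python
import ipaddress

# Hypothetical subnets; a real assessment would pull these from network diagrams.
pos_network = ipaddress.ip_network("10.10.20.0/24")    # point-of-sale devices
guest_wifi = ipaddress.ip_network("192.168.50.0/24")   # staff/guest Wi-Fi

# PCI DSS expects cardholder-data networks to be segmented from other traffic.
segmented = not pos_network.overlaps(guest_wifi)
print("POS network segmented from guest Wi-Fi:", segmented)
```

Non-overlapping address space is only one signal; actual segmentation also depends on routing and firewall policy between the two networks.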
That means things like your main server that has all the Social Security numbers on it, your main server that has all the secret data on it, or the main server holding all the maps of different parts of the world. That's what you call an asset. Those are just some examples, and those are some of the differences. Now, what does day-to-day life look like for an ISSO, just to compare it with the consulting I was doing? It's also a lot of meetings. Cybersecurity is a lot of coordination with different people, because you're having to meet different subject matter experts. You're not necessarily the person who's locking down that Windows server; that's going to be a server-type person, somebody like a system admin who specializes in Linux, Red Hat, network administration, or Windows Server 2019 Active Directory. So you're going to coordinate with them. That's what an ISSO does: they coordinate with all these different people, the firewall guy, the privacy person, to make sure the organization has a certain level of security. So it is a lot of meetings, with a lot of different people, and that's probably the main difference. An ISSO is going to have meetings with all kinds of people throughout one organization, whereas a consultant is going to have meetings with just a few people at different organizations. Like me: I had three or four clients at any given time, and I would talk to two or three main points of contact at each; every now and then I'd meet a C-level exec, but I was talking to three or four different organizations.
Whereas an ISSO is talking to maybe one organization (there might be sub-organizations, but they're all one), and you're talking to many people in that organization. So you're going really deep into all of the details and making sure that all the security is in place. Now, it's not an enforcement role. Typically you are more like a news reporter. What I mean by that is a lot of people think you're the police and you're going to come busting down doors and say, hey, we've got to secure this server. That's not really your job. You might point things out, but the person who has to be the enforcer is going to be management, because things come down from management; they have to be the ones to enforce that stuff. You might happen to be the mouthpiece to tell them, hey, the CEO just said this, but you're just a reporter. You're just reporting to them: hey, this is what happened; we have to obey this organization's policies; here's what we have to do. So those are the main differences between a security consultant and an information system security officer. The reason I quit my job as a consultant and am now going back to being an information system security officer has more to do with the travel than with the work per se. The organization I was at paid really well and had one of the best benefit packages I've ever had, but it was too much travel and I had too much stuff going on. I also had too many clients, it was getting a little stressful, plus I had family stuff to deal with. So that's the reason I transitioned over, and now I'm going somewhere that's going to be a better fit for me and my new family situation. So that's what's going on. Okay, I've got some questions here. Let me see, from Mike. Thanks, Mike, for your question. I really appreciate that.
And Mike asks: quick question, any suggestions on the ISSM role coming from being an ISSO? That's how I'm interpreting your question: you were an ISSO and now you're about to be an ISSM. Yeah. The biggest difference between these two roles is that one is a manager: the information system security manager. You're going to have even more meetings. I'll just tell you the differences. They both have a lot of meetings, but an ISSO has to be more in the weeds. To give an example of an issue: a vulnerability comes down, and let's make something up and say it's a zero-day exploit on Windows Server 2019. Now the ISSO gets wind of this from the vulnerability team. They have to meet directly with the vulnerability team to figure out what's going on with this thing, and they might have to spend some time researching what the zero-day exploit is and what its criticality is: how quickly do we need to fix this thing? They have to be in the weeds. They probably have to go look at the CVEs and figure out what this affects, look at a list of all the systems it's going to touch, and ask how quickly we can fix it. So an ISSO is more in the weeds in that they have to know what is going on at a technical level. They might not have to touch the system; a lot of times they're not the ones implementing the security controls, but they're coordinating with the people who have to implement those security controls.
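The CVE research step described here often boils down to sorting findings by CVSS score and working down the list. Here is a minimal sketch with invented CVE IDs and scores; only the severity bands follow the published CVSS v3 qualitative ratings:

```python
# Hypothetical vulnerability feed entries; IDs, scores, and systems are invented.
cves = [
    {"id": "CVE-0000-0001", "cvss": 9.8, "affects": "Windows Server 2019"},
    {"id": "CVE-0000-0002", "cvss": 5.4, "affects": "Internal web app"},
    {"id": "CVE-0000-0003", "cvss": 7.5, "affects": "Red Hat Linux hosts"},
]

def severity(score: float) -> str:
    # CVSS v3 qualitative bands: 9.0+ critical, 7.0+ high, 4.0+ medium, else low.
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"

# Triage: handle the highest-scoring findings first.
for cve in sorted(cves, key=lambda c: c["cvss"], reverse=True):
    print(cve["id"], severity(cve["cvss"]), "-", cve["affects"])
```

In practice the "how quickly do we fix it" answer comes from pairing each severity band with the organization's remediation timelines.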
Compared to that, an information system security manager's meetings are more with upper-level people. They're dealing with stuff that's more broad, things touching the entire organization, and making sure the security team has all the time and resources they need to do their work. You're going to have the same number of meetings or more, but they're going to be with upper-level management from other fields: you're going to be talking to the IT (information technology) manager and the network engineering manager, and coordinating with them. You're going to be talking about resources: how many resources do we have to do this work? Okay, we just had this zero-day on Windows Server 2019; do you guys have the resources and time to handle this? How much time do you need to actually get this done? So you're talking on a broader scale: how do we manage the resources our team needs to get this job done, and can we get it done effectively in a reasonable amount of time? Your main job is managing the expectations of upper-level management, the C-level execs, the directors and so on, as well as taking care of the people who work for you, the ISSOs. Your job is working for the ISSOs and managing the expectations of upper-level management. So you're still in cybersecurity, but it's more of a management role. You're not in the weeds, and you're not ever touching any technology. Whereas an ISSO might have to touch something at some point: they might have to work in the eMASS system, inputting information there, or they might have to create a security policy or help create and review the security policy.
They might look at audit logs or help enable audit logs. They might be the person doing threat detection and things like that. The managers are not doing that kind of stuff; they're working on resources for the information system security officers. So it's a great move, because ISSMs are legit managers, and they're typically paid a lot more. If you're a first-time manager, you should get a pay bump, and if you've been doing management for a while, say a year or two, you get a significant pay bump whenever you move. Those are the guys who eventually become directors; that's the path directly to director and C-level exec roles, which pay a lot of money. So that's a really good move, if that's what you're doing. That's awesome, man. And Mike says: got it. As an ISSO I worked with eMASS and CSAM and Tenable. Yep, Tenable Nessus and all that kind of stuff. That's right, exactly, you got it. ISSOs are more hands-on, touching stuff, whereas managers are not. They're going to ask things like: hey, do you have access to eMASS? Okay, cool, great. They might look in there: okay, let's make sure the system security plan is there. All right, any problems with the system security plan? Okay, good, no problems, let's go. Or: hey, does the new guy have access to eMASS? Does the new guy have access to Tenable? Okay, cool. Or: let me coordinate with the person who controls access to Tenable to make sure the new guy has it. Or: we just had some people leave; let's make sure that person no longer has access to eMASS or Tenable, stuff like that. That's the manager. They're not putting things into eMASS or running the scans, necessarily.
Sometimes I've been with managers who did do that kind of stuff, but it was because they wanted to. They were very sharp, very technical, and they wanted to do it, but they totally didn't have to, and they had other things to do, by the way. All right, let me shift gears. If you guys have any questions, feel free to ask me; I'm testing out this new platform, which is why it all looks a little bit different. In the meantime, let me show you that I have a book out called RMF ISSO, which walks you through a bird's-eye view of what NIST 800 is all about, and it's very quick. This is actually the audio version, which is only about one hour long. I've also got a deeper dive into the NIST 800 security controls, but I'm not hitting every single control. What I do is hit the families and give you a practical understanding of what the families of controls are, how you navigate them, and how to interpret them, focusing on an ISSO's perspective: which parts of each family do you really need to know? That's the kind of stuff I'm focusing on. Another thing you should know, if you didn't already, is that I have a podcast; I'm doing the podcast right now. The type of stuff you hear me talk about here is the kind of stuff that's going to be on the podcast. The difference is that with a podcast you could just be in your car on your commute, or cleaning or something like that, and listen to our conversation as you're doing your thing. That's the good thing about doing a podcast. I actually really like podcasts; I'm listening to one right now, learning a new language. Okay, let me see, there's another question here from Mike.
He says: can I book you as a consultant for my ISSO role? You know what, I'm actually in the middle of a couple of other consultations. Feel free to email me and I'll see if I can find some time for you. I'm not saying no, but let me see what I can do. I'm going to send you my contact; it's scrolling across the bottom there: contact@convocourses.com, if you're interested in getting some kind of consulting. I'm getting back into the workforce, so I'm not going to be able to do as much consulting as I was doing before, because my hours are going to get tapped. But hey, who knows, maybe we can do it before I actually start my job; right now I'm going through the background investigation process. Okay, I've got another question, from Mr. Fernandez. He says: I'm getting my bachelor's degree in cybersecurity in December, and I'm currently working in physical security for government contracting, so I'm dealing with classified documents and DOD things. Will I be able to get an entry-level ISSO job, in your opinion, yes or no? Okay, Ludwig, let me give you an example, and I hope my example can give you an idea. First of all, the short answer is yes. I know this because I actually started off in physical security myself. I was a Security Forces member in the Air Force, and basically I was a weapons expert. I don't even know if they still have it, but it was called 3P0X1; that was my AFSC, a specialty code they had in the military at that time. I don't know if they still use it, I haven't been following it, but basically I was a weapons specialist. I guarded planes, and if the president came to our base or whatever, I'd be on that detail.
Not much personnel security, to be honest; it was mostly guarding resources, and then I also did some law enforcement. So I knew a lot about the UCMJ, use of force, all that kind of thing: weapons training, combat training, working with the Army and the Marines and all branches, and with different countries' security people. But it was mostly physical security, and I cross-trained (that's what we call it) from physical security to cybersecurity. There's a lot of crossover; I was surprised to learn that. I'll just tell you a few things that are going to help you going from physical security over into cybersecurity, and into IT in general. Number one, you're going to have a very sound understanding of security overall, because it's not really that different. When you get into cybersecurity, there are just a lot more layers, and it's more complex because you've got defense in depth. Physical security still applies in cybersecurity, which is crazy, but when you think about it, it's common sense: if anybody can touch a system, then they own it. You can take the hard drive out, put it in another device, use password crackers, or run forensics tools on it to extract all the bits and figure out what people tried to delete; as a matter of fact, that's what forensics is all about. And speaking of forensics, some of the laws that pertain to you still apply: chain of custody, making sure things aren't tampered with during investigations, all those things apply. So some of the laws still apply. What else applies? Physical security checks and physical security assessments; the concept is similar and is actually still used in cybersecurity.
You still have to do physical security to make sure the facility and the room the information system resides in are protected, so all that stuff still applies. So it's going to help you. And then the main thing is that if you've dealt with classified documentation before, and if you have a security clearance, all of that will also help you get an entry-level job in cybersecurity, and specifically as an information system security officer, or really any entry-level position, because you have a security clearance. A lot of people confuse this: they think that if you're in cybersecurity, you have to have a security clearance. No, that's not the case; they're two different things. They should just call it a clearance; it's very confusing. A clearance just means a background check on you, to make sure you are trustworthy and don't have any criminal background that might cause a conflict of interest where you're working. A bank doesn't want somebody who robbed a bank, you know what I mean? It's stuff like that. A hospital probably doesn't want somebody who committed malpractice. It's not to say that if you had some kind of case against you in the past you couldn't work in cybersecurity; it's basically that certain things cause a conflict of interest, so they have to do a background check on you to make sure there's nothing that might allow you to be exploited, or that deems you untrustworthy for that particular job. So if you have a clearance, that really helps out a lot, and if you've handled classified information before, that actually helps you quite a bit as well, because some people don't have any experience with that and don't know how that world works; you knowing how that world works helps you quite a bit.
The main thing you need to focus on now is the technical side, because for me, going from physical security over to cyber security, the biggest challenge was learning all the terminology: learning information technology, learning how a computer works, how RAM, CPU, and storage all work together, and how to protect those components of an information system. Those are the main things, along with all the layers and minutiae of networks: how networks work, how you protect them, and ports, protocols, and services. Those are the things you need to really focus your mind on; the security side will come very naturally to you. So the answer to your question is yes, it will help you get an entry level job when you get that bachelor's degree. The one thing I would recommend you do while you're in school, and this is what I tell everybody, is try to get experience. Hands-on technical experience, if you can. Whatever college you're going to, or if you happen to be in the military, wherever you're at, try to get hands-on. If you see the work group managers (that's what we call them) fixing a computer, ask if you can help out. If they let you help fix that computer, whether it's updating virus definitions, updating security patches, whatever it is, even the simplest thing possible, even if it's just plugging in a router, you'll be able to put that on your resume. The experience is what they really want to see: a degree is great, certifications are great, but the experience is what they really want to see. Another thing I would highly recommend, if you have the time and the cycles to do it (some people do not), is to get a certification while you're working on your degree. A degree takes a pretty long time, and sometimes the certifications you pick up along the way help you get there.
If the college you're going to has a certification program, I would go ahead and take it. It's not a waste of your time, especially the CompTIA ones; and if you get any kind of cloud certification or networking certification, those are all going to help you out a lot on your resume. So I hope that answers your question. Okay, I've got another question here. Mr. Fernandez says he is Security+ certified but doesn't have much experience with physical hardware. Okay, yeah, that's what I'm saying: go ahead and get as much experience as you can with any aspect of information technology. At this point, since you're new, anything will help you out, whether it's help desk type work or updating virus signatures, like I said. The reason I keep bringing those up is that they're the simplest things that come up constantly over time. You've probably done them before; we do them so often that we don't even think about it, but that is something you can literally put on your resume. You just need to know how to articulate it. Speaking of articulation, just to do a little transition here: I'm working on a new book right now that's going to tell you how to actually break down a resume. I have a course on this already, so if you're interested (I'm not trying to cram anything down anybody's throat), I'm working on a book that's a lot cheaper, about 20 bucks or so, and it'll have downloadable templates. It's essentially this course right here, a process I've been using for a long time, and because of it I haven't been without a job. This thing works. Basically, all I did was ask myself: okay, how am I getting all these jobs?
I literally get like ten offers a day between LinkedIn messages, emails, and calls. It's not as many as it used to be before COVID, and now we have some kind of downturn in the economy, but it's at least six messages a day that I get for different jobs; I'm constantly getting inundated with these opportunities. So all I did was condense exactly how I'm able to do this into a course, and now I'm turning it into a book that tells you how to articulate any kind of security or cyber security experience into a workable template that is marketable to employers. That is what I'm doing, and it's coming. I actually finished the first draft, and I'm getting it edited right now as we speak. It's going to be a three or four book series where I break down not only how to market your resume and how to create it, using my resume and other people's resumes as samples, but I'm also going to expand into other books: one on how to get remote jobs, because people ask me about that a lot, and one on the different categories of cyber security, because from the questions people ask, I can tell they don't really know there are different aspects of cyber security. So that is what I'm doing. Mike says, "I bought this course from you. You need to update it." Oh, okay. Yes, updates are on the way. I'm working on a whole bunch of stuff right now; that's what I'm doing when I'm not on these calls. Okay, if there are no more questions, guys, I'm going to call it quits for the day, and I'll see you next time. See you on the next one. Thanks for jumping on this one. Thanks, Mike, for all your questions. Appreciate it.
Appreciate all the questions, and thanks, Mike, for the update. I will get on that. I appreciate you. See you later.

The Nonlinear Library
EA - Many therapy schools work with inner multiplicity (not just IFS) by David Althaus


Play Episode Listen Later Sep 17, 2022 36:49


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Many therapy schools work with inner multiplicity (not just IFS), published by David Althaus on September 17, 2022 on The Effective Altruism Forum. Cross-posted to LessWrong. Summary: The psychotherapy school Internal Family Systems (IFS) is popular among effective altruists and rationalists. Many view IFS as the only therapy school that recognizes that our psyche has multiple 'parts'. As an alternative perspective, we describe comparatively evidence-based therapy approaches that work with such 'inner multiplicity' in skillful ways: Compassion-Focused Therapy, which incorporates insights from many disciplines, including neuroscience and evolutionary psychology. Schema Therapy, which integrates concepts from cognitive behavioral therapy (CBT) and psychodynamic approaches, among others. Chairwork, the oldest psychotherapy technique used for working with the mind's multiple parts, and which inspired IFS. This post may be especially relevant for people interested in 'internal multiplicity' and those seeking therapy but who have had disappointing experiences with CBT and/or IFS or are otherwise put off by these approaches. Introduction: The psychotherapy school Internal Family Systems (IFS) is based on the idea that our minds have multiple parts. IFS therapy focuses on enabling these parts to "communicate" with each other so that inner conflicts can be resolved and reintegration can take place. For brevity's sake, we won't discuss IFS in detail here. We recommend this post for an in-depth introduction. What is 'inner multiplicity'? By 'multiple parts' or 'inner multiplicity', we don't mean to suggest that the human psyche comprises multiple conscious agents—though IFS sometimes comes close to suggesting that.
By 'parts', we mean something like clusters of beliefs, emotions and motivations, characterized by a (somewhat) coherent voice or perspective. Many forms of self-criticism, for instance, could be described as a dominant part of oneself berating another part that feels inferior. Different parts can also get activated at different times. Most people behave and feel differently during a job interview than with their closest friends. This idea is shared by many theorists of various schools, often using different terminology. Examples include 'sub-personalities' (Rowan, 1990), 'social mentalities' (Gilbert, 2000), and 'selves' (Fadiman & Gruber, 2020). It is not only new-agey softies who espouse this perspective: the related concept of a modular mind is shared by unsentimental evolutionary psychologists (e.g., Kurzban & Aktipis, 2007; Tooby & Cosmides, 1992). In any case, this is a complex topic about which much more could be written. For a detailed "gears-level model" of inner multiplicity (and for why working with parts can be helpful), see Kaj Sotala's Multiagent Models of Mind. IFS is popular and seen as superior to traditional psychotherapy: IFS is very popular among EAs and especially rationalists. If you were to only read LessWrong and the EA forum, you might think that there are only two therapy schools: IFS and cognitive behavioral therapy (CBT). IFS has its own LessWrong Wiki entry. Searching for "internal family systems" on LessWrong yields many more results than any other therapy, besides CBT. IFS is even credited with inspiring influential CFAR techniques like Internal Double Crux. Most of Ewelina's clients (Ewelina is a psychotherapist mostly working with EAs) know and respect IFS; few have heard of other therapy schools besides CBT, IFS or perhaps traditional psychodynamic approaches. Some EAs even believe that IFS can "revolutionize" psychotherapy.
IFS is often regarded as superior to standard psychotherapy, i.e., cognitive behavioral therapy (CBT), mainly for two reasons. First, while CBT is viewed as treating the psyche as unitary, IFS acknowledges that we have multip...

The Nonlinear Library
EA - "Doing Good Best" isn't the EA ideal by Davidmanheim


Play Episode Listen Later Sep 16, 2022 10:44


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Doing Good Best" isn't the EA ideal, published by Davidmanheim on September 16, 2022 on The Effective Altruism Forum. Holden recently claimed that EA is about maximizing, but that EA doesn't suffer very much because we're often not actually maximizing. I think that both parts are incorrect. I don't think EA requires maximizing, and it certainly isn't about maximizing in the naïve sense in which it often occurs. It is also my view that Effective Altruism as a community has in many or most places gone too far towards this type of maximizing view, and it is causing substantial damage. Holden thinks we've mostly avoided the issues, and while I think he's right to say that many possible extreme problems have been avoided, I think we have, in fact, done poorly because of a maximizing viewpoint. Is EA about Maximizing? I will appeal to Will MacAskill's definition, first. Effective altruism is: (i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding 'the good' in impartial welfarist terms, and (ii) the use of the findings from (i) to try to improve the world. Part (i) is obviously at least partially about maximizing, in Will's view. But it is also tentative and cautious, rather than a binary - so even if there is a single maximum, actually doing part (i) well means we want to be very cautious about thinking we've identified that single peak. I also think it's easy to incorrectly think this appeals to utilitarian notions, rather than beneficentric ones. Utilitarianism is maximizing, but EA is about maximizing with resources dedicated to that goal. It does not need to be totalizing, and interpreting it as "just utilitarianism" is wrong.
Further, I think that many community members are unaware of this, which I see as a critical distinction. But more importantly, part (ii), the actual practice of effective altruism, is not defined as maximizing. Very clearly, it is instead pragmatic. And pragmatism isn't compatible with much of what I see in practice when EAs take a maximizing viewpoint. That is, even according to views where we should embrace fully utilitarian maximizing - again, views that are compatible with but not actually embraced by effective altruism as defined - optimizing before you know your goal works poorly. Before you know your goal exactly, moderate optimization pressure towards even incompletely specified goals that are imperfectly understood usually improves things greatly. That is, you can absolutely do good better even without finishing part (i), and that is what effective altruism has been and should continue to do. But at some point continuing optimization pressure has rapidly declining returns. In fact, over-optimizing can make things worse, so when looking at EA practice, we should be very clear that it's not about maximizing, and should not be. Does the Current Degree of Maximizing Work? It is possible in theory for us to be benefitting from a degree of maximizing, but in practice I think the community has often gone too far. I want to point to some of the concrete downsides, and explore how maximizing has been and is currently damaging to EA. To show this, I will start from exclusivity and elitism, go on to lack of growth and narrow vision, and then turn to a focus on current interventions. Given that, I will conclude that the "effective" part of EA is pragmatic, and fundamentally should not lead to maximizing, even if you were to embrace a (non-EA) totalizing view. Maximizing and Elitism: The maximizing paradigm of Effective Altruism often pushes individuals towards narrow goals, ranging from earning to give, to AI safety, to academia, to US or international policy.
This is a way for individuals to maximize impact, but leads to elitism, because very few people are ideal for any giv...

The Nonlinear Library
EA - Improving "Improving Institutional Decision-Making": A brief history of IIDM by Sophia Brown


Play Episode Listen Later Sep 13, 2022 21:14


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Improving "Improving Institutional Decision-Making": A brief history of IIDM, published by Sophia Brown on September 12, 2022 on The Effective Altruism Forum. Note: I'm undertaking a larger project to assess the potential of improving institutional decision-making (IIDM)—sometimes referred to as “institutional decision-making,” “effective institutions”, etc.—as a cause area or strategy within EA. My goal is to have more public discourse about IIDM. This is my first post in a series, which is the product of my conversations with EAs and research over the last 6 months. My conclusions are all tentative, and I invite all to collaborate with me in understanding and improving institution-focused work in EA. Thank you to Lizka Vaintrob, Konrad Seifert, John Myers, Nathan Young, Nuño Sempere, Jonathan Schmidt, Ian David Moss, Dan Spokojny, and Nick Whitaker for comments during the writing process. Mistakes are my own. Improving institutional decision-making (IIDM) has emerged as a term and point of discussion within EA in the last five years. It's been discussed both as a cause area in itself and as a strategy within cause areas. It's associated with other EA areas of interests, such as forecasting and political engagement. There's an obvious guiding intuition here: institutions are powerful forces in world history and drastically affect global well-being and the longterm future. At the same time, many see EA's emphasis on IIDM, at least in its current state, as highly uncertain, intractable or, if it is tractable, posing downside risks. There's a lot to disentangle. Several long posts and articles take stabs at how to understand IIDM, measure its impact, and where to focus attention. 80,000 Hours considers it among the “second-highest priority areas,” alongside nuclear security and climate change. 
But in comparison, it has a less certain theoretical basis and has seen few projects involving practice. In this post, I trace the development of IIDM in EA. In brief, I believe that, because of its history in EA, IIDM is thought of too much in group rationality and forecasting terms, deemphasizing other tractable and worthwhile institutional reforms. Tracing IIDM Origins: There is a deep literature on how governments function, how companies are organized, etc. Academics, particularly economists, political scientists, and sociologists, have studied institutions. Others in epistemology, psychology, and behavioral science have studied decision-making. The general arc of IIDM within EA begins with early conversations about how to engage with institutions broadly, dating back to at least the early 2010s. Around the same time—though as far as I can tell somewhat separately—the Rationality Community began growing in prominence as they investigated how to make better—more rational—decisions. These discussions began in Overcoming Bias (2006), the blog by Robin Hanson and Eliezer Yudkowsky, and later the group blog LessWrong (2009). I understand current conceptions of IIDM in EA as blends of these two threads of institutional and decision-making work. The 80,000 Hours problem profile on IIDM—the first big piece making the case for IIDM as an area of interest—drew predominantly from the decision-making thread. But broader notions of institutional work persisted. IIDM was further elevated by the IIDM working group of 2019-2020, which evolved into the Effective Institutions Project (EIP). Since then, several organizations and projects have done explicitly IIDM-branded work, and more have done work that is implicitly related to IIDM. There has also been a massive growth of adjacent work, like policy advocacy and political engagement. I'll now go a bit more deeply into each of the originating threads.
Effective Altruism and Institutions: Early discussion in EA about institutions seems to have...

The Nonlinear Library
EA - EA architect: Building an EA hub in the Great Lakes region by Tereza Flidrova


Play Episode Listen Later Sep 13, 2022 10:05


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA architect: Building an EA hub in the Great Lakes region, published by Tereza Flidrova on September 13, 2022 on The Effective Altruism Forum. EA is growing, and there is a great need for a variety of spaces for the community to live, work and socialize. Traditionally, EA communities formed around hubs like Boston, the Bay Area or London. But, I feel like recently, there has been a growing demand for diversification of EA hubs around the world. This article aims to introduce the Great Lakes region as a potential place for a new EA hub. I will argue that a new hub could be set up in the green and walkable town of Sandusky, located right on Lake Erie, allowing EAs to benefit from the existing thriving community, facilities and infrastructure. My involvement in the project started a few weeks ago, when I spent a week in Sandusky, Ohio, visiting Andy Weber (a great 80k episode with Andy here), a former Assistant Secretary of Defense for Nuclear, Chemical & Biological Defense Programs, and his wife Christine Parthemore, Chief Executive Officer of Council on Strategic Risks. I was on a mission: to explore Sandusky and help figure out whether it would be a suitable location for an EA hub. Those of you living a nomadic lifestyle, working remotely or seeking to improve your mental health, productivity and wellness, stay tuned: Sandusky might be a place for you! So why the Great Lakes region, and why Sandusky in particular? In Andy's words: 'Sandusky is the remote worker's paradise', and after visiting, I have to say I wouldn't disagree! The following paragraphs describe the main reasons why Sandusky is a great place for remote work or perhaps establishing an EA hub. In Sandusky, you get more for less. Sandusky is much cheaper than the usual places where EAs tend to gather. 
According to Redfin (Redfin n.d.), the median price of Sandusky home prices for July 2022 was $200K, as compared to $825K for NYC and $1,460K for SF. In times of remote working, places like Sandusky might offer EAs a chance to work from one place, offering an alternative to living in the most expensive places in the world. Living costs as well as costs associated with running an EA hub in Sandusky are much more favorable and would buy EAs better housing and healthier work environments for far less money. Life at the water. Living near the water, similarly to living surrounded by greenery, seems to be associated with many positive measures of physical and mental wellbeing (Hunt 2019). Sandusky is located at Lake Erie, a freshwater reservoir offering life right at the waterfront. It's one of the cities of the Green Cities Initiative, an initiative of cities by the Great Lakes that strive to offer a place to do remote work in times of global warming and covid. The initiative wants to offer life at a freshwater resource (the Great Lakes provide 84% of North America's surface freshwater (“Green Cities Initiative” n.d.)), with access to nature. I like the idea from an urban design perspective: it brings cities from the region together to work collaboratively on developing the region as a whole. For a smaller town like Sandusky, this means better connectivity to the communities and facilities surrounding it. Access to nature. People living near green spaces seem to be less likely to have anxiety and depression and are more likely to be physically active (Jennings, Johnson Gaither, and Gragg 2012). Many of us choose to live in cities despite the distance from nature because of all their benefits, like access to jobs, opportunities, and people. However, living in loud, densely populated and congested urban areas can get overwhelming and stressful and can negatively impact our mental health or sleep quality. 
Sandusky could offer an escape with more immediate access to nature. It is surrounded by parks, and the Magee...

The Nonlinear Library
EA - Bring legal cases to me, I will donate 100% of my cut of the fee to a charity of your choice by Brad West


Play Episode Listen Later Sep 12, 2022 2:13


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bring legal cases to me, I will donate 100% of my cut of the fee to a charity of your choice, published by Brad West on September 11, 2022 on The Effective Altruism Forum. I am a lawyer who practices near Chicago, Illinois, and I also started a 501(c)(3) charity called the Consumer Power Initiative. CPI is interested in using our economic power to benefit charities. To that end, I would like to donate 100% of my cut of any business I bring in to the charity of choice of whoever brings in the business. I get a third of our firm's fee for business that I bring in. Thus, in a personal injury case, we would get a third of any money generated, so I would donate my cut (a ninth) to the charity of your choice. In an hourly case, I would get a third of the fees generated. Randall Wolff & Associates is the firm I work for, and we do a great job for our clients; you can check out our high score on Google Reviews. We would love to take your personal injury, worker's compensation, bankruptcy, or divorce cases. We would also be interested in seeing if we can help you with commercial disputes. I would like EAs more generally to do stuff like this. Are you a professional for whom rainmaking could be valuable to your business, and are you interested in helping effective charities anyway? Make it explicitly known that you will direct your cut of the fees to a specific charity, or a charity of their choice. Earning to Give can be much more effective if you use the fact that you donate your earnings to generate business. If you would like to do this, please initiate any conversations about representation by either emailing brad@rwolfflaw.com or calling 847-222-9465 and speaking with Brad West first. It needs to be clear that I generated the business for me (and thus your charity) to get the fee.
If you're interested in learning more about the Consumer Power Initiative, check out our newsletter. Also check out BOAS, a company in the EU that sells sustainable baby products and directs all profits to charities. CPI Newsletter: [link] BOAS website: boas.co Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - My closing talk at EAGxSingapore by Dion


Play Episode Listen Later Sep 12, 2022 7:29


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My closing talk at EAGxSingapore, published by Dion on September 12, 2022 on The Effective Altruism Forum. I gave this talk at the end of EAGxSingapore, which I helped organize. It has been modified for clarity. Before I start the talk proper, I wanted to get a show of hands – if someone in this conference needs help, who would be willing to lend a helping hand, whether it's sending a message, reviewing a draft or hopping onto a call? (The majority, if not all, raised their hands.) For everyone new to EA conferences, take a look around! If there's someone that you wanted to reach out to but didn't send that message, check if they have their hands up. If they do, you should ask them for a call! I'll get back to why I asked you guys to raise your hands later. When preparing for this talk, the best advice I got was that I should be authentic, speak from my own experiences and not try to be someone I'm not. So I'm sharing a little about my first conference, what I took away from it, and what I hope you will take away from EAGxSingapore. My first EA conference was EAG London 2022. Like many of you, I watched Amy Labenz, Will MacAskill, and Benjamin Todd and I was so inspired. I wanted to be an effective altruist and improve the world. So, I applied for the conference and got accepted. A wave of fear hit me a few days before I was scheduled to fly. I had made an insane decision to meet a bunch of online strangers halfway across the world. It was something that I struggled to explain to my parents, and many of my non-EA friends had doubts about what I was doing. It all seemed like a terrible idea. It didn't stop there; scheduling 1-1s for someone introverted and new to the community was horrifyingly scary.
All the people on Swapcard seemed so amazing and out of my league: researchers, specialists, and directors working on incredible projects. And scheduling 1-1s is only the first step, because once you have a successful 1-1, you need to follow up. This means starting something, applying for a job, looking for collaborators or hiring someone. It also means that you'll constantly be stepping out of your comfort zone to grow, but it can also be challenging and nerve-wracking. So when people tell me that a common criticism of EA is that we are an emotionless bunch that only cares about data and numbers, I disagree, because I associate EA with this fear and anxiety. However, beyond this fear and anxiety, there are two more things that I strongly feel when I'm around EAs. The first one is hope. Don't you think it is ridiculously and wondrously hopeful how so many of us think we can stop an existential risk? That something that can wipe out the whole of humanity can be mitigated by us all coming together and working on the world's most pressing issues. I think that that's the magical part of coming to conferences: that somehow so many of you have come together because we all believe we can do some good in the world. This makes me incredibly hopeful for the future. The second thing that I feel is warmth. The 1-1s were scary. However, it was also heartwarming how so many of these fantastic people made time for me, especially when many would not have gained value from it. People were willing to be mentors, guide me along my journey, and, even more importantly, I met lifelong friends. People who were there to check in on my mental health, gossip with me, and people who I hope will stay in my life far beyond my EA commitments. And this is something that I hope you guys will find as well. What I'm trying to say is that EA can be hard. It can be very demanding of you and your worldviews.
And I feel kind of bad because I know it's difficult, but I'm asking you to take a leap of faith. Here are my three reasons why. The first reason is that I think you are worth it. EAGxSingapore was...

The Nonlinear Library
EA - Aversion to Happiness Across Cultures: A Review of Where and Why People are Averse to Happiness by timfarkas


Play Episode Listen Later Sep 11, 2022 3:31


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Aversion to Happiness Across Cultures: A Review of Where and Why People are Averse to Happiness, published by timfarkas on September 11, 2022 on The Effective Altruism Forum. Context: This is a paper I just randomly found again in my saved reading list that seems relevant to the EA Forum. It talks about the concept of 'aversion to happiness', which I have never heard of before in EA circles. This paper might thus be interesting to EAs who consider some form of happiness/good qualia intrinsically valuable and certain measures of happiness as their core unit of intrinsic value (of which there are many in my experience). Abstract: "A common view in contemporary Western culture is that personal happiness is one of the most important values in life. For example, in American culture it is believed that failing to appear happy is cause for concern. These cultural notions are also echoed in contemporary Western psychology (including positive psychology and much of the research on subjective well-being). However, some important (often culturally-based) facts about happiness have tended to be overlooked in the psychological research on the topic. One of these cultural phenomena is that, for some individuals, happiness is not a supreme value. In fact, some individuals across cultures are averse to various kinds of happiness for several different reasons. This article presents the first review of the concept of aversion to happiness. Implications of the outcomes are discussed, as are directions for further research."
Rough Review and Summary: I have skimmed some of the paper; here are my thoughts. A lot of it focuses on the effects of outward expression of happiness on other people's happiness, or the fact that embracing happiness might lead to more subsequent suffering, which is still compatible with happiness being a core unit of value/morality. I thus do not think these claims are very relevant to EA. However, the paper also claims that in some cultures, individuals do not consider being happy as morally desirable, or even consider being happy as bad - which would be incompatible with EA notions of considering happiness intrinsically valuable: "People aren't just averse to happiness because it might lead to [subsequent unhappiness], however; some individuals and some cultures tend to believe that happiness is worthy of aversion because being happy can make someone a worse person (both morally and otherwise). Again, we found evidence for this belief in both non-Western and Western cultures. First we discuss beliefs that happiness is worthy of aversion because it can make someone a morally worse person, and then we discuss beliefs that happiness is worthy of aversion because it can make someone less creative." It gives a few examples of that and later goes on to summarize and conclude: "It should be noted that [this paper] casts little doubt on the intrinsic value of most kinds of happiness. Indeed, while happiness and the pursuit of certain kinds of happiness are widely believed to have negative effects for some people in some cases, happiness is, in and of itself, still a positive experience for most people and according to most of the common conceptions of happiness. Nevertheless, it should not be in doubt that many individuals and cultures do tend to be averse to some forms of happiness, especially when taken to the extreme, for many different reasons.
These considerations show that equating happiness with the supreme universal good is dangerous unless each culture (or individual!) were to create and be assessed by its own definition of happiness." It also makes a few interesting points about international comparisons of happiness across nations being potentially flawed due to response bias caused by aversion to happiness. Thanks for listening. To help us out with The Nonlinear L...

The Nonlinear Library
EA - Zzapp Malaria: More effective than bed nets? (Wanted: CTO, COO & Funding) by Yonatan Cale

The Nonlinear Library

Play Episode Listen Later Sep 9, 2022 8:01


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Zzapp Malaria: More effective than bed nets? (Wanted: CTO, COO & Funding), published by Yonatan Cale on September 9, 2022 on The Effective Altruism Forum. TL;DR: This post describes Zzapp's approach and effectiveness from their own perspective, as an intro aimed at the Effective Altruism community and an invitation to investigate further and maybe fund them. They claim to be 2x more cost-effective than bed nets in reducing malaria in urban and semi-urban areas (home to over 70% of Africa's population). Epistemic status: Based on conversations with Arnon, the CEO of Zzapp Malaria; not cross-checked with other information such as GiveWell's review of the Against Malaria Foundation. Zzapp's approach and the theoretical reason to think it would work (you can skip to their experiment and how it went, if you prefer). TL;DR: Spray water bodies with larvicide to prevent mosquitoes from reproducing, and do it extra well by managing the considerable ops work of finding and spraying the water bodies using satellite imaging and an app for the people on the ground. Spraying water bodies with larvicide is tried and works, independently of Zzapp. Sources [link] [link]. Theoretical advantages compared to bed nets: In every place where malaria was eliminated (which has happened many times), larvicide (the treatment of standing water bodies) was the main component. Bed nets only help people indoors during the night. Many people don't use their bed nets. Mosquitoes have developed resistance to the bed nets' insecticide in many countries. Note: I think GiveWell already took these problems into account in their analysis, and Arnon emphasizes he thinks bed nets are great; this is a pitch for using larvicide in urban (and semi-urban) areas, not for stopping the distribution of bed nets. Zzapp thinks the ideal solution would probably combine many interventions.
We are writing this as a comparison with bed nets since EAs already think bed nets are great. Problems in existing larvicide approaches. Problems in theory: Coverage is important. What matters is [how many water bodies you find] and [how many of those you spray], and the difference between 95% and 50% is really big, similarly to the situation when vaccinating 95% or 50% of the population, because of the effect on R (the reproduction number): fewer infected people will infect fewer other people; it snowballs, but in a good way (hopefully), and the same is true for the reproduction of mosquitoes. Existing solutions have bad coverage: People miss water bodies in the areas they are assigned to search. People miss entire areas. Even when water bodies are found, the spray team may still skip them or forget to treat them according to schedule. Small RCT: In a (tiny) randomized controlled trial run by Zzapp and AngloGold Ashanti Malaria Control (AGAMal), two groups scanned the same square kilometer, one group using the app and one without it, and the group with the app found 28% more water bodies. Scanning an entire town: In a different operation, when scanning an entire town with AGAMal, they found 20x more water bodies when using Zzapp's app (publication in progress; we'll add a link when it's ready). From that, they think that on a larger scale the app has an even greater impact. What happened behind the scenes is that without the app, the scanners skipped entire neighborhoods. Not a problem: poisoning water bodies. The larvicide in the relevant quantities (Bti) isn't poisonous to humans, animals, or insects other than mosquitoes and black flies.
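The snowball effect of coverage on the reproduction number described above can be illustrated with a toy model. This is not Zzapp's model, and every number below (baseline R, larvicide efficacy) is an illustrative assumption: if a fraction of breeding sites is treated, and transmission scales with the untreated remainder, the effective reproduction number is roughly R_eff = R0 * (1 - coverage * efficacy).

```python
# Toy model of why larvicide coverage matters non-linearly.
# All numbers are illustrative assumptions, not Zzapp's data.

def effective_r(r0: float, coverage: float, efficacy: float) -> float:
    """Effective reproduction number if a fraction `coverage` of
    breeding sites is treated with larvicide of the given `efficacy`,
    assuming transmission scales with untreated breeding capacity."""
    return r0 * (1 - coverage * efficacy)

R0 = 2.0        # assumed baseline reproduction number
EFFICACY = 0.9  # assumed effectiveness in treated water bodies

print(effective_r(R0, 0.50, EFFICACY))  # 1.1  -> above 1, transmission persists
print(effective_r(R0, 0.95, EFFICACY))  # 0.29 -> below 1, transmission dies out
```

The point is the threshold at R_eff = 1: under these assumed numbers, 50% coverage leaves an epidemic that sustains itself, while 95% coverage pushes it toward extinction, which is why finding 20x more water bodies matters more than a linear reading would suggest.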
Zzapp's advantages compared to "manual" larvicide: Zzapp has an app they give to the people "on the ground". The app follows where the people go and lets them mark "I checked this house's garden", "I found a water source here", and "this house didn't let me in". The control room shows a map with "Here...

The Nonlinear Library
EA - A California Effect for Artificial Intelligence by henryj

The Nonlinear Library

Play Episode Listen Later Sep 9, 2022 7:02


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A California Effect for Artificial Intelligence, published by henryj on September 9, 2022 on The Effective Altruism Forum. I just finished writing a 50-page document exploring a few ways that the State of California could regulate AI with the goal of producing a de facto California Effect. You can read the whole thing as a Google doc here, as a pdf here, or as a webpage here, or you can read the summary and a few key takeaways below. I'm also including some thoughts on my theory of impact and on opportunities for future research. I built off work by Anu Bradford, as well as a recent GovAI paper by Charlotte Siegmann and Markus Anderljung. This project was mentored by Cullen O'Keefe. I did this research through an existential risk summer research fellowship at the University of Chicago — thank you Zack Rudolph and Isabella Duan for organizing it! Abstract: The California Effect occurs when companies adhere to California regulations even outside California's borders because of a combination of California's large market, its capacity to successfully regulate, its preference for stringent standards, and the difficulty of dividing the regulatory target or moving beyond California's jurisdiction. In this paper, I look into three ways in which California could regulate artificial intelligence and ask whether each would produce a de facto California Effect. I find it likely (~80%) that regulating training data through data privacy would produce a California Effect. I find it unlikely (~20%) that regulation based on the number of floating-point operations needed to train a model would produce a California Effect. Finally, I find it likely (~80%) that risk-based regulation like that proposed by the European Union would produce a California Effect. If this seems interesting, please give the full paper a look.
There's a more-detailed 1.5-page executive summary, and then (of course) the document itself. Key Takeaways: The California Effect is a powerful force multiplier that lets you have federal-level impact for the low(er) price of state-level effort. There are ways to regulate AI which I argue would produce a California Effect. State government in general, and California's government specifically, are undervalued by EAs. I believe that EAs interested in politics, structural change, regulation, animal welfare, preventing pandemics, etc., could in some cases have bigger and/or more immediate impacts on a state level than on a federal level. There are still plenty of opportunities for further research. Theory of Impact: My hope, and ultimate theory of impact, is that this paper will help policymakers make better-informed decisions about future AI regulations. I hope to encourage those who believe in regulating artificial intelligence to give more attention to the State of California. At the very least, I hope that people with a broader reach than I have in the AI governance space will read and even build off this work. I hope I can raise their awareness of the California Effect and ensure that they recognize the disproportionate impact it can have in the race to keep artificial intelligence safe. Opportunities for further research: Before I list my own thoughts, I will direct readers to the list of further research opportunities that Charlotte Siegmann and Markus Anderljung collected in an announcement for their report on the potential Brussels Effect of the EU AI Act. I'm personally choosing to highlight their fourth and sixth bullet points, which I think would be especially effective (the latter even more so): "Empirical work tracking the extent to which there is likely to be a Brussels Effect. Most of the research on regulatory diffusion focuses on cases where diffusion has already happened. It seems interesting to instead look for leading indicators of regulatory diffusion.
For exam...

The Nonlinear Library
EA - Save the Date: EAGx LatAm by LGlez

The Nonlinear Library

Play Episode Listen Later Sep 8, 2022 3:14


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Save the Date: EAGx LatAm, published by LGlez on September 8, 2022 on The Effective Altruism Forum. EAGx LatAm will take place in Mexico City at the Casa de la Bola Museum, 6th-8th of January. One of our goals is to connect the LatAm community with the broader international community. So, although this event is primarily for the Spanish- and Portuguese-speaking EA communities, applications from experienced members of the international community are very welcome! What better way to start 2023? This will overlap with the EA Fellowships and Visitors Programme that we're running in Mexico 1st Nov-30th Jan with the aim of kickstarting an EA Hub in the city. So expect great EA vibes on top of fantastic local (vegan) cuisine and fun cultural activities like city tours and regional dance lessons. Applications will open in October. Why EAGx LatAm? 1) The EA movement in LatAm has grown over the last couple of years, with new local and university groups in Mexico, Chile and Colombia joining already well-established groups like EA Brazil. This EAGx is an occasion to foster a sense of community and connect people who are facing similar opportunities and challenges. 2) Many people from Latin America experience barriers to attending international EAG(x) conferences in the US or Europe, including visa problems and inconvenient, expensive international connections. As a result, the international community is missing out on talent and insights from a sizable part of the population, while the LatAm community misses out on opportunities and valuable networks. With this in mind, the conference aims to: raise the profile of EA in LatAm; create and strengthen connections between EAs within LatAm and the Spanish- and Portuguese-speaking communities; and create and strengthen connections between EAs in LatAm and the rest of the world.
Who is this conference for? This conference is for you if you: are from LatAm, new to EA, and looking forward to learning more and connecting with like-minded people (most of our talks will be introductory and cover a broad range of topics); are from LatAm, very much not new to EA, and excited to meet other members of the community; or are an experienced member of the community from outside LatAm and keen to connect with EAs who might be under your radar. I don't speak Spanish, is this event for me? Yes! First of all, although the lead organisers (Sandra Malagón and Laura González) are also the coordinators of the Spanish-speaking community, we're in touch with the Brazilian community, and our goal is to welcome people interested in EA across LatAm and the world at large, regardless of language(s) spoken. Most talks will use English as a lingua franca, with the exception of a few topic-specific talks and workshops like "Community building in the Portuguese/Spanish-speaking world", which will be run in the relevant languages. We will have a system of stickers or colour-coded tags for participants to signal the languages they speak. If you're unsure about whether to apply, err on the side of applying. If you have any questions or comments, don't hesitate to write to latam@eaglobalx.org. We accept emails in English, Spanish and Portuguese :) See you in Mexico! The Organising Team. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

ConvoCourses
Convocourses Podcast: Cybersecurity to Study in 2021 SCA Resume

ConvoCourses

Play Episode Listen Later Sep 7, 2022 73:09


http://convocourses.com   Hey, happy new year, everybody. This is a podcast for ConvoCourses, and today we're gonna be talking about a few questions that have been asked of me. I've got a resume to go through, and I wanna talk to you guys about 2021: what I'm gonna be studying this year as a focus, like certifications or just sharpening my skills, and some things I'd recommend you look at too, 'cause I think it's worth looking five years ahead at what I think is gonna happen as far as our industry is concerned — cyber security, data analysis, things like that. So let's get started. The first thing I wanna talk to you guys about is some of the things that I'm gonna study in 2021, the things that I think are gonna be relevant going forward. Let me just switch my screen here. The very first thing I wanna show you is blockchain technology. This is something I think is gonna be more and more relevant. If you've been watching the news, you've been seeing cryptocurrency going off the rails lately, and a lot of that technology, the money, is based on blockchain. I don't think this technology is going away. It has all the hallmarks of what I saw with cloud computing many years ago: everybody kept talking about it, and it just kept coming up over and over again. It's really the same trend I'm seeing, where all these gigantic companies and organizations are dipping their toes into blockchain technology. Very quickly, what it is: it's basically a distributed digital ledger that allows you to not have a middleman. Normally, with a bank, for example, the bank is a middleman to your money. Your money is there.
You have to go to the bank to get your money, but with a digital ledger, essentially your money is out there on the web, distributed all over the place and encrypted so that you can access it. It's secure, it allows you to be anonymous, and it validates transactions so that people can't claim they didn't make a payment or didn't receive one. It's immutable; that's what that means. So the technology is emerging, slowly but surely, and not just for cryptocurrency, by the way, but also for things like logistics. Even voting can be done with the blockchain; many other things that we use every day can use blockchain technology. So that's why I'm gonna be studying more of this, the actual technology behind it, as opposed to just cryptocurrency for the sake of making money and investments; that's a whole separate issue. Blockchain itself does much more than just money. Another thing you should know about blockchain technology is that, let me see, Oracle is starting to use it, Walmart is starting to use it, and many other organizations and governments are starting to dip their toes in this technology. It looks a lot like what cloud technology looked like about 10 years ago. All right. Another thing I'm gonna be studying very heavily is cyber threat intelligence. This is becoming much more important to anybody who does cyber security. What this is, from a high level: if you have a customer, or you're in an organization, and you're protecting someone's assets — their laptops, their servers, their information, their personnel — cyber threat intelligence is where you do recon to see if anyone is
looking into trying to break into those assets. One of the ways you could do it is to have a cyber threat intelligence system that goes out and checks the dark web and the internet to see who's talking about your organization. Does anybody have the IPs of your organization, or is anybody scanning your organization? So you're looking for where people are trying to get into your organization; you're doing preemptive checks to see if there's anyone trying to get into your systems. This is gonna be more and more important as technology becomes even more central in all of our lives. If you've looked at the recent gigantic hacks that are going on, state-sponsored hacks are happening, and one of the ways to have some kind of defense against state-sponsored actors is to actually do cyber threat intelligence: see if anybody has been casing the joint, scanning your network, and see if you have any vulnerabilities out there. So cyber threat intelligence is something I'm gonna really dive into this year, and that's gonna start off with things like ethical hacking; then I'm gonna get into cyber threat intelligence, 'cause you gotta know a little bit about ethical hacking to have a deeper understanding of what threat intelligence is. Another thing I'm gonna dive into this year, and I've put it off way too long, is cloud computing technology. This is something I talk about a lot on this channel, and it is just getting more and more important. It's not going away; it's really become a centerpiece of all of our lives, whether you know it or not. If you watch Netflix, if you use Gmail, if you use Hotmail, whatever you use, most of these gigantic technologies are using cloud technologies on the back end.
So it's just becoming more and more important, and me as a cyber security person, I need to have a deeper understanding of what that is all about. So those are the things that I'm gonna study this year for 2021, and possibly get certifications in some of these technologies. Actually, it's become a requirement: two of the things on that list I just mentioned have become a requirement for the job that I work at, so I have to actually get a certification in 'em. This is something that I'm definitely gonna do, and I think those three things are gonna become more and more important in the next five to ten years. All right. Let me see if I got anything else. I see a few people watching me; if you guys have any questions, let me know, I'll give you guys time here if anybody wants to chime in. I've got a few people who've asked me questions and a few people who've asked me to actually look at their resume, so I'm gonna actually do that. Let me see if I can find a good one to look at here. The first one I'm gonna look at — and I changed the names, just so you know, changed the names and the addresses and everything on there, so there's no need to worry about that. I'm gonna look at this resume right here. What I like to do is put my suggestions in there. Sometimes the resumes are so good I don't really have much to say; it's just little tweaks, stuff from what I've done on my own personal resume, to give them some extra juice, some Google juice, on that resume. My mindset is that I market myself, and so I encourage my students, anybody who follows me, to do the same thing. You gotta market yourself. It's very important in this day and age; there are just so many people, so many competitors out there for you. There are so many other eyeballs on other resumes that you gotta put yourself out there.
You gotta set yourself apart by advertising yourself, marketing yourself. Okay. So this is coming from Mike, and he's in the DMV area, and he is a senior assessment and authorization engineer. Okay. I've never heard that title before, but that's good. Just one suggestion I would make here: if you're looking for a different job, one of the things that I do is put a more common name out there. I could be wrong here, but one thing he could do is say he's a security control assessor. I'm gonna read through the resume, so this might change, but my suggestion is to have the title here be security control assessor. The reason I would say that is because security control assessor is a more common name for this type of work. 'Authorization engineer' might be something I'm just not familiar with, but it is not something I've heard people use in my industry, so that's why I would recommend this. Now, this is good: they put active top secret clearance. That's excellent. You definitely wanna put any kind of clearance that you have up top, because that's a very marketable thing to have; that immediately eliminates 80% of the people who are gonna compete against you. So that's a very good thing to put on a resume. Let me see, I'm gonna read the top part of this, the qualification profile. This is pretty good to have whenever you're marketing yourself, because places like LinkedIn will have an area where you can put stuff like this, but what I normally do is take advantage of it by putting as many keywords as possible inside of this profile. You don't want it to just be flowery and sound good. You want it to hit 'em right in their teeth. You know what I mean? You want it to grab their attention immediately with a bunch of keywords.
So they said 'concept-to-execution focused, systematic professional'. I would not put any of this stuff in here. Okay, I'm just gonna suggest some things here. What I'll do is read through the resume and then come back and fix this up, but it's just way too flowery for me. If I was reading this, I would just skip right by it, 'cause I wanna know what they can do. Core competencies: these are good. But another thing that I do personally is take this, and any kind of listing like that, and put it at the end, 'cause it will still get picked up by the search engines. That's the reason why I do it. But when I'm reading through it, I wanna very quickly know what their education is, 'cause that's normally a show stopper, or it gets the show on the road, if they know: okay, this guy has a bachelor's degree, that's one of the requirements; he has a CISM certification, that's one of our requirements. So you wanna very quickly have all the main things up here. Now, this dude's actually got a great resume here. He's got a great set of skills. So another thing I would do is put your top certification right up top, like this CISM. Is this his top certification? I would put it right up here. Not trying to brag or anything, but I am a CISM, and maybe you put the number in there, 'cause this is guaranteed to be a requirement. This certification right here can replace things like the CISSP and some other high-level security certifications that he has. The CASP is also a really good one, but I think the CISM is better; more people know about the CISM, I should say. Okay. So he's got an ethical hacker certification. That's also a good one.
That's another one you might wanna put up here as well. That's a very marketable certification. A lot of pen testers and hackers really look down upon the CEH, but I'm telling you, it's very marketable, 'cause the government and the corporations have not gotten the memo on how bad this certification is. So it's still very marketable. Yeah, I would put that on top. Let's see, Security+. Okay. And some other stuff. All right. Let's keep going here. Scott. Cyber security professionals, Maryland. Oh, okay, affiliations. I'll put this at the bottom. We wanna get to the meat, and the meat is the actual experience. So I'm gonna take this and put it at the bottom. This is a great resume, by the way. Right at this point, all I'm doing is putting my own suggestions in here, which he can take with a grain of salt. He could leave it just how it is and it would still be fine, 'cause he's got so much good stuff in here. The only thing I would highly recommend changing is this right here, 'cause you want this to have impact, and this, 'expert at administering desktop printers', is not good impact. In my mind, if I was reading this and I was trying to hire this guy, I'd be like, eh, whatever, next. I'm not trying to be mean or anything, but just keeping it real with you guys so that you don't do the same kind of stuff on your resume. No flowers, just straight facts, keywords, stuff like that. Okay. Let's see. So the job was at Kforce, to current. All right. Top secret clearance. Let's see, ACAS, Splunk. Okay. This is actually really good stuff. Support all activities as outlined in 800-37. Okay. All right. Not seeing a lot of impact, but I'm seeing lots of great keywords, so that's good. Support all outlined in, okay. Review and analyze A&A, as in assessment and authorization.
Security controls, missed overlays, experience using, administration of eMASS. Okay. So this guy, it sounds like he's an ISSO, but I'm not really sure, 'cause he names himself as a senior assessment and authorization engineer. That sounds like an ISSO. So another suggestion I would make is to possibly use ISSO, information system security officer, and then I'll just tell him here: what I'm trying to get at is that 'senior assessment and authorization engineer' is an uncommon title. That's all I'm trying to say. If you're gonna put a title up here, it should be a title that people know about, and that also fuels your Google juice, your keywords. The reason why I emphasize this in my courses, and whenever I do these resume suggestions (these are my suggestions; I'm sure other people have way better ideas than me), the reason I focus so much on keywords is because that's really what a lot of employers, technical recruiters, and HR departments use. Typically they're not a technical person in your field. Every now and then an organization has the resources to cut some technical guys loose and say, hey, go look through all these resumes, screen some people, and have 'em come in. But typically, your resources are your guys on the ground. You need them to actually do work. You don't want them to go looking through a hundred resumes; you want them working on cloud stuff, analyzing data, doing their job. So what organizations do is they have people who are not low-level workers (that's not the right term), but HR, a screener from a third-party company, and they say, okay, look, here's our requirements.
Please look through these hundreds and hundreds of resumes and see if you can find us some good picks; we just gotta make sure they have a CISM, they have to be an information system security officer. And see, the thing is, when they say we want a system security officer, they're not gonna know what a senior assessment and authorization engineer is. Does that make sense? So you wanna use the same language that people are using. I've been through a few iterations of this. First iteration, when I went into security, everybody called it information assurance: if you were doing risk management framework, if you were doing certification and accreditation, that's what they called it. We were called either certification and accreditation engineers or information assurance officers. It was just odd: information assurance, what is that? What they meant was security; you're a security guy who does paperwork, essentially a compliance guy, which would make more sense. Then it evolved from information assurance to, what did they start calling it, information system security, then cyber security engineer, and now the DoD, I think, is calling it something like cyber surety. They keep changing the terminology, but you wanna keep up with the terminology people are using in this industry, so you know what words to use for those HR guys or screeners who are looking through all these resumes. They're looking for that one keyword. They don't know what an information system security officer is; all they know is that the employer said, hey, we want an information system security officer, make sure you get this person. And so you gotta use those keywords. Okay.
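The screening process described here can be sketched as a naive keyword filter. This is hypothetical code, not any real applicant-tracking system, and the keyword list and resume strings are made up for illustration: the screener searches for the exact phrases the employer gave them, so a resume with an uncommon title never surfaces even when the work is the same.

```python
# Hypothetical sketch of keyword screening: a screener (or ATS) matches
# exact phrases from the job requirements against the resume text.

REQUIRED_KEYWORDS = ["information system security officer", "cism", "splunk"]

def passes_screen(resume_text: str, keywords=REQUIRED_KEYWORDS) -> bool:
    """Return True only if every required phrase appears verbatim."""
    text = resume_text.lower()
    return all(kw in text for kw in keywords)

uncommon = "Senior Assessment and Authorization Engineer, CISM, Splunk, RMF"
common = "Information System Security Officer (ISSO), CISM, Splunk, RMF"

print(passes_screen(uncommon))  # False: the screener's exact phrase never appears
print(passes_screen(common))    # True: same person, common title, gets through
```

The same skills fail or pass the screen depending only on whether the resume echoes the screener's exact phrase, which is the point being made about common titles.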
I'm gonna get off my soapbox here and continue going through some of these. Yeah, Tony, I see your message here; let me just finish getting through this resume. This resume does not look bad, by the way. I've seen some really bad resumes; if you've been watching these for a while, I've been through a couple that were really bad. This one's actually pretty good. It's got great keywords. I'd be really surprised if this doesn't get tons of offers. My only change would be this whole thing right here. This is just too much fluff. Just get to the what. Okay, let me just give you an example of what I would write here, 'cause this guy has so many awesome skills. Let me just read through what he's done before. Let me see. 'Analyze vulnerability data from multiple sources using ACAS and Splunk.' Okay. Here's what I would do. I don't know how many years of experience this person has, but I would start off with the years of experience. It looks like he has years of experience as a security analyst. Good Lord, Jesus, why, what are you doing here? I'm sorry, guys, I'm just a little frustrated. Okay. I would say: X years of cyber security analyst work using tools such as Splunk, Nessus. I don't know how to spell Nessus, so he's gonna have to do a spell check. He said ACAS; that's Nessus, you wanna use Nessus, that's a real good tool to have. And let's see, eMASS. Wait, and 'a grasp of'? No, not 'grasp'; we wanna emphasize how much skill this guy has. Eight years of experience, or whatever the years of experience are, of cyber security analyst work using tools such as Splunk and, okay, here we go, we'll say, and Nessus, with solid experience implementing risk management framework. And we want to get that keyword in there.
RMF. I'm also gonna say NIST 800, another key phrase. With solid... okay. Yeah, see, I would start off wanting to hit 'em right in the mouth. When they see my resume, they're gonna stop reading all other resumes. That's your goal. You want them to stop on your resume and not read another resume. Okay. This dude's got so much experience; why is he saying all this fluff? Oh my God. Okay, so yeah, I would just hit 'em right in the mouth. Then he wants to say: I have an active security clearance. Now you might be thinking, Bruce, why are you saying clearance again when he already says it up here? Because we're using a different keyword. Up here he said active top secret clearance; right here we're saying active security clearance. There's a difference, and we gotta spell it out, because it's a different keyword. Say somebody's looking for a security clearance and they want a secret clearance instead of a top secret clearance. They'll still see that you have a clearance, period. They'll be looking for a secret clearance and they'll find a guy with an active top secret clearance. You know what I mean? We wanna make the net as broad as possible. This dude's got so much incredible experience that there's a lot to choose from here. I would put something like this in here. Okay, watch this. We wanna put more about his information system security officer experience. So we wanna put: ISSO with years of experience. See how I can't spell? It's very important to do a spell check. All right. ISSO with years of experience getting an authorization to operate for multiple information systems. So I've got a bunch of keywords in here. I've got cybersecurity analyst, that's a key phrase. We've got Splunk. We've got Nessus. We've got risk management framework. We've got NIST 800.
We've got ATO. We just want to hit all the buttons. We don't want fluff. Oh, bilingual. This is a good one too. This is really good. And, oh by the way, I'm bilingual? Super powerful. Bilingual opens up a ton more jobs for you. If you speak more than one language, any language, it's gonna open up other jobs for you. That's just something to keep in mind. All right, so that's it with that one. I hope that's helpful to whoever's watching. The idea behind this is to get yourself in line with the market. That's the whole thing. You need to tell people who you are. You gotta show people: hey, here I am. That's what marketing is all about. You wanna market yourself. That's my whole thought process. Okay. Tony says: hey bro, I have about seven years of compliance experience and I'm bored, to say the least. I want to move into security engineering and architect roles. How do you suggest I proceed? Wow, Tony. I had the same experience. I had been doing it for, I don't know, 12 years or something, and I just got so bored with it. It wasn't a challenge anymore for me. And I know that sounds ridiculous if you're getting paid and you've got a secure job, but you need some kind of stimulation. I got into IT because I love technology. And I was doing compliance for years and years, and I found myself losing my technical skills, because all I was doing was compliance stuff. So I know how you feel. What I did was I just jumped off a cliff, man. I don't recommend this to anybody, but this is what I did. I took a job doing something that I was really excited about. I was in between jobs, looking for another position, and somebody had a job overseas. It was actually risk management framework.
I applied for that, and I applied for another position they had for a system security analyst. I read about the system security analyst role, and it talked about using SIEMs, and about tools like McAfee ePO, and IDSs and IPSs. And I was excited. I'm like, oh man, this is so cool; some of this stuff I'd never even touched before. So I really wanted to get into it. I applied for that job as well as the risk management framework one, fully expecting them to look at my resume for risk management and be like, okay, this is our risk management guy. They didn't do that. They chose me for the cybersecurity analyst role. They looked at all of my old technical skills and they were like, okay, this guy right here, we really need somebody to do this cybersecurity analyst work. And they picked me up as a junior cybersecurity analyst, where I was learning. I wasn't the main guy on the floor doing everything. I was one of the people learning different technologies, actually staring at a monitor, looking at the data coming in and out of a network, and analyzing it. They taught me ArcSight, which is a SIEM, kind of like Splunk, and a little bit of Splunk too. They taught us all these different tools, man. I had a blast. I learned so much stuff. But I had to learn like I was fresh out of college. I had to swallow my pride, which I have no problem with, but I know that for some older guys, especially if you've been in IT or cybersecurity for a while, some of us have seen war zones and stuff, it's like, why is this kid telling me what to do? But I didn't feel that way. I was like a wide-eyed little kid, really getting into it. And then my work ethic kicked in and I learned everything I could. I absorbed as much information as possible, like a sponge.
And so that's what you could do. You don't have to go to another country like I did, or jump off a cliff or anything. What you could do is just apply for a junior-level security engineering or architect role to get your beak wet, to get started. But keep in mind, if you have seven years of experience, you can't come in the door with a chip on your shoulder. No "I already know that, I've done it for 15 years" and throwing your weight around. No, you gotta be like a little kid. And that's what I love about IT: I'm learning so many things. Right now, if I went into a firewall role, even though I've touched firewalls before and I know how they work, I don't know how to configure a firewall from scratch. I can't do that. Somebody would have to sit down and teach me from the ground up. Now, I'd learn very quickly, because I have all this experience with all these other tools, but I'd have to be open-minded and learn what they're teaching me, not come in there like I know everything. I'd have to come in like an intern fresh out of college, willing to learn from a person who's more than likely younger than me. So yeah, that's what I would do, Tony. I know how you feel; I felt the same thing many years ago. That path, in terms of my career, was a great move, because now I have so many other doors and opportunities that have opened up over the years. Because I have this plethora of different experience to pick from, I'm now a consultant. I can consult on all these different things. I've touched so many different technologies before, and I don't have to be an expert on each one, but I know the concepts so well that I'm able to say, okay, I know how this works with this.
And I can look at data and say, okay, this is what I'm seeing here. But yeah, that's what I would do if I were you, Tony; it's actually what I did in the past, and I know how you feel. All right, I've got some other questions here that some folks have contacted me about, and I'm gonna answer them. Let me show you what I'm seeing. So I've got a question from my man Solomon H., and he says: I received a contingent offer for a security control assessor position, and I'm in the process of getting my clearance. I don't have a background in risk management framework or any cybersecurity compliance. What advice can you give me? I'm relatively new to cybersecurity and only have one to two years of experience as a system administrator. I know that my job will focus on security and privacy controls as I look over the NIST 800-53 documentation. I've enrolled in your course so I can better understand an overview of how risk management framework works. Is there anything else you can help me with, or any guidance you can give? Yeah, actually, I really can help with this. I would say, if you happen to be watching this, Solomon: as a system administrator, and this goes for any of you system administrators out there trying to go into cybersecurity, you should know that you actually have years of security experience already. If you have set up a server before and had to put the patches on that server, that's security experience. If you've ever had to do documentation on a system you set up, where you had to draw out a diagram, put that together, and shop it around to the rest of the staff, that's cybersecurity. All of these different things are a taste of cybersecurity.
If you've ever had to help the compliance guys out, those guys who contact you and say, hey, could you give me a blurb or some documentation about what this security feature of the system is? Guess what, you've actually assisted with cybersecurity compliance. If you've ever put security software on a system, and then had to update it, that's also cybersecurity, because you're updating the patches that could have been exploited by a threat actor. If you've ever put antivirus signatures on a system, that's also cybersecurity. If you've ever hardened a system, say there's password protection on there but no upper and lowercase requirement, no password complexity, and you had to go on the back end of the server and ensure that the whole organization is enforcing password complexity, or enforcing multifactor authentication, or enabling audit logs for failed login attempts: all of those things, if you are a system administrator, are things you could, and should, put on your resume as a cybersecurity person, because you have done cybersecurity. In fact, I would argue you have done more cybersecurity than some people who are quote-unquote "in cybersecurity" but have not done any technical stuff, and all they do is policy. You've done more than them, because you're now able to go deep in policy and deep on the technical side. Your skills are very much needed in this field. Now, you said you're going into security control assessments. From my interactions with security control assessors, and having done this work myself: you need a team of people who can assess different aspects of an organization's systems. What I mean by that is, you're not just looking at documentation.
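To make that concrete, here's a toy sketch of the kind of hardening check a system administrator might already be doing without calling it cybersecurity: comparing password-policy settings against a baseline. The config text and the baseline numbers are invented for illustration; real baselines come from your organization's policy or a STIG, and on a real Linux box you'd be reading something like /etc/login.defs.

```python
# Sketch of a password-policy audit. SAMPLE_CONFIG and BASELINE are
# hypothetical values, not taken from any real benchmark.
SAMPLE_CONFIG = """
PASS_MAX_DAYS 60
PASS_MIN_LEN 8
"""
BASELINE = {"PASS_MAX_DAYS": 60, "PASS_MIN_LEN": 14, "PASS_WARN_AGE": 7}

def audit(config_text, baseline):
    """Parse key/value settings and report deviations from the baseline."""
    settings = {}
    for line in config_text.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[1].isdigit():
            settings[parts[0]] = int(parts[1])
    findings = []
    for key, required in baseline.items():
        actual = settings.get(key)
        if actual is None:
            findings.append(f"{key}: not set (want {required})")
        elif key == "PASS_MIN_LEN" and actual < required:
            findings.append(f"{key}: {actual} < {required}")
        elif key == "PASS_MAX_DAYS" and actual > required:
            findings.append(f"{key}: {actual} > {required}")
    return findings

print(audit(SAMPLE_CONFIG, BASELINE))
```

If you've ever fixed findings like these by hand, that's exactly the kind of experience that belongs on a cybersecurity resume.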
You're not just looking at their security policy and saying, okay, looks like you guys have a policy in place and it's been updated on such-and-such date. You're also ensuring that the organization is complying with its own security policies. And that means you have to do things like run scans. So you might have to polish up on your ability to run a Nessus scan, or name your scanner, and you might have to know a little bit more about that, but I'm sure you'll pick it up pretty fast as a system administrator. So that's one thing. Learning NIST 800-37, I would say, is another place to look. But if you're taking my course, that's gonna really touch on what you need to know for NIST 800-53 and NIST 800-37, from the perspective of an information system security officer. That course is actually really good, especially if you're new to this work. So yeah, I hope that helps. That's a little bit of guidance for you if you're taking the course. And if you happen to see this video, Solomon, ask me any questions you have whatsoever; I actually am currently doing assessments for different organizations, so I can help you out with that. Okay, I've got another question here. Spade says: do you offer any mentoring opportunities? Can you remind us how we could work with you concerning career guidance and resumes, if possible? Yes. So, Spade, I get this question like weekly now. I do not do mentoring, because I have a full-time job and I really enjoy what I'm doing with teaching online. I'm really getting into it. I'm starting to meet other people, I'm learning stuff from other instructors, and I'm really excited about it, so I wanna spend my time doing that. But what I can do, if you're interested, is point you to a bunch of courses you can sign up for.
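As a rough illustration of that assessor mindset, here's a toy sketch that takes scan findings and maps them to the NIST 800-53 controls an assessor might cite in a report. The finding names and the one-line control mapping are simplified placeholders; a real assessment works from the full control catalog and the system's tailored baseline.

```python
# Hypothetical finding-to-control mapping for illustration only.
MAPPING = {
    "missing_patch": "SI-2 (Flaw Remediation)",
    "weak_password_policy": "IA-5 (Authenticator Management)",
    "audit_logging_disabled": "AU-2 (Event Logging)",
}

scan_findings = ["missing_patch", "audit_logging_disabled"]

def controls_cited(findings, mapping=MAPPING):
    """Return the control citations for each known finding."""
    return [mapping[f] for f in findings if f in mapping]

print(controls_cited(scan_findings))
```

The point is that assessment ties the technical evidence (a scan result) back to the paperwork (a control), which is why sysadmin skills plus NIST 800-53 knowledge is such a strong combination.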
Let me just show you what I'm talking about here. I've got a bunch of courses you can sign up for, and some of this stuff is actually free. What I do is put out a course and make a portion of it free, and some are completely free. There's one on starting from scratch: if you're learning this from the beginning and want to get into cybersecurity, that's a free course that shows you what to actually focus on. It's six hours long, by the way. It didn't start off free, but I felt like it was time to help more people who really need it to get into this market. I've got something on resume marketing: how I have been able to have a job ever since I got out of the military, and why I get so many opportunities all the time. It's the method I use. Some of it I teach for free on YouTube; some of what I tell you guys is in this course, but as a full breakdown. Let me show you how extensive it is: many hours of content, and you can use it as a reference. You don't have to go line by line through all of it, but it shows you what I do to have so much success in my career and to continuously get offers from all kinds of organizations and industries related to cybersecurity. Then I've got a walkthrough of the risk management framework process from the perspective of an information system security officer, plus a deeper dive into how to actually do the documentation piece, with downloadable templates you can use. I'm sharing, essentially, my experience in this field so that you're not lost, and you know where to go, how to upgrade yourself, and how to make more income. Let's keep it real: this is about taking care of your family and having some financial stability. I'm talking about how I've been able to secure my life and my family using this career field.
So that's what I'm talking about in there, and tons of it is free. You should at least sign up and check out the free stuff and see if you like it. If you do sign up, I answer any of your questions, and I'm gonna set up communities there. There's lots more to come in 2021, 2022, 2023; I plan to be around for a long time, offering as much help as possible. My wife's calling me, sorry, let me just turn that off real quick. Okay. So yeah, I do not do mentoring just yet. I have a full-time job, and I love my job. I know that's a weird thing to say, but I'm really having fun learning different things. When I'm at work, I'm really at work; I don't have time to do anything else, and I'm learning so much. I do have a Discord channel for whenever you have a question, and especially if you happen to be a paying member of the site, I'm gonna go out of my way to help you out in deeper ways, with stuff we wouldn't be able to share on here. Obviously, if it's more personal, or more related to specific things at your job, I'm not gonna make a video about that. So that's the kind of stuff I do offer, and those are things I can do on the weekends when I'm off work. There might be a time when I'm on lunch, or just after work, or on a day off, when I can call, and we can talk. I've talked to my students on the phone before, back and forth about stuff that's tailored to their life. But as far as mentoring on a regular basis: I would take it extremely seriously, and I'm just not ready. I don't have the time in the day to dedicate to that. So yeah, that's where we're at with that. Let me see. Thank you guys for watching. I appreciate everybody.
I got another question that someone asked me. Let me switch this screen so you can see what I'm seeing. They said: hello, Bruce. I'm interested in becoming an information system security officer, and I was interested in your course and what guidance you can provide on which courses on your site I should start with. I was using Darril Gibson (I think he's a real popular Security+ trainer), but I know the SY0-501 expires in July 2021. What books should I get for the risk management framework for the CAP? Okay. First of all, I am developing a CAP course, but that's not gonna be out for a while. Now, if you wanna know what book I would use right now for the CAP, I can share that with you. Let me bring it up real quick. The one that I think is really good is not cheap, and I wanna apologize for how expensive it is, but there are no real alternatives to this book that I've seen. There's just not a lot on the CAP, and that's why a lot of people follow me: not a lot of people are talking about risk management framework. This is one of the few books out there that I think is worth your time. I have this book, I'm reading through it, and it's really good as far as taking the CAP goes. I don't believe it's super practical, but I think it's a good book for the actual test. When I say practical: there's a difference between taking the test and doing the work. They're just two separate things. So that book, the Official Guide to the CAP Common Body of Knowledge, is good for taking the test, because they're hitting all the objectives line by line. That's what you want in a good certification book: objectives.
I used to teach certifications, so here's how they work. Certifications have different domains, and each domain is a broad category. For example, the CISSP has, I don't know, seven or so domains; I don't know if that's changed, I took it a long time ago, so I apologize in advance for my ignorance. And I am a CISSP. Say it has a cryptography domain, and another one related to security compliance; let's just use those as examples. The cryptography domain is gonna have different objectives it hits, different things they expect you to know, and those objectives will be different from the security compliance domain, which has its own objectives that go deeper into the details of the concepts behind that domain. When you take the test, they stick to those objectives. So if you know the objectives very well, you should be able to pass the test, and if you don't pass, you should be able to take it a second time and pass. So yeah, that's a good book, and that's the one I would recommend for the CAP. And what was the other part of your question? Right, you were interested in my course and guidance. For my course: if you're trying to become an ISSO, the book is not gonna be enough, and this is the reason I started doing this online stuff. Nobody's really teaching this. I guess you could pay 3,000 dollars for somebody to come out to your job and actually show you that way, but otherwise there are just not a lot of courses that give you practical guidance on this stuff. If you're going into it for the first time, I would highly recommend Risk Management Framework: Information System Security Officer Foundations, which tells you what you need to know.
It's not focused on the CAP; it's focused on the actual ISSO work. If you want a free preview to see if it's worth your time and money, just go ahead and log in; the first part is free. And then there's lots and lots of material on each step of the risk management framework process. So yeah, it's good for somebody who's just starting out and wants to learn this for the first time. Maybe you're an IT person trying to get into risk management, but you're like, man, I'm reading through NIST 800-37 and it just doesn't make any sense. I'm speaking to you in plain English and translating, so by the time you're done with the course, when you read through NIST 800-53, when you read through NIST 800-37, you're gonna understand what they're saying. They just use language that is very cumbersome. I myself, after years of this, still have to reread it sometimes, over and over again, because they're not using everyday speech like we're talking right now. They use all these words that you don't normally see, so you end up rereading. Yeah. Okay, I've answered those two questions, and I've got a few people talking to me. Let me read a few of those. Somebody's messaging me; let me just make sure it's not something important real quick. Okay. All right, it looks like I'm gonna have to end this session pretty soon; I've got a honey-do list to attend to. I'm gonna read through these as fast as my dyslexic brain can process the information. Okay. Spade says: I'm maybe five months into my first industry position as a tier-one security operations center analyst. I guess I'm not exactly entry level, but I'm looking to make some more money, looking for a junior security analyst role.
Oh, okay. So one of the things that immediately made me more valuable is certain certifications. One of my courses actually talks about this, but I can mention a couple right now. There are certain certifications that lend themselves to making more money. Just off the top of my head: the CISSP. Actually, let me name a few. Any kind of professional-level certification is going to get you more money: CISSP, CASP+, CISM, CISA, CCNP. Those are professional-level certifications. Entry-level security certifications would be something like Security+, and there are a few other ones. Okay, so those are certifications. And then there are certain skills. If you're in a SOC, that would be SIEM skills: Splunk especially. ArcSight's not as hot anymore, but Splunk is super hot. Then there's IDS and IPS. If you're deep in firewalls, if you can configure them, that's hot; Palo Alto is a hot one. But security analyst work is mostly looking at logs: McAfee products, and Nessus is a good one. The top one right now, still on fire, would be Splunk. Another one that's getting hotter, I would say, is cyber threat intelligence. And cloud computing: more and more organizations are using it, so they need people who know the vulnerabilities of cloud technology and the kind of gotchas organizations fall into. Those skill sets immediately get you into another bracket of pay. I have to warn you, though: once you get into another bracket of pay, you gotta deal with the IRS. But that's a whole nother conversation. Okay.
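For a taste of what that tier-one analyst work looks like, here's a toy detection in plain Python: flag sources with repeated failed logins. In a real SOC you'd express this as a Splunk SPL search or a SIEM correlation rule; the sample events and the threshold here are made up for illustration.

```python
# Minimal brute-force detection sketch over fake auth events.
from collections import Counter

events = [
    {"src": "10.0.0.5", "action": "failure"},
    {"src": "10.0.0.5", "action": "failure"},
    {"src": "10.0.0.5", "action": "failure"},
    {"src": "10.0.0.9", "action": "success"},
    {"src": "10.0.0.7", "action": "failure"},
]

def brute_force_suspects(events, threshold=3):
    """Return sources whose failed-login count meets the threshold."""
    fails = Counter(e["src"] for e in events if e["action"] == "failure")
    return sorted(src for src, n in fails.items() if n >= threshold)

print(brute_force_suspects(events))  # prints ['10.0.0.5']
```

Understanding the logic behind a detection like this, and not just the tool's interface, is what lets you move between Splunk, ArcSight, or whatever SIEM the next shop runs.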
JJ says: I got hit up for a cybersecurity risk management framework engineer role, a long-term remote W-2 contract position. I have no experience with the risk management framework. I'm guessing I got hit up because of my cybersecurity experience and clearance. Do you have any tips and tricks for this? Okay. You said you have no risk management framework experience. If you want the job, I would talk to them about taking you on as somebody who's learning it. Just be honest and say: no, I don't have experience with this, but I do have cybersecurity knowledge, and I have read through the risk management framework documents, NIST 800-53 and NIST 800-37. I'm familiar with them. I've worked with compliance officers before, I've worked with information system security officers before, I've worked with security assessors before... whichever of those is true for you. If none of them are true, of course, don't say that. The thing is, if you have cybersecurity experience, you have an advantage, in that you know the basic concept of security, which is to protect the CIA: confidentiality, integrity, and availability. You can tell them you have a very strong foundation in your respective cybersecurity role, and then build from there. Even if you're a system administrator, dig into your archives of all the times you've implemented security features on a system. I guarantee you have a solid set of skills. With those skills, you wanna tell them: hey, I know how to secure systems, I know what to look for, and by the way, I know the risk management framework process. I've not done it before, but I know it. Now, if you don't know it, go learn it.
I have a course you can go through; check that out. To be honest with you, you can probably just Google it and read through NIST 800-37, but I would highly recommend my course because I'm telling you exactly what you're gonna see, what they're gonna say to you, and what they're expecting. And I'd be willing to help you out. So keep these things in mind. Tip number one: build on what you already know as a cybersecurity person. Confidentiality, integrity, availability. You've secured systems before; more than likely you've worked with assessors and auditors, with compliance people; you've done documentation. You wanna highlight all of those skills you already have. Another tip is to learn the risk management framework process: learn it through my course, or go ahead and read through it yourself. Watch all the videos and you'll get a solid understanding of the foundations of risk management framework. Okay, I'm gonna move on to the next thing. Cobi says: I'm a paid member, and as a first-timer, how do I get a job? Most of the jobs are looking for five years of experience. One of the things I would highly recommend, Cobi, is to look for entry-level positions. You gotta start somewhere, and that start is entry level. Let me show you what I mean; it's very simple. If you want to follow along with me, go to indeed.com. This is just one site, by the way. I use it all the time because it's so vanilla, so easy to understand, and so straightforward that I feel it's a really good teaching tool. Okay, so here I am on indeed.com. Follow along with me. Put in your location, wherever you're from.
Next thing: there are a couple of things you can do here. You can put in ISSO; there are a ton of keywords you can use for this job. ISSO, entry level... none in this area. Okay, let me search all over the United States. Wow, it's really going to town here. All right, look at this: information system security officer work. Most of the jobs, if you happen to be on the east coast, you should know that you guys have all the jobs. You guys have 70% of all the risk management framework jobs; I'm not even messing around with you. Notice how all of these are in Virginia. You can find a job, especially if you have a clearance. There are a couple of things that can give you an advantage. If you happen to live on the east coast, you have an advantage. If you happen to have a security clearance, you have an advantage; watch this, I'll put in "security clearance." Sometimes they're looking for a person with a security clearance and they get desperate, because there are just not that many people who have one, so they'll actually pull you in and teach you if you have it. Now, if you don't have a security clearance, another angle is being eligible for a security clearance. Eligible means you are a US citizen. E-L-I... I cannot spell the damn word "eligible." English is my first and only language and I can't spell "eligible." Anyway, all I did was type in "eligible," and it immediately thinks I'm looking for "active." No, I'm looking for "eligible for security clearance," but it's coming up with active duty. Okay, a bunch of stuff came up. "Eligible security clearance" is what I'm looking for. "Eligible security officer": those are physical security roles. Okay, here we go. "Principal" means you're a boss, so you don't want that. Information security specialist at an airport: that's physical security. Okay.
It's mixing a bunch of stuff up here. Eligible, security clearance... yeah, here we go. Being eligible for a security clearance is another thing that gives you a better chance of getting a job. The best thing you can have, of course, and I'm not gonna BS you, is experience. There's no replacement for it. But how do you get experience if you don't have it? You go for entry-level positions. Now, if you have zero IT experience, that is different. Let me be very frank with you. If you have some IT experience, meaning you were a system administrator, you worked on databases, you worked on cryptography, you worked on workstations, whatever, you have a very good chance of getting into risk management framework. A very good chance. If you have zero IT experience, meaning you've never held an IT role at a company, a university, a private organization, a government, or anywhere, that is different. And the reason why is that risk management framework, and security in general, is typically not entry level. It's not like literally walking in the door and starting to flip burgers. This is not that kind of a job. There's too much at stake. There's too much trust involved. You're gonna be trusted with other people's information and assets. You're gonna be entrusted with the secrets of that organization: where the vulnerabilities are. You're gonna know where they are, so they have to trust you. For that, they need a professional who has something to lose. That's why cybersecurity is typically not an entry-level position. I'm sure somebody out there is watching this right now saying: Bruce, what are you talking about? I walked in off the street at entry level and I'm a cybersecurity person. Okay, that's fine.
But I'm telling you that, typically, it's not something you can walk off the street and do. Don't lose hope, though. If you have no IT experience, there are a couple of things you can do. People contact me all the time; a couple of weeks ago, an educator reached out and said, "Hey Bruce, I really want to get into IT and into risk management framework. I like what you're saying, it sounds cool to me, and I want to do it." She had a master's degree in education and very little IT background. I said, "Hey, you might want to consider becoming a program manager." Program managers work with IT, and in some cases they have to know our jargon, but they don't have to know how to configure a server, stand up a Linux box, or reduce threats on a weapon system. What they do need is a certain level of maturity to manage a project, and a certain level of technical know-how with tools like Office. So if you're trying to get into a highly skilled, high-paying field, one option is a parallel role as a project manager. It pays six figures, by the way, and that's no joke; program management is no joke. Even without IT experience, you can get in and make upwards of six figures. Look it up; it's a damn good job. But number one: if you don't have any IT experience at all, you have to get it, whether you're volunteering at your church or at your job.
Let's say you're not a system administrator; you're in the HR department, working with people's W-2s and so on. You want to get into IT, you don't want to do program management work, you want the technical side. Okay, then you have to start from the bottom. Imagine somebody walking into your profession off the street, not knowing anything, and wanting the keys to the castle. With cybersecurity, that's what we're talking about. If you have no experience, you have to get it. That means entry-level help desk positions, which is what I'd suggest if you have zero IT experience but want to get technical. Volunteer, even do it for free, because the work you put in fixing laptops at some corporation isn't indentured servitude: it's experience you're slowly building up and putting on your resume. That will let you level up to a higher-level IT job. Do that while you're working on your Security+ and your A+, an entry-level position paired with an entry-level certification. Once you have those things, we're talking about months and years of work. This is hard work, not something you walk off the street and suddenly do. If you don't think that's fair, just imagine your bank. Wherever you bank, in the back they have a cybersecurity person, and that person knows where all the vulnerabilities of the bank are.
Suppose they don't know what threats are, but they know there are vulnerabilities because they ran the scan. Do you want that person at your bank to be a cybersecurity person who doesn't know what they're doing and has no experience? No, you don't. When you're talking about cybersecurity, you're talking about somebody who's entrusted with the keys to the castle. They have to have something at stake, and that means putting in the work. If you're an IT professional trying to get into cybersecurity, you're entrusted with a lot of information, so you need skin in the game: the time and money you've invested to reach your skill level, which you're not willing to risk by making a mistake or doing something stupid. Everybody makes mistakes, but as you learn to troubleshoot, learn how these systems work, and learn how to do backups, you learn to manage risk for your own profession: the risk to yourself, the risk to your organization, and the risk to the organization's information. I hope that makes sense to everybody listening. I've got to get going here, so I apologize for cutting this one short, but let me take one more: "Can you get an ISSO job as a green card holder?" That is a good question. Yes, you can; however, it's going to be harder to get an ISSO job specifically. Let me show you my screen and show you how you can get a security compliance job with a green card. There are cybersecurity jobs that carry a public trust clearance, a type of clearance that, if I'm not mistaken, doesn't require you to be a US citizen.
Yeah, let me try this one here. Usually a listing will say right on it, "You must be a US citizen." This one might be public trust, but it's not giving me that information. Okay, let's be straightforward about it: search "cybersecurity green card" (they usually abbreviate green card as GC, by the way). Here's one from CrowdStrike. Let's look at it. Yep, there you go, right there; that's the keyword: "US citizen or green card for clearance." That's what you want to look for when you're searching positions. Now, do they do this for ISSOs? Let's type in "ISSO". I haven't seen a lot of green card holders as ISSOs, but I could be wrong. Usually ISSOs work for high-level government agencies that require you to be a US citizen; that's why, off the top of my head, I don't know of any. Actually, I take that back: there are some corporations that do ISSO work and will hire a green card holder. I'm currently working in an organization with people from all over the world, so I know for sure you can do cybersecurity and cyber risk work in the US without being a US citizen; several people on our team are in that exact position. But are they ISSOs? No; we're not doing DoD-type work. Let me see here. Yes, it's in here: "Must be a US citizen or green card holder." Most of these listings are going to say that. So we couldn't find an ISSO position open to green card holders, but you can find compliance roles. All right, guys, I have to go. Thank you so much for watching. If you look in the description below, there's a link where you can join me on Discord at all times of the day, on holidays and weekends, and I'll answer your questions when I can. You can also always email me at cyberware2020@gmail.com. Sometimes people ask me really great questions that I think could help many people, and often several other people have asked the exact same question, so I know it's relevant and needs to be addressed, and I'll make a whole video about it. If I didn't answer your question, ask me on Discord via the link in the description below. Spades, thank you so much for that; I hope that's how you pronounce your name. Marcus, thank you for your comments. I didn't get to them, but I'll copy them and use them for another video. Thank you guys so much for watching. Join me on Discord if you have a pressing question, and we will talk.

The Nonlinear Library
EA - Fundraising Campaigns at Your Organization: A Reliable Path to Counterfactual Impact by High Impact Professionals

The Nonlinear Library

Play Episode Listen Later Sep 7, 2022 8:01


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Fundraising Campaigns at Your Organization: A Reliable Path to Counterfactual Impact, published by High Impact Professionals on September 7, 2022 on The Effective Altruism Forum. TL;DR Running a fundraising campaign at your workplace can be a highly effective way to multiply your impact. In 2021, High Impact Professionals (“HIP”) supported EAs in organizing 8 fundraising campaigns at 8 different companies, counterfactually raising 240,000 USD for effective charities. The average amount raised per event was about 30,000 USD (median 3,900 USD), with most donations coming from a few events. On average, it took about 25 hours to organize and run a campaign (20 hours by organizers and 5 hours by HIP). The events generated an average of 786 USD per hour of counterfactual donations to effective charities. This makes fundraising campaigns a very cost effective means of counterfactual impact; as a comparison, direct work that generates 1,000,000 USD of impact equivalent per year equates to around 500 USD per hour. If you're interested in exploring what a fundraising campaign could look like at your organization, please contact us and check out our step-by-step fundraising guide. If you're interested in learning more about the 2021 campaigns' data and methodology, please keep reading. We would love to get feedback on our data and methodology, so don't hesitate to reach out here or in the comments. Intro Running a fundraising campaign at your workplace can be a highly effective way to multiply your impact as a working professional. Getting colleagues to donate money to effective charities not only increases your donation leverage but also has the potential to get others involved in the EA movement. It can also help you build relevant EA career capital. 
During the 2021 giving season, HIP supported EAs in organizing 8 fundraising campaigns at 8 different companies. Essentially, the workplace campaigns encouraged those EAs' colleagues to donate to effective charities. The results detailed below point to these events being a strong way to enhance one's impact. Results In total, the 8 campaigns counterfactually raised about 240,000 USD for effective charities. The average amount raised per event was about 30,000 USD, with a 90% Confidence Interval (“CI”) of 80 USD to 140,000 USD. The graph below shows the distribution in logarithmic scale. The main cost of the initiative is the time taken to organize the campaign. On average, organizers spent 20 hours on their events (CI 6 hours to 42 hours) and received 5 hours of support from HIP (CI 2 hours to 10 hours). The distributions are shown below. Computing the ratio of money raised to time spent, we arrive at an average of 786 USD per hour (CI 7 USD to 3,100 USD). To put this into perspective, let's assume that direct work generates between 100,000 USD and 1,000,000 USD of impact equivalent per year and that people work between 1,800 and 2,200 hours per year. This computes to an average of 290 USD per hour (CI 57 USD to 530 USD), which means that organizing fundraising campaigns can be about 2.7x as effective as direct work on an hourly basis. Of course, we are not saying that people should stop considering direct work and do fundraising events instead since, for example, the impact of doing fundraising campaign work year-round may not scale as well. Still, strategically timed campaigns (e.g., around giving season, your organization's raise/bonus season, or another liquidity event) could be a very high impact seasonal side gig. Heavy Tail One could raise a fair argument that the donation distribution above is quite heavily tailed, and because of the limited data points, the result may be anomalous. 
Though we have already been quite conservative in the estimation of the counterfactual impact of the events, we could take an even more conse...
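The per-hour comparison above can be sketched in a few lines of arithmetic. This is a minimal sketch using only the aggregate figures stated in the post (786 USD/hour for campaigns and 290 USD/hour for direct work); the per-event data behind the confidence intervals is not published, so only the headline multiplier is reproduced here:

```python
# Reproduce the post's headline comparison of fundraising campaigns
# vs. direct work, using only the aggregate figures stated above.

# Average of per-event (USD raised / hours spent) ratios, per the post.
campaign_usd_per_hour = 786

# Direct-work benchmark: 100k-1M USD of impact per year over
# 1,800-2,200 working hours per year; the post reports 290 USD/hour.
direct_work_usd_per_hour = 290

multiplier = campaign_usd_per_hour / direct_work_usd_per_hour
print(f"Campaigns vs. direct work: {multiplier:.1f}x")  # ~2.7x
```

Note that 240,000 USD over 8 events of ~25 hours each would give ~1,200 USD/hour; the post's lower 786 USD/hour figure presumably averages the per-event ratios instead, which is the more conservative choice given the heavy tail.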

The Nonlinear Library
EA - 13 background claims about EA by Akash

The Nonlinear Library

Play Episode Listen Later Sep 7, 2022 5:06


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 13 background claims about EA, published by Akash on September 7, 2022 on The Effective Altruism Forum. I recently attended EAGxSingapore. In 1-1s, I realized that I have picked up a lot of information from living in an EA hub and surrounding myself with highly-involved EAs. In this post, I explicitly lay out some of this information. I hope that it will be useful for people who are new to EA or people who are not living in an EA hub. Here are some things that I believe to be important “background claims” that often guide EA decision-making, strategy, and career decisions. (In parentheses, I add things that I believe, but these are "Akash's opinions" as opposed to "background claims.") Note that this perspective is based largely on my experiences around longtermists & the Berkeley AI safety community. General 1. Many of the most influential EA leaders believe that there is a >10% chance that humanity goes extinct in the next 100 years. (Several of them have stronger beliefs, like a 50% chance of extinction in the next 10-30 years). 2. Many EA leaders are primarily concerned about AI safety (and to a lesser extent, other threats to humanity's long-term future). Several believe that artificial general intelligence is likely to be developed in the next 10-50 years. Much of the value of the present/future will be shaped by the extent to which these systems are aligned with human values. 3. Many of the most important discussions, research, and debates are happening in-person in major EA hubs. (I claim that visiting an EA hub is one of the best ways to understand what's going on, engage in meaningful debates about cause prioritization, and receive feedback on your plans.) 4. Several “EA organizations” are not doing highly impactful work, and there are major differences in impact between & within orgs.
Some people find it politically/socially incorrect to point out publicly which organizations are failing & why. (I claim people who are trying to use their careers in a valuable way should evaluate organizations/opportunities for themselves, and they should not assume that generically joining an “EA org” is the best strategy.) AI Safety 5. Many AI safety researchers and organizations are making decisions on relatively short AI timelines (e.g., artificial general intelligence within the next 10-50 years). Career plans or research proposals that take a long time to generate value are considered infeasible. (I claim that people should think about ways to make their current trajectory radically faster— e.g., if someone is an undergraduate planning a CS PhD, they may want to consider alternative ways to get research expertise more quickly). 6. There is widespread disagreement in AI safety about which research agendas are promising, what the core problems in AI alignment are, and how people should get started in AI safety. 7. There are several programs designed to help people get started in AI safety. Examples include SERI-Mats (for alignment research & theory), MLAB (for ML engineering), the ML Safety Scholars Program (for ML skills), AGI Safety Fundamentals (for AI alignment knowledge), PIBBS (for social scientists), and the newly-announced Philosophy Fellowship. (I suggest people keep point #6 in mind, though, and not assume that everything they need to know is captured in a well-packaged Program or Reading List). 8. There are not many senior AIS researchers or AIS mentors, and the ones who exist are often busy. (I claim that the best way to “get started in AI safety research” is to apply for a grant to spend ~1 month reading research, understanding the core parts of the alignment problem, evaluating research agendas, writing about what you've learned, and visiting an EA hub). 9. People can apply for grants to skill-up in AI safety. 
You do not have to propose an extremely specific project...

Environmental Professionals Radio (EPR)
Controversial Projects, Environmental Justice, and Consulting with Emily Gulick

Environmental Professionals Radio (EPR)

Play Episode Listen Later Sep 7, 2022 44:21


Welcome back to Environmental Professionals Radio, Connecting the Environmental Professionals Community Through Conversation, with your hosts Laura Thorne and Nic Frederick! On today's episode, we talk with Emily Gulick, Environmental Planner at Jacobs Engineering Group, about Controversial Projects, Environmental Justice, and Consulting. Read her full bio below. Help us continue to create great content! If you'd like to sponsor a future episode hit the support podcast button or visit www.environmentalprofessionalsradio.com/sponsor-form Showtimes: 2:16 Nic & Laura Talk about Dressing for Success 5:50 Interview with Emily Gulick Starts 10:44 Consulting 15:07 Controversial Projects 23:13 Environmental Justice Please be sure to ✔️subscribe, ⭐rate and ✍review. This podcast is produced by the National Association of Environmental Professionals (NAEP). Check out all the NAEP has to offer at NAEP.org. Connect with Emily Gulick at https://www.linkedin.com/in/emily-gulick-cep-it-60a941b6/ Guest Bio: Emily is an Environmental Planner at Jacobs Engineering Group. Although she is based in California, most of her environmental experience is at the federal level implementing NEPA. Emily has supported a wide range of projects, from large-scale and highly controversial EISs to small-scale expedited EAs, for a variety of federal agencies including NASA, DoD, and NSF. Emily also leads the National Association of Environmental Professionals (NAEP) Environmental Justice Working Group and is a CEP-IT. Emily has a B.A. in Environmental Studies and a B.A. in Geography from the University of Colorado Boulder. Music Credits Intro: Givin Me Eyes by Grace Mesa Outro: Never Ending Soul Groove by Mattijs Muller Support the show

The Nonlinear Library
EA - Say “nay!” to the Bay (as the default)! by Kaleem

The Nonlinear Library

Play Episode Listen Later Sep 6, 2022 10:27


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Say “nay!” to the Bay (as the default)!, published by Kaleem on September 6, 2022 on The Effective Altruism Forum. & move to the East Coast instead TLDR: The Bay Area isn't a great place to centre the EA community in the US. The East Coast, between Boston and Washington DC, is a much better place because of the number of top universities, its proximity and accessibility to other EA-dense spots, its importance with respect to biosecurity and US policy, and how much money and time it would save the EA community overall. Views are my own and not those of my employer. Epistemic status: ~70% sure. Based on a comment I made on a different post. Intro: Whilst institutions and individuals who are already based in the Bay may be best placed there, especially those whose focal area is AI safety, those creating new programs, new organisations, and new events should seriously consider instead choosing an East Coast city as their homebase. The current paradigm was not established intentionally or strategically and there are strong reasons to pause, reevaluate, and shift forthcoming resources and institutions to other locations. My Claim: The American EA community should be centred around the East Coast 0. Context: The EA community, broadly speaking, has two hubs - the Bay Area (which has Constellation, Lightcone, a number of EA org headquarters, and multiple all-EA living facilities) and Oxford (which has Trajan House, Whytham Abbey, and ). According to the 2020 EA survey, 52.3% of the EA community live in the US and UK, so this makes sense. Furthermore, the Bay Area was the most EA-populated ‘city', with 100 (of 1163) respondents living there. However, Oxford only had ~50 respondents living there, compared to London, which was the second most populated city with ~80 respondents.
How these hubs came into existence was largely not strategic in terms of EA community-building in a global sense: Oxford is a hub because that is where the philosophers who formalised EA were living at the time, and subsequently where a large proportion of EAs were found early on in the movement's history. The Bay is where some American EAs were when they learned about EA, and became a gathering point which then got more attractive as EAs started focusing on AI, as the Bay is a global hotbed for AI research. So, here are my 5 reasons (in decreasing strength) that support my claim: 1. The East Coast is better for University outreach EA community builders have historically thought (and continue to think) that universities are the best place to do EA outreach. Furthermore, we'd argue that extremely prestigious, highly ranked universities are especially good places to do this. I think moving to the East Coast is likely to be much more impactful for anyone interested in on-the-ground EA and longtermist community building which focuses on top universities. Here is Juan's cerebral case for Cambridge, Massachusetts being important. Here is a table with top 100 ranking universities at undergraduate level in the Bay compared to the East Coast:

Global University Ranking (undergrad only)

University Name | US News '22 | QS '23 | THE '22 | Mean
The East Coast has:
Harvard | 1 | 5 | 2 | 3rd
MIT | 2 | 1 | 5 | 3rd
Yale | 12 | 18 | 9 | 13th
Princeton | 16 | 16 | 7 | 13th
Columbia | 6 | 22 | 11 | 13th
Johns Hopkins | 9 | 24 | 13 | 15th
Cornell | 22 | 20 | 22 | 21st
NYU | 30 | 39 | 26 | 32nd
Boston University | 65 | 108 | 62 | 78th
Brown | 112 | 63 | 64 | 80th
Whereas the West Coast has:
Stanford | 3 | 3 | 4 | 3rd
UC Berkeley | 4 | 27 | 8 | 13th

There are too many different specialties to do a subject-by-subject comparison between the two regions when it comes to graduate programs, so I will also just present “overall global graduate program rankings” and “US law school” rankings.
US Law School Ranking

University Name | Ranking
The East Coast has:
Harvard | 4
MIT | NA
Yale | 1
Princeton | NA
Columbia | 4
Johns Hopkins | NA
Cornell | 12
NYU | 7
Boston University | 17
Brown | NA
Whereas the West C...

The Nonlinear Library
EA - Selfish Reasons to Move to DC by Anonymous EA

The Nonlinear Library

Play Episode Listen Later Sep 5, 2022 4:56


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Selfish Reasons to Move to DC, published by Anonymous EA on September 5, 2022 on The Effective Altruism Forum. You've probably heard the arguments for working in DC policy. Basically, the US government has lots of power and money, and it's possible to get into influential roles surprisingly quickly. But many in the EA community have a negative view of the quality of life of a DC policymaker: the hours are long; the policy wins are big but rare; the pay is terrible; the bureaucracy is even worse; and you can't wear jeans to work. Doing policy work in DC is sometimes framed as a noble sacrifice that EAs make in order to have a higher impact. In the run-up to EAG DC, I want to offer some reasons that DC is not only a great place from an impact perspective, but also a really nice place to live (for a certain sort of person). I think these selfish factors matter quite a bit for whether a career in the federal government is going to be sustainable for you over the long term. Note: I'm deliberately including some items on this list that will sound terrible to some readers, as long as they're perks from my perspective. The EA community in DC rocks. This honestly bears like 90% of the weight for me (and probably most EAs in DC). I've found the DC EA community to be by far the most warm and welcoming of any EA community I've encountered. One possible reason is that EAs who decide to come to DC are self-selected for being more extroverted than the median EA. There's also a strong culture of networking and making introductions in DC, which makes it easier to get integrated quickly. DC is beautiful. DC has tons of parks (the most of any major US city?), lots of beautiful architecture and lots of interesting trails to explore. 
It's a particularly great place for runners and bikers – you can feel like you're out in nature pretty quickly on a regular basis without needing to actually leave the city. The dating market is good. I haven't been on the market while living in DC, but friends tell me there's an abundance of young, fit, single, highly-educated, socially skilled, do-gooding people in the area. Non-EAs living in DC tend to be pretty impact-oriented and ambitious. It's not uncommon for my non-EA peers in DC to be really excited about the work they're doing and enjoy talking about their ambitions for impacting the world. It's not huge. Most places and people I want to visit in DC are at most 30 minutes away. In fact, much of the EA community in DC is very concentrated in just a couple neighborhoods. If you live in one of those places, you'll live within walking distance of lots of cool people. And because DC is beautiful (see point 2), the walks are pleasant. The food scene is great, especially for veg*ns. I honestly don't care much about food, but I hear that DC's restaurants are actually a huge draw for my foodie friends, particularly vegans and vegetarians. I've definitely appreciated the abundance of really high-quality and relatively affordable “fast casual” vegan-friendly spots around DC. Free museums! The Smithsonian museums are awesome, and it's pretty great to be able to walk in any day of the week without paying anything. Lots of free activities make it easier to live on the modest salaries of early policy jobs. DC is cool. Alright, maybe only according to me. But I think between the grand architecture, the security clearances, proximity to the “halls of power,” and wearing suits all the time, my life in DC is just a lot cooler than it was in other cities. Movies like “All the President's Men” and “The Report” convey some of the coolness of DC. You don't have to work in government.
This isn't exactly a benefit — more of a PSA: it's not crazy to move to DC, even if you don't see yourself having a long career in government. It could still make sense to come to DC and wor...

Välismääraja
Välismääraja: The Real Impact of the Sanctions Against Russia

Välismääraja

Play Episode Listen Later Sep 4, 2022


Driven by the war in Ukraine, the West has imposed extensive economic restrictions on Russia, for which it is now itself paying a high price. Erki Kodar, Undersecretary for Legal and Consular Affairs at the Ministry of Foreign Affairs, explains what effect these sanctions are having on Russia and how some countries are trying to circumvent them. Joonas Vänto, head of the foreign investment centre at the joint EAS and KredEx agency, comments on how foreign investors view Estonia's image as one of the leading advocates of the sanctions. Hosted by Peeter Raudsik.

ProMarketer's Podcast
One of Many Marketing Principles with Jassen Bowman

ProMarketer's Podcast

Play Episode Listen Later Sep 3, 2022 46:09


We hate to break it to you, but…    The internet is not special. It's just one of many marketing principles to lay hold of (albeit - a necessary one in this time).   At least, that's what our guest Jassen Bowman, EA, asserts in this episode of the ProMarketer Podcast. He says it's no different from any other offline marketing medium.    Of course, tax and accounting firms still need a strong online presence and a plan for marketing to new and existing clientele.    But despite all the fancy tips and tricks for how to market online, a lot of the tried-and-true marketing principles still apply to the internet. After all, why reinvent the wheel when it's rolling just fine?   In this episode, Jassen Bowman joins TaxProMarketer CEO Nate Hagerty to discuss how fundamental marketing principles are more applicable than ever for both your online and offline marketing. As an Enrolled Agent, Jassen has presented over 500 live seminars and webinars to CPAs, EAs, and attorneys on the subjects of tax firm marketing, practice management, ethics, real estate, and taxpayer representation. He has authored over a dozen books for tax professionals and written for NSA's Main Street Practitioner, NAEA's EA Journal, AccountingWeb, and CPA Trendlines.    Check out Jassen's latest book, Profit Optimizers: 12 Big Ideas for a More Profitable Tax Firm, here: https://jassenbowman.com/books/

The Nonlinear Library
EA - Chesterton Fences and EA's X-risks by jehan

The Nonlinear Library

Play Episode Listen Later Sep 3, 2022 4:04


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Chesterton Fences and EA's X-risks, published by jehan on September 2, 2022 on The Effective Altruism Forum. TL;DR: EA lacks the protective norms nearly universal in mature institutions, leaving it vulnerable to the two leading causes of organizational sudden death: corruption and sex scandals. A foundational belief in EA is the importance of unintended negative consequences; a famous actionable heuristic around this is Chesterton's Fence: Let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, "I don't see the use of this; let us clear it away." To which the more intelligent type of reformer will do well to answer: "If you don't see the use of it, I certainly won't let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it." It's not to say that you can never take down fences, only that you recognize what you're losing so that you can weigh it against what you stand to gain. However, I believe EA has carelessly dismantled valuable fences while gaining comparatively little. At the very least there are low-hanging fruit fences that could have outsized returns with few costs. 1) Dismantled Fence: Undervaluing Outside Experience This has been discussed extensively, so I won't comment further except to note that outside experience is a rite of passage in a wide range of communities. It may be that this general lack of outside experience has prevented EA from adopting hard-won norms found elsewhere: 2) Dismantled Fence: Excessive Fraternization Meme in EA: All your friends, roommates and partners are EAs. This is not a good thing. Many have commented, including Will MacAskill, that this is personally suboptimal and have tried to correct it.
The EA-consistent reason for this seems to be either work-life balance or viewpoint diversity. These reasons are valid, but not the reason nearly every successful organization has rules around this: because it's an existential risk for The Mission. Not in some second-order way either: excessively close and incestuous relationships cause dysfunction directly, and in particular leave it open to perhaps the modern era's leading cause of organizational sudden death: sex scandals. Successful institutions have rules around fraternization and power relationships; EA seems to largely lack explicit norms here. If anything, the norm seems to actually go the other way from the mainstream. I understand that relationships are what makes a community, but bad relationships will make a bad community while also endangering the mission. 3) Dismantled Fence: Lack of Explicit Anti-Corruption Rules You may read “corruption” and think “graft”, but that's typically the last rung on the corruption ladder. It's not so overt or conscious at first; it looks more like self-dealing, conflict of interest, and nepotism. Few explicit rules address the problems above, and there is at least the perception that within EA these problems exist (but this perception could be mitigated via visible norms). Mature institutions have rules preventing these sorts of failure modes. EA, however, is now combining fraternization with few rules around self-dealing and favoritism, creating a massive opportunity for organizational death with no comparable upside. Fences as an Immune System These fences not only prevent existing members from inadvertently sliding into scandal, but also act as an immune system against bad actors. EA now has an enormous amount of power, money, and influence.
When it was small and unappealing, it may have gotten away with security through obscurity, but it's now far too big of a prize to exist so immunocompromised. (Cross-posted here: www.atvbt.com/eacrit) Examples: Mormon missionaries, family businesses, New Zealand "But...

The Nonlinear Library
EA - Young EAs should choose projects more carefully by sabrinac


Play Episode Listen Later Sep 2, 2022 3:52


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Young EAs should choose projects more carefully, published by sabrinac on September 2, 2022 on The Effective Altruism Forum. Thanks to Emma Williamson for her helpful feedback. Epistemic status: I wrote this up pretty quickly so take what I say with a grain of salt. A lot of this post is based on an intuition I've developed after talking to lots of young EAs. I also think this post is most relevant for people working within community-building and meta-EA stuff. I wish someone would've given me this advice a year ago, so I hope it's useful for other people. The problem Lots of young EAs are uncertain about their career paths and enjoy working within the EA community, so they often default to hopping between independent projects that have the EA stamp of approval. (e.g. doing ops for Atlas, independent community-building projects, going to the Bay to learn about AI safety, running one-off retreats, etc.) While hopping between independent projects can allow you to quickly test your fit for something, I think many young EAs should do this less and/or think more carefully about which projects they're working on. There are a bunch of reasons why I think this: Independent, unrelated projects often lack good mentorship and learning structures. You don't get to work at an established org where you get lots of feedback from your boss/mentor. You also usually don't build up expertise in a field. It encourages young people to stay within the EA bubble, instead of leaving, acquiring diverse skill-sets, and getting feedback from the real world. This also exacerbates talent bottlenecks further down the funnel once the community lacks expertise in specific fields/career paths.
Young people often equate “working on an EA project” with “doing work that substantially improves the world/reduces x-risk.” While projects might seem valuable in the short term, I think it damages young people's longer-term impact. (i.e. by delaying you from developing deep expertise and career capital.) Instead of grappling with the complexity of the problems you want to solve, hopping between projects gives you a shallow understanding of several different things. I think my claim mostly applies to a certain subset of young people in EA, particularly community-builders and people with a lack of direction/concrete career plans. There's another related trend where I see community-builders incentivized to put out fires and solve the small-scale problems immediately in front of them. They get the reputation as someone who can “get shit done” but in practice, they're usually solving ops bottlenecks at the cost of building harder-to-acquire skills. I'm concerned that conscientious young women with high executive functioning disproportionately get trapped here. Caveats This post doesn't apply to the majority of young EAs. Some independent projects are useful for figuring out if you enjoy specific kinds of work, allowing you to quickly eliminate career options. I'm not arguing that young people should spend less time working on projects, but rather that they should choose projects more carefully, prioritizing ones with (a) good mentorship/learning opportunities and (b) a narrow focus within the fields they're interested in. I don't think this trend is indicative across the entire community. I think when people want to work within a specific cause area—e.g. AI safety, biosecurity, animal welfare, etc.—they have a lot more direction and this phenomenon happens less. This problem overlaps with community-builders spending too much time community-building, and not engaging enough with the problems they want to solve. 
Suggestions Don't work on a project just because it has the EA stamp of approval—make sure you have a clear theory of change for why you're working on something. (Gently and kindly) tell your frie...

The Nonlinear Library
EA - Funding for new projects: A short talk by Charity Entrepreneurship. by SteveThompson


Play Episode Listen Later Sep 2, 2022 2:02


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funding for new projects: A short talk by Charity Entrepreneurship., published by SteveThompson on September 2, 2022 on The Effective Altruism Forum. Joey Savoie: September 13, 6:00 PM UK time (10:00am SF, 1:00pm NYC). Over the coming months, we will be hosting a series of talks aimed at explaining our research findings, our key decisions, and how these may be relevant to EAs and their career decisions. The first talk in this series will be about our experience in finding funding for early-stage projects. Securing funding is among the most time-consuming and limiting factors for most project founders. At Charity Entrepreneurship our aim is for more highly-impactful charities to exist in the world. In the past five years we've tapped into over 30 sources of funding, ranging from individual donors to some of the larger and more evidence-oriented philanthropic foundations. Furthermore, we stay close to our past incubatees and have been learning from their funding experiences. We've recently seen increased attention in EA on funding entrepreneurial projects. Given the intensification of funding, its sheer variety and complexity, and the ever-changing landscape (for example, FTX funding ~4% of over 1,700 applications), we suspect there may be some value in sharing our perspective on how to secure early-stage funding for impactful projects. In this 75-min online talk with Q&A, our Co-founder & Director of Strategy Joey Savoie will share: What we've learned about the Funding Landscape (available funding around EA) How to get your project funded; what works, what doesn't Why and when to turn down funding We intend to limit spaces to enable Q&A (subject to how many sign up) and we are also interested in exploring your fit for our programs.
You can sign up below and tell us a little about yourself and, if appropriate, your project. [Sign up for the talk here] Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - EA is about maximization, and maximization is perilous by Holden Karnofsky


Play Episode Listen Later Sep 2, 2022 10:26


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA is about maximization, and maximization is perilous, published by Holden Karnofsky on September 2, 2022 on The Effective Altruism Forum. This is not a contest submission; I don't think it'd be appropriate for me to enter this contest given my position as a CEA funder. This also wasn't really inspired by the contest - I've been thinking about writing something like this for a little while - but I thought the contest provided a nice time to put it out. This piece generally errs on the concise side, gesturing at intuitions rather than trying to nail down my case thoroughly. As a result, there's probably some nuance I'm failing to capture, and hence more ways than usual in which I would revise my statements upon further discussion. For most of the past few years, I've had the following view on EA criticism: Most EA criticism is - and should be - about the community as it exists today, rather than about the “core ideas.” The core ideas are just solid. Do the most good possible - should we really be arguing about that? Recently, though, I've been thinking more about this and realized I've changed my mind. I think “do the most good possible” is an intriguing idea, a powerful idea, and an important idea - but it's also a perilous idea if taken too far. My basic case for this is that: If you're maximizing X, you're asking for trouble by default. You risk breaking/downplaying/shortchanging lots of things that aren't X, which may be important in ways you're not seeing. Maximizing X conceptually means putting everything else aside for X - a terrible idea unless you're really sure you have the right X. (This idea vaguely echoes some concerns about AI alignment, e.g., powerfully maximizing not-exactly-the-right-thing is something of a worst-case event.) EA is about maximizing how much good we do. What does that mean? 
None of us really knows. EA is about maximizing a property of the world that we're conceptually confused about, can't reliably define or measure, and have massive disagreements about even within EA. By default, that seems like a recipe for trouble. The upshot is that I think the core ideas of EA present constant temptations to create problems. Fortunately, I think EA mostly resists these temptations - but that's due to the good judgment and general anti-radicalism of the human beings involved, not because the ideas/themes/memes themselves offer enough guidance on how to avoid the pitfalls. As EA grows, this could be a fragile situation. I think it's a bad idea to embrace the core ideas of EA without limits or reservations; we as EAs need to constantly inject pluralism and moderation. That's a deep challenge for a community to have - a constant current that we need to swim against. How things would go if we were maximally “hard-core” The general conceptual points behind my critique - “maximization is perilous unless you're sure you have the right maximand” and “EA is centrally about maximizing something that we can't define or measure and have massive disagreements about” - are hopefully reasonably clear and sufficiently explained above. To make this more concrete, I'll list just some examples of things I think would be major problems if being “EA” meant embracing the core ideas of EA without limits or reservations. We'd have a bitterly divided community, with clusters having diametrically opposed goals. 
For example: Many EAs think that “do the most good possible” ends up roughly meaning “Focus on the implications of your actions for the long-run future.” Within this set, some EAs essentially endorse: “The more persons there are in the long-run future, the better it is” while others endorse something close to the opposite: “The more persons there are in the long-run future, the worse it is.”1 In practice, it seems that people in these two camps try hard to find comm...

The Nonlinear Library
EA - Students interested in US policy: Consider the Boren Awards by US Policy Careers


Play Episode Listen Later Sep 1, 2022 29:07


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Students interested in US policy: Consider the Boren Awards, published by US Policy Careers on August 31, 2022 on The Effective Altruism Forum. Summary This post summarizes why and how to apply to the Boren Awards, a prestigious language program for U.S. undergraduates (“Boren Scholarship”) and graduate students (“Boren Fellowship”) lasting 2 to 12 months. The Boren Awards present a great opportunity for EAs to gain career capital for U.S. policy work, particularly in the federal government, by developing regional expertise regarding countries such as China, Russia, and India. To be eligible, applicants must be U.S. citizens and be currently enrolled in an accredited undergraduate or graduate degree program located within the United States. Application deadlines for this year are listed below and are typically in January/February: Graduate students: January 25th, 2023, for the Boren Fellowship Undergrads: February 1st, 2023, for the Boren Scholarship This post is informed by my (Grant Fleming's) experience in 2016-2017 as a Boren Scholar in Shanghai, which I did after completing my degree requirements—while nominally still enrolled as a fifth-year undergraduate—at the University of South Carolina. If you are interested in applying for the Boren Awards—even if you are still unsure or plan to apply in future years—please fill out this form to receive support for your application and potentially be connected with former Boren Awardees. Program details The Boren Awards provide U.S. citizens up to $25,000 in funding to study abroad for up to a year, learn a language critical to U.S. national security (e.g., Chinese, Russian, Hindi, or Arabic), and complete other (non-language) academic credits of the student's choosing. 
Boren awardees must be willing to seek and hold a job relevant to national security as a government employee or federal contractor for at least one year after returning to the United States. Note that China and Russia have recently been unavailable as Boren countries (though China is available again for 2023), so awardees studied Chinese in Taiwan or Singapore and Russian in Kazakhstan, Kyrgyzstan, Latvia, Republic of Moldova, or Ukraine. Rather than selecting their own study abroad program, applicants may also apply to one of the Regional Flagship Language Initiatives (FLI), which can have very favorable admission rates. These programs involve significant language study, beginning in the summer with a mandatory language course domestically prior to a semester of mandatory language study overseas in the fall. Interested applicants can opt to continue their award with self-organized study overseas for the spring semester. FLI students receive more structure and logistical support than "regular" Boren awardees, but they're subject to more rules and are not able to choose their own city and program of study. After completing their time abroad, Boren awardees receive career support from the National Security Education Program (NSEP), including access to special hiring privileges, private government job boards, and online alumni groups to help them get a public sector job or a national security-oriented job in private industry. Jobs sought after program completion do not have to be directly relevant to an awardee's language of study, country of award, or academic major, making the Boren awards a good opportunity to pursue for anyone who is seeking a career as a: Public sector employee of the U.S. government Private sector employee of a public policy firm, think tank, or advocacy group working with the U.S. 
government on projects dealing with national security Private sector consultant specializing in public sector clients In general, the Boren Awards present a good opportunity for students interested in working for, or with, the U.S. government in any capacity....

The Nonlinear Library
EA - Effective altruism in the garden of ends by tyleralterman


Play Episode Listen Later Sep 1, 2022 41:45


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective altruism in the garden of ends, published by tyleralterman on August 31, 2022 on The Effective Altruism Forum. Huge thanks to Alex Zhu, Anders Sandberg, Andrés Gómez Emilsson, Andrew Roberts, Anne-Lorraine Selke (who I've subbed in entire sentences from), Crichton Atkinson, Ellie Hain, George Walker, Jesper Östman, Joe Edelman, Liza Simonova, Kathryn Devaney, Milan Griffes, Morgan Sutherland, Nathan Young, Rafael Ruiz, Tasshin Fogelman, Valerie Zhang, and Xiq for reviewing or helping me develop my ideas here. Further thanks to Allison Duettmann, Anders Sandberg, Howie Lempel, Julia Wise, and Tildy Stokes, for inspiring me through their lived examples. I did not believe that a Cause which stood for a beautiful ideal [.] should demand the denial of life & joy. – Emma Goldman, Living My Life This essay is a reconciliation of moral commitment and the good life. Here is its essence in two paragraphs: Totalized by an ought, I sought its source outside myself. I found nothing. The ought came from me, an internal whip toward a thing which, confusingly, I already wanted – to see others flourish. I dropped the whip. My want now rested, commensurate, amidst others of its kind – terminal wants for ends-in-themselves: loving, dancing, and the other spiritual requirements of my particular life. To say that these were lesser seemed to say, “It is more vital and urgent to eat well than to drink or sleep well.” No – I will eat, sleep, and drink well to feel alive; so too will I love and dance as well as help. Once, the material requirements of life were in competition: If we spent time building shelter it might jeopardize daylight that could have been spent hunting. We built communities to take the material requirements of life out of competition. For many of us, the task remains to do the same for our spirits. 
Particularly so for those working outside of organized religion on huge, consuming causes. I suggest such a community might practice something like “fractal altruism,” taking the good life at the scale of its individuals out of competition with impact at the scale of the world. If you're a Blinkist or Sparknotes person, you can stop here. If you read on, you might find that everything written has been said already in the history of philosophy. This is a less rigorous experiential account that came from five years of personal reckoning. I thought my own story might be more relatable for friends with a history of devotion – unusual people who've found themselves dedicating their lives to a particular moral vision, whether it was (or is) Buddhism, Christianity, social justice, or climate activism. When these visions gobble up all other meaning in the life of their devotees, well, that sucks. I go through my own history of devotion to effective altruism. It's the story of [wanting to help] turning into [needing to help] turning into [living to help] turning into [wanting to die] turning into [wanting to help again, because helping is part of a rich life]. There's also an implicit critique of the Effective Altruist movement here. As far as I can tell, my dark night experience represents a common one for especially devoted, or “hardcore” EAs – the type who end up in mission-critical roles. If my experience is in fact representative, then I suspect it is productive (even on consequentialist grounds) for the movement to confront its dark night problem. Of course, there has been much discussion about burnout and mental health in EA. The movement has responded: there are support groups and more. But I doubt that ordinary mental health interventions – therapy, coaching, etc – are sufficient for hardcore EAs. 
Instead, I think that hardcore segments of the movement, who are bound by philosophy, might be helped by examining whether their philosophy is, in fact, in agreement, rathe...

The Nonlinear Library
EA - Open EA Global by Scott Alexander


Play Episode Listen Later Sep 1, 2022 8:23


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open EA Global, published by Scott Alexander on September 1, 2022 on The Effective Altruism Forum. I think EA Global should be open access. No admissions process. Whoever wants to go can. I'm very grateful for the work that everyone does to put together EA Global. I know this would add much more work for them. I know it is easy for me, a person who doesn't do the work now and won't have to do the extra work, to say extra work should be done to make it bigger. But 1,500 people attended last EAG. Compare this to the 10,000 people at the last American Psychiatric Association conference, or the 13,000 at NeurIPS. EAG isn't small because we haven't discovered large-conference-holding technology. It's small as a design choice. When I talk to people involved, they say they want to project an exclusive atmosphere, or make sure that promising people can find and network with each other. I think this is a bad tradeoff. ...because it makes people upset This comment (seen on Kerry Vaughan's Twitter) hit me hard: A friend describes volunteering at EA Global for several years. Then one year they were told that not only was their help not needed, but they weren't impressive enough to be allowed admission at all. Then later something went wrong and the organizers begged them to come and help after all. I am not sure that they became less committed to EA because of the experience, but based on the look of delight in their eyes when they described rejecting the organizers' plea, it wouldn't surprise me if they did. Not everyone rejected from EAG feels vengeful. Some people feel miserable. This year I came across the Very Serious Guide To Surviving EAG FOMO: Part of me worries that, despite its name, it may not really be Very Serious... 
...but you can learn a lot about what people are thinking by what they joke about, and I think a lot of EAs are sad because they can't go to EAG. ...because you can't identify promising people. In early 2020 Kelsey Piper and I gave a talk to an EA student group. Most of the people there were young overachievers who had their entire lives planned out, people working on optimizing which research labs they would intern at in which order throughout their early 20s. They expected us to have useful tips on how to do this. Meanwhile, in my early 20s, I was making $20,000/year as an intro-level English teacher at a Japanese conglomerate that went bankrupt six months after I joined. In her early 20s, Kelsey was taking leave from college for mental health reasons and babysitting her friends' kid for room and board. If either of us had been in the student group, we would have been the least promising of the lot. And here we were, being asked to advise! I mumbled something about optionality or something, but the real lesson I took away from this is that I don't trust anyone to identify promising people reliably. ...because people will refuse to apply out of scrupulosity. I do this. I'm not a very good conference attendee. Faced with the challenge of getting up early on a Saturday to go to San Francisco, I drag my feet and show up an hour late. After a few talks and meetings, I'm exhausted and go home early. I'm unlikely to change my career based on anything anyone says at EA Global, and I don't have any special wisdom that would convince other people to change theirs. So when I consider applying to EAG, I ask myself whether it's worth taking up a slot that would otherwise go to some bright-eyed college student who has been dreaming of going to EAG for years and is going to consider it the highlight of their life. Then I realize I can't justify bumping that college student, and don't apply. I used to think I was the only person who felt this way. 
But a few weeks ago, I brought it up in a group of five people, and two of them said they had also stopped applying to EA...

Building the Premier Accounting Firm
Going Beyond Tax Planning w/ Dominique Molina


Play Episode Listen Later Aug 31, 2022 52:17


This week, Roger has a chat with certified tax planner and tax issues expert Dominique Molina. They talk about how you can realize your full potential as a tax planner, get beyond tax preparation and tax planning for your clients, and save your clients money through advisory services. Your Host: Roger Knecht, president of Universal Accounting Center Guest Name: Dominique Molina Co-Founder and President of the American Institute of Certified Tax Planners Editor-In-Chief, Thinking Outside The Tax Box Dominique is an accomplished keynote speaker, teacher, best-selling author, and mentor to tax professionals across the United States. She frequently appears in print, television, and radio programs, including CNN Money, and was named one of the 40 Most Influential Accountants by CPA Practice Advisor Magazine. Dominique is the co-founder and President of the American Institute of Certified Tax Planners. She trains and certifies CPAs according to her own high standards for advisory excellence and tax reduction goals. In 2009, Dominique began to create an elite network of tax professionals including CPAs, EAs, attorneys and financial service providers who are trained to help their clients proactively plan and implement tax strategies that can rescue thousands of dollars in wasted tax. AICTP's growing network of tax professionals successfully reaches over 300,000 entrepreneurial businesses throughout the country, with Dominique having licensed over 800 tax planners, both nationally and internationally. Prior to founding the American Institute of Certified Tax Planners, Dominique successfully established her own practice, a San Diego-based, full-service tax, accounting and business consulting firm, serving hundreds of business owners and investors across the country. Preceding this, Dominique assisted a variety of clients for the largest independently owned CPA firm in San Diego.
Sponsors: Universal Accounting Center, helping accounting professionals confidently and competently offer quality accounting services to get paid what they are worth. Offers: Check out the American Institute of Certified Tax Planners; Turnkey Business Plan, Strategic Selling & Selling With Confidence; Webinar: Universal CRM. For additional FREE resources for accounting professionals, check out this collection HERE! Be sure to join us for GrowCon, the LIVE event for accounting professionals to work ON their business. This is a conference you don't want to miss. Remember this: Accounting Success IS Universal. Listen to our next episode and be sure to subscribe. Also, let us know what you think of the podcast and please share any suggestions you may have. We look forward to your input: Podcast Feedback. For more information on how you can apply these principles in your business, please visit us at www.universalaccountingschool.com or call us at 801.265.3777

The Nonlinear Library
EA - Mexico EA Fellowship by Sandra Malagon


Play Episode Listen Later Aug 30, 2022 4:16


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mexico EA Fellowship, published by Sandra Malagon on August 30, 2022 on The Effective Altruism Forum. TL;DR: Come work remotely in Mexico City from November to January while you help us support the existing EA community! Sign up here. We are excited to announce the 2022 EA fellowship program in Mexico City (CDMX). We will be hosting ~35 people at any given time from 1 November 2022 to 30 January 2023, including 20 permanent fellows for the whole duration, with the remaining 15 places reserved for rotating visitors. The program will convene seasoned professionals from the international EA community and promising Spanish speakers. We believe this is an exciting opportunity to meet and support the community in Latin America! This program is for EAs from all over the world, though we especially welcome Spanish speakers and people from underrepresented backgrounds. We think this will be especially exciting for people who can work remotely and want to try living in an EA hub. You are welcome to apply as a fellow for the full length of the program (we will grant around 20 fellowships), or as a visitor for two to four weeks. During the fellowship, we will support participants in their day-to-day lives. Accommodation and office space will be provided free-of-charge, near the center of the city. The coworking space is fully equipped for remote work, and breakfast and lunch will be provided for free during weekdays. We will organise some social activities for participants, potentially including dancing, Spanish classes, and community dinners. We may also offer travel support in some cases. Our goal is to keep everyone as productive and happy as they can be! In exchange, we will ask fellows to take part in some community-building activities.
Examples of such activities include talks at universities, mentorship of the younger fellows, and participating in camps for university students. We expect a commitment of 4-8 hours per month to this endeavor. The program overlaps with the dates of EAGx Mexico (6, 7, and 8 of January 2023), also in CDMX. Fellows will be automatically invited! You can apply to participate here. The last day to submit your application is the 20th of September. Applications received before the 10th of September will be replied to by the 12th of September. The remaining applications will receive an answer by the 24th of September. If you are already living in Mexico City or want to join us without accommodation, you can apply just for a space in the coworking office here. About Mexico City Mexico City is a friendly and vibrant city with many amenities and attractions. It was considered a top contender for a place to set up a new EA hub. Immigration to Mexico is relatively easy, and we expect participants from the US, Europe, the UK, and most of LATAM to be able to enter the country and work remotely without requiring a visa. We will provide support and guidance to accepted participants who require a visa. Food is cheap and varied, including many vegan options. Transport in the city is easy and cheap thanks to Uber and the underground network. Some people we spoke to about this program were concerned about the apparently high crime rate in CDMX. However, in reality the amount of crime in CDMX is comparable to US cities like Chicago or Los Angeles. None of the EAs living in the relevant areas of CDMX consider safety to be an issue. A bigger downside is language. Only 10% of Mexico's inhabitants speak English, though we expect the proportion to be higher in CDMX, especially in the universities and in the neighbourhoods where we will live and work. You can apply to participate in the program here.
In terms of violence (number of homicides per 100,000 residents), the CDMX borough Cuauhtémoc had a rate of 16.6, Miguel Hidalgo stood at 6.6, and Benito Juárez at 4.1. These numbers are comparable...

The Nonlinear Library
EA - EA & LW Forums Weekly Summary (21 Aug - 27 Aug 22') by Zoe Williams

Play Episode Listen Later Aug 30, 2022 20:36


This is: EA & LW Forums Weekly Summary (21 Aug - 27 Aug 22'), published by Zoe Williams on August 30, 2022 on The Effective Altruism Forum. Supported by Rethink Priorities. Covering Sunday August 21st - Saturday August 27th. The amount of content on the EA and LW forums has been accelerating. This is awesome, but makes it tricky to keep up with! The below hopes to help by summarizing popular (>40 karma) posts each week. It also includes announcements and ideas from Twitter that this audience might find interesting. This will be a regular series published weekly - let me know in the comments if you have any feedback on what could make it more useful! If you'd like to receive these summaries via email, you can subscribe here. Methodology: This series originated from a task I did as Peter Wildeford's executive research assistant at Rethink Priorities, to summarize his weekly readings. If your post is in the ‘Didn't Summarize' list, please don't take that as a judgment on its quality - it's likely just a topic less relevant to his work. I've also left out technical AI posts because I don't have the background knowledge to do them justice. My methodology has been to use this and this link to find the posts with >40 karma in a week for the EA forum and LW forum respectively, read / skim each, and summarize those that seem relevant to Peter. Those that meet the karma threshold as of Sunday each week are considered (sometimes I might summarize a very popular later-in-the-week post in the following week's summary, if it doesn't meet the bar until then). For Twitter, I skim through the following lists: AI, EA, Forecasting, National Security (mainly nuclear), Science (mainly biosec). I'm going through a large volume of posts, so it's totally possible I'll get stuff wrong.
If I've misrepresented your post, or you'd like a summary edited, please let me know (via comment or DM). EA Forum: Philosophy and Methodologies. Critiques of MacAskill's ‘Is it Good to Make Happy People?' Discusses population asymmetry, the view that a new life of suffering is bad, but a new life of happiness is neutral or only weakly positive. The post mainly focuses on what these viewpoints are and the fact that they have many proponents, rather than on specific arguments for them. It mentions that they weren't well covered in Will's book and could affect the conclusions there. Presents evidence that people's intuitions tend towards needing significantly more happy people than an equivalent level of suffering people for a tradeoff to be ‘worth it' (3:1 to 100:1 depending on question specifics), and that therefore a big future (which would likely have more absolute suffering, even if not proportionally) could be bad. EAs Underestimate Uncertainty in Cause Prioritization. Argues that EAs work across too narrow a distribution of causes given our uncertainty about which are best, and that standard prioritizations are interpreted as more robust than they really are. As an example, they mention that 80K states "some of their scores could easily be wrong by a couple of points" and this scale of uncertainty could put factory farming on par with AI. The Repugnant Conclusion Isn't. The repugnant conclusion (Parfit, 1984) is the argument that enough lives ‘barely worth living' are better than a much smaller set of super duper awesome lives. In one description of it, Parfit said the barely-worth-it lives had ‘nothing bad in them' (but not much good either). The post argues that this actually makes those lives pretty awesome and non-repugnant, because nothing bad is a high bar. A Critical Review of GiveWell's 2022 Cost-effectiveness Model. NB: longer article - only skimmed it, so I may have missed some pieces. Suggestions for cost-effectiveness modeling in EA by a health economist, with GiveWell as a case study.
The author believes the overall approach to be good, with the follow...

The YANApodcast with NAMI Philly
Nazhah Khawaja Part 2

Play Episode Listen Later Aug 30, 2022 43:28


Nazhah Khawaja is a Pakistani-American author from Chicago and a mom to two creative, brilliant minds! Nazhah works in the behavioral health industry as an Outreach Specialist with Early Autism Services and hosts the company's podcast, Life at EASe. Nazhah is also an event planner for a few non-profit organizations in the Chicago-land area. Nazhah's true passion is expression, which is best exemplified through her work as a poet/spoken word artist and the many op-ed articles she has written for various online publications. She partnered with the creative mastermind behind theDemureist and held the title of ‘Women Editor' for a few years, as well as hosting events in Chicago and NY with the intention of inspiring women through engaging workshops and empowering talks. Nazhah's debut novel, The Other Side of Life, can be purchased on Amazon and found in Salt Lake County Libraries. NAZHAH: Instagram: @nazhah_k The Other Side of Life on Amazon: https://amzn.to/3A2NHKd Life at EASe Podcast on Spotify: https://spoti.fi/3SUUv5l Early Autism Services: https://earlyautismservices.com/ EAS on Instagram: @early_autism_services The YANApodcast: Instagram: @the_yanapodcast Website: www.theyanapodcast.com New episodes every Tuesday! NAMI PHILLY: Instagram: @NAMIPhiladelphia Website: www.namiphilly.org NAMI Philadelphia Warmline: 844.PHL.HOPE CRISIS RESOURCES: Suicide and Crisis Lifeline: 988 Crisis Text Line: Text "NAMI" to 741-741 Philadelphia Suicide and Crisis Intervention Hotline: (215) 686-4420 National Eating Disorder Association (NEDA) Help Line (call or text): (800) 931-2237 SAMHSA (Substance Abuse and Mental Health Services Administration): (800) 662-HELP (4357) to get connected to help for substance use disorders. The Trevor Project (Suicide/Crisis Support for LGBTQ+ Youth): (866) 488-7386, Text "Start" to 678-678 --- This episode is sponsored by · Anchor: The easiest way to make a podcast.
https://anchor.fm/app Support this podcast: https://anchor.fm/yanapodcast/support

The Nonlinear Library
LW - EA & LW Forums Weekly Summary (21 Aug - 27 Aug 22') by Zoe Williams

Play Episode Listen Later Aug 30, 2022 20:41


This is: EA & LW Forums Weekly Summary (21 Aug - 27 Aug 22'), published by Zoe Williams on August 30, 2022 on LessWrong. This is also posted on the EA forum: see here. Supported by Rethink Priorities. Covering Sunday August 21st - Saturday August 27th. The amount of content on the EA and LW forums has been accelerating. This is awesome, but makes it tricky to keep up with! The below hopes to help by summarizing popular (>40 karma) posts each week. It also includes announcements and ideas from Twitter that this audience might find interesting. This will be a regular series published weekly - let me know in the comments if you have any feedback on what could make it more useful! If you'd like to receive these summaries via email, you can subscribe here. Methodology: This series originated from a task I did as Peter Wildeford's executive research assistant at Rethink Priorities, to summarize his weekly readings. If your post is in the ‘Didn't Summarize' list, please don't take that as a judgment on its quality - it's likely just a topic less relevant to his work. I've also left out technical AI posts because I don't have the background knowledge to do them justice. My methodology has been to use this and this link to find the posts with >40 karma in a week for the EA forum and LW forum respectively, read / skim each, and summarize those that seem relevant to Peter. Those that meet the karma threshold as of Sunday each week are considered (sometimes I might summarize a very popular later-in-the-week post in the following week's summary, if it doesn't meet the bar until then). For Twitter, I skim through the following lists: AI, EA, Forecasting, National Security (mainly nuclear), Science (mainly biosec). I'm going through a large volume of posts, so it's totally possible I'll get stuff wrong.
If I've misrepresented your post, or you'd like a summary edited, please let me know (via comment or DM). EA Forum: Philosophy and Methodologies. Critiques of MacAskill's ‘Is it Good to Make Happy People?' Discusses population asymmetry, the view that a new life of suffering is bad, but a new life of happiness is neutral or only weakly positive. The post mainly focuses on what these viewpoints are and the fact that they have many proponents, rather than on specific arguments for them. It mentions that they weren't well covered in Will's book and could affect the conclusions there. Presents evidence that people's intuitions tend towards needing significantly more happy people than an equivalent level of suffering people for a tradeoff to be ‘worth it' (3:1 to 100:1 depending on question specifics), and that therefore a big future (which would likely have more absolute suffering, even if not proportionally) could be bad. EAs Underestimate Uncertainty in Cause Prioritization. Argues that EAs work across too narrow a distribution of causes given our uncertainty about which are best, and that standard prioritizations are interpreted as more robust than they really are. As an example, they mention that 80K states "some of their scores could easily be wrong by a couple of points" and this scale of uncertainty could put factory farming on par with AI. The Repugnant Conclusion Isn't. The repugnant conclusion (Parfit, 1984) is the argument that enough lives ‘barely worth living' are better than a much smaller set of super duper awesome lives. In one description of it, Parfit said the barely-worth-it lives had ‘nothing bad in them' (but not much good either). The post argues that this actually makes those lives pretty awesome and non-repugnant, because nothing bad is a high bar. A Critical Review of GiveWell's 2022 Cost-effectiveness Model. NB: longer article - only skimmed it, so I may have missed some pieces. Suggestions for cost-effectiveness modeling in EA by a health economist, with GiveWell as a case study.
The author believes the overall approach ...

The Nonlinear Library
EA - What domains do you wish some EA was an expert in? by Jack R

Play Episode