Decision support tool
This episode dives deeper into decision-making in agriculture, focusing on a powerful tool that can help farmers navigate the complexities and uncertainties they face. Jay Parsons, a professor in Nebraska's Department of Agricultural Economics and Director of the Center for Agricultural Profitability, joins to discuss his article "Branching Out: Harnessing the Power of Decision Trees." It is co-authored with John Hewlett at the University of Wyoming and Jeff Tranel at Colorado State University, and was first published in the July 2024 edition of RightRisk News, which you can find at rightrisk.org. The article delves into how decision trees can aid agricultural producers in making more informed choices amidst risk and uncertainty. More: https://cap.unl.edu/management/decision-trees-branching-out
What is health economic evaluation and which modelling approaches can be used to support market access activities? In this episode, our expert health economists explain the key strengths and limitations of the most common health economic modelling classifications and structures. Here, Hannah Gillies (Consultant – Health Economics) and Daniel MacDonald (Associate Consultant – Health Economics) de-mystify the different approaches to health economic modelling. Our specialists explore six of the most common health economic modelling structures used to support market access activities:
- Decision trees
- Markov models
- Semi-Markov models
- Partitioned survival models
- Cox regression models
- Discrete event simulations
This episode was first broadcast as a live webinar in January 2024. For more information or to request a copy of the slides used, please visit: https://mtechaccess.co.uk/de-mystifying-health-economic-models/
Work with our health economists: https://mtechaccess.co.uk/health-economics/
Subscribe to our newsletter to hear more news, insights and events from Mtech Access.
Shea and Anders dive into tree-based algorithms, starting with the most fundamental variety, the single decision tree. We cover the mechanics of a decision tree and provide a comparison to linear models. A solid understanding of how a decision tree works is critical to fully grasp the nuances of the more powerful ensemble models, the Random Forest and Gradient Boosting Machine. In addition, single decision trees can still be useful either as a starting point for building more complex models or for situations where interpretability is paramount.
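To make the tree-versus-linear comparison concrete, here is a minimal sketch using scikit-learn; it is not from the episode, and the step-shaped data and hyperparameters are made up purely for illustration:

```python
# A single decision tree vs. a linear model on data with a simple threshold structure.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(500, 1))
y = np.where(X[:, 0] < 4, 10.0, 25.0) + rng.normal(0, 1, 500)   # a noisy step function

tree = DecisionTreeRegressor(max_depth=2).fit(X, y)
line = LinearRegression().fit(X, y)

print("tree R^2  :", round(tree.score(X, y), 3))   # near 1: one split captures the step
print("linear R^2:", round(line.score(X, y), 3))   # lower: a straight line is forced through a step
```

The point mirrors the episode's framing: the tree's axis-aligned splits handle thresholds naturally, while the linear model needs the relationship to be, well, linear.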
Putting those bike pedals to work with a comprehensive exploratory data analysis, navigating through a near-inferno of namespace and dependency issues in package development, and how you can ensure bragging rights during your next play of Guess My Name using decision trees. Episode Links This week's curator: Tony Elhabr - @TonyElHabr (https://twitter.com/TonyElHabr) (Twitter) & @tonyelhabr@skrimmage.com (https://mastodon.skrimmage.com/@tonyelhabr) (Mastodon) My Year of Riding Danishly (https://www.gregdubrow.io/posts/my-year-of-riding-danishly/) Tame your namespace with a dash of suggests (https://rtask.thinkr.fr/tame-your-namespace-with-a-dash-of-suggests/) Guess My Name with Decision Trees (https://mhoehle.github.io/blog/2024/02/12/decisiontree.html) Entire issue available at rweekly.org/2024-W08 (https://rweekly.org/2024-W08.html) Supplement Resources {fusen} - Inflate your package from a simple flat Rmd https://thinkr-open.github.io/fusen/ R Packages Second Edition https://r-pkgs.org/ {usethis} - Automate package and project setup https://usethis.r-lib.org/ Supporting the show Use the contact page at https://rweekly.fireside.fm/contact to send us your feedback R-Weekly Highlights on the Podcastindex.org (https://podcastindex.org/podcast/1062040) - You can send a boost into the show directly in the Podcast Index. First, top-up with Alby (https://getalby.com/), and then head over to the R-Weekly Highlights podcast entry on the index. A new way to think about value: https://value4value.info Get in touch with us on social media Eric Nantz: @theRcast (https://twitter.com/theRcast) (Twitter) and @rpodcast@podcastindex.social (https://podcastindex.social/@rpodcast) (Mastodon) Mike Thomas: @mike_ketchbrook (https://twitter.com/mike_ketchbrook) (Twitter) and @mike_thomas@fosstodon.org (https://fosstodon.org/@mike_thomas) (Mastodon) Music credits powered by OCRemix (https://ocremix.org/) Swing Indigo - The Legend of Zelda: Majora's Mask - sschafi1 - https://ocremix.org/remix/OCR04560 What Lurks Behind the Door - Final Fantasy V - Lucas Guimaraes, Andrew Steffen - https://ocremix.org/remix/OCR04542
Software Engineering Radio - The Podcast for Professional Software Developers
Sean Moriarity, creator of the Axon deep learning framework, co-creator of the Nx library, and author of Machine Learning in Elixir and Genetic Algorithms in Elixir, published by the Pragmatic Bookshelf, speaks with SE Radio host Gavin Henry about what deep learning (neural networks) means today. Using a practical example with deep learning for fraud detection, they explore what Axon is and why it was created. Moriarity describes why the Beam is ideal for machine learning, and why he dislikes the term “neural network.” They discuss the need for deep learning, its history, how it offers a good fit for many of today's complex problems, where it shines and when not to use it. Moriarity goes into depth on a range of topics, including how to get datasets in shape, supervised and unsupervised learning, feed-forward neural networks, Nx.serving, decision trees, gradient descent, linear regression, logistic regression, support vector machines, and random forests. The episode considers what a model looks like, what training is, labeling, classification, regression tasks, hardware resources needed, EXGBoost, Jax, PyIgnite, and Explorer. Finally, they look at what's involved in the ongoing lifecycle or operational side of Axon once a workflow is put into production, so you can safely back it all up and feed in new data. Brought to you by IEEE Computer Society and IEEE Software magazine. This episode sponsored by Miro.
Join us in this episode of the Mob Mentality Show where we venture into the realm of fostering deep listening and understanding within mob programming. In this discussion, we unveil the profound 5th Habit of a Highly Effective Mobber: Empathetic Listening, drawing inspiration from Stephen R. Covey's timeless "7 Habits" book.
This week on AZAAZ we answered: ❓The Questions❓
Dive into this episode of The AI Frontier podcast, where we explore Ensemble Learning techniques like Boosting, Bagging, and Random Forests in Machine Learning. Learn about their applications, advantages, and limitations, and discover real-world success stories. Enhance your understanding of these powerful methods and stay ahead in the world of data science. Support the show: keep AI insights flowing – become a supporter of the show! Click the link for details.
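As a rough, hedged sketch of how those ensemble ideas look in code (the dataset, model list, and settings below are arbitrary choices, not anything from the episode), assuming scikit-learn is available:

```python
# Bagging, random forest, and gradient boosting compared on a synthetic problem.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)

models = {
    "bagged trees":      BaggingClassifier(n_estimators=200, random_state=0),
    "random forest":     RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient boosting": GradientBoostingClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"{name:>18}: {scores.mean():.3f}")
```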
Jordan Backes (@JordanBackes33) takes a look at trying to predict how the 2023 rookie QB class will perform in their first year in the NFL through multiple regression models - Decision Trees and Linear Regression. JB also takes a look at the year 2 model for the 2022 QBs through the same methods. Which method is superior, how do these signal callers look going forward, and who are some buys and sells? All Gas Newsletter - https://allgas.beehiiv.com/ Patreon - https://Patreon.com/AllGas Learn more about your ad choices. Visit megaphone.fm/adchoices
Jordan Backes (@JordanBackes33) takes a look at trying to predict how the 2023 rookie WR class will perform in their first year in the NFL through multiple regression models - Decision Trees and Linear Regression. JB also takes a look at the year 2 model for the 2022 WR class through the same methods. Which method is superior and how do these WRs project going forward? All Gas Newsletter - https://allgas.beehiiv.com/ Patreon - https://Patreon.com/AllGas Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode of the Pacey Performance Podcast, Rob is speaking to Director of Sport Science at the University of Louisville, Ernie Rimer. Ernie joined Rob and JB Morin a few weeks ago for a roundtable on force velocity profiling, but because sprint profiling is such a huge topic, we got him back to go into more detail. In the first half of this episode we talk sprint modelling. Ernie gives us a detailed analysis of how we can measure an athlete's sprint and then use that data to understand where their weaknesses are and create meaningful interventions to improve them. He uses football (soccer) positions as a great example of how we can use one sprint model to look into different areas of the sprint to create benchmarks and compare player against player. In the second half of the episode we chat about strategic periodisation. This was a topic discussed a long time ago with Sam Robertson after he and David Joyce published a paper on this. How can we work with technical coaches to assess each opponent and manipulate training load in the weeks leading up to and following each match? If we rank an opponent down, are we able to train that bit harder leading up to the game, or can we ease off and give the players extra recovery time? We have a look at the factors which cannot be controlled, like the weather, playing surface or game time, and marry that up with the factors we can control. This is a fascinating, unique episode with Ernie which listeners will get lots out of. Main talking points:
- Sprint modelling and ways to assess sprinting
- From low budget options to larger budget options
- Decision tree implementation
- Strategic periodisation - what is it?
- How to quantify opponent strength
- Evaluating strategies against outcomes
- Fixed factors and dynamic fixed factors
AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion
There are many algorithms that can be used for classification and it's important to at least know them at a high level. In this episode of the AI Today podcast hosts Kathleen Walch and Ron Schmelzer define the terms Support Vector Machine & Kernel Method, and explain how they relate to AI and why it's important to know about them. Continue reading AI Today Podcast: AI Glossary Series – Decision Trees at AI & Data Today.
Case Interview Preparation & Management Consulting | Strategy | Critical Thinking
For this episode, let's revisit a Case Interview & Management Consulting classic where we discuss how to build hypotheses with decision trees. Building hypotheses is very difficult. Most candidates in a McKinsey, BCG et al. interview would not know when to build the hypothesis, what comprises the hypothesis, how to test if it is MECE, etc. This simple technique is one way to build hypotheses and is used on real consulting engagements. It was developed to help candidates prioritize their analyses and ensure the hypotheses are MECE. When practicing this technique, note that the development of the decision tree must be done quickly and cleanly. Enjoying our podcast? Get access to sample advanced training episodes here: www.firmsconsulting.com/promo
Matt Freeman gives a primer on how to use Bayesian decision making in normal life, via decision trees. We also discuss utilitarianism, the Guild of the Rose, and recent AI advances. We are now in the early singularity.
Tom & Colin provide an update on their preview of the Improbable Defence Skyral platform, at their HQ in London. Colin has returned from I/ITSEC in Orlando and was able to see the Improbable Defence demo and provides a summary of the main discussion themes from the show. We discuss how there is a greater demand for realism and scale across simulation and training, which is being supported across a number of UK, US and NATO projects in the coming years. Our guest on this show is Chris Covert, who is an Executive Producer for Microsoft, working on Gaming, Exercising, Modeling, and Simulation projects across a wide range of customers. Chris brings a wealth of experience and subject matter knowledge from his career in developing simulation applications to solve some of our most challenging engineering problems. We ask Chris to break down AI concepts and explain them to people of below average intelligence (like your hosts). In this engaging discussion, Chris breaks down some of the misconceptions and assumptions around AI, and provides a quick reference overview of the different technologies and techniques that come under the banner of AI. We cover aspects of AI such as Machine Learning, Decision Trees, Deep Learning, Computer Vision and Natural Language Processing. At the end, Chris provides our listeners with some great tips on how to address projects that might be seeking to leverage AI technologies. As ever, we are joined by Andy Fawkes, who provides a digest of the recent modelling & simulation news, with some discussion around the more interesting topics. We're looking for our audience to get involved and send us stories of interest from the Simulation & Training world that we might not be aware of.
Guest: Chris Covert: https://www.linkedin.com/in/christopher-covert/
Episode Sponsor: Improbable Defence: https://defence.improbable.io/
Improbable Defence is a mission-focused technology company working to transform the national security of our nations and their allies in the face of increasing global competition and evolving threats. Today, national security is defined by technological superiority. We believe that software, more than any other capability, will redefine how war is fought and who will be on the winning side. Those entrusted with the preservation of our freedom, prosperity and safety deserve the best software-defined capabilities available. Since the end of the Cold War, the UK, US and their allies have been unchallenged in military technological dominance. Today, we are facing a different reality: our adversaries are seizing the technological edge. Improbable Defence chooses to stand up and not stand by. We are building cutting-edge software products to help our nations retake the technological advantage. We believe in defending our democratic values against those who seek to undermine them. Supporting those tasked with this mission is at the heart of all we do. We seek to radically transform the mission outcomes of those whose responsibility it is to keep us safe.
Hosts:
Tom Constable: https://www.linkedin.com/in/tom-constable/
Colin Hillier: https://www.linkedin.com/in/colinhillier/
Links:
Website: https://www.warfighterpodcast.com/
#neuralnetworks #machinelearning #ai Alexander Mattick joins me to discuss the paper "Neural Networks are Decision Trees", which has generated a lot of hype on social media. We ask the question: has this paper solved one of the large mysteries of deep learning and opened up black-box neural networks to interpretability?
OUTLINE:
0:00 - Introduction
2:20 - Aren't Neural Networks non-linear?
5:20 - What does it all mean?
8:00 - How large do these trees get?
11:50 - Decision Trees vs Neural Networks
17:15 - Is this paper new?
22:20 - Experimental results
27:30 - Can Trees and Networks work together?
Paper: https://arxiv.org/abs/2210.05189
Abstract: In this manuscript, we show that any feedforward neural network having piece-wise linear activation functions can be represented as a decision tree. The representation is an equivalence and not an approximation, thus keeping the accuracy of the neural network exactly as is. We believe that this work paves the way to tackle the black-box nature of neural networks. We share equivalent trees of some neural networks and show that besides providing interpretability, tree representation can also achieve some computational advantages. The analysis holds both for fully connected and convolutional networks, which may or may not also include skip connections and/or normalizations.
Author: Caglar Aytekin
Links:
Homepage: https://ykilcher.com
Merch: https://ykilcher.com/merch
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://ykilcher.com/discord
LinkedIn: https://www.linkedin.com/in/ykilcher
If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
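As a hedged illustration of the paper's headline claim (this is not the authors' construction, just a toy demonstration that a tiny ReLU network is reproduced exactly by branching on its hidden-unit activation pattern; all weights here are random, made-up values):

```python
# A 3-input, 2-hidden-unit ReLU network rewritten as a 4-leaf "decision tree"
# whose internal nodes test the sign of each hidden pre-activation.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 3)), rng.normal(size=2)   # 3 inputs -> 2 hidden ReLUs
W2, b2 = rng.normal(size=(1, 2)), rng.normal(size=1)   # 2 hidden -> 1 output

def network(x):
    return W2 @ np.maximum(W1 @ x + b1, 0) + b2

def as_decision_tree(x):
    z = W1 @ x + b1
    # Branch on the activation pattern of the two hidden units (four leaves).
    a = np.array([1.0 if z[0] > 0 else 0.0,
                  1.0 if z[1] > 0 else 0.0])
    # Within each leaf, the network reduces to a fixed affine function of the input.
    return (W2 * a) @ z + b2

for _ in range(5):
    x = rng.normal(size=3)
    assert np.allclose(network(x), as_decision_tree(x))
print("tree representation matches the network on random inputs")
```

The catch discussed in the episode applies here too: with n hidden ReLUs the number of activation-pattern leaves can grow like 2^n, so "it is a tree" does not automatically mean "it is a small, readable tree".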
If you want to know one of the most fundamental models in data science and machine learning, you need to know how decision trees work. This episode will help you understand decision trees and how they are used today in everyday applications. If you have any questions or comments, please reach out to us at podcast@arcosanalytics.com
STAY CONNECTED
■ Twitter: https://twitter.com/arcosanalytics
■ LinkedIn: https://www.linkedin.com/company/arcosanalytics
10 - Decision Trees & Ensemble Methods
Choices can be complicated, but decision-making tools can help! Dennis and Tom talk through the concept of “decision trees” and explain a variety of methods for modeling out choices and outcomes to help you make better decisions. Then, the guys revisit their “Hot or Not?” segment to hash out their thoughts on the Apple Watch. As always, stay tuned for the parting shots, that one tip, website, or observation you can use the second the podcast ends. Have a technology question for Dennis and Tom? Call their Tech Question Hotline at 720-441-6820 for the answers to your most burning tech questions. Special thanks to our sponsors, Posh Virtual Receptionists, Clio, and Embroker.
Today's podcast guest is Nik Thakorlal, founder of quiz and data-generation software LeadsHook. Nik knows more about marketing than just about anyone I've ever talked to, listened to or heard of. And he has years of experience in what works and what doesn't in both the online and offline marketing world. Nik has some interesting and provocative things to say … things like why you shouldn't make page speed the first thing you optimise in your lead generation funnel. And why tyre kickers should be treated like gold, instead of annoying time wasters. In fact, there is so much in this episode that I've included timestamps so that if you don't have time to listen to the whole thing, you can jump to the part that interests you the most.
[00:02:45] Using decision trees to give your SEO-optimised content an edge
[00:04:01] Creating interactive PDFs that send people straight to the part of the PDF they need
[00:07:57] Pay per lead as a potential business model for your business
[00:13:32] A quick example of what the whole lead generation and selling process is like for an agency that sells high volumes of leads
[00:19:49] How a smaller pay-per-lead agency can still find a way to compete
[00:21:52] Why you should stop buying courses and focus on running experiments instead
[00:25:33] Why page speed is not the first thing you should optimise in your lead gen funnels
[00:30:14] Why you should be focusing on the front end of your funnel and not the back end, and why it's more important to get people to consume what they've already purchased, rather than sell them the next thing
[00:37:19] Nik identifies an underserved market and an opportunity to use quizzes to fill it
[00:41:15] Why you should treasure tyre kickers instead of trying to get rid of them
[00:43:06] Why you should think about selling your leads when that lead doesn't quite need what you have to offer
[00:45:44] An example of a couple of uncommon quiz-based business models that might be an interesting addition or change for your own business model
[00:49:20] Why data is the new oil and why you should plan on becoming your own mini-Facebook this year
Peter Herz is a serial entrepreneur turned venture capitalist. Currently, he is general partner of 1st Course Capital, a VC firm focused on early-stage food and agricultural companies. When Peter faces tough decisions—like selling a concentrated stock position, considering investments, or choosing from a variety of paths for his portfolio companies—Peter uses a powerful tool called a decision tree. In this episode, he explains how he assigns probabilities to the various branches of the decision tree, and how this helpful tool keeps him from losing everything. You can create your own decision tree using the resources in the show notes. TreePlan (https://treeplan.com) is a plug-in for decision trees and other elements to support decision analysis that works on top of Excel. The link for 1st Course Capital's decision modeling for investments is available here.
Peter Herz's LinkedIn: https://www.linkedin.com/in/jpeterherz/
Joyce Franklin's LinkedIn: https://www.linkedin.com/in/joyce-franklin-0423a91
Decision tree graphic from Startup Wealth: https://www.jlfwealth.com/podcast/
Decision tree model templates - Decision Tree add-in for Excel: https://treeplan.com/
Request a copy of The Four Phases of Startup Life and the Entrepreneur's Wheel of Life: https://www.jlfwealth.com/podcast/
Read Joyce's book, Startup Wealth: The Entrepreneur's Guide to Personal Financial Success and Long-Term Security: https://www.amazon.com/Startup-Wealth-Entrepreneurs-Financial-Long-Term/dp/0991617223/ref=sr_1_1?keywords=joyce+franklin+startup+wealth&qid=1638891970&sr=8-1
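For readers who want to try the arithmetic themselves, here is a minimal, hedged sketch of expected-value math over decision-tree branches; the scenario, probabilities, and dollar figures are invented for illustration and are not Peter's numbers:

```python
# Expected value of each branch of a simple two-option decision tree.
def expected_value(branches):
    """branches: list of (probability, payoff) pairs for one option."""
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9
    return sum(p * v for p, v in branches)

# Decision: sell a concentrated stock position now, or hold for a year.
sell_now = [(1.0, 1_000_000)]                 # certain outcome
hold = [(0.25, 2_000_000),                    # big run-up
        (0.50, 1_100_000),                    # roughly flat
        (0.25,   400_000)]                    # sharp drawdown

print("sell now:", expected_value(sell_now))  # 1,000,000
print("hold    :", expected_value(hold))      # 1,150,000
# Expected value alone slightly favors holding here, but the tree also makes the
# 25% drawdown branch explicit, which is the risk the episode describes weighing.
```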
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is Introduction to Game Theory, Part 1: Backward Reasoning Over Decision Trees, published by Scott Alexander. Game theory is the study of how rational actors interact to pursue incentives. It starts with the same questionable premises as economics: that everyone behaves rationally, that everyone is purely self-interested1, and that desires can be exactly quantified - and uses them to investigate situations of conflict and cooperation. Here we will begin with some fairly obvious points about decision trees, but by the end we will have the tools necessary to explain a somewhat surprising finding: that giving a US president the additional power of line-item veto may in many cases make the president less able to enact her policies. Starting at the beginning: The basic unit of game theory is the choice. Rational agents make choices in order to maximize their utility, which is sort of like a measure of how happy they are. In a one-person game, your choices affect yourself and maybe the natural environment, but nobody else. These are pretty simple to deal with: Here we visualize a choice as a branching tree. At each branch, we choose the option with higher utility; in this case, going to the beach. Since each outcome leads to new choices, sometimes the decision trees can be longer than this: Here's a slightly more difficult decision, denominated in money instead of utility. If you want to make as much money as possible, then your first choice - going to college or starting a minimum wage job right Now - seems to favor the more lucrative minimum wage job. But when you take Later into account, college opens up more lucrative future choices, as measured in the gray totals on the right-hand side. This illustrates the important principle of reasoning backward over decision trees. If you reason forward, taking the best option on the first choice and so on, you end up as a low-level manager. To get the real cash, you've got to start at the end - the total on the right - and then examine what choice at each branch will take you there. This is all about as obvious as, well, not hitting yourself on the head with a hammer, so let's move on to where it really gets interesting: two-player games. I'm playing White, and it's my move. For simplicity I consider only two options: queen takes knight and queen takes rook. The one chess book I've read values pieces in number of pawns: a knight is worth three pawns, a rook five, a queen nine. So at first glance, it looks like my best move is to take Black's rook. As for Black, I have arbitrarily singled out pawn takes pawn as her preferred move in the current position, but if I play queen takes rook, a new option opens up for her: bishop takes queen. Let's look at the decision tree: If I foolishly play this two player game the same way I played the one-player go-to-college game, I note that the middle branch has the highest utility for White, so I take the choice that leads there: capture the rook. And then Black plays bishop takes queen, and I am left wailing and gnashing my teeth. What did I do wrong? I should start by assuming Black will, whenever presented with a choice, take the option with the highest Black utility. Unless Black is stupid, I can cross out any branch that requires Black to play against her own interests. 
So now the tree looks like this: The two realistic options are me playing queen takes rook and ending up without a queen and -4 utility, or me playing queen takes knight and ending up with a modest gain of 2 utility. (my apologies if I've missed some obvious strategic possibility on this particular chessboard; I'm not so good at chess but hopefully the point of the example is clear.) This method of alternating moves in a branching tree matches both our intuitive thought processes during a chess game (“Okay, if I do this, then B...
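A minimal sketch of this backward-reasoning idea in code: the recursive, minimax-style evaluation below is standard, and the utility numbers roughly mirror the chess example above (Black is assumed to pick whatever is worst for White; the exact reply values are illustrative):

```python
# Backward reasoning over a small two-player game tree, utilities from White's view.
def value(node, white_to_move):
    """node is either a terminal utility (a number) or a list of child nodes."""
    if isinstance(node, (int, float)):
        return node
    child_values = [value(child, not white_to_move) for child in node]
    return max(child_values) if white_to_move else min(child_values)

# White chooses between "queen takes knight" and "queen takes rook";
# after each, Black replies with whichever option hurts White most.
queen_takes_knight = [2, 3]    # Black's replies leave White at +2 or +3
queen_takes_rook   = [5, -4]   # +5 only if Black cooperates, -4 after bishop takes queen

game = [queen_takes_knight, queen_takes_rook]
print(value(game, white_to_move=True))   # -> 2: take the knight, not the rook
```

Reasoning forward would have chased the +5 branch; reasoning backward from the leaves, with Black minimizing at her turn, recovers the safe +2 choice described in the text.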
This week Cole, Alan, and Will discuss the new over-sleeve ruling, and break down all the complex decisions we make throughout a game of Vanguard!
Procyon Gaming | Spotify Playlist | Official overDress Updates | Email
Twitter: @drive_check
tiktok: drivecheckpodcast
Our Discord
Special Thanks to our Patreon backers: Jared B., Greg L., Andy T., Bobbie M., Alex B., Jevon W., Andrew W., G. C. Grain, Eric G., Ed Jr., Alice, Bev, Philip B., Mewwill, Alex S., James B., Curtis M., Alan C., PKMNcast, Wesley W., Nicklol2, Alec M., William A., Kay
How are decision trees trained and what is entropy?
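A small worked example of the entropy and information-gain arithmetic that many decision-tree trainers use as a split criterion (the label counts below are made up):

```python
# Entropy of a label set and the information gain of one candidate split.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

parent = ["yes"] * 9 + ["no"] * 5        # H(parent) ≈ 0.940 bits
left   = ["yes"] * 6 + ["no"] * 1        # one side of a candidate split
right  = ["yes"] * 3 + ["no"] * 4        # the other side

weighted_children = (len(left) * entropy(left) + len(right) * entropy(right)) / len(parent)
print("parent entropy  :", round(entropy(parent), 3))
print("information gain:", round(entropy(parent) - weighted_children, 3))
# Training proceeds greedily: at each node, pick the split with the largest gain
# (or, in CART-style trees, the largest reduction in Gini impurity), then recurse.
```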
This week, first up, we welcome Kelly Shortridge, Senior Principal Product Technologist at Fastly, to talk about “Deciduous”, Decision Trees, and Security Chaos Engineering! Then, Deb Radcliff, Strategic Analyst and Author from CyberRisk Alliance, joins to discuss “Penning a Cyber Thriller”! Finally, in the Enterprise News: Guardicore Centra lets teams stop ransomware and lateral movement, Netskope streamlines procedures with improved attribution models and collaboration, Cloudflare claims they blocked the ‘greatest DDoS attack in history', SecurityScorecard partners up with Tenable to improve Risk Management, Sumo Logic delivers on SOAR promise by acquiring DFLabs, SCAR invests in cyber startup Hook Security, Hunters raises $30 Million in Series B, and more! Show Notes: https://securityweekly.com/esw240
Segment Resources:
- https://www.deciduous.app/
- https://swagitda.com/blog/posts/rick-morty-thanksploitation-decision-tree/
- https://swagitda.com/blog/posts/deciduous-attack-tree-app/
- https://learning.oreilly.com/library/view/security-chaos-engineering/9781492080350/
- The book is available at https://www.amazon.com/Breaking-Backbones-Information-Hacker-Trilogy/dp/1665701080/ and her articles, speaking engagements and more information are available at www.debradcliff.com
Visit https://www.securityweekly.com/esw for all the latest episodes! Follow us on Twitter: https://www.twitter.com/securityweekly Like us on Facebook: https://www.facebook.com/secweekly
Deciduous is an app Kelly built with Ryan Petrich that simplifies the process of creating security decision trees. Security decision trees are valuable aids in threat modeling and prioritizing mitigations, harnessing the power of belief prompting from the realm of behavioral game theory.
Segment Resources:
- https://www.deciduous.app/
- https://swagitda.com/blog/posts/rick-morty-thanksploitation-decision-tree/
- https://swagitda.com/blog/posts/deciduous-attack-tree-app/
- https://learning.oreilly.com/library/view/security-chaos-engineering/9781492080350/
Visit https://www.securityweekly.com/esw for all the latest episodes! Show Notes: https://securityweekly.com/esw240
This week in the AppSec News: Security from code comments, visualizing decision trees, bypassing Windows Hello, security analysis of Telegram, paying for patient bug bounty programs, cloud risks, & more! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw158
Now it's time to discuss non-linear classifiers, which provide flexibility in modeling more complex patterns in data. We start with a conceptual intro to decision trees in this short episode.
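A hedged sketch of that flexibility, assuming scikit-learn and a synthetic XOR-style dataset (nothing here is from the episode):

```python
# A non-linear pattern that defeats a linear classifier but not a decision tree.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 2))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(int)      # XOR of the two coordinate signs
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = LogisticRegression().fit(X_tr, y_tr)
tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

print("logistic regression:", round(linear.score(X_te, y_te), 3))  # near 0.5: no single line separates the classes
print("decision tree      :", round(tree.score(X_te, y_te), 3))    # much higher: splits carve out the four quadrants
```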
This week the Worst Coast Children play TTS and discover how many decisions there are in a game!
Isaiah's List: https://bit.ly/3act0zY
Elijah's List: https://bit.ly/3agr9Kz
HI TOM!
Worst Coast Discord: https://discord.gg/wpb8SqFhRN
Worst Coast Facebook Group: https://www.facebook.com/groups/285929289439230
Worst Coast Facebook Page: https://www.facebook.com/worstcoastchildrentakeoverthekrayts/
The Worst Coast Children is sponsored by District Foundry! https://www.etsy.com/shop/DistrictFoundry
To listen to past deleted episodes of The Worst Coast Children you can visit: https://drive.google.com/drive/folders/1i_Fk8pqvFNr11jjJ65IS95Suz4n-9aXZ
Support the show (http://www.buzzsprout.com/320417)
This week the Worst Coast Children discuss how to make decisions in X-Wing and a new way of thinking about them in general.
Oddball list: https://raithos.github.io/?f=Galactic%20Republic&d=v8ZsZ200Z436XWWWY436XWWWY436XWWWY436XWWWY435X127WWW337W&sn=Unnamed%20Squadron&obs=yes
I know I said it's episode 59, I lost count, sorry.
HI TOM!
Discord: https://discord.gg/wpb8SqFhRN
Worst Coast Children Facebook Group: https://www.facebook.com/groups/285929289439230
Worst Coast Children Facebook Page: https://www.facebook.com/worstcoastchildrentakeoverthekrayts/
The Worst Coast Children is sponsored by District Foundry! https://www.etsy.com/shop/DistrictFoundry
To listen to past deleted episodes of The Worst Coast Children you can visit: https://drive.google.com/drive/folders/1i_Fk8pqvFNr11jjJ65IS95Suz4n-9aXZ
Support the show (http://www.buzzsprout.com/320417)
There is a glaring discrepancy between how we monitor the earth and how we respond to signs of the biosphere’s collapse. After decades of political indifference, a growing number of advocates have sought ways to automate environmentalism to bypass institutional resistance to urgent change. But it's not clear how to "optimize" for a “winning” ecosystem, or who decides what that should look like. Read more essays on living with technology at reallifemag.com and follow us on Twitter @_reallifemag.
Random Forest is one of the best off-the-shelf algorithms. In this episode we try to understand the intuition behind the Random Forest and how it leverages the capabilities of decision trees by aggregating them using a very smart trick called "bagging". Variable importance and out-of-bag error are two of the nice capabilities of Random Forest, which allow us to find the most important predictors and compute a good estimate of the generalization error, respectively.
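Here is a minimal sketch of both capabilities using scikit-learn's RandomForestClassifier; the dataset and settings are arbitrary, not from the episode:

```python
# Out-of-bag accuracy and feature importances from a random forest.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=10, n_informative=3, random_state=0)

forest = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=0).fit(X, y)

print("out-of-bag accuracy:", round(forest.oob_score_, 3))   # generalization estimate without a separate holdout
for i, imp in enumerate(forest.feature_importances_):
    print(f"feature {i}: importance {imp:.3f}")              # the few informative features should dominate
```

The out-of-bag score works because each bagged tree only sees a bootstrap sample of the data, so every row has some trees that never trained on it and can vote on it as if it were unseen.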
We talk about decision trees as one of the most basic statistical learning algorithms out there that all data scientists should know. Decision trees are among the few machine learning models that are easy to interpret, which makes them a favorite when you need to understand the logic behind a certain decision. Decision trees naturally handle all types of variables without the need to create dummy variables, require no scaling or normalization, and are also very robust to outliers.
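To illustrate the interpretability point, a small fitted tree can be dumped as plain if/else rules; a hedged sketch with scikit-learn, using the iris dataset purely as a stand-in:

```python
# Print a shallow decision tree as human-readable threshold rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# Every prediction can be traced to a handful of readable comparisons.
print(export_text(tree, feature_names=list(data.feature_names)))
```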
Dr. Jerz creates decision trees in Excel and illustrates TreePlan.
Data Futurology - Data Science, Machine Learning and Artificial Intelligence From Industry Leaders
Today Felipe gives us a brief but holistic introduction to data science. We discuss myths, Felipe's journey into the data science world, and demystify some of the field's elements. When Felipe started his career, he did not know anything about data science but found himself in the area of machine learning at the Australia New Zealand Banking Group (ANZ). Eventually, Felipe found himself pioneering the first data-driven strategy at ANZ. We learn that one of the biggest myths about data science is the fear that AI will eventually replace humans. Dispelling this myth is possible when we develop a wider understanding of what machine learning actually can and cannot do. First, we need to understand how machine learning works. We learn that machine learning updates the foundational data science process of input (data), algorithm (instructions), and output: it takes the job of developing the instructions, the algorithm or recipe, off of the data scientist. Instead, an algorithm is created based on the data and feedback that is given to the machine. Felipe then dives into an explanation of the two basic types of algorithms, classification and regression algorithms. Classification algorithms organize categories, and regressions deal with the likelihood of outcomes by producing a number from 0 to 1. Felipe spends some time breaking down concepts like AI and decision trees and shares some history of the development of algorithms. Biased data and the importance of understanding how algorithms can be biased are discussed. The power of humans, machines, and data making decisions together is highlighted. Felipe sees the potential of marrying human judgment and experience with algorithms and data as a game-changer in many areas, including the medical field. We close with a Q&A and resources for people hoping to get started in data science, including programs that equip you with the skills and knowledge to get started in the field and include a mentorship program. Enjoy the show!
We speak about:
[01:25] About Felipe
[04:55] What Can Data Science Do & How Does It Work?
[17:00] Algorithms
[32:00] Key Terms and Decision Trees
[46:00] Coupling Machine Learning and Humans
[60:00] Q&A and Resources To Get Started
Quotes:
"In today's world, everyone should know how to read and write; in tomorrow's world, everyone should know how algorithms work."
"Machine learning can supplement thinking, show you things you haven't considered, and give you a better perspective that allows you to make better decisions."
"Are humans going to be replaced? Will it always be a combination? That's up to you."
Thank you to our sponsors:
Fyrebox - Make Your Own Quiz!
RMIT Online Master of Data Science Strategy and Leadership - Gain the advanced strategic, leadership and data science capabilities required to influence executive leadership teams and deliver organisation-wide solutions. Visit online.rmit.edu.au for more information.
We are RUBIX. - one of Australia's leading pure data consulting companies delivering project outcomes for some of the world's leading brands.
And as always, we appreciate your Reviews, Follows, Likes, Shares and Ratings. Thank you so much for listening. Enjoy the show!
--- Send in a voice message: https://anchor.fm/datafuturology/message
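As a hedged sketch of the classification-versus-regression distinction described above (synthetic data and arbitrary settings, assuming scikit-learn; none of this is from the episode):

```python
# A decision-tree classifier (categorical targets) next to a decision-tree regressor (continuous targets).
from sklearn.datasets import make_classification, make_regression
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

Xc, yc = make_classification(n_samples=500, n_features=5, random_state=0)   # labels are categories (0 / 1)
Xr, yr = make_regression(n_samples=500, n_features=5, random_state=0)       # targets are continuous numbers

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(Xc, yc)
reg = DecisionTreeRegressor(max_depth=3, random_state=0).fit(Xr, yr)

print("classifier classes       :", clf.predict(Xc[:3]))                # discrete class labels
print("classifier probabilities :", clf.predict_proba(Xc[:3])[:, 1])    # scores between 0 and 1
print("regressor predictions    :", reg.predict(Xr[:3]))                # unbounded continuous values
```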
Morita takes a look, from a safe distance, at GBDT (gradient-boosted decision trees), the technique that has the Kaggle crowd buzzing, and tries to work out what it actually is.
Mindset: IT'S COMPLICATED (But it doesn't have to be)
This episode of the Checklist Legal Podcast covers:
• How to create a mind map and my favourite application for creating mind maps
• How to create swimlane process maps for more complicated contracts with lots of moving parts
Head to the website https://www.checklistlegal.com/podcast and click on episode 10 for:
• an example mind map for a basic contract process, from application form to approval, contract execution and welcome pack
• a basic swim lane diagram template you can use
Key Takeaways
• Find out who owns the contract process steps that are slowing everyone down
• Check in with process step owners and ask how you can make life easier or speed things up for them
• Reduce the number of people in the process
• Gather clear and accurate information as early as possible
• Try to link in with an existing customer database to reduce data entry (which people find boring and often put off)
• Make it a firm requirement to get all information from clients or customers before the contract process kicks off (i.e. in an application form or other collection method)
• Add hard stops that don't allow people to proceed without entering required information
• Use drop-down menus or check boxes instead of free text fields to eliminate errors and speed up data entry
Start simple, avoid those process pigs! Start out your contract mapping career with some easy, low-hanging-fruit processes. Stick with documents that have linear workflows (i.e. I send to Justin, Justin fills in information, signs and sends to Brittany, Brittany signs and we are done!).
**Actionable Challenge** Pick a basic contract you work on regularly - it doesn't have to be a hard one or one you dislike - go to Coggle (or use any mind map tool, or pens and paper) and map that contract out step by step.
**LINKS**
Get the Swim Lane Diagram template and other free resources via https://www.checklistlegal.com/resources
Checklist Legal – Article about Legal Process Mapping: https://www.checklistlegal.com/2017/11/legal-process-mapping-guide/
Swim Lane Diagram template download: https://www.checklistlegal.com/example-legal-process-mapping-swim-lane-diagram-2/
**Decision Trees and mapping**
Zingtree: Zingtree is an incredibly powerful way to give guided advice or troubleshoot issues (https://zingtree.com/?aid=17173)
Coggle: Coggle is an amazing mind mapping tool (https://coggle.it/recommend/560e2a38066647fa5777db5e)
Stationery: Or just use Post-its or pen and paper (from your drawer!), a whiteboard and markers
For the 5 Whys technique straight from Toyota, see toyota-global.com/company/toyota_traditions/quality/mar_apr_2006.html, accessed 27 May 2017.
TD Barton, H Haapio and T Borisova, 'Flexibility and Stability in Contracts'© (2014). Retrieved via ulapland.fi/loader.aspx?id=5a80d6cd-83dd-4126-bb16-a069b85533d2, accessed 10 June 2017.
Think Buzan, 'Mind Mapping Evidence Report'. Retrieved via b701d59276e9340c5b4d-ba88e5c92710a8d62fc2e3a3b5f53bbb.ssl.cf2.rackcdn.com/docs/Mind%20Mapping%20Evidence%20Report.pdf, accessed 10 June 2017.
Mind Mapping, 'What is a Mind Map?' (2017). Retrieved via mindmapping.com/mind-map.php, accessed 10 June 2017.
Head to https://www.checklistlegal.com/podcast for show notes, resource links, and templates.
Music: 'Sway this way' by @SilentPartner
Ruben is head of data at VSCO, a creative platform and a community of expression. His focus is on designing the data strategy, understanding user behavior, and designing metrics for the entire organization. Prior to VSCO, Ruben was the head of Content Analytics at Udemy. And before that he had a completely different career developing and troubleshooting chip manufacturing technology. Ruben's interests span statistics, machine learning, economics, and photography.
Interviewers: Rajib Bahar, Shabnam Khan
Agenda:
- In your role, you're in charge of Data Governance at VSCO... Would you like to elaborate on that?
- According to Gregory's article in KDnuggets, the top 3 most heavily used data science algorithms were Regression, Clustering, and Decision Trees. In your projects, how useful were they?
- What are some of your favorite KDnuggets-like online resources, where you find good code samples, articles, podcasts, etc.?
- Are you applying evolving branches of Artificial Intelligence such as Machine Learning, Deep Learning, or Reinforcement Learning in your organization?
- Many people are aware of Instagram... How is VSCO more unique compared to that platform?
- Now, on to some art-related questions... Are you a portrait, landscape, or travel photographer? Or do you experiment with different techniques?
- Please tell us where we can find you on social media?
Music: www.freesfx.co.uk
Speed-run of some shallow algorithms: K Nearest Neighbors (KNN); K-means; Apriori; PCA; Decision Trees. See ocdevel.com/mlg/12 for notes and resources.
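A matching speed-run in code, as a hedged sketch with scikit-learn (Apriori is skipped since it is not part of scikit-learn; the dataset and parameters are arbitrary):

```python
# One-liners fitting KNN, K-means, PCA, and a decision tree on a small toy dataset.
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

print("KNN accuracy      :", KNeighborsClassifier(n_neighbors=5).fit(X, y).score(X, y))
print("K-means inertia   :", KMeans(n_clusters=3, n_init=10, random_state=0).fit(X).inertia_)
print("PCA variance ratio:", PCA(n_components=2).fit(X).explained_variance_ratio_)
print("Tree accuracy     :", DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y).score(X, y))
```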
Matthew Jones from Columbia University delivers a talk titled “Random Forests and Decision Trees: Machine Learning, Empirical Statistics, and the Challenge of Interpretability.” This talk was included in the session titled “Methods and Ambiguities in the Contemporary Age.” Part of “Histories of Data and the Database,” a conference held at The Huntington Nov. 18–19, 2016.
My perpetually tricky friend told me that while she was walking through town she saw four particularly vibrant houses. There was an auburn one, a brick one, a cherry one, and one the shade of dogwood rose. She wanted me to figure out the order of the houses. She said that the auburn one came before the brick one, while the cherry one came before the dogwood rose, but the cherry and the dogwood rose were not adjacent. I told her that she hadn't given me enough information, so she just laughed and told me that she could tell me the color of the first one or the color of the last one, but it wouldn't help if she did either one. // What color was the second house? // Spiciness: *** out of **** // Note: Not having paper makes this one especially difficult. If you give yourself paper, I think you can rate this puzzle as two chili peppers of spiciness instead of three.
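If you would rather let a computer do the casework, here is a hedged brute-force sketch in Python; running it reveals the answer, so treat it as a spoiler:

```python
# Brute-force the house ordering: apply the stated clues, then keep only the
# arrangements for which knowing the first OR the last color would still leave
# more than one possibility (i.e., "it wouldn't help").
from itertools import permutations

COLORS = ["auburn", "brick", "cherry", "dogwood rose"]

def satisfies_clues(order):
    pos = {c: order.index(c) for c in COLORS}
    return (pos["auburn"] < pos["brick"]
            and pos["cherry"] < pos["dogwood rose"]
            and abs(pos["cherry"] - pos["dogwood rose"]) > 1)

candidates = [p for p in permutations(COLORS) if satisfies_clues(p)]

for answer in candidates:
    first_matches = [p for p in candidates if p[0] == answer[0]]
    last_matches = [p for p in candidates if p[-1] == answer[-1]]
    if len(first_matches) > 1 and len(last_matches) > 1:
        print("second house:", answer[1])
```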
Case Interview Preparation & Management Consulting | Strategy | Critical Thinking
Building hypotheses is very difficult. Most candidates in a McKinsey, BCG et al. interview would not know when to build the hypothesis, what comprises the hypothesis, how to test if it is MECE, etc. This simple technique is one way to build hypotheses and is used on real consulting engagements. It was developed to help candidates prioritize their analyses and ensure the hypotheses are MECE. When practicing this technique, note that the development of the decision tree must be done quickly and cleanly.
Structured English, Decision Tables, Decision Trees.
Moritz's third segment on innovative game designs. This time he talks about paragraph-based game designs.